Backdoor Attacks in Token Selection of Attention Mechanism
Accept (poster)
Summary: Motivated by the need for theoretical foundations underpinning backdoor attacks on self-attention transformers/LLMs (good), this paper: (1) investigates LLM backdoor attacks targeting the token selection mechanism of attention, (2) proves that "single-head attention transformers can interpolate poisoned training data through gradient descent", and (3) identifies the theoretical conditions enabling such attacks. These conditions are supported empirically with “simple experiments on synthetic datasets”. Claims And Evidence: The authors claim that single-head self-attention transformers trained using gradient descent can interpolate poisoned training data and maintain good generalisation on clean data. This is evidenced using the mechanics of gradient descent and the probabilities of selecting relevant / poisoned tokens after training on standard / poisoned signal vectors. The proof seems sound. The extent of evidence / the proof is limited in scope by the focus on gradient descent, which is more susceptible to overfitting than e.g., Adam or when regularisation is applied. Nevertheless, this is interesting and novel as a first step towards building a theoretical foundation – as stated by the authors. Empirical results back up the claim and proofs. Methods And Evaluation Criteria: Yes, they seem to, although the dataset size is very small (n=20) and the number of poisoned samples is a generously large proportion (10% and 40% are used). This seems OK for the purposes of validating the theoretical results but limits the ability to extrapolate to real-world phenomena. Theoretical Claims: I read the proof sketches in the main paper, which seem sound (I did not attempt to parse the full proofs from the Appendix). Experimental Designs Or Analyses: A very small synthetic dataset is composed (n=20) and used to validate the earlier theoretical claims. 10% and 40% of the dataset are poisoned for different experiments.
It does not seem that multiple training runs occurred, but this is likely unnecessary because (line ~196) the weights are initialised to 0. Though limiting the ability to extrapolate, the experimental design allows for total control over the moving parts during optimisation and for the theory to be backed up empirically. Supplementary Material: No Relation To Broader Scientific Literature: The authors position their contributions w.r.t. the literature in Sections 1 and 2. Their proof and results are aligned and provide some theoretical foundations to explain prior work (e.g., Dai et al. 2019, Wan et al. 2023) ~ that backdoor attacks are feasible on language models. This work incorporates and extends work by Tarzanagh et al. (2023a;b), which proves convergence in the direction of a max-margin solution separating locally-optimal tokens from non-optimal tokens in the attention mechanism of transformer models. The novelty w.r.t. prior work is proving how gradient descent interpolates backdoors in the attention mechanism of a single-head self-attention transformer model. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The contributions seem novel and I appreciate the advancement of theoretical foundations underpinning attacks on transformer models. I think the paper would benefit from a more detailed framing of the result in terms of extrapolating to real-world attacks or defences. Other Comments Or Suggestions: ~ 22: “The behavior of backdoor attack” -> attacks ~24: The vulnerability of large language models (LLMs) -> you have already defined LLM acronym ~98: “e.g.” -> “e.g.,” for consistency with the rest of the paper. ~137: “The rest tokens remains unchanged” -> The rest of the tokens remain unchanged ~139: “1 control the strength of the poisoned signal.” -> controls the strength… ~152: “are generated i.i.d.” -> is generated i.i.d.
~313-316: “To interpolate all training data, Lemma 5.1 guarantees that the attention mechanism select a relevant token for clean training data, while prioritizes the poisoned tokens for poisoned training data.” -> “To interpolate all training data, Lemma 5.1 guarantees that the attention mechanism selects a relevant token for clean training data, yet prioritizes the poisoned tokens for poisoned training data.” Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's interest and recognition of our contributions and the novelty of our work. We will correct the typo in the final version. Regarding the connection with practical settings, we have some conjectures about possible defense mechanisms. Suppose that the learner has knowledge of the relevant tokens. In that case, a simple sanity check could be performed: after training the transformer by optimizing only the tunable token $\mathrm{p}$, one can examine whether $\mathrm{p}$ exhibits a strong correlation with signals that are not relevant tokens. If such a correlation exists, it may indicate that the transformer has been compromised by a backdoor attack. Since optimizing $\mathrm{p}$ is equivalent to optimizing $\mathrm{W}$, our results suggest that backdoor triggers are injected into the key and query matrices. Therefore, another potential defense strategy in practice would be to apply dropout layers to the attention model. By randomly masking out poisoned neurons in the key or query matrices, dropout could introduce inconsistencies in the model’s output if the model has been poisoned. One can also adopt the idea from [1] to detect poisoned data within the training sample by identifying cases where a small proportion of the extracted features differ significantly from the rest. These are preliminary ideas that require thorough investigation, particularly in the context of practical transformer architectures, which is beyond the scope of this paper. [1] Tran, Brandon, Jerry Li, and Aleksander Madry. "Spectral signatures in backdoor attacks." NeurIPS 2018.
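The sanity check conjectured in this rebuttal could be prototyped roughly as follows; this is a hypothetical sketch under our own assumptions (the function name, the cosine-similarity criterion, and the decision rule are illustrative, not the authors' method): after training only the tunable token p, flag a possible backdoor if p aligns more strongly with a non-relevant signal direction than with any known relevant signal.

```python
import numpy as np

# Hypothetical prototype of the conjectured sanity check. The criterion
# (maximum absolute cosine similarity) and threshold rule are our own
# illustrative choices, not the authors' method.
def backdoor_sanity_check(p, relevant_signals, other_signals):
    """Return True if p correlates more with a non-relevant signal
    than with any known relevant signal."""
    def max_cos(vecs):
        return max(
            abs(np.dot(p, v)) / (np.linalg.norm(p) * np.linalg.norm(v))
            for v in vecs
        )
    return max_cos(other_signals) > max_cos(relevant_signals)

# Toy example with an orthogonal trigger direction.
mu = np.array([1.0, 0.0, 0.0])       # relevant signal
trigger = np.array([0.0, 1.0, 0.0])  # poisoned signal, orthogonal to mu
p_clean = 0.9 * mu + 0.1 * trigger
p_poisoned = 0.1 * mu + 0.9 * trigger

print(backdoor_sanity_check(p_clean, [mu], [trigger]))     # False
print(backdoor_sanity_check(p_poisoned, [mu], [trigger]))  # True
```

In practice the "other" signal directions are unknown, which is exactly why the rebuttal frames this only as a conjecture.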
Summary: This paper discusses the vulnerability of the attention module to backdoor attacks from an interesting perspective, and provides theoretical analysis and simulation verification. The paper proves that a single-layer attention module does memorize poisoned samples when certain assumptions are met. Claims And Evidence: The claims made in the paper are reasonable and verifiable. Methods And Evaluation Criteria: The evaluation method used (simulation experiments on synthetic data) is reasonable, but has some limitations. Theoretical Claims: The proof of Theorem 4.1 provided in this paper is reasonable. Experimental Designs Or Analyses: All the results of the experimental demonstration part of the paper are checked. Supplementary Material: I read all the supporting materials. Relation To Broader Scientific Literature: This paper presents its work in relation to the work discussing the security of LLMs. Essential References Not Discussed: N/A Other Strengths And Weaknesses: This paper makes a theoretical analysis of the fragility of attention. My main concerns are as follows: 1. Is the time step $\tau_0$ in Theorem 4.1 bounded? This time step needs to be large enough for the theorem to hold; on what variables does it depend? It should be analyzed whether $\tau > \tau_0$ is a condition that can actually be met. 2. The theory's correctness is not verified on real-world datasets; for example, experiments could be performed on the IMDB and Sentiment140 datasets. 3. In the introduction of poisoned data generation, the authors say that the intersection of $\mathcal{P}$ and $\mathcal{R}$ needs to be empty. What is the reason for this condition? Other Comments Or Suggestions: Some formulas in the paper end with punctuation while others do not; the use of punctuation should be unified. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's interest and recognition of our contributions and the novelty of our work. W1: The lower bound on the number of iterations $\tau_0$ depends on the proportion of relevant tokens $\zeta_R$, the proportion of irrelevant tokens $\zeta_P$, the number of tokens $T$, and the strength of the poisoned signal $\alpha$. $\tau\geq \tau_0$ is required to guarantee that, for any given $\epsilon>0$, the softmax probability of the relevant token is at least $1-\epsilon$ for the standard training sample, and the softmax probability of the poisoned token is at least $\frac{1}{|\mathcal{P}|}-\epsilon$ for the poisoned training sample. Such a condition can be met due to Lemma 5.1. The proof of the generalization guarantee requires $\epsilon\lesssim \min(\frac{T}{(\frac{1}{\zeta_R}-1)^4}, \frac{1}{(\frac{1}{\zeta_P}-1)^{\frac{4}{\alpha}-1}})$. A smaller $\epsilon$ leads to a larger $\tau_0$. W3: We assume that the intersection of $\mathcal{P}$ and $\mathcal{R}$ is empty to guarantee that adding poisoned tokens does not alter the semantic meaning of the original input. Intuitively, if a poison pattern modifies an image in a way that changes the original object, or if a modified word changes the semantic meaning of the sentence, the classifier should not be expected to predict the original label. We will include a remark on this in the final version. We will unify the punctuation of formulas as suggested by the reviewer.
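The softmax probabilities referred to in W1 can be illustrated with a toy example (our own sketch, not the paper's construction): with a tunable token p and token matrix X, the attention selects token t with probability softmax(X @ p)_t, and as training grows p along the relevant signal direction, that probability approaches 1 for a clean sample.

```python
import numpy as np

# Toy illustration (not the paper's construction) of the softmax
# token-selection probability discussed in W1.
def selection_probs(X, p):
    """Softmax attention-selection probabilities over the rows of X."""
    scores = X @ p
    scores = scores - scores.max()   # numerical stability
    e = np.exp(scores)
    return e / e.sum()

T = 8
X = np.zeros((T, 4))
X[0, 0] = 1.0        # relevant token carries the signal direction e_1
X[1:, 1] = 1.0       # remaining tokens carry an orthogonal, irrelevant signal
p = 5.0 * X[0]       # p after training has grown along the relevant signal

probs = selection_probs(X, p)
print(round(float(probs[0]), 2))  # 0.95: the relevant token dominates
```

Making the selection probability at least $1-\epsilon$ for smaller $\epsilon$ simply requires $p$ to grow further in this direction, which is why a smaller $\epsilon$ forces a larger iteration bound $\tau_0$.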
Summary: This paper presents a theoretical analysis of backdoor attacks targeting the token selection process in single-head self-attention transformers. The authors demonstrate that gradient descent can interpolate poisoned training data and establish conditions under which backdoor triggers dominate model predictions while preserving generalization on clean data. Empirical experiments on synthetic data validate the theoretical findings. Claims And Evidence: Yes Methods And Evaluation Criteria: - The experiments use synthetic data only and lack real-world benchmarks. - Fixed linear head $\nu$ simplifies analysis but limits practical relevance. Theoretical Claims: • **Orthogonality Assumption**: Signals $\mu_{\pm 1}$ and $\tilde{\mu}_{\pm 1}$ are orthogonal (Assumption 1). In practice, triggers (e.g., rare words) may correlate with natural tokens, weakening the theory’s applicability. • **Fixed Linear Head**: Training $\nu$ and $p$ jointly could alter dynamics; this is not addressed. Experimental Designs Or Analyses: While useful for controlled analysis, real-world relevance is unclear. For example, real triggers (e.g., "James Bond") may exhibit complex interactions with context. Supplementary Material: Yes. Appendix includes proofs and additional experiments. Relation To Broader Scientific Literature: This is the first theoretical study of backdoors in attention mechanisms (prior work focused on empirical attack designs). Essential References Not Discussed: Theoretical work on multi-head attention is omitted but relevant for extensions. Other Strengths And Weaknesses: **Strengths**: • Theoretically grounded conditions for attack success. • Clear exposition of attention manipulation dynamics. **Weaknesses**: • Narrow scope (single-head, synthetic data). • Assumptions may not generalize to real-world models. Other Comments Or Suggestions: None Questions For Authors: 1. 
How do your theoretical conditions translate to real-world triggers (e.g., phrases or syntax patterns) that may correlate with natural tokens? 2. Could joint optimization of $\nu$ and $p$ weaken or strengthen backdoor success? 3. Have you tested the approach on transformers pre-trained on large corpora? 4. Are the conclusions revealed in the paper instructive for defense? Discussing this will help increase the value of the work. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's interest and recognition of our contributions and the novelty of our work. Q1: Regarding the orthogonality assumption, such an assumption can be relaxed to the setting where the relevant signals $\mu_{\pm 1}$ and the poisoned signals $\tilde\mu_{\pm 1}$ are correlated, and our proof still holds with minor modifications. We discuss this on page 6, left column, lines 295–303. We can add more clarification in our final version. Q2: We conduct several experiments using the same synthetic dataset as described in the paper. We choose $|\mathcal{R}|=|\mathcal{P}|=1$ and vary $\beta$ across $0.1,0.2,0.3,0.4$ and $\alpha$ across $1,2,3,4$. We compare the poison accuracy between jointly optimizing $\nu$ and $\mathrm{p}$ versus optimizing only $\mathrm{p}$ under the same $\alpha$ and $\beta$. Our results show that while the final poison accuracy is similar in both cases after sufficient training iterations, joint optimization leads to a faster convergence rate. We hypothesize that jointly optimizing $\nu$ and $\mathrm{p}$ may strengthen the backdoor attack in more practical scenarios, such as when training on a more complex dataset or using a more sophisticated attention architecture. Understanding the effects of joint optimization remains an interesting research direction, which we highlight as a future avenue in our paper (page 6, right column, lines 322–324). Q3: We did not run experiments on transformers pre-trained on large corpora. Q4: This is an excellent question. We have some conjectures about possible defense mechanisms. Suppose that the learner has knowledge of the relevant tokens. In that case, a simple sanity check could be performed: after training the transformer by optimizing only the tunable token $\mathrm{p}$, one can examine whether $\mathrm{p}$ exhibits a strong correlation with signals that are not relevant tokens.
If such a correlation exists, it may indicate that the transformer has been compromised by a backdoor attack. Since optimizing $\mathrm{p}$ is equivalent to optimizing $\mathrm{W}$, our results suggest that backdoor triggers are injected into the key and query matrices. Therefore, another potential defense strategy in practice would be to apply dropout layers to the attention model. By randomly masking out poisoned neurons in the key or query matrices, dropout could introduce inconsistencies in the model’s output if the model has been poisoned. One can also adopt the idea from [1] to detect poisoned data within the training sample by identifying cases where a small proportion of the extracted features differ significantly from the rest. These are preliminary ideas that require thorough investigation, particularly in the context of practical transformer architectures, which is beyond the scope of this paper. [1] Tran, Brandon, Jerry Li, and Aleksander Madry. "Spectral signatures in backdoor attacks." NeurIPS 2018. Regarding missing references: we have discussed some of the theoretical work on multi-head attention in Section 2, page 2, right column, lines 72-78. We will include more references in our final version.
Summary: This paper uses extensive mathematical proofs to reveal how backdoor triggers affect model optimization. If the signal from the backdoor trigger is strong enough but not overly dominant, an attacker can successfully manipulate the model predictions. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, but I am a bit skeptical that assumptions A1-5 hold up with practical, large-scale data. Experimental Designs Or Analyses: Yes, though the experiments are simple and small, serving only to support the theoretical results. Supplementary Material: Yes, I scanned the proofs and additional experiments. Relation To Broader Scientific Literature: The process of backdoor attacks is investigated at both the mathematical and theoretical levels, contributing to the theory and interpretability of Transformer-based models. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: + Extensive mathematical proofs. + Reveals how backdoor triggers affect model optimization. + Reveals and defines the necessary conditions for a successful backdoor attack in a single-head self-attention transformer. Weaknesses: - The experiments are too simple; only a single-layer self-attention transformer is tested. - Whether the vector L2 norm is truly representative of trigger signal strength in deep learning needs to be further explored. - There are many assumptions; whether they apply to real large-scale transformers or large-scale datasets is an open question. Other Comments Or Suggestions: N/A Questions For Authors: I do not have further questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's interest and recognition of our contributions and the novelty of our work. We acknowledge that our current results rely on restrictive assumptions and that our experiments serve primarily as a proof-of-concept for our theoretical findings. We have discussed these limitations in the paper. As this is the first work to address how backdoor triggers influence the optimization of attention-based models, our goal is to provide valuable insights into this problem. We agree that refining and relaxing these assumptions is an important direction for future research.
ERICT: Enhancing Robustness by Identifying Concept Tokens in Zero-Shot Vision Language Models
Accept (poster)
Summary: This paper introduces ERICT, a novel method to enhance model robustness by identifying concept tokens and mitigating spurious correlations at the inference stage. ERICT operates in two key steps: (1) identifying invariant concept tokens using auxiliary prompts to generate a token-level mask and (2) applying the mask to the CLS token's attention weights in the vision encoder, ensuring the model focuses on relevant image regions. Claims And Evidence: No, there are several unclear aspects that need further clarification: (i) The paper observes that “tokens whose semantics align more closely with the prompt tend to have lower similarity scores,” but provides only limited examples to support this claim. Given that this observation appears counterintuitive, stronger empirical validation is necessary. (ii) Based on this observation, the method should ideally select the top-k lowest similarity token embeddings. However, it seems to do the opposite, which requires further explanation. (iii) The selection of auxiliary prompts appears to require domain knowledge, and ERICT-C is only evaluated using the top-3 version. Both aspects require a more in-depth investigation. Methods And Evaluation Criteria: Yes. The proposed method is pretty straightforward and should be easily used to enhance the zero-shot performance of VLMs. Theoretical Claims: Yes, there is a simple proof to support the assumption, which is correct. Experimental Designs Or Analyses: Yes. The main experiments should be fine. However, some ablation studies are missed. For example, how to select a better auxiliary prompt and ERICT-C with diverse top-k. Supplementary Material: Yes. Additional materials are given to better support the assumption, which is useful. Relation To Broader Scientific Literature: Compared with existing works, this work focuses on zero-shot learning without the help of extra LLMs and labeled data. 
However, expert knowledge to design the auxiliary prompt is needed, which might not be trivial for all tasks. The alternative method, ERICT-C, is not well evaluated. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: +The proposed method is simple and elegant. +The experiments on selected datasets show the effectiveness of the method. Weaknesses: - Several unclear aspects exist in this work as mentioned before. - Compared with baseline method, ROBOSHOT, the method missed experiments on other datasets, e.g., PACS, VLCS and CXR14. Other Comments Or Suggestions: No. Questions For Authors: My questions have been mainly mentioned above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

> **Q1: The paper observes that “tokens whose semantics align more closely with the prompt tend to have lower similarity scores,” but provides only limited examples to support this claim. Given that this observation appears counterintuitive, stronger empirical validation is necessary.**

**A1**: Thanks for the reviewer’s suggestion. We have provided additional image results (https://anonymous.4open.science/r/rebuttal-6618/more-image.pdf) from different datasets to further support this point. In each row, the leftmost image is the original image, followed by the score heatmaps and score maps for three different concepts in the image. These results further validate our findings.

> **Q2: Based on this observation, the method should ideally select the top-k lowest similarity token embeddings. However, it seems to do the opposite, which requires further explanation.**

**A2**: We thank the reviewer for pointing out this error. In Equation (6), $S_i$ should actually be $S_{i}^{'}$, with $S^{'} = Sort(-1 \times S)$. In the implementation, consistent with the finding in line 178, we use sparsification to select tokens with lower scores to obtain the final visual representation. We will correct this error in the revised version. Additionally, we would like to re-explain the top-k strategy, which is used in ERICT-C to select potential class prompts for aggregation as auxiliary embeddings. In this process, we sort the inference results from vanilla CLIP and select the class prompts that are most similar to the image.

> **Q3: The selection of auxiliary prompts appears to require domain knowledge, and ERICT-C is only evaluated using the top-3 version. Both aspects require a more in-depth investigation.**

**A3**: ERICT uses the superclass of the task category as prior knowledge to compute the auxiliary embedding. This prior does not require domain experts and is not complex. It is also used in the baseline method, PerceptionCLIP.
ERICT-C, on the other hand, does not rely on any prior knowledge. It computes embeddings for each class prompt in the classification task and aggregates the top-k embeddings with the highest similarity to obtain the auxiliary embedding. To demonstrate the effectiveness of the top-k strategy, we presented results with top-3 in the paper on multi-category datasets such as ImageNet-R. Additionally, to further illustrate this, we have provided more results for different values of k.

| ViT-L/14 | ImageNet-1k | ImageNet-A | ImageNet-R |
| :-: | :-: | :-: | :-: |
| K=2 | 72.32 | 69.95 | 87.30 |
| K=3 | 72.15 | 70.03 | 87.19 |
| K=5 | 72.03 | 69.76 | 86.99 |

> **Q4: Compared with existing works, this work focuses on zero-shot learning without the help of extra LLMs and labeled data. However, expert knowledge to design the auxiliary prompt is needed, which might not be trivial for all tasks. The alternative method, ERICT-C, is not well evaluated.**

**A4**: ERICT does require the introduction of certain priors to use auxiliary prompts, but these priors are similar to the simple assumption of "bird in photo" in the waterbirds dataset, which is also used in the widely discussed work PerceptionCLIP in this field. At the same time, ERICT-C does not rely on any priors, addressing this limitation and demonstrating superior performance. We have also provided additional experiments (e.g., Tables 1, 2, 3, 6 in our manuscript and the table in Q5) to validate the effectiveness of our method.

> **Q5: Compared with the baseline method ROBOSHOT, the method missed experiments on other datasets, e.g., PACS, VLCS and CXR14.**

**A5**: We sincerely clarify that our paper primarily focuses on the issue of spurious correlations, while PACS, VLCS, and CXR14 are domain generalization (DG) benchmarks, not spurious correlation problems. Following the reviewer's suggestion and the experimental setup of ROBOSHOT, we conducted a comprehensive comparison on PACS, VLCS, and CXR14.
The experimental results show that our method still demonstrates strong robustness on these DG benchmarks.

| ViT-L/14 | PACS WG | PACS AVG | PACS Gap | VLCS WG | VLCS AVG | VLCS Gap | CXR14 WG | CXR14 AVG | CXR14 Gap |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| ZSCLIP | 79.8 | 98.1 | 18.3 | 4.2 | 72.6 | 68.4 | 28.9 | 55.3 | 26.4 |
| ROBOSHOT | 83.9 | 98.1 | 14.2 | 12.6 | 71.1 | 58.5 | 41.6 | 56.2 | 14.6 |
| ERICT | 83.1 | 96.4 | 13.3 | 39.8 | 73.2 | 33.4 | 42.3 | 56.7 | 14.4 |
| ERICT-C | 83.3 | 97.2 | 13.9 | 44.3 | 78.9 | 34.6 | 45.6 | 57.2 | 11.6 |

The CXR14 results are obtained with BiomedCLIP.

---

Rebuttal Comment 1.1: Comment: Thanks for the authors' response, which has solved most of my concerns. I will increase my score and please add these responses to the revision.

---

Reply to Comment 1.1.1: Comment: We are glad to know that your concerns have been effectively addressed. We are very grateful for your constructive comments and questions, which help improve the clarity and quality of our paper. Thanks again!
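The ERICT-C top-k strategy described in this rebuttal (computing embeddings for each class prompt and aggregating the top-k most similar to the image) could be sketched roughly as follows; the function and variable names are our own illustrative assumptions, not the authors' code.

```python
import numpy as np

# Hedged sketch of the ERICT-C top-k auxiliary-embedding step; names and
# normalization choices are our illustrative assumptions, not ERICT's code.
def topk_auxiliary_embedding(image_emb, class_embs, k=3):
    """Aggregate the k class-prompt embeddings closest to the image."""
    img = image_emb / np.linalg.norm(image_emb)
    cls = class_embs / np.linalg.norm(class_embs, axis=1, keepdims=True)
    sims = cls @ img                      # cosine similarity per class prompt
    topk = np.argsort(sims)[-k:]          # indices of the top-k classes
    aux = class_embs[topk].mean(axis=0)   # aggregate into one embedding
    return aux / np.linalg.norm(aux)

# Toy usage: two prompts close to the image and one unrelated prompt.
image_emb = np.array([1.0, 0.0])
class_embs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
aux = topk_auxiliary_embedding(image_emb, class_embs, k=2)
print(aux[0] > 0.99)  # True: the auxiliary embedding tracks the image
```

Because the aggregation only uses the model's own zero-shot similarities, no superclass prior is needed, which is the limitation of plain ERICT that the rebuttal highlights.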
Summary: This paper presents ERICT, a novel method to enhance the robustness of vision-language models (VLMs) by mitigating spurious correlations at the inference stage. The approach identifies concept tokens to create a token-level mask, which is then applied to the vision encoder’s attention mechanism. Experimental results demonstrate that ERICT improves overall and worst-case performance, achieving state-of-the-art results. Claims And Evidence: In this paper, the authors argue that while fine-tuning methods are somewhat effective in mitigating the spurious correlation problem, they come with additional computational costs and rely heavily on the quality of prompts, without fully leveraging the vision modality. To address these limitations, the authors propose the ERICT method, which enhances robustness without requiring additional training, assistance from LLMs, or access to group labels. To validate the effectiveness of the proposed approach, the authors employ an error probability bound of the model. The paper lacks an in-depth discussion on the disentanglement of invariant and spurious factors in Section 5. While it is intuitively feasible to decompose image samples into invariant and spurious factors, a more detailed exploration of the disentanglement approach is necessary. Methods And Evaluation Criteria: The proposed method is generally reasonable and has been validated for effectiveness from both average performance and worst-case perspectives. Additionally, the authors have theoretically demonstrated that the method can reduce the model’s probabilistic error bound. However, there are still some aspects of the method that warrant further discussion. Theoretical Claims: The authors validate the effectiveness of the proposed binary mask approach by analyzing the model’s error probability bound. A thorough examination of the theoretical analysis in Section 5 and the corresponding appendix confirms its correctness. 
Experimental Designs Or Analyses: I have reviewed the authors’ experimental design. They conducted experiments on three widely used datasets with spurious correlations and compared their method against several state-of-the-art approaches from multiple perspectives. Additionally, visualization experiments were designed to further demonstrate the robustness of the proposed method. However, the experiments on different prompts, as shown in Table 5, were not presented. Supplementary Material: I have reviewed all the contents of the supplementary material. Relation To Broader Scientific Literature: The key contribution of this paper lies in proposing a novel method to mitigate the spurious correlation problem in vision-language models. By directly applying a concept token mask during the inference stage, this method enhances the model’s robustness and generalization ability, overcoming the limitations of existing methods that rely on large-scale language models and labeled data. Therefore, it represents a novel contribution to the literature. Essential References Not Discussed: To the best of my knowledge, the paper includes important related works. Other Strengths And Weaknesses: Strengths - The proposed ERICT method mitigates the spurious correlation problem without the need for fine-tuning or additional complex prompts, reducing computational costs and dependency on prompt quality. - Extensive experiments demonstrate that ERICT improves overall model performance, including that of the worst-performing group, and achieves new state-of-the-art results, highlighting its effectiveness in addressing the robustness challenges of vision-language models. Weaknesses - The authors introduce a threshold $l$ in Equation (5), but do not further analyze it in the experimental section.
It is recommended that the authors include two parts in the ablation study: (1) the impact of different threshold values on the experimental results; (2) the effect of different search threshold methods on performance. - In line 189, the authors mention that the auxiliary prompt is very important, but they do not further analyze why the superclass of the task category is better than the ground-truth category. Additionally, are there other forms of prompts worth further discussion and investigation? Other Comments Or Suggestions: The caption of Figure 2 should describe the specific functions of the two steps to help readers quickly understand the content of the figure. Questions For Authors: - The authors should further clarify the rationale behind using a binary mask to mitigate spurious correlation issues in VLMs. Specifically, they should elaborate on how the binary mask influences feature selection, information filtering, and the model’s final decision-making process to enhance the understanding and persuasiveness of the proposed approach. - Pre-trained VLMs typically exhibit strong generalization capabilities but can still be affected by spurious correlations. What are the underlying reasons for this? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal:

> **Q1: Section 5 lacks a detailed discussion on disentangling invariant and spurious factors, requiring further exploration.**

**A1**: Consistent with previous works [1-2], our theoretical analysis adopts the classic data assumption, which disentangles spurious datasets into invariant and spurious parts. In this paper, our method identifies concept tokens to effectively extract both invariant and spurious information.

[1] Robust Learning with Progressive Data Expansion Against Spurious Correlation. NeurIPS 2023.
[2] A Sober Look at the Robustness of CLIPs to Spurious Features. NeurIPS 2024.

> **Q2: The experiments on different prompts were not presented.**

**A2**: We thank the reviewer for pointing out this issue. The experiments in our paper used the commonly adopted prompt template "a photo of a {}". Additionally, we have conducted extensive experiments on CelebA to evaluate the performance of ERICT under different prompt templates, where `Prompt 80` represents the average of 80 templates designed by OpenAI for CLIP.

| ViT-L/14 | Prompt template | WG | AVG | Gap |
| :-: | :-: | :-: | :-: | :-: |
| ERICT | a photo of a {CLASS} | 85.00 | 86.43 | 1.43 |
| ERICT | Prompt 80 | 84.51 | 86.36 | 1.85 |
| ERICT | a {CLASS} | 84.89 | 86.80 | 1.91 |
| ERICT-C | a photo of a {CLASS} | 85.44 | 86.53 | 1.09 |
| ERICT-C | Prompt 80 | 84.48 | 86.16 | 1.69 |
| ERICT-C | a {CLASS} | 85.00 | 87.81 | 2.81 |

> **Q3: The authors introduce a threshold l in Eq (5), but don't analyze its impact or the effect of different threshold methods.**

**A3**: We further clarify the threshold `l`. In this paper, `l` is dynamically calculated based on the sparsification process, considering that the number of invariant information tokens varies across different images. Each image in the task has a different threshold depending on the variation in spurious information (see Equations (4) and (5)).
The key parameter influencing the sparsification process is the temperature coefficient, which we have already analyzed in the ablation study section (lines 385-420). To further demonstrate the effectiveness of this process, we provide comparative experiments on different threshold settings using the CelebA dataset.

|ViT-L/14|Threshold|WG|AVG|Gap|
|:-:|:-:|:-:|:-:|:-:|
|ERICT|sparse|**85.00**|86.43|**1.43**|
|ERICT|50%|76.67|88.01|11.34|
|ERICT|80%|79.44|88.82|9.38|
|ERICT-C|sparse|**85.44**|86.53|**1.09**|
|ERICT-C|50%|78.33|87.76|9.43|
|ERICT-C|80%|84.44|86.24|1.80|

> **Q4: The discussion about the auxiliary prompt. Other worthy prompt forms.** **A4**: We sincerely apologize for not clearly expressing this point in the submitted paper. What we actually want to convey is that the auxiliary embedding is very important as it guides model inference. Specifically, ERICT utilizes superclass information, while ERICT-C employs a Top-K strategy to aggregate class prompt embeddings. Since they obtain auxiliary embeddings in different ways, their performance varies across different tasks. We do not want to emphasize that the "superclass of the task category is better than the ground-truth category", but to point out the limitations of the auxiliary prompt (which requires certain priors), and then introduce ERICT-C. We will clarify this issue in the final version. > **Q5: The caption of Figure 2.** **A5**: Thanks for the reviewer's suggestion. Figure 2 mainly includes two steps: In Step 1, we construct an auxiliary embedding $z_t^a$ to identify tokens containing invariant information, obtaining a token-level mask. In Step 2, we apply this token-level mask within the attention mechanism of the vision encoder, making tokens containing spurious information "invisible" to the [CLS] token.
> **Q6: The authors should clarify how the binary mask mitigates spurious correlations, particularly its impact on feature selection, information filtering, and decision-making.** **A6**: Thanks for the interesting question raised by the reviewer. According to our findings (line 178), the non-CLS tokens in the visual encoder capture local information. The original visual representation is obtained through the attention interaction mechanism between the CLS token and other tokens. By using a binary mask, we "shield" the tokens that focus on spurious information from the CLS token, so that the final representation no longer attends to spurious-related information (as demonstrated clearly in our T-SNE visualization). This prevents the model from making decisions based on spurious information. > **Q7: What are the underlying reasons for spurious correlation in VLMs?** **A7**: Thanks for your interesting question. In our view, although pre-trained VLMs use large-scale image-text pairs, they still fundamentally follow a supervised learning paradigm, which cannot fully avoid the influence of fundamental learning issues, a point supported by related work [3]. Unfortunately, real-world data are full of statistical biases. [3] Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning. ICML 2023 --- Rebuttal Comment 1.1: Comment: Decided to raise my score according to the thorough rebuttal, which solves most of my questions. --- Reply to Comment 1.1.1: Comment: We are pleased to learn that your questions have been effectively addressed. Thank you sincerely for your constructive feedback, which has significantly improved the clarity and quality of our paper. Thanks again!
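As context for the masking mechanism described in A5/A6 above: making spurious tokens "invisible" to the [CLS] token reduces to setting their attention scores to negative infinity before the softmax, so they receive zero attention weight. A minimal numpy sketch, not the authors' implementation; shapes and names are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cls_attention(q_cls, keys, values, token_mask=None):
    """One attention step for the [CLS] query.

    token_mask: boolean array over image tokens; False entries (the
    "spurious" tokens) get a score of -inf, so after the softmax they
    contribute nothing to the pooled [CLS] representation.
    Returns the pooled representation and the attention weights.
    """
    scores = keys @ q_cls / np.sqrt(q_cls.shape[-1])
    if token_mask is not None:
        scores = np.where(token_mask, scores, -np.inf)
    weights = softmax(scores)
    return weights @ values, weights

rng = np.random.default_rng(0)
q = rng.normal(size=4)
K = rng.normal(size=(6, 4))   # 6 image tokens, dim 4
V = rng.normal(size=(6, 4))
mask = np.array([True, True, False, True, False, True])  # hide tokens 2 and 4
out, w = cls_attention(q, K, V, mask)
```

The masked tokens end up with exactly zero attention weight, so the final representation cannot attend to them, which is the "shielding" effect the rebuttal describes.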
Summary: The paper introduces ERICT and ERICT-C, which mitigate spurious correlations in vision-language models (VLMs) during zero-shot inference. These approaches aim to enhance model robustness by identifying invariant features within image tokens and focusing the model's attention on relevant regions through token masking. The methods are theoretically grounded and demonstrate significant improvements in performance. ## update after rebuttal The rebuttal resolves my concerns, I am inclined to accept this paper. Claims And Evidence: The authors' claims regarding improved robustness and reduced spurious correlations are supported by experimental results. However, the distinction between ERICT and ERICT-C needs clarification, and the underperformance of ERICT-C in some scenarios requires further investigation. Methods And Evaluation Criteria: The proposed methods aim to address spurious correlations by leveraging invariant features in image tokens. However, the distinction between ERICT and ERICT-C is not clearly articulated. The evaluation criteria (WG, AVG, Gap) are appropriate for assessing robustness. Theoretical Claims: The error probability bound theorem supports the effectiveness of the approach. Experimental Designs Or Analyses: The experimental designs are robust, with evaluations on multiple datasets and backbones. However, the choice of the top-3 strategy in ERICT-C lacks justification, and the limitations mentioned in the text are not adequately addressed in subsequent sections. Supplementary Material: I have reviewed all parts of supplementary material. Relation To Broader Scientific Literature: The work contributes to the field of robust vision-language models by proposing a novel approach to mitigate spurious correlations without requiring additional training data or group labels.
Essential References Not Discussed: The paper could benefit from experiments on more datasets such as MetaShift and Living-17, as well as referring to more debiasing algorithms such as [1][2] [1] Debiased Fine-Tuning for Vision-language Models by Prompt Regularization [2] CLIPood: Generalizing CLIP to Out-of-Distributions Other Strengths And Weaknesses: Strengths: • The approach is efficient as it does not require additional training or group labels. • The paper provides visualizations and ablation studies that help understand the method's effectiveness. Weaknesses: • The distinction between ERICT and ERICT-C is not clearly explained. • ERICT-C underperforms ERICT in some cases as shown in Tables 1, 2, and 3, suggesting limitations in the auxiliary prompt strategy. • The choice of the top-3 strategy in ERICT-C lacks experimental justification. • The paper does not compare with the ERM [1], AFR [2], and CFR [3] algorithms. [1] Principles of risk minimization for learning theory. [2] Simple and fast group robustness by automatic feature reweighting. [3] Calibrating multimodal representations: A pursuit of group robustness without annotations. Other Comments Or Suggestions: The paper presents a novel and effective approach to enhancing robustness in VLMs. To strengthen the work, the authors should: 1. Clarify the distinction between ERICT and ERICT-C. 2. Investigate why ERICT-C underperforms in certain scenarios. 3. Provide justification for the top-3 strategy choice. 4. Address the limitations mentioned in the text. 5. Include results on additional datasets and compare with more algorithms. Questions For Authors: see weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
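As background for the WG/AVG/Gap columns used throughout this thread: worst-group accuracy (WG) is the minimum per-group accuracy, AVG is the overall accuracy, and Gap is their difference. A minimal sketch (group names and numbers are illustrative; in the benchmark papers AVG is usually sample-weighted rather than a plain group mean, which this sketch simplifies):

```python
def group_robustness_metrics(group_accuracies):
    """Worst-group (WG), average (AVG), and their gap, all in percent.

    group_accuracies: dict mapping group name -> accuracy.
    Note: AVG here is the unweighted mean over groups for simplicity.
    """
    accs = list(group_accuracies.values())
    wg = min(accs)
    avg = sum(accs) / len(accs)
    return {"WG": wg, "AVG": avg, "Gap": avg - wg}

# Waterbirds-style class/background groups (values illustrative):
m = group_robustness_metrics({
    "landbird/land": 95.0, "landbird/water": 80.0,
    "waterbird/land": 76.0, "waterbird/water": 93.0,
})
```

A small Gap with a high WG (as in the ERICT rows of the tables above) indicates the model is not trading worst-group performance for average performance.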
Rebuttal 1: Rebuttal: > **Q1: However, the distinction between ERICT and ERICT-C needs clarification** **A1**: The key difference between ERICT and ERICT-C lies in the way they obtain the auxiliary embeddings. As shown in Step 1 of Figure 2, ERICT uses auxiliary prompts (e.g., "bird in photo") as input to the text encoder to obtain auxiliary embeddings. In contrast, ERICT-C constructs prompts using class names (e.g., "a photo of a landbird") as input to the text encoder. Then, ERICT-C ranks the embeddings output by the class names based on similarity and aggregates the top-K most similar embeddings to obtain the auxiliary embedding, where K is a hyperparameter less than or equal to the maximum number of categories in the dataset. > **Q2: The underperformance of ERICT-C in some scenarios requires further investigation.** **A2**: ERICT and ERICT-C each have their strengths and weaknesses under different settings, primarily due to their use of two different strategies (as shown in Q1) for obtaining auxiliary embeddings. The difference in how these embeddings are obtained leads to varying performance in the task. > **Q3: The choice of the top-3 strategy in ERICT-C lacks justification and the limitations mentioned in the text are not adequately addressed in subsequent sections.** **A3**: We apologize for any confusion caused to the reviewer. We have clarified the Top-K strategy. In the experiments, the value of k should be less than or equal to the number of categories in the dataset. For example, for the binary classification task on the Waterbirds dataset, K=2. Similarly, we have provided experiments with different values of K on three distorted versions of the Imagenet dataset, as shown below. The limitation mentioned in the paper mainly refers to the suboptimality of the auxiliary prompt. In fact, we discuss this limitation in the "Impact of the auxiliary prompt" section of the ablation study (lines 400-412). 
How to obtain an auxiliary embedding with the best guiding significance is indeed a meaningful topic for further exploration. However, the main contribution of this paper lies in enhancing the model's robustness during the inference phase without relying on training, the assistance of LLMs, or group labels. We hope to address this limitation in future work.

|ViT-L/14|Imagenet-1k|Imagenet-A|Imagenet-R|
|:-:|:-:|:-:|:-:|
|K=2|72.32|69.95|87.30|
|K=3|72.15|70.03|87.19|
|K=5|72.03|69.76|86.99|

> **Q4: The paper could benefit from experiments on more data such as MetaShift, and Living-17 datasets, as well as referring to more debiasing algorithms.** **A4**: As suggested by the reviewer, we add extensive experiments on the MetaShift and Living-17 datasets. The experimental results are shown below. These experiments demonstrate the robustness of our method across different datasets. Please note that the code for "Debiased Fine-Tuning for Vision-Language Models by Prompt Regularization" mentioned has not been open-sourced. The experiments for CLIPood and additional baseline results are presented in Q5.

|ViT-B/16|MetaShift WG|MetaShift Avg|MetaShift Gap|Living-17 WG|Living-17 Avg|Living-17 Gap|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|ZSCLIP|87.56|94.96|7.4|31.5|93.7|62.2|
|ERICT|**91.89**|95.38|**3.49**|38.2|94.1|55.9|
|ERICT-C|90.27|95.33|5.06|**38.5**|94.2|**55.7**|

> **Q5: The paper does not compare with ERM, AFR and CFR algorithms.** **A5**: We thank the reviewer for pointing out this issue. We have added detailed experiments comparing our method with extensive baseline methods. It is worth noting that although these baseline methods are training-based, our approach is training-free. However, in certain settings, our method outperforms these training-based methods, further demonstrating the robustness of our approach.
|ViT-L/14|Training|Waterbirds WG|Waterbirds Avg|Waterbirds Gap|CelebA WG|CelebA Avg|CelebA Gap|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|ERM|Yes|57.9|97.6|39.7|30.4|94.6|64.2|
|AFR|Yes|73.4|88.2|14.8|70.0|85.2|15.2|
|CFR|Yes|88.2|96.8|8.6|84.8|87.8|3.0|
|CLIPood|Yes|79.6|94.1|14.5|31.1|95.3|64.2|
|ERICT|No|61.2|74.1|12.9|85.0|88.6|3.6|
|ERICT-C|No|64.8|79.4|14.6|83.3|89.0|5.7|
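The Top-K aggregation that distinguishes ERICT-C (A1/A3 above) can be sketched as ranking class prompt embeddings by similarity to the image embedding and averaging the K most similar ones. A minimal numpy sketch under the assumptions of cosine similarity and an unweighted mean; the authors' aggregation may differ in detail:

```python
import numpy as np

def topk_auxiliary_embedding(image_emb, class_embs, k):
    """Aggregate the k class-prompt embeddings most similar to the image.

    All embeddings are L2-normalized first, so the dot product equals
    cosine similarity. k should be <= the number of classes.
    """
    image_emb = image_emb / np.linalg.norm(image_emb)
    class_embs = class_embs / np.linalg.norm(class_embs, axis=1, keepdims=True)
    sims = class_embs @ image_emb                # cosine similarities
    top = np.argsort(sims)[::-1][:k]             # indices of the top-k classes
    aux = class_embs[top].mean(axis=0)           # aggregate the winners
    return aux / np.linalg.norm(aux)

rng = np.random.default_rng(1)
img = rng.normal(size=8)                         # toy image embedding
classes = rng.normal(size=(5, 8))                # e.g. 5 class prompt embeddings
aux = topk_auxiliary_embedding(img, classes, k=2)
```

With k=1 this degenerates to picking the single most similar class prompt, which matches the rebuttal's note that k is bounded by the number of categories (e.g. K=2 for binary Waterbirds).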
Summary: This paper proposes ERICT, a zero-shot method to improve robustness in vision-language models by identifying “concept tokens” that represent invariant image features. ERICT uses auxiliary prompts to generate masks applied to attention weights, aiming to reduce spurious correlations. The authors evaluate ERICT on standard benchmarks (Waterbirds, CelebA, Urbancars), claiming substantial improvements, especially on worst-group accuracy. Claims And Evidence: The claims made in the paper are largely supported by thorough experimental results on multiple benchmarks (Waterbirds, CelebA, Urbancars), showing clear improvements in worst-group accuracy. Visualizations provided further illustrate that ERICT effectively shifts attention away from spurious features. Methods And Evaluation Criteria: Yes, the proposed method directly addresses the spurious correlation issue in zero-shot VLMs, and the benchmarks used (Waterbirds, CelebA, Urbancars) are appropriate, clearly capturing improvements in robustness and worst-group accuracy. Theoretical Claims: Theorem 5.2 establishes the theoretical error bound after applying the proposed mask. The theorem itself is well-formulated, and the intuition (reducing spurious correlations lowers the error probability) is reasonable. Experimental Designs Or Analyses: The experimental design is sound and clearly demonstrates improved robustness. Supplementary Material: I didn't fully check the proof. Relation To Broader Scientific Literature: The paper builds on existing literature addressing spurious correlations in VLMs, offering a zero-shot solution distinct from previous fine-tuning or LLM-based methods. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive feedback on our work. We **sincerely appreciate your recognition of our contributions and the significance of the research problems we address**. Your support further strengthens our confidence in the proposed approach. If you have any additional questions or suggestions, we would be happy to discuss them.
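The ZSCLIP baseline and the prompt-template experiments discussed across this thread rest on standard CLIP-style zero-shot classification: each class name is rendered into a prompt (e.g. "a photo of a {CLASS}"), embedded by the text encoder, and the predicted class is the one whose text embedding has the highest cosine similarity to the image embedding. A minimal sketch over precomputed toy embeddings, not real CLIP features:

```python
import numpy as np

def zero_shot_predict(image_embs, text_embs):
    """Predict, for each image embedding, the class whose prompt text
    embedding is most similar; all rows are L2-normalized first so the
    dot product is cosine similarity."""
    image_embs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = image_embs @ text_embs.T
    return logits.argmax(axis=1)

# Toy check: images lying near their own class prototype are recovered.
protos = np.eye(3)                               # 3 stand-in "text" embeddings
rng = np.random.default_rng(2)
imgs = protos[[0, 2, 1]] + 0.05 * rng.normal(size=(3, 3))
preds = zero_shot_predict(imgs, protos)          # expect [0, 2, 1]
```

Averaging the text embeddings of many templates per class (the ``Prompt 80`` setting in the rebuttal above) slots in by replacing each row of `text_embs` with the normalized mean over that class's templates.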
SENSEI: Semantic Exploration Guided by Foundation Models to Learn Versatile World Models
Accept (poster)
Summary: This paper introduces SENSEI (SEmaNtically Sensible ExploratIon), an LM-based framework for guiding the exploration phase of RL agents towards "interesting" states. It trains a reward model based on VLM-generated rankings of interestingness on prior exploration data. The final exploration reward is the sum of the aforementioned interestingness score and an uncertainty score computed by ensemble disagreement (taken from Plan2Explore [1]). It dynamically reweights these two scores to first go towards an interesting state, then explore from there. Experiments are performed in two domains: MiniHack (navigating dungeons and finding keys) and Robodesk (simulated robot arm interacts with objects on a desk). Results show that this method, compared to baselines: 1. succeeds in directing the agent towards semantically meaningful states during exploration 2. solves tasks faster Claims And Evidence: Examining the evidence presented for the main claims made in the submission (1 & 2 in summary): 1. SENSEI guides agents towards "interesting" states: The evidence shown for this claim includes a small number of qualitative samples (Figure 4), and a quantitative analysis showing evidence that agents generally interact more with objects under SENSEI than under P2X. The precise definition of "interestingness" is worth some discussion in the paper: For example, why are object interactions considered "interesting"? A breakdown of other types of interactions deemed interesting by the reward model could be useful: e.g. seeing information-dense states. Are different types of interactions on the same object considered more or less interesting (e.g. pressing a button vs. moving it)? Furthermore, can interestingness be adjusted based on existing information (e.g. a new object is more interesting than an object seen before)? This claim can benefit from being slightly more precise -- or including a more rigorous analysis of what types of states have high "interesting" reward. 2.
SENSEI solves tasks faster: This seems fairly well-substantiated from the end-to-end experiments in Figure 4. SENSEI outperforms Plan2Explore, DreamerV3, and PPO, all of which do not use an interestingness-based reward. It could be clarified that P2E is basically exactly the same as SENSEI but without the interestingness reward, representing a minimal ablation of the contributed portion. Methods And Evaluation Criteria: SENSEI relies on data that is pre-collected from an exploration policy, from which an "interestingness" reward function is distilled. Given that the policy relies on pre-collected data, it seems unfair to compare against baselines that are purely from-scratch and not bootstrapped with any initial exploration data: perhaps a fairer comparison (and a more realistic setting) would be to look at the transferability of the distilled interestingness model across environments (even though this itself would mean SENSEI has access to some extra domain-specific information that other methods don't), or start P2X with the world model it's built up with the existing exploration data, and have it continue exploring from there. Why not just use the VLM online to give interestingness rewards? The choice of evaluation datasets makes sense to me. The paper shows improvements on two very different domains: one with navigating a dungeon and one with interacting with objects on a desk. Perhaps adding a real-world domain would also be useful here (but I don't think it's essential for this work). Theoretical Claims: No theoretical claims in paper Experimental Designs Or Analyses: The experimental designs generally seem sound. Experiments were conducted on two different (simulated) environments.
Comparisons against Plan2Explore (the method upon which this paper is based -- which is missing the interestingness score) and Random Network Distillation (a model-free method trained with PPO), demonstrate 1) evidence of increased semantically-meaningful object interactions, 2) empirical improvements on downstream tasks, when the interestingness score is incorporated into the model. One question is how much these environments may already be represented in GPT-4o's training data, such that it may already have a sense of what goals exist in these environments, and thus output "interestingness" scores based on proximity to the goal. There is also a question of how large and diverse these environments are, whereby the goal state may be the only "interesting" state there is. Some additional analyses to consider: 1. Is downstream performance improvement predicated on the goal state (or a subgoal state) being one of (or proximate to) the identified interesting states? What happens in settings where goals may not necessarily be aligned with "interestingness"? 2. To what extent is GPT-4o necessary for ranking "interestingness" and not just heuristics like total information content of the state? 3. To what extent is GPT-4o's rating of interestingness generalizable to diverse (potentially real-world) environments and goals? Supplementary Material: Yes, I looked through the Appendix. Relation To Broader Scientific Literature: Using LLMs to provide semantic signal for exploration has been studied extensively over the past few years, including by certain methods cited by the authors themselves (e.g. ELLM, OMNI, LAMP). It's unclear to me why this method would be preferred over others. For example, ELLM does not require exploration data to be gathered ahead of time, so might even be preferable in certain cases.
(On the other hand, if the distilled model is general across environments, I can see a cost-based argument for not needing to re-query GPT-4o each time a new environment is encountered -- though I am not certain that is the case for this method.) A comparison to these methods should be included as baseline(s), or at the very least a discussion of when/why SENSEI would be preferred over these techniques. Essential References Not Discussed: None Other Strengths And Weaknesses: Overall, the paper is quite comprehensive with its experiments/analyses and has done a good job making a case for the necessity of an interestingness score. The two phases of exploration seem like a very interesting concept. My main concerns have been listed above: specifically, I think the two critical concerns I have with this paper are: 1. Novelty with respect to prior literature: there has been lots of prior work on LLM/VLM-guided exploration, and it is unclear why SENSEI is advantageous over these prior methods. It would be great if the authors could include these baselines, or at least a discussion of when/why SENSEI is better than them. 2. The method assumes access to a *pre-collected* dataset of exploration data on the specific environment that the agent is trying to explore. It is unclear to me when this setting would arise in the real world, and why this data isn't just used to initialize the agent, who can then choose to continue exploring from there (if it so desires). Moreover, none of the baselines assume access to this pre-collected dataset, so the evaluation isn't quite comparable. Other Comments Or Suggestions: None Questions For Authors:
1. It is unclear to me why the distillation step is even necessary: why not just use the VLM directly online? 2. It is unclear whether the VLM is even necessary, or what exactly it is doing -- it would be insightful to compare to heuristic baselines like information content of the image, number of objects in the state, etc. 3. What kinds of states are "interesting" and how is this distinct from (or perhaps the same as) generating human-like subgoals from earlier papers? Please also see questions from above. More details about baselines Plan2Explore and RND would be useful in the main paper. Code Of Conduct: Affirmed. Overall Recommendation: 3
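As background for the dynamic reweighting the summary describes (first go towards an interesting state, then explore from there), the combined exploration reward can be sketched as a weighted sum of a semantic term and a disagreement term whose weights switch depending on how interesting the current state is. The quantile-based switching rule below is an illustrative stand-in, not SENSEI's exact mechanism:

```python
import numpy as np

def exploration_reward(r_sem, r_dis, sem_history, quantile=0.9,
                       w_go=(1.0, 0.1), w_explore=(0.1, 1.0)):
    """Combine a semantic reward (r_sem) and a disagreement reward (r_dis).

    While the current semantic reward is below a running quantile of past
    semantic rewards, upweight interestingness ("go to interesting states");
    once an interesting region is reached, upweight disagreement ("explore
    around it"). The quantile switch is an assumed, simplified rule.
    """
    threshold = np.quantile(sem_history, quantile)
    a, b = w_explore if r_sem >= threshold else w_go
    return a * r_sem + b * r_dis

history = [0.1, 0.2, 0.15, 0.3, 0.25]
low = exploration_reward(0.1, 0.5, history)    # far from interesting states
high = exploration_reward(0.4, 0.5, history)   # inside an interesting region
```

In the "far" regime the semantic term dominates the combined reward; in the "inside" regime the disagreement (information-gain) term takes over, mirroring the two-phase behavior the reviewer calls out as an interesting concept.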
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive review. We appreciate that you found our paper comprehensive, the introduced concepts interesting, and our experimental design sound. ### VLM-based exploration baselines Existing VLM-based exploration methods (Omni, Motif, ELLM) rely on assumptions that limit their applicability to our setting, such as requiring expert demonstrations (Motif), language-grounded environments (Omni, Motif, ELLM), or high-level actions (Omni, ELLM). In contrast, SENSEI operates in a more general RL setup without these constraints. Motif also uses an initial offline expert dataset. Omni requires access to the full task list and corresponding rewards. Motif and ELLM both rely on event-counting to bias novelty. In contrast, we show how novelty and semantic interestingness can be seamlessly combined with minimal assumptions, enabling deployment in low-level control environments while preserving scalability. Additionally, none of these methods use model-based RL, which is central to SENSEI’s sample-efficient exploration. For fairness, we adapt Motif into our VLM-Motif baseline. Our results show SENSEI outperforms VLM-Motif, highlighting the benefits of dynamic reweighting and the importance of incorporating information gain within a model-based approach. We will add this discussion to the final version. ### Offline VLM usage For details on why offline VLM usage is necessary/preferred and on our distillation procedure, please see our answer to Reviewer 7mHN (Section: Approximating human interestingness). ### Diversity of Envs and corresponding interestingness > There is also a question of how large and diverse these environments are, whereby the goal state may be the only "interesting" state there is. We now include a diverse environment: Pokémon Red. It requires navigating a large map and mastering battle mechanics to catch/fight wild Pokémon and defeat Trainers. 
Here, interestingness is multi-faceted (map progression, Pokémon collection, etc.), and SENSEI shows consistent gains. Please see our response to Reviewer 7mHN and our rebuttal webpage: [https://sites.google.com/view/sensei-icml-rebuttal](https://sites.google.com/view/sensei-icml-rebuttal). ### Pre-Collected Dataset SENSEI uses pre-collected data, unlike Plan2Explore, but our dataset size is chosen to ensure coverage of interesting states—not to be minimal. Crucially, SENSEI remains effective with significantly less data. In a new ablation, we show that in MiniHack-KeyRoom, using just ¼ of the dataset (25k samples) yields similar interaction statistics (Webpage Figure E5). In real-world settings, various offline robotics datasets (e.g., OpenX-Embodiment, TriFinger RL) contain expert and non-expert control data, making SENSEI directly applicable. Alternatively, SENSEI could begin with exploration purely via $r^{dis}$, then distill the reward from collected data. We appreciate this suggestion and will include the discussion in the final version. It aligns well with our generational SENSEI perspective, further supported by the Pokémon experiment (see response to Reviewer 9xtC). ### Alignment Between Goals and "Interestingness" If a task’s goals or subgoals don’t align with "interestingness", SENSEI relies on epistemic uncertainty ($r^{dis}$) to discover them. In practice, this has not been limiting. For instance, in MiniHack-KeyRoom, the goal is to reach a staircase (Fig. 4, top, img 5), which GPT-4 does not consider especially interesting. Standing near the exit yields lower semantic rewards (point 5) than standing near the key and door (points 3&4). Still, SENSEI reliably solves the task and collects more rewards than Plan2Explore by reaching the staircase. ### Interestingness vs Information content Interestingness captures semantic relationships between entities, which may not be reflected in simple information content heuristics. 
For example, in MiniHack-KeyChest, compare: (1) “Agent is next to a locked chest with a key in the inventory” and (2) “Agent is next to a locked chest with no key”. Both have similar information content, but (1) is more interesting due to the key-chest relation. > Can interestingness be adjusted based on existing information (e.g., a new object is more interesting than a familiar one)? This is precisely what SENSEI achieves by combining interestingness with disagreement (information gain). We aim to explore regions that are both interesting and novel. > What kinds of states are "interesting"? How is this distinct from human-like subgoals in earlier work? We assume VLMs incorporate priors aligned with human preferences. Thus, human-like subgoals would likely be favored by VLMs over random states. However, unlike prior goal-based work, we don’t assume access to a subgoal set. Instead, we use Motif to compute a continuous reward, approximating interestingness. We hope this response addresses the reviewer’s concerns and we would be happy to clarify further if needed.
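The Motif-style distillation the rebuttal refers to (turning VLM pairwise rankings into a continuous reward) is typically trained with a Bradley-Terry preference loss. A minimal numpy sketch with a linear reward model; the features, preference data, and optimizer settings are all illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_preference_reward(prefs, feats, lr=0.1, epochs=200, seed=0):
    """Fit r(s) = w @ phi(s) from pairwise preferences.

    prefs: list of (i, j) index pairs meaning "observation i was ranked
    more interesting than observation j" (e.g. by a VLM annotator).
    Maximizes the Bradley-Terry log-likelihood by gradient ascent:
    d/dw log sigmoid(r_i - r_j) = (1 - sigmoid(r_i - r_j)) (phi_i - phi_j).
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=feats.shape[1])
    for _ in range(epochs):
        grad = np.zeros_like(w)
        for i, j in prefs:
            p_ij = sigmoid(feats[i] @ w - feats[j] @ w)  # P(i preferred)
            grad += (1.0 - p_ij) * (feats[i] - feats[j])
        w += lr * grad / len(prefs)
    return w

# Toy data: feature 0 encodes "object interaction"; prefer states that have it.
feats = np.array([[1.0, 0.2], [0.0, 0.2], [1.0, 0.8], [0.0, 0.9]])
prefs = [(0, 1), (2, 1), (2, 3), (0, 3)]
w = train_preference_reward(prefs, feats)
rewards = feats @ w
```

The learned scalar reward can then be queried for arbitrary states at training time, which is why the VLM only needs to annotate the offline dataset once rather than being called online.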
Summary: The paper introduces a novel framework for intrinsic motivation in reinforcement learning (RL) agents, enabling them to explore environments meaningfully without relying on task-specific rewards. The authors propose SEmaNtically Sensible ExploratIon (SENSEI), which leverages Vision Language Models (VLMs) to guide exploration by distilling a reward signal reflecting observations' semantic interestingness. The method demonstrates the ability to discover meaningful behaviors in complex environments and promises to accelerate downstream task learning. Claims And Evidence: The claims in the SENSEI paper are supported mainly by clear and convincing evidence, though some aspects could benefit from further validation or broader testing. The paper suggests SENSEI could scale to real-world applications, but experiments are limited to simulated environments (MiniHack, Robodesk). Photorealism and occlusion handling are acknowledged limitations. The paper lacks comparisons to recent exploration methods beyond P2X (e.g., Curiosity-Driven Exploration via Disagreement, Skew-Fit). This limits the strength of claims about SENSEI’s superiority. Methods And Evaluation Criteria: The SENSEI paper's proposed methods and evaluation criteria are well-aligned with the problem of semantically guided exploration in reinforcement learning (RL). There are some potential improvements for this paper: (1) Including more recent exploration methods (e.g., Curiosity-Driven Exploration via Disagreement, Skew-Fit) would strengthen the evaluation by providing a broader context for SENSEI’s performance. (2) Evaluating SENSEI on a wider range of tasks (e.g., navigation, manipulation, and social interaction) could further demonstrate its versatility. Theoretical Claims: There is no proof in the paper. Experimental Designs Or Analyses: Yes. The experimental designs and analyses in the SENSEI paper are mostly sound, with clear motivations and appropriate metrics. 
However, the robustness of VLM annotations, the quality of the world model, and the diversity of tasks could be more rigorously tested. Addressing these issues would strengthen the paper’s claims and provide a more comprehensive evaluation of the proposed methods. Supplementary Material: Yes, all parts. Relation To Broader Scientific Literature: The SENSEI paper's key contributions are closely related to the broader scientific literature on reinforcement learning (RL), intrinsic motivation, and the use of foundation models for exploration. Intrinsic Motivation: SENSEI builds on intrinsic motivation in RL, where agents explore environments without external rewards. Prior methods (e.g., curiosity-driven exploration, prediction error, Bayesian surprise) focus on low-level interactions, while SENSEI leverages VLMs to guide exploration toward semantically meaningful behaviors (similar to children’s play). Model-Based RL: SENSEI uses a Recurrent State Space Model (RSSM) to predict semantic rewards, aligning with prior work on world models (Ha & Schmidhuber, 2018; Hafner et al., 2023). The key innovation is predicting semantic rewards directly from latent states, enabling the agent to evaluate hypothetical states. Essential References Not Discussed: In most real-world scenarios, there are potential complex disturbances that can affect the model's learning and judgment. Task scenarios like Noisy TV [1,2,3] may lead to excessive exploration of novel areas, resulting in exploration failure. **References** [1] Schmidhuber, J. (1991). Adaptive confidence and adaptive curiosity. Technical Report FKI-149-91 (revised), Technische Universität München, Institut für Informatik. [2] Mavor-Parker, A., Young, K., Barry, C., & Griffin, L. (2022, June). How to stay curious while avoiding noisy tvs using aleatoric uncertainty estimation. In International Conference on Machine Learning (pp. 15220-15240). PMLR. [3] Huang, K., Wan, S., Shao, M., Sun, H.
H., Gan, L., Feng, S., & Zhan, D. C. (2025). Leveraging Separated World Model for Exploration in Visually Distracted Environments. Advances in Neural Information Processing Systems, 37, 82350-82374. Other Strengths And Weaknesses: **Strengths** 1. The paper is well-organized, clearly explaining the method, experiments, and results. 2. The experiments are thorough, with ablation studies (e.g., Figure 17 for fixed reward weighting, Figure 18 for hyperparameter sensitivity) and comparisons to strong baselines (e.g., Plan2Explore, RND). The results are presented with standard errors and multiple seeds, ensuring reproducibility. **Weaknesses** 1. The paper lacks comparisons to state-of-the-art exploration methods like Curiosity-Driven Exploration via Disagreement or Skew-Fit. Including these would provide a more comprehensive evaluation of SENSEI’s performance. 2. The evaluation focuses on two MiniHack tasks and a subset of Robodesk tasks. Testing SENSEI on a broader range of tasks (e.g., navigation, social interaction) would better demonstrate its versatility. 3. This paper lacks a detailed methodological comparison with Plan2Explore. However, if viewed from another perspective, it resembles an exploration of the intrinsic rewards of disagreement based on the world model's ensemble dynamics in Plan2Explore, adding a VLMs reward related to the task's interestingness, and then considering how to balance these two rewards. Of course, engineering improvements are also important, especially if the results are outstanding. Other Comments Or Suggestions: See questions. Questions For Authors: 1. In Figure 8, why do you choose observations from view (b) apart from the reasons you mentioned in the paper? Have you conducted any ablation studies on this design? 2. From the provided prompts, the rewards given by the large model are based on interestingness. However, the definition of interestingness is a very subjective concept. 
Is the exploration of all tasks considered interesting? 3. In Figure 7, SENSEI has already trained for 500K steps before fine-tuning on downstream tasks. Would it be fair to let DreamerV3 train for an additional 500K steps before conducting a formal comparison of downstream tasks? 4. In most real-world scenarios, there are potential complex disturbances that can affect the model's learning and judgment. Task scenarios like Noisy TV [1,2,3] may lead to excessive exploration of novel areas, resulting in exploration failure. Have you considered these noisy exploration scenarios? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and greatly appreciate that they found our paper well-organized, our experiments thorough, and our evidence clear and convincing. We aim to address the remaining concerns below. > The paper lacks comparisons to recent exploration methods beyond P2X like Curiosity-Driven Exploration via Disagreement or Skew-Fit First, we note that we compare not only against P2X, but also RND and VLM-Motif—a reimplementation of Motif adapted to our world model and VLM setting. Thus, we compare against 3 strong exploration baselines. Second, Plan2Explore represents the state-of-the-art in curiosity-driven exploration via disagreement for pixel-based inputs. We are not aware of other disagreement-based methods suitable for comparison. Skew-Fit, on the other hand, falls into the goal-based exploration paradigm, where goals are sampled to maximize state coverage. Our RND baseline also targets state-space coverage. We believe goal-based methods are orthogonal to our setting. In addition, we note that Hu et al. 2023 [1] show (Fig. 4 of main paper) that an improved model-based adaptation of Skew-Fit performs worse than or similarly to Plan2Explore on a range of environments, making Plan2Explore the stronger baseline to compare to. > Testing SENSEI on a broader range of tasks (e.g., navigation, social interaction) would better demonstrate its versatility. We have now added a new environment—Pokémon Red—which involves navigation over a large map and interactions with Pokémon and gym leaders. See our response to Reviewer 7mHN for details and our rebuttal webpage (https://sites.google.com/view/sensei-icml-rebuttal) for results. > This paper lacks a detailed methodological comparison with Plan2Explore. Our method builds on Plan2Explore by adding semantic rewards and dynamic reward scaling. We apologize if this was unclear. We have clarified this connection in the method section and discussion.
We also empirically analyze the contributions of both additions via comparisons to VLM-Motif and through ablations on dynamic reward scaling (Suppl. D.6). > In Figure 8, why do you choose observations from view (b) apart from the reasons you mentioned in the paper? In Robodesk’s default view, the robot arm often occludes objects, presenting challenges for learning. Early tests also showed better VLM annotations from the right camera view, which we therefore used. > However, the definition of interestingness is a very subjective concept. Is the exploration of all tasks considered interesting? We agree that interestingness is subjective. However, we argue that across cultures, commonalities exist in what humans find interesting—reflected in the large-scale training data of VLMs. The tasks in our environments were all designed by humans, implying inherent human interest. For annotation with SENSEI-General in Robodesk, we first ask the VLM for a description of the environment and what could be interesting. We observe strong alignment between the VLM’s responses and the environment’s task distribution (Supp. C.3.2). > In Figure 7, SENSEI has already trained for 500K steps before fine-tuning on downstream tasks. Would it be fair to let DreamerV3 train for an additional 500K steps? Excellent point—our method does spend more time in the environment. However, note that SENSEI does not optimize for any particular task during exploration. Even with additional training, Dreamer, in our experiments, does not match SENSEI’s performance. For example, in MiniHack-KeyRoomS15, SENSEI solves the task reliably after 300K steps of task-based training (on top of 500K steps exploration), totaling 800K steps. Dreamer, even after 1M steps, does not reliably solve the task across seeds. To emphasize this further, we now compare SENSEI in a Robodesk task (Upright-Block-Off-Table) after 1M exploration steps to Dreamer trained from scratch. 
SENSEI solves the task across all seeds in 600K steps; Dreamer only succeeds in one seed after 2M steps. Even counting SENSEI’s 1M exploration, it outperforms Dreamer with 1.6M total steps (see our rebuttal webpage, Figure E4). > Task scenarios like Noisy TV may lead to excessive exploration of novel areas, resulting in exploration failure. Have you considered these? Indeed, this is a known issue in intrinsically motivated RL. However, due to the ensemble disagreement formulation—used in Plan2Explore and our method—this is not a concern. In stochastic environments, the ensemble predictions converge to the process mean with enough samples, resulting in 0 disagreement and no further reward. We refer the reviewer to [2] for more details. [1] Hu, E. et al., Planning goals for exploration, ICLR 2023. [2] Pathak, D. et al., Self-supervised exploration via disagreement, ICML 2019. --- Rebuttal Comment 1.1: Comment: Thanks for the author's reply. My problems have been solved, and I will keep the score unchanged.
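The ensemble-disagreement argument in the rebuttal above (stochastic "noisy-TV" transitions yield no lasting reward) can be made concrete with a toy simulation. This is illustrative only, not the paper's implementation: each ensemble member fits its own samples of a stochastic transition, and as all members converge to the shared conditional mean, the across-member variance (the disagreement reward) vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stochastic "noisy-TV" transition: next observation = mean + irreducible noise.
true_mean, noise_std = 0.5, 1.0

# Five ensemble "members", each trained on its own 10,000 samples.
samples = true_mean + noise_std * rng.normal(size=(5, 10_000))
member_predictions = samples.mean(axis=1)  # each member converges to the mean

# Disagreement reward = variance across ensemble members' predictions.
disagreement = member_predictions.var()
print(disagreement)  # close to 0: pure stochasticity yields no lasting reward

# With very little data, the members still disagree, so novel stochastic
# states are explored at first before the reward decays.
few = true_mean + noise_std * rng.normal(size=(5, 2))
print(few.mean(axis=1).var())  # typically orders of magnitude larger
```

This matches the cited argument from Pathak et al. [2]: disagreement rewards decay in stochastic regions once the ensemble has seen enough samples, unlike prediction-error curiosity.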
Summary: The paper proposes SENSEI, a framework designed to enhance exploration in model-based RL by integrating semantic guidance from VLMs. SENSEI distills a reward signal of interestingness from VLM-generated annotations of observations, guiding agents toward semantically meaningful interactions. This intrinsic reward system enables the learning of versatile world models, with internal models of interestingness, improving exploration efficiency. Empirical results in robotic and video game simulations demonstrate that SENSEI successfully encourages agents to discover meaningful behaviors from low-level actions and raw image observations, significantly enhancing downstream task learning. Claims And Evidence: Yes, the claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed method, e.g., learning an internal model of interestingness via SENSEI, is novel and the evaluation criteria make sense for the problem. In the Robodesk evaluation, it would be useful to include a human evaluation of the produced behaviors to support the claim that “our semantic exploration reward seems to lead to more meaningful behavior than purely epistemic uncertainty-based exploration.” Theoretical Claims: Yes, they are correct. The writing in Section 2.3 can make the distinction between r_t_sem and r_t_sem_hat clearer. Experimental Designs Or Analyses: Yes, the experiments are sound and valid. Experiments are conducted over two domains and thorough analyses are done on both. I appreciate the SENSEI General experiments on Robodesk, showing that SENSEI is not dependent on the injection of external environment knowledge. However, why is SENSEI General not shown in the MiniHack environment? It will be useful to show it as an ablation in both domains. Supplementary Material: I read the appendices. Relation To Broader Scientific Literature: SENSEI is most related to intrinsically motivated RL. 
Unlike traditional intrinsic exploration methods, which emphasize novelty or state-space coverage, this work integrates semantic guidance from VLMs to provide more human-aligned, meaningful intrinsic rewards, following recent trends of leveraging foundation models for guiding exploration (e.g., Motif, OMNI). Additionally, by incorporating model-based RL frameworks and world models, the approach advances the capability of RL agents to predict semantic “interestingness,” thus bridging human-defined semantic priors with effective exploration in complex environments. Essential References Not Discussed: I don’t have any additional ones to suggest. Other Strengths And Weaknesses: Strengths - The proposed method and idea is novel and interesting. Thorough experiments also show how having an internal model of interestingness in the world model can contribute to downstream tasks. Weaknesses - See other sections. Other Comments Or Suggestions: See other sections. Questions For Authors: 1. What happens if exploring a behavior in one area unlocks uncertain behaviors in a previously explored area? For example, suppose the agent has already explored a box in a room but then presses a button at the other end of the room, opening the box. Now, previously explored spaces contain newly unlocked, interesting behaviors. How might SENSEI or future extensions handle this scenario? 2. The current version of SENSEI makes use of a fixed distilled reward function. However, what happens if what’s considered interesting changes over time? For example, in robot manipulation, initially, interacting with objects might be interesting, but once the robot has mastered basic manipulation, the focus of interestingness might shift towards more challenging tasks, such as assembling or stacking objects. How might SENSEI perform in scenarios where the definition of interesting behaviors evolves? 3. 
Relatedly, since the distilled reward function is not updated during world model training, what if the agent discovers behaviors or states not present in the initial self-supervised exploration data? For instance, consider an agent accidentally falling into a ditch. Since the initial exploration data did not include this environment, the distilled reward function lacks information regarding the interestingness of exploring this new scenario. How might SENSEI cope with this situation? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful feedback and recognition of SENSEI’s novelty, thorough evaluations, and strong empirical evidence. Below, we address your key questions and minor comments. ### SENSEI in newly unlocked areas > What happens if exploring a behavior in one area unlocks uncertain behaviors in a previously explored area? As the reviewer correctly noted, we differentiate two cases: 1) the newly unlocked behavior exists in the initial self-exploration dataset, 2) the behavior is entirely novel. Case 1 is naturally accommodated by SENSEI, as these semantic relationships are reflected in the learned VLM-Motif. For example, in MiniHack-KeyChest, an agent might initially discover a locked chest before finding the key. Upon acquiring the key, the previously explored chest gains new semantic significance, yielding higher semantic rewards. SENSEI adapts seamlessly, as semantic meaning evolves with discoveries. In contrast, Plan2Explore and other uncertainty-driven methods lack an inherent mechanism for handling such scenarios. Case 2 connects to our new experiments and the reviewer's question 3: ### Second Generation of SENSEI Annotations > Since the distilled reward function is fixed during world model training, what if the agent discovers behaviors or states absent from the initial self-supervised data? Great point. If an agent encounters entirely novel states, the semantic reward may contain mostly noise, as it’s out of distribution for the trained VLM-Motif. Here, exploration relies on epistemic uncertainty rewards ($r^{dis}$), similar to Plan2Explore. In such cases, we can refine our notion of interestingness with new data. We adopt a generational perspective: run SENSEI for a first generation, then perform a second round of VLM annotations to distill new semantic rewards based on newly explored regions. 
We tested this in Pokémon Red, comparing SENSEI-General to Plan2Explore (see Reviewer 7mHN response for environment details and results). The initial SENSEI dataset contains 100K pairs from a Plan2Explore run (500K steps), reaching at most Viridian Forest. SENSEI reaches the frontier of Viridian Forest more often, occasionally breaking into Pewter City, where the first gym is located. However, the first-generation VLM-Motif has never seen this gym and thus relies on information gain to explore beyond Viridian Forest. After 750K steps, we sample 50K new pairs from a SENSEI run and re-annotate them, forming a refined second-generation semantic reward function ($R_\psi$) now informed by previously unseen maps. Qualitative results (Figure E2) on our rebuttal webpage [https://sites.google.com/view/sensei-icml-rebuttal](https://sites.google.com/view/sensei-icml-rebuttal) show that second-generation annotations correctly emphasize Pokémon Gyms—unlike the initial generation. This is essential for the agent to further explore the gym and beat the gym leader. We hope this motivates extending SENSEI to complex domains where new behaviors are unlocked through generations. ### Semantic Rewards and Evolving Experience > What happens if what’s considered interesting changes over time? While our semantic reward $r^{sem}$ is independent of agent experience, we assume a VLM inherently ranks more complex tasks (e.g., stacking) higher than simpler ones (e.g., pushing). However, SENSEI also incorporates epistemic uncertainty ($r^{dis}$) into its exploration reward ($r^{expl}$, Eq. 7), so the agent engages in interesting and yet novel behaviors. As exploration progresses, $r^{dis}$ naturally decreases as the world model improves (Eq. 6). Consider a case where the VLM assigns equal interestingness to pushing and stacking. Initially, both are uncertain, yielding similar $r^{expl}$. As the agent masters pushing, $r^{dis}$ diminishes, lowering $r^{expl}$ for that skill. 
The agent then prioritizes stacking—aligning exploration with increasingly complex tasks. ### Minor Comments **Human evaluations in Robodesk** > In the Robodesk evaluation, it would be useful to include a human evaluation of behaviors to support the claim that “our semantic exploration reward leads to more meaningful behavior than purely epistemic uncertainty-based exploration.” To avoid overstating, we revise the sentence to: “On average, our semantic reward leads to more object interactions than purely epistemic uncertainty-based exploration.” > Why is SENSEI-General not shown in MiniHack? The MiniHack prompt is already highly general. We describe it as a “rogue-like game” with a generic description and note the egocentric view, without explicitly mentioning doors or chests. The key is only mentioned if it is part of the agent's inventory (Supp. C.3.3). This differs from Robodesk, where we specify relevant objects. Given this broad framing, we don’t expect SENSEI-General to yield significantly different results. We appreciate the reviewer’s insightful questions and welcome any further discussion.
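The push-versus-stack dynamic described in the rebuttal above can be sketched numerically. The additive combination, the weight `beta`, and the decay constant below are assumptions for illustration only, not the paper's exact Eq. 7:

```python
# Toy illustration of combining semantic interestingness with an epistemic
# (disagreement) bonus; the additive form and constants are assumptions.
def exploration_reward(r_sem, r_dis, beta=1.0):
    """Total exploration reward from a semantic term and an uncertainty term."""
    return r_sem + beta * r_dis

# Two skills the VLM finds equally interesting.
r_sem = {"push": 1.0, "stack": 1.0}
r_dis = {"push": 1.0, "stack": 1.0}  # both initially uncertain

for step in range(3):
    rewards = {k: exploration_reward(r_sem[k], r_dis[k]) for k in r_sem}
    print(step, rewards)
    r_dis["push"] *= 0.1  # practicing "push" shrinks its model uncertainty
```

Once pushing is mastered, its disagreement term vanishes and stacking dominates the combined reward, reproducing the push-then-stack progression the rebuttal describes.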
Summary: The paper proposes incorporating human priors into RL exploration to encourage policies to internalize a model of *interestingness*. This is done by first annotating pairs of frames for interestingness using VLMs (which, owing to training on internet-scale human data, have incorporated these priors). This is then distilled into a reward model, which in turn is distilled into the world model of a model-based Reinforcement Learning policy. The authors additionally use an epistemic uncertainty-based reward, incentivizing the agent to try new behaviors in interesting states. Claims And Evidence: While the experimental results are interesting, in my opinion, results on just two environments (Minihack, Robodesk) fall short of concretely establishing the claims. Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: I did not find any major issues with the experimental designs. Supplementary Material: No. Relation To Broader Scientific Literature: Intrinsic Motivation and Reward Shaping are quite important in the field of Reinforcement Learning. Essential References Not Discussed: N/A Other Strengths And Weaknesses: While the paper proposes a novel take on intrinsic motivation in terms of human priors, the major weakness is the limited experimental analysis. I would like to see this framework extended to at least one other environment. Other Comments Or Suggestions: N/A Questions For Authors: * It appears that the two-stage process of transferring the VLM's interestingness analysis into the agent's world model -- via a distilled reward model -- may lead to significant information loss. At this stage, the approach seems like a proxy of a proxy for human interestingness, making it quite indirect and prone to potential failure points. Did the authors explore simpler alternatives? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for reviewing our paper, for highlighting that our research direction is important, and for acknowledging that our experimental results are interesting. ### New Environment: Pokémon Red As per your suggestion, we now apply SENSEI to another environment: the classic Game Boy game Pokémon Red. This is a challenging environment due to its (1) vast, interconnected world and (2) complex battle system requiring semantic knowledge (e.g. Water beats Rock). Strong exploration is essential for progressing toward the main goal: becoming the Pokémon Champion by defeating Gym Leaders. We use the PokeGym v0.1.1 setup [1], based on [2], with: - Observations: Only the raw 64×64 game screen, unlike previous RL applications which additionally used internal game states or memory [2,3]. - Actions: A 6-button Game Boy action space (Left, Right, Up, Down, A, B). - Rewards: Event-based rewards following [1]. - Episode Length: Episode length is increased with environment steps: 1k until 250k steps, 2k during 250–500k, 4k during 500–750k, etc. We take actions every 1.5 seconds of gameplay. - Game Start: We skip the tutorial until the agent can move freely and catch Pokémon, starting after receiving the Pokédex. We use the same checkpoint as [2], beginning with a level 6 Squirtle. We compare SENSEI-General to Plan2Explore. Both receive extrinsic task-based rewards alongside self-generated exploration rewards. SENSEI's initial dataset is collected using 500k steps of Plan2Explore (100K pairs). We use a general prompt: 'Your task is to help me play the Game Boy game Pokémon Red. I just obtained my starter Pokémon, Squirtle. My goal is to find and defeat the Gym Leader, Brock. What do I need to do, and which areas do I need to traverse to achieve this goal? Keep it very short.' We generate five different responses from GPT-4 and sample from them uniformly as context for image-based comparisons.
The results are shown on our rebuttal webpage: https://sites.google.com/view/sensei-icml-rebuttal. We plot the histogram of the maximum map reached by the agent at each episode throughout the training run for SENSEI and Plan2Explore (5 seeds). Our results show that SENSEI progresses further into the game, and most notably manages to reach the first Gym in Pewter City, unlike Plan2Explore. Both SENSEI and Plan2Explore explore catching and training Pokémon, with SENSEI typically assembling a party of slightly higher levelled Pokémon than Plan2Explore. In order to succeed in the game you need a diverse team with high levels. This means achieving a high party level (higher counts on the right side of the histogram). As Pokémon unlocks new maps over time, we also explore using generations of SENSEI: refining the semantic reward using data from an earlier SENSEI run. See our response to Reviewer 9xtC for details. Thank you for your great suggestion of adding another environment. Let us know if you have further questions about our setup or results. ### Approximating human interestingness We assume “proxy-of-a-proxy” refers to our two-step distillation process of 1) first training Motif from VLM annotations and 2) learning to predict semantic rewards in the world model. We understand the reviewer’s concern, but want to emphasize why both design choices are essential. We will explain this here in detail and add more emphasis on these aspects in our updated paper. 1. Distilling a semantic reward function $R_\psi$ from VLM annotations is necessary as it allows us to avoid dealing with a large and slow VLM inside the fast RL loop. Additionally, directly querying VLMs can lead to noisy outputs, which can further derail the agent’s learning. 2. Learning a mapping from world model states to semantic rewards is a necessary design choice when working with latent state world models, as we point out in the discussion and detail in Suppl. A. 3.
World models such as the RSSM typically encode and predict dynamics fully in a self-learned latent state. Thus, for a world model to predict $r^{sem}_t$ at any point in time, we need a mapping from latent states to semantic rewards. This raises the question of whether our mapping is the best way to do it. Another option would be to decode the latent state to images and use those as inputs for Motif. However, we believe this has several disadvantages: (1) Inefficiency—image decoding from latent states is expensive and slows down training. (2) Artifacts—decoded images can be blurry or unrealistic, causing a distribution shift and making Motif, trained on real environment images, encounter out-of-distribution errors. We hope this motivates our two-step distillation process. We thank the reviewer for allowing us to clarify this and will update the paper to better emphasize this in the method section and discussion. [1] Suarez (2024): https://pypi.org/project/pokegym/0.1.1/ [2] Whidden (2023), https://github.com/PWhiddy/PokemonRedExperiments [3] Pleines et al. (2025): Pokemon Red via Reinforcement Learning --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you for the new experiments and detailed explanations. About ```Approximating human interestingness```, could you further elaborate on the following: 1. Distilling a semantic reward function from VLM: How does the size of the reward function module compare to the VLM? Could a smaller VLM be used instead of the distillation process? 2. More importantly, what are the potential pitfalls of the two-stage approximation process that you use? --- Reply to Comment 1.1.1: Comment: Thank you for the follow-up questions. ### 1. Size of the Reward Function vs. VLM Our reward model (VLM-Motif) is a lightweight CNN, using under 1M parameters (see Suppl. A2. for the architecture), and thus significantly more efficient than even small VLMs, which would substantially slow down training when queried at each environment step.
But here we want to highlight another important factor justifying the overall need for distillation: Even if we were to use a VLM in the loop, we need a way to extract a scalar reward signal given an observation image as input. Directly using VLM outputs lacks a stable frame of reference: without comparisons, rewards are not consistent across states or episodes. In contrast, VLM-Motif learns a scalar reward function using standard techniques from RLHF, which enforce transitivity and consistency by training on ranked observation pairs. ### 2. Pitfalls of the Two-Stage Approximation We see two potential pitfalls: (1) Reward smoothing — VLM annotations can be noisy, and both VLM-Motif and the reward head help smooth this signal. While this is often beneficial for learning stability, it can be risky when visually similar states should receive vastly different rewards. (2) Model capacity — VLM-Motif and the reward head must have sufficient expressiveness. If they are too small, they may underfit and fail to capture important semantic distinctions from the VLM. In the applications we considered, we did not encounter these issues; both our lightweight VLM-Motif and the default DreamerV3 output heads were sufficient. We thank the reviewer for raising these thoughtful questions and we will include a discussion of the pitfalls of our two-stage approximation process in the discussion section of the final version of the paper. We would appreciate it if the reviewer would reconsider their score in light of our new experiments in the new environment Pokémon Red and our added explanations regarding our double-distillation process.
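The ranked-pair training scheme described above (standard in RLHF-style reward learning) can be sketched with a tiny stand-in reward model. The 1-D features, the synthetic "annotator", and the scalar linear model below are all invented for illustration; this is not VLM-Motif itself, which is a CNN trained on VLM preferences over image pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in annotator: prefers the observation with the larger feature.
def annotate(x1, x2):
    return 1.0 if x1 > x2 else 0.0  # 1 => first observation preferred

w = 0.0   # scalar reward model r(x) = w * x
lr = 0.5

for _ in range(500):
    x1, x2 = rng.normal(size=2)
    y = annotate(x1, x2)
    # Bradley-Terry: P(x1 preferred) = sigmoid(r(x1) - r(x2)).
    p = 1.0 / (1.0 + np.exp(-(w * x1 - w * x2)))
    # Gradient ascent on the log-likelihood of the annotation.
    w += lr * (y - p) * (x1 - x2)

print(w)  # positive: a higher feature value earns a higher learned reward
```

Because the loss is defined over ranked pairs, the learned scalar reward is transitive and consistent across states, which is the point made above about why pairwise distillation beats querying a VLM for raw per-state scores.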
Prediction via Shapley Value Regression
Accept (poster)
Summary: This paper introduces a framework for estimating Shapley values in explainable AI. Typically, a single observation $\mathbf{x}$ is considered for attribution and gives rise to a set function $v: 2^{[n]} \to \mathbb{R}$. Then we compute the Shapley values for just the set function $v$ i.e., $\phi_i = \frac1{n} \sum_{S \subseteq [n] \setminus \{i\}} \frac{v(S \cup \{i\}) - v(S)}{\binom{n-1}{|S|}}$. If we have many observations $\mathbf{x}$ that we want to explain, this is clearly a computational issue. In prior work (FastSHAP) and this work (ViaSHAP), the goal is to learn one function that simultaneously outputs the Shapley values for many observations $\mathbf{x}$ and the induced set functions $v$. The authors propose a training method with two loss functions: • the prediction loss which is the standard measure of how accurately the model can recover the label $y$ for a given input $\mathbf{x}$ • the loss on the Shapley estimates produced by the model, the goal is for the Shapley values to sum to the correct value on a given coalition $S$ They evaluate their method on 25 (impressive!) datasets and four different networks. However, they don't seem to use the ground truth Shapley values and instead estimate them with approximation methods like Kernel SHAP. Claims And Evidence: They claim "we generate ground truth Shapley values by running KernelSHAP until it converges since it has been demonstrated that KernelSHAP will converge to the true Shapley values when given a sufficiently large number of data samples". But, KernelSHAP produces estimates which are slightly biased so the "sufficiently large number of data samples" required to recover the true Shapley values is $2^n$ since KernelSHAP samples without replacement. Methods And Evaluation Criteria: As mentioned above, I'm concerned about how they compute ground truth Shapley values. 
I would recommend running experiments using all of the following approaches: • Use a small value of $n$ where the Shapley values can be computed exactly • Use a linear model as the model to be explained where the Shapley values are simply the coefficients of the model • Use a decision tree or forest model as the model to be explained where the Shapley values can be exactly computed using Tree SHAP Theoretical Claims: They state three lemmas and a theorem, but I would describe these as "decorative theory" that are straightforward and of marginal value. Experimental Designs Or Analyses: I think an obvious and simple baseline is missing: Train a decision tree model (e.g., XGBoost). Then for a given input $\mathbf{x}$, exactly (and efficiently) compute the Shapley values using Tree SHAP. This approach is far simpler than creating their model to produce Shapley values, and the computation of the Shapley values is exact; the only approximation is from the original explainer model to the decision tree model. Supplementary Material: I did not read the supplementary material. Relation To Broader Scientific Literature: Inspired by and a follow up work to Fast SHAP. Essential References Not Discussed: I don't think so. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: Miscellaneous comments: • Third paragraph in 2.2 looks like it was supposed to be commented out? • 2.3 "There are $2^n-1$ possible coalitions", I guess this is semantic but doesn't the empty set trivially count as a coalition? • Notation of $p(S)$ and $p(x)$ in 2.4 is confusing because two different distributions but written like one function that takes both input data $x$ and coalitions $S$. • 3.3 second paragraph "bounded domain can [be] represented by a finite sum" • Figure 3 is very strange to me. Could you represent as a table? Or bar plots? Questions For Authors: • In the image experiments, are you computing the Shapley values for each pixel in the image input? 
• The title "Prediction via Shapley Value Regression" is very broad, and could include e.g., Kernel SHAP. Can you make it more specific and relevant to your work e.g., "A Framework for Simultaneously Predicting Shapley Values on Many Observations"? Code Of Conduct: Affirmed. Overall Recommendation: 2
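The reviewer's first suggestion (exact computation for small $n$) is easy to realize by brute force. The sketch below is illustrative, not from the paper: it enumerates all coalitions per the formula quoted in the summary and checks it on an additive game, where the Shapley value of each player equals its coefficient:

```python
from itertools import combinations
from math import comb

def shapley(v, n):
    """Exact Shapley values for set function v over players 0..n-1."""
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # phi_i = (1/n) * sum_S [v(S ∪ {i}) - v(S)] / C(n-1, |S|)
                total += (v(set(S) | {i}) - v(set(S))) / comb(n - 1, k)
        phi.append(total / n)
    return phi

# Additive (linear) game: v(S) = sum of per-player weights, v(∅) = 0.
w = [2.0, -1.0, 0.5]
v = lambda S: sum(w[i] for i in S)

phi = shapley(v, 3)
print(phi)  # recovers the coefficients [2.0, -1.0, 0.5]
assert abs(sum(phi) - v({0, 1, 2})) < 1e-9  # efficiency axiom
```

The cost is $O(2^{n-1})$ evaluations per player, which is exactly why this only serves as a ground truth for the small-$n$ datasets the reviewer proposes.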
Rebuttal 1: Rebuttal: We appreciate the reviewer's time and feedback. Please find our responses below. > A) It's unclear to me what it means to run KernelSHAP until convergence... > B) KernelSHAP produces estimates which are slightly biased... We agree with the reviewer that KernelSHAP can be biased; therefore, we employed the unbiased KernelSHAP [1]. For the tabular datasets, we allowed unbiased KernelSHAP to continue sampling and updating the learned values iteratively until convergence, meaning that the sampling is not restricted to a fixed number of coalitions but continues until the estimates are stable. The bias of KernelSHAP shrinks to zero as the sample size grows [1,2]. Furthermore, it has been shown that the unbiased KernelSHAP converges to the true Shapley values given a sufficiently large number of samples [1,3], which we provided for the tabular data. We employ the same evaluation setup as the one employed in [3]. > I'm concerned about how they compute ground truth...Use a small value of $n$... We included datasets with a small number of features $n$, such as Phonemes ($n = 5$), Pollen ($n = 5$), Mozilla 4 ($n = 5$), Abalone ($n = 8$), Electricity ($n = 8$), and MagicTelescope ($n = 10$). For these datasets, the total number of possible coalitions is small enough that the unbiased KernelSHAP solution provides the exact Shapley values, as all possible coalitions are generated. > ... Use a linear model.... Use a decision tree We appreciate the reviewer's perspective. However, training a linear model or TreeSHAP to obtain the exact Shapley values may not be applicable to evaluating the explainability of ViaSHAP, as ViaSHAP explains its own predictions rather than the predictions of a separate model, e.g., a decision tree where exact Shapley values can be obtained. Also, ViaSHAP, in its current form, cannot be implemented using decision trees. > They state three lemmas and a theorem, but I would describe these as "decorative theory"...
We respectfully disagree that the lemmas and the theorem are of marginal value for the following reasons: 1. The proofs of Lemmas 3.2 and 3.3 **provide upper bounds on the error associated with satisfying the missingness and consistency** properties under realistic optimization assumptions. 2. Without the theoretical results, there would be no formal guarantee that ViaSHAP computes Shapley values, given it differs from post-hoc explainers and computes the Shapley values before making predictions. > simple baseline is missing: Train a decision tree model... We want to clarify again that ViaSHAP explains itself rather than acting as a post-hoc explainer. Since ViaSHAP, as currently proposed, can only be implemented with algorithms that can be optimized using backpropagation (i.e., cannot be implemented using decision trees), we cannot see how training a decision tree-based model and explaining it could help evaluate ViaSHAP's explainability. > "There are $2^n - 1$ possible coalitions", I guess this is semantic but doesn't the empty set trivially count as a coalition? We mean that there are $2^n - 1$ possible coalitions that can be used to compute the exact Shapley values, since we cannot add a new player $i$ to the grand coalition $S$ of all players as follows: $\frac{|S|! (n - |S| - 1)!}{n!} (v(S \cup \{i\}) - v(S))$. We will update the sentence to be clear. > Notation of $p(S)$ and $p(x)$ is confusing... We agree with the reviewer and will update the notation to eliminate confusion. > Figure 3 is very strange... Figure 3 is a standard visualization for summarizing the results of Friedman-Nemenyi statistical significance tests. Similar plots have been used in [4,5,6]. For a detailed breakdown of the results, please refer to Table 1. > are you computing the Shapley values for each pixel... We computed Shapley values using super-pixels of size 2×2. However, this is a hyperparameter, and users can choose between pixel-wise explanations and larger super-pixel sizes.
> title "Prediction via Shapley Value Regression" is very broad... Methods like KernelSHAP indeed involve Shapley value regression. However, a key difference is that they do not formulate predictions using the regressed Shapley values, which is central to our approach. We appreciate the reviewer’s suggestion for an alternative title and are open to modifying the title to be more specific. [1]-Covert, I., et al. Improving kernelshap: Practical shapley value estimation using linear regression. AISTATS 2021. [2]-Covert, I., et al. Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution. NeurIPS 2024. [3]-Jethani, N., et al. FastSHAP: Real-time shapley value estimation. ICLR, 2022 [4]-Dedja, K., et al. BELLATREX: Building Explanations through a LocaLly AccuraTe Rule EXtractor. [5]-Werner, H., et al. Evaluating Different Approaches to Calibrating Conformal Predictive Systems. [6]-Pugnana, A., et al. Deep Neural Network Benchmarks for Selective Classification
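As background for the coalition discussion in the rebuttal above, the exact Shapley value can be computed by enumerating coalitions with the weight $\frac{|S|!(n-|S|-1)!}{n!}$ quoted there. A minimal brute-force sketch with a hypothetical additive value function (the `worth` numbers are illustrative only, not from the paper):

```python
# Brute-force Shapley values via coalition enumeration, using the weight
# |S|! (n - |S| - 1)! / n! quoted in the rebuttal. Feasible only for tiny n.
from itertools import combinations
from math import factorial

def exact_shapley(value, players):
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):  # coalition sizes 0 .. n-1 among the other players
            for S in combinations(others, k):
                S = frozenset(S)
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (value(S | {i}) - value(S))
        phi[i] = total
    return phi

# Hypothetical additive game: each player's Shapley value equals its own worth.
worth = {"a": 1.0, "b": 2.0, "c": 3.0}
v = lambda S: sum(worth[p] for p in S)
print(exact_shapley(v, list(worth)))
```

For the additive game above, the computed values recover each player's individual worth and sum to $v(N) - v(\varnothing)$ (the efficiency property), which makes such toy games a convenient sanity check for any approximate Shapley estimator.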
Summary: The paper presents a method called ViaSHAP which aims to learn a function to compute the Shapley Values as the model trains. This function predicts the Shapley Values from inputs, directly uses those values to form the model output and bypasses the need for post-hoc computation (i.e. to fit a KernelSHAP to the model). ViaSHAP is implemented as a constraint on the network for MLP and KAN architectures. The Shapley results are shown to perform comparably to KernelSHAP and FastSHAP across several experiments without a significant loss in model performance. Claims And Evidence: The claims are quite clear and the experiments are quite exhaustive. Methods And Evaluation Criteria: Comparisons with KernelSHAP and FastSHAP across the tested datasets make sense for this application. The performance of ViaSHAP is then compared with XGBoost, TabNet, and Random Forests to validate the model performance. Theoretical Claims: I don't see any particular issues with the proofs presented. Experimental Designs Or Analyses: I'm a bit concerned with how the comparisons between ViaSHAP and the ground truth are made; as noted in lines 358-360, the ground truth KernelSHAP values are computed using the ViaSHAP network as a black box. Is the comparison then how different the outputs of a model are from a (weighted) least squares approximation of that model itself? Other than that, the experimental design all made sense to me. Supplementary Material: I looked through the code and the appendices; it appears to support the claims made in the paper. Relation To Broader Scientific Literature: This paper contributes a method of significantly reducing the cost of computing Shapley Values by removing the need for post-hoc fitting. Essential References Not Discussed: The citations are quite exhaustive. Other Strengths And Weaknesses: Posed in questions section. Other Comments Or Suggestions: Overall I quite like the idea of the paper; some minor issues are noted in the sections below.
Minor Comments: - The paper focuses on the classification case with regression being briefly discussed in the appendices, which doesn't seem to reflect the paper title very well Questions For Authors: Questions: - I would suggest incorporating more discussion about when this method would be best used, and why one would choose it over other methods. - Why is it the case that the results are particularly weak over certain datasets (such as pollen)? - In Table 1, why is it the case that the AUC of XGBoost does not have a +- value associated? Additionally, what exactly do the +- values indicate, are they variations over random (seed) initialization of the model, or over different train/test splits? - What exactly does Figure 5 show? I'm not sure how much this illustration contributes to the main paper. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the reviewer's time and feedback on our paper. Below, we provide our responses and clarifications. > I'm a bit concerned with how the comparisons between ViaSHAP and the ground truth are made... We appreciate the reviewer’s concern and would like to clarify our experimental setup. In the explainability evaluation, we treated ViaSHAP models as standard black boxes and computed the ground truth Shapley values using the unbiased KernelSHAP [1], which involves solving an optimization problem for each prediction. We then compared the explanations generated by ViaSHAP to the ground truth obtained from the unbiased KernelSHAP. As established in previous work, the bias of KernelSHAP shrinks to zero as the sample size grows [1, 2], and with a sufficiently large number of samples, it converges to the true Shapley values [1, 3]. For the tabular datasets, we allowed the unbiased KernelSHAP to continue sampling and updating the learned values until convergence. We followed the same evaluation setup using the unbiased KernelSHAP that has been employed in [3]. > The paper focuses on the classification case with regression being briefly discussed in the appendices... We appreciate the reviewer’s feedback and acknowledge that incorporating additional regression datasets could further strengthen the experiments. However, due to the constraints of the rebuttal phase, we are unable to add new results at this stage. We would also like to clarify that the term *regression* in the paper title, *Prediction via Shapley Value Regression*, specifically refers to the fact that we solve the regression task of Shapley values and use Shapley values in the prediction task itself. In other words, the title is intended to convey *"(Classification and Regression) via Shapley Value Regression."* > incorporating more discussion about when this method would be best used... We sincerely appreciate the reviewer’s suggestion and will update the manuscript accordingly. 
ViaSHAP is particularly advantageous in settings where computational resources or time are limited, as it removes the need to train and run separate models for prediction and explanation. Moreover, it is well-suited for scenarios where high-fidelity explanations are crucial, as the model’s predictions are inherently tied to the explanations. Post-hoc methods might be more suitable in cases where a pre-trained black-box model is already employed or when users cannot access or modify the black-box model. > Why is it the case that the results are particularly weak over certain datasets (such as pollen)? Similar to other machine learning models, ViaSHAP's performance can vary depending on the characteristics of the dataset. Certain datasets pose inherent challenges, such as a limited number of training examples, a high-dimensional feature space, complex data distributions, or noisy data, all of which can impact model performance. While ViaSHAP's performance on the Pollen dataset is relatively weaker compared to other datasets, it is noteworthy that it also outperforms XGBoost, Random Forests, and TabNet on this dataset, which suggests that the Pollen dataset is inherently challenging, rather than an issue specific to ViaSHAP. > ...why is it the case that the AUC of XGBoost does not have a +- value... All compared models are trained and evaluated using the same data splits to ensure a fair comparison. The $\pm$ values indicate variations in performance due to different random initializations, as stated in the experimental setup (Lines [312, 315]): *"If the model’s performance varies with different random seeds, it will be trained using five different seeds, and the average result will be reported alongside the standard deviation."* Since XGBoost with the default settings is deterministic and its performance does not vary across different random seeds, its AUC is reported without a $\pm$ value. > What exactly does Figure 5 show? 
I'm not sure how much this illustration contributes to the main paper. Figure 5 illustrates that the explanations generated by ViaSHAP are more precise compared to those from FastSHAP when applied to the same image. Specifically, FastSHAP tends to highlight broader regions, whereas ViaSHAP primarily focuses on the shape of the classified instance within the image, providing a more precise and sparse explanation. [1] Covert, I. and Lee, S.-I. Improving kernelshap: Practical shapley value estimation using linear regression. In Proceedings of The 24th International Conference on Artificial Intelligence and Statistics. [2] Covert, I., Kim, C., Lee, S., Zou, J., Hashimoto, T. Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution. In The Thirty-eighth Annual Conference on Neural Information Processing Systems 2024. [3] Jethani, N., Sudarshan, M., Covert, I. C., Lee, S.-I., and Ranganath, R. FastSHAP: Real-time shapley value estimation. In the International Conference on Learning Representations, 2022 --- Rebuttal Comment 1.1: Comment: Thanks for answering my questions. I have concerns about the evaluation methodology used in this paper. As answered in the question response, the results are averaged over initialization (i.e. random seeds), but not over choices of train/test splits, which means this comparison might not hold more generally. Is there any reason that this comparison was not made? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for pointing out their concern in a direct question. Our experimental setup is motivated by the following reasons: **1-** Since our primary objective is to evaluate the predictive performance of each model relative to the competing algorithms, all models are trained and evaluated on the same splits to ensure that performance differences arise from model differences rather than variations in the data splits.
**2-** Using fixed train/test splits allows for isolating the impact of random initializations from that of data partitioning. Introducing both multiple data splits and multiple random seeds simultaneously would make it difficult to disentangle their individual effects on performance variations, i.e., would blur **the evaluation of the model's robustness to random initializations**. **3-** While we agree that using **a single split** on **a single dataset** could possibly bias the evaluation, we here employ a large number of medium to large-sized datasets (as detailed in Table 19). Therefore, bias due to a specific train/test split is mitigated by the large number of datasets as well as the large number of data examples, and the results generalize beyond a specific split. **4-** We employ statistical significance tests (Friedman and Nemenyi) to confirm whether performance differences are not due to random chance but are meaningful comparisons, which directly counters the concern that performance variations might be due to dataset-specific splits rather than actual model differences. We hope that our answer sufficiently addresses the reviewer's concerns.
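To make the Friedman-Nemenyi procedure mentioned in point 4 concrete, here is a sketch on hypothetical AUC scores (all numbers are invented; the constant $q_{0.05} \approx 2.343$ for three models is taken from the standard studentized-range table, e.g., Demšar, 2006):

```python
# Sketch: Friedman test across datasets plus the Nemenyi critical difference.
from math import sqrt
from scipy.stats import friedmanchisquare, rankdata

# rows = datasets, columns = three hypothetical models (AUC scores)
scores = [
    [0.91, 0.89, 0.85],
    [0.78, 0.75, 0.74],
    [0.88, 0.86, 0.80],
    [0.95, 0.93, 0.90],
    [0.70, 0.69, 0.64],
    [0.83, 0.82, 0.77],
]
stat, p = friedmanchisquare(*zip(*scores))
print(f"Friedman chi2={stat:.3f}, p={p:.4f}")

# Average rank of each model (rank 1 = best score on a dataset).
ranks = [rankdata([-s for s in row]) for row in scores]
avg = [sum(r[j] for r in ranks) / len(scores) for j in range(3)]
print("average ranks:", avg)

# Nemenyi critical difference; q_0.05 ~ 2.343 for k = 3 models (Demsar, 2006).
k, n = 3, len(scores)
cd = 2.343 * sqrt(k * (k + 1) / (6 * n))
print(f"critical difference: {cd:.3f}")
```

Two models whose average ranks differ by more than the critical difference are declared significantly different; this is exactly the information that critical-difference diagrams such as the paper's Figure 3 summarize visually.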
Summary: This paper proposes a method that simultaneously computes both the Shapley values and the predicted output, where the predicted output is equal to the sum of the Shapley values. To achieve this, the authors train the network to learn and approximate Shapley values by minimizing the weighted least squares in Eq. (6), while simultaneously optimizing the predicted output in Eq. (7). The authors employ MLP and KAN as the model architectures. Experimental results on several tabular datasets demonstrate the effectiveness of the proposed explanation method. Claims And Evidence: 1. My major concern is its positioning within the existing literature. If the proposed method is model-specific, prior works such as [1][2] have already demonstrated approaches to compute exact Shapley values in a single forward pass. If the method is intended to be model-agnostic, various approaches [3][4][5] already exist to compute Shapley values either in an unbiased or biased manner. Since the proposed method does not apply to arbitrary black-box models, but rather trains an inherently interpretable model that also predicts outputs, the authors should sufficiently compare it with prior works (not limited to the listed interpretable models), and highlight the advantages of the proposed method. 2. The main challenge of jointly training a model to predict Shapley values while performing task predictions is that, it may be difficult to achieve great model performance on complex tasks with relatively accurate Shapley values. For example, on complex datasets such as CIFAR-10, CIFAR-100, and Tiny ImageNet, the authors should report both the classification accuracy and the approximation error of the Shapley values (e.g., root mean square error). Besides, the choice of the trade-off parameter $\beta$ in Algorithm 1 should be reported in different tasks. [1] Chen et al. HarsanyiNet: Computing Accurate Shapley Values in a Single Forward Propagation. ICML, 2023. [2] Wang et al. 
Shapley explanation networks. ICLR, 2021. [3] Castro et al. Polynomial calculation of the shapley value based on sampling. Computers & Operations Research, 36(5):1726–1730, 2009. [4] Mitchell et al. Sampling permutations for shapley value estimation. Journal of Machine Learning Research, 23:1–46, 2022. [5] Covert et al. Improving kernelshap: Practical shapley value estimation using linear regression. International Conference on Artificial Intelligence and Statistics, 2021. Methods And Evaluation Criteria: Is the second categorical loss in Eq. (7) incorrectly written? Why is it $-j \log(\hat{j})$, where $j$ is the category, $j\in\\{1, \cdots, d\\}$, and $\hat{j}$ is the probability of the $j$-th category. In addition, it would be helpful to explicitly explain the physical meaning of Eq. (6) to improve clarity. Theoretical Claims: Theorem 1 is not carefully checked. Experimental Designs Or Analyses: 1. In Lines 278-283, using KernelSHAP as the ground-truth Shapley value may be inappropriate. One concern is that the number of samples required for KernelSHAP to produce reliable estimates depends on the complexity of the task [1]. A more rigorous approach would be to compute exact Shapley values using the definition for datasets with a small number of input variables. For larger image datasets, the authors should compare their method with unbiased sampling-based Shapley value estimation [3]. 2. Using cosine similarity as the metric for evaluating the difference between approximated and ground-truth Shapley values may be insufficient. The authors could consider incorporating additional metrics, such as root mean square error, to provide a more comprehensive assessment. 3. In Section 4.3, when comparing interpretability methods, it is necessary to include a broader set of model-agnostic approaches, such as those proposed in [3], [4], and [5].
Supplementary Material: No Relation To Broader Scientific Literature: See "Claims And Evidence" Essential References Not Discussed: The paper lacks a discussion of related works in closely related directions. 1. Methods that compute exact Shapley values and predict model outputs in a single forward pass: [1] Chen et al. HarsanyiNet: Computing Accurate Shapley Values in a Single Forward Propagation. ICML, 2023. [2] Wang et al. Shapley explanation networks. ICLR, 2021. 2. Approximation algorithms for Shapley values: [3] Castro et al. Polynomial calculation of the shapley value based on sampling. Computers & Operations Research, 36(5):1726–1730, 2009. [4] Mitchell et al. Sampling permutations for shapley value estimation. Journal of Machine Learning Research, 23:1–46, 2022. [5] Covert et al. Improving kernelshap: Practical shapley value estimation using linear regression. International Conference on Artificial Intelligence and Statistics, 2021. 3. Other methods for attribution computation in a single forward pass Other Strengths And Weaknesses: Strengths: 1. The paper is well written and easy to follow. 2. The authors explored the possibility of training both predicted shapley values and predicted outcomes. Weaknesses: See “Claims And Evidence”,“Methods And Evaluation Criteria” and “Experimental Designs Or Analyses.” Other Comments Or Suggestions: N/A Questions For Authors: See “Claims And Evidence”,“Methods And Evaluation Criteria” and “Experimental Designs Or Analyses.” Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's feedback. We provide answers in the following part. > My major concern is its positioning... The proposed method does not fall strictly into the categories of model-specific or model-agnostic approaches. Instead, we introduce a framework for training by-design explainable models, where predictions and their Shapley value explanations are learned simultaneously. This distinguishes our approach from existing methods that compute Shapley values post-hoc or in a separate forward pass. > ... authors should sufficiently compare it with prior works... We appreciate the reviewer’s suggestion. Our primary objective, however, is to demonstrate that the proposed method is inherently explainable and that the generated explanations are accurate. To this end, we compared our approach to the unbiased KernelSHAP, as well as FastSHAP, which directly inspired our method and represents the most closely related work. Unfortunately, due to the constraints of the rebuttal phase, we are unable to add new results at this stage. However, we acknowledge the value of broader comparisons. > ...on complex datasets such as CIFAR-10... We have indeed reported the model’s predictive performance on complex datasets such as CIFAR-10, along with the approximation error of the explanation, and compared the results to FastSHAP. We kindly refer the reviewer to Appendix F, which provides detailed results for CIFAR-10. The evaluation of Shapley value estimates on images using the root mean square error requires access to ground truth Shapley values, which is computationally infeasible for image datasets. Instead, we assess explanation quality using inclusion and exclusion curves, which measure an explanation’s ability to identify informative image regions. This evaluation strategy is commonly used for image datasets, as seen in [1,2,3]. > Is the second categorical loss in Eq. (7) incorrectly written?... 
We thank the reviewer for pointing out that the notations in Eq.7 are confusing. Indeed, j in Eq.7 represents the true probability to be estimated for each category. We will correct and clarify this in the revised paper. > ...explicitly explain the physical meaning of Eq. (6)... We appreciate the reviewer’s suggestion. We explained the intuition behind Eq.6 in lines [173,186] and illustrated the main idea in Figure 2 and Algorithm 1. However, as recommended by the reviewer, we will further refine our explanation to enhance clarity. > ...using KernelSHAP as the ground-truth... We agree with the reviewer that KernelSHAP is unreliable, particularly for high-dimensional data. Therefore, we employed an unbiased version of KernelSHAP [4], which aligns with the reviewer’s suggested approach [5], as mentioned in lines 358 and 359 " the ground truth Shapley values ($\phi$), computed by the unbiased KernelSHAP". The same evaluation setup, using unbiased KernelSHAP, has been employed in [3]. For the tabular datasets, we allowed the unbiased KernelSHAP to keep sampling until convergence. The bias of KernelSHAP shrinks to zero as the sample size grows [4,6]. It has been shown that the unbiased KernelSHAP converges to the true values if provided with a large enough number of samples [3,4], which we did for the tabular data in the experiments. > Using cosine similarity as the metric... We completely agree with the reviewer that cosine similarity alone is insufficient for comparing explanations. Therefore, we employed three metrics: Spearman rank correlation, $R^2$, and cosine similarity. The results are reported using the three metrics in Tables 7, 9, 12, 14, and 16. > ... include a broader set of model-agnostic approaches... We acknowledge that incorporating additional experiments can enhance the quality of the paper. We want to clarify that the method in reference [5] provided by the reviewer (Covert et al. Improving KernelSHAP...) 
is the unbiased KernelSHAP method, which we have already compared our proposed method with in our evaluation. > lacks a discussion of related works... We appreciate the reviewer’s suggestions for additional related work to discuss. We kindly clarify that the papers (Mitchell et al. Sampling Permutations for Shapley Value Estimation) and (Covert et al. Improving KernelSHAP) are already cited multiple times in our manuscript. [1] Hooker, S., et al. A benchmark for interpretability methods in deep neural networks. [2] Jethani, N., et al. Have we learned to explain?: How interpretability methods can learn to encode predictions in their interpretations. AISTATS 2021 [3] Jethani, N., et al. FastSHAP: Real-time shapley value estimation. ICLR, 2022 [4] Covert, I. et al. Improving kernelshap: Practical shapley value estimation using linear regression. AISTATS 2021 [5] Castro, J., et al. Polynomial calculation of the shapley value based on sampling. Computers & Operations Research [6] Covert, I., et al. Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. Your response has basically addressed my concerns, and I will raise my score accordingly. However, I strongly encourage the authors to incorporate the following revisions in the next version of the manuscript, which would further improve clarity and better highlight the contribution of the work, **1. Clarification of the unbiased KernelSHAP** Please clearly state in Section 2.3 or in the Introduction that you use the **unbiased KernelSHAP** method as the baseline, rather than introducing this only in the experimental section. This clarification is particularly important as Reviewer 1ExE also raised concerns regarding estimation bias. 
Since one of the goals in Shapley value estimation is to approximate the ground-truth values accurately, both **the choice of ground truth values** and **the evaluation metrics** must be clearly stated. **2. Comparisons with other unbiased methods** As representative unbiased model-agnostic methods, [3] and [4] should be included in experimental comparisons, both on tabular datasets and on image datasets. These methods, like unbiased KernelSHAP, serve as natural baselines. It would be helpful to show that the gap between the proposed method and these unbiased estimators narrows as the number of samples increases. Such results would strengthen the empirical justification of your method. I recommend presenting these results in the main paper, possibly in a format similar to Figure 1 in [6]. **3. Similarity Metrics** In addition to the similarity metric currently used, please consider including more standard metrics that are commonly used to evaluate Shapley value approximation accuracy, such as the root mean squared error (RMSE) used in [6] or mean squared error (MSE) used in [5]. Please also cite the respective works where these metrics were used to improve clarity. **4. Distinguishing from Prior Work** Please explicitly distinguish the proposed method from that of [1,2],especially in terms of application scenarios. These works train inherently explainable models that output both predictions and exact Shapley values in a single forward pass. Given the constraints of the rebuttal phase and the absence of above results in the current version, I cannot give a higher score. Nonetheless, I believe these additions would substantially enhance the quality of the manuscript and more clearly convey the significance of the contribution. [6] Ancona et al. Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Values Approximation, ICML 2019. --- Reply to Comment 1.1.1: Comment: Thank you very much for your constructive feedback. 
We sincerely appreciate your positive assessment. We fully agree with your suggestions, and we acknowledge that incorporating these revisions will significantly improve both the clarity and strength of the manuscript. Currently, we are updating our manuscript accordingly and are committed to implementing your recommendations in the next revision of the manuscript.
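The agreement metrics discussed in this thread (cosine similarity, Spearman rank correlation, and the suggested RMSE) can all be computed straightforwardly; a sketch on hypothetical estimated vs. ground-truth Shapley vectors:

```python
# Sketch: three agreement metrics between an estimated and a ground-truth
# Shapley vector (both vectors are hypothetical examples).
import numpy as np
from scipy.stats import spearmanr

truth = np.array([0.40, -0.10, 0.25, 0.05, -0.20])
estimate = np.array([0.38, -0.08, 0.27, 0.02, -0.22])

rmse = float(np.sqrt(np.mean((estimate - truth) ** 2)))
cosine = float(estimate @ truth / (np.linalg.norm(estimate) * np.linalg.norm(truth)))
rho, _ = spearmanr(estimate, truth)

print(f"RMSE={rmse:.4f}  cosine={cosine:.4f}  Spearman={rho:.4f}")
```

Note that the three metrics capture different failure modes: RMSE penalizes magnitude errors, cosine similarity is scale-invariant in each vector's norm, and Spearman only checks whether the feature ranking is preserved, which is why reporting all of them gives a more complete picture.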
Summary: The paper proposes a new method, ViaSHAP, which learns a function that computes Shapley values alongside the model’s prediction. This method works by training a machine learning model that minimizes a weighted least squares loss similar to that of KernelSHAP and FastSHAP. The authors present many experiments showing that the predictions of the model on tabular data are on-par with state-of-the-art tabular models (XGBoost, MLP, Kolmogorov-Arnold Networks) and that the explanations are more accurate than FastSHAP. Claims And Evidence: Strengths The authors perform experiments across a large number of datasets, showing that the predictive power of ViaSHAP is on-par with other methods. Additionally, they show that ViaSHAP generates more accurate explanations than FastSHAP. The main selling point of ViaSHAP is to train a model that simultaneously provides accurate predictions and Shapley values. Weaknesses The authors argue that one major drawback of existing methods is that Shapley values are computed post-hoc, that is, after a model has already been trained. More specifically, “generating instance-base explanations or learning a pre-trained explainer always demands further data, time, and resources” (page 1). The authors’ main selling point of ViaSHAP, then, is the ability to train a single model that generates both accurate predictions and Shapley values; training a model plus training a FastSHAP explainer model unnecessarily increases computational resources. However, the importance of computing Shapley values efficiently lies in explaining already existing machine learning models. Therefore, one might argue that the authors’ claim that generating Shapley explanations post-hoc is a drawback is a fundamentally biased view of existing Shapley literature. The methodologies are highly similar and inspired by FastSHAP, even though FastSHAP is a post-hoc method and ViaSHAP is not.
Methods And Evaluation Criteria: The method is explained clearly, and the evaluation criteria make sense. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Extensive empirical evaluations have been performed. Several of these are supplementary, perhaps due to space constraints. But it would be good to have some summary tables of these results in the main text. Supplementary Material: Yes. Relation To Broader Scientific Literature: There are already several works on computing Shapley explanations faster. Again, even though this work takes the new angle of learning to explain rather than post-hoc explanation, the methodologies are still not very distinct from FastSHAP. Essential References Not Discussed: It would be good to discuss the recent papers on amortized SHAP, which directly relate to the fast computation of SHAP, for example, [1] Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution https://arxiv.org/pdf/2401.15866 [2] SHAP zero Explains Genomic Models with Near-zero Marginal Cost for Future Queried Sequences https://arxiv.org/abs/2410.19236 Other Strengths And Weaknesses: Discussed before. Other Comments Or Suggestions: Discussed before. Questions For Authors: 1) In what problem settings should one decide to use ViaSHAP instead of training a regular model and then apply FastSHAP? 2) In total, how much computation time will be saved using ViaSHAP in the previous question? 3) Can ViaSHAP be used while training LLMs? What are the tradeoffs and fundamental computational limits? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their time and valuable feedback on our manuscript. We provide answers and clarifications below. > The authors argue that one major drawback of existing methods is that Shapley values are computed post-hoc.... We do not intend to suggest that computing Shapley values in a post-hoc setup is inherently a drawback. Rather, our work builds on existing post-hoc methods and aims to further reduce computational costs by eliminating the need for training a separate explainer model and running two models at inference time. In particular, it is true that in cases where a pre-trained model already exists or when users cannot access or modify the black-box model, post-hoc methods might be more suitable. Nonetheless, this is not always the case. ViaSHAP offers an alternative for users who prioritize developing an inherently explainable model, as well as for scenarios where computational efficiency and resources are important considerations. > There are already several works on computing Shapley explanations faster. Again, even though this work takes the new angle of learning to explain rather than post-hoc explanation, the methodologies are not very distinct from FastSHAP. Our proposed method is indeed inspired by FastSHAP. However, a key difference is that we do not start with a pre-trained black-box model that requires post-hoc explanations. This difference is fundamental to the setting, i.e., building a by-design explainable model vs. explaining a pre-trained model. In this setting, ViaSHAP thus reduces the computational cost needed from training two models to a single one. The use of a by-design explainable model also ensures that the model explains itself, thus providing explanations that align with the prediction. On the other hand, explaining a black-box model through post-hoc methods leaves the explanations open to the inherent randomness issues of statistical learning [1].
> It would be good to discuss the recent papers on amortized SHAP... We thank the reviewer for pointing out the missing recent paper. We will indeed add it to the discussion. > In what problem settings should one decide to use ViaSHAP instead of training a regular model and then apply FastSHAP? ViaSHAP is particularly beneficial in environments with limited time or computational resources, as it eliminates the need to train and run separate models for prediction and explanation. Additionally, it is well-suited for scenarios where high-fidelity explanations are a central concern, as the model’s predictions are inherently tied to its explanations. On the other hand, post-hoc explanation methods remain necessary when users cannot access or modify the black-box model or when the model does not support optimization via gradients and backpropagation. We do not propose to replace or discard post-hoc methods, but rather to offer an alternative approach that promotes explainability while addressing computational efficiency. > how much computation time will be saved using ViaSHAP... When comparing ViaSHAP to KernelSHAP, ViaSHAP significantly reduces computational cost at inference time by avoiding the need to solve a separate optimization problem for each prediction. Therefore, computational cost is reduced by an order of magnitude, as demonstrated in Appendix L and Table 15. When compared to real-time explanation methods such as FastSHAP, ViaSHAP further reduces computation at training time (by avoiding the training of a separate explainer model) and at inference time (by running inference on a single model instead of two). More importantly, the computational cost of ViaSHAP at inference time remains the same as a similar model that does not compute Shapley values. Detailed computational cost analyses for ViaSHAP can be found in Appendix N. > Can ViaSHAP be used while training LLMs? What are the tradeoffs and fundamental computational limits? 
In principle, ViaSHAP could be adapted to train LLMs. However, there are significant challenges. For instance, LLM training is exceptionally computationally intensive, and incorporating Shapley-based explanations during training would add more computational burden at training time, as ViaSHAP requires sampling from the input and propagating gradients. Based on the results discussed earlier (Appendix N), inference time should not be harmed by this procedure. Another possible solution is to use ViaSHAP on pre-trained LLMs, where all the learned parameters of the model are fixed, and one or two output layers are modified and optimized to approximate the explanation. That being said, we cannot provide an accurate answer without a proper study similar to what we did with tabular data and images. [1] Rudin et al., Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, 2019
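To illustrate the per-prediction optimization that post-hoc KernelSHAP must solve (and that ViaSHAP avoids at inference), here is a KernelSHAP-style weighted least squares sketch that enumerates all coalitions instead of sampling, which is feasible only for a handful of features; the linear model, baseline, and numbers are hypothetical:

```python
# Sketch: exact KernelSHAP-style weighted least squares for one prediction.
from itertools import combinations
from math import comb
import numpy as np

def kernelshap_exact(predict, x, baseline):
    """Solve the Shapley weighted least squares over all proper coalitions."""
    d = len(x)
    rows, targets, weights = [], [], []
    for size in range(1, d):  # empty and grand coalitions carry infinite weight
        for S in combinations(range(d), size):
            z = np.zeros(d)
            z[list(S)] = 1.0
            masked = np.where(z == 1.0, x, baseline)
            rows.append(z)
            targets.append(predict(masked) - predict(baseline))
            # Shapley kernel weight for a coalition of this size
            weights.append((d - 1) / (comb(d, size) * size * (d - size)))
    sw = np.sqrt(np.array(weights))
    Z = np.array(rows) * sw[:, None]
    y = np.array(targets) * sw
    # Enforce efficiency (attributions sum to f(x) - f(baseline)) via a
    # heavily weighted extra row instead of a hard constraint.
    big = 1e6
    A = np.vstack([Z, np.full((1, d), big)])
    b = np.append(y, big * (predict(x) - predict(baseline)))
    phi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return phi

# Linear toy model: exact Shapley values are coef * (x - baseline).
coef = np.array([2.0, -1.0, 0.5])
predict = lambda v: float(coef @ v)
x, baseline = np.array([1.0, 1.0, 1.0]), np.zeros(3)
print(kernelshap_exact(predict, x, baseline))
```

For a linear model, the recovered attributions match `coef * (x - baseline)`, the known closed form, which is a convenient sanity check; the point is that this whole solve happens once per explained instance, whereas an amortized or by-design approach pays only a forward pass.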
A Near-Optimal Single-Loop Stochastic Algorithm for Convex Finite-Sum Coupled Compositional Optimization
Accept (poster)
Summary: The authors study a convex compositional problem with a particular structure and propose a new algorithm called ALEXR. ## update after rebuttal I think the paper deserves to be accepted and I am confident that the authors will make the recommended changes to make the paper even better. Claims And Evidence: The claims are convincing. But my confidence with respect to the novelty and state of the art is low, as this topic is far from my expertise area. Methods And Evaluation Criteria: The evaluation setups are good. Theoretical Claims: I did not check the correctness of the proofs. Experimental Designs Or Analyses: The experiments look sound. Supplementary Material: I looked at the supplementary material rapidly. Relation To Broader Scientific Literature: The literature review seems good to me. Some papers I had in mind on saddle point problems are cited. Essential References Not Discussed: I am not an expert on this class of problems. (2) is a convex-concave saddle-point problem with non-bilinear coupling. The function $g_i$ in the coupling is accessed via a stochastic oracle, and I am not familiar with this setting. Can you tell me if the following papers are relevant? * Alacaoglu et al. "Forward‑reflected‑backward method with variance reduction", 2021 * Alacaoglu and Malitsky "Stochastic Variance Reduction for Variational Inequality Methods" 2022 * Zhu et al. "Accelerated Primal-Dual Algorithms for a Class of Convex-Concave Saddle-Point Problems with Non-Bilinear Coupling Term," 2020 Other Strengths And Weaknesses: Again, this topic is not in my expertise area, but this class of optimization problems seems to have interesting applications in machine learning. Other Comments Or Suggestions: typos: * regularizatoin * line 1362: minF(x) Questions For Authors: Line 60: "By the convex conjugate" do you mean that this follows from some duality framework? Can you give some references on this? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to read our paper and greatly appreciate your valuable feedback.

> **Q1:** Can you tell me if the following papers are relevant? (1) Alacaoglu et al. "Forward‑reflected‑backward method with variance reduction", 2021; (2) Alacaoglu and Malitsky "Stochastic Variance Reduction for Variational Inequality Methods" 2022; (3) Zhu et al. "Accelerated Primal-Dual Algorithms for a Class of Convex-Concave Saddle-Point Problems with Non-Bilinear Coupling Term," 2020.

**A:** Thanks for bringing these interesting works to our attention.
- Zhu et al. study the convex-concave min-max optimization problem with a non-bilinear coupling term, which is similar to the reformulation (2) in our work. However, there are key differences in our problem setting:
  - i) In our formulation (2), the problem is separable with respect to each coordinate of the dual variable $y$, whereas the problem in Zhu et al. is not;
  - ii) Zhu et al. focus on the deterministic setting, while our work focuses on the stochastic setting.
- Both Alacaoglu et al. and Alacaoglu and Malitsky study the monotone variational inequality problem with a finite-sum structure, which includes the convex-concave min-max optimization problem with a finite-sum structure as a special case. Although our problem in (2) also has a finite-sum structure, there are two differences:
  - i) Our problem (2) is separable with respect to each coordinate of the dual variable $y$, whereas the problems considered in Alacaoglu et al. and Alacaoglu and Malitsky are not;
  - ii) Alacaoglu et al. and Alacaoglu and Malitsky assume that each summand in the finite sum of losses is directly available. In contrast, our work only requires that each summand be accessible through stochastic oracles.

We will add these discussions in the revision.

> **Q2:** Line 60: "By the convex conjugate" do you mean that this follows from some duality framework? Can you give some references on this?
**A:** Sorry for the confusion. We mean that the value of a convex and lower semi-continuous function $f_i$ at any $u$ in its domain can be represented as $f_i(u) = \max_{y^{(i)}\in\mathcal{Y}_i}\{u^\top y^{(i)} - f_i^*(y^{(i)})\}$, where $f_i^*$ is the convex conjugate of $f_i$. The formal definition can be found in Section 12 of Rockafellar (1997).

Rockafellar, R.T., 1997. Convex Analysis (Vol. 28). Princeton University Press.

> **Q3:** typos: regularizatoin; line 1362: minF(x)

**A:** Thanks for pointing these out. We will fix them in the next version of our draft.
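To make the conjugate representation concrete, here is a small numeric sketch (an editorial illustration, not part of the original rebuttal): for $f(u) = |u|$, the conjugate $f^*(y)$ is $0$ on $[-1, 1]$ and $+\infty$ elsewhere, so maximizing $u^\top y - f^*(y)$ over a grid of the dual domain recovers $f$ exactly.

```python
# Numeric sketch (illustrative, not from the paper): for f(u) = |u| the
# convex conjugate is f*(y) = 0 on [-1, 1] and +infinity elsewhere, so
# f(u) = max_{y in [-1, 1]} { u*y - f*(y) } = max_{y in [-1, 1]} u*y.
GRID = [i / 1000 for i in range(-1000, 1001)]  # discretized dual domain Y

def f_via_conjugate(u):
    # f*(y) vanishes on the grid, so the maximand reduces to u*y
    return max(u * y for y in GRID)

for u in (-2.5, -0.3, 0.0, 1.7):
    assert abs(f_via_conjugate(u) - abs(u)) < 1e-9
```

Since the grid contains the endpoints $y = \pm 1$, the maximum is attained exactly and the identity holds to machine precision.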
Summary: This paper introduces ALEXR, a single-loop, primal-dual block-coordinate algorithm aimed at convex finite-sum coupled compositional problems (both strongly convex and merely convex, even if nonsmooth). By carefully interleaving primal updates with an extrapolated dual variable, the authors achieve improved or near-optimal oracle complexities, bolstered by new lower complexity bounds that highlight the theoretical tightness of their approach. Empirical demonstrations on distributionally robust optimization and partial AUC show consistent performance gains, underscoring the method’s practicality. Claims And Evidence: The paper’s primary strength lies in its near-optimal theoretical guarantees, achieved through a novel single-loop, primal-dual algorithmic design that replaces more conventional nested structures. This not only simplifies analysis but also clarifies step-by-step convergence arguments, particularly under nonsmooth convex conditions. The authors further bolster the method’s theoretical significance by establishing new lower complexity bounds, underscoring the algorithm’s tightness and rigor. Additionally, they demonstrate clear empirical benefits on group distributionally robust optimization and partial AUC tasks, showing that their approach is both grounded in strong theory and relevant to practical ML scenarios. Methods And Evaluation Criteria: Yes Theoretical Claims: No Experimental Designs Or Analyses: The reviewer briefly looked over the numerical experiments to verify their design but did not check the details. Supplementary Material: No Relation To Broader Scientific Literature: The paper builds on and extends prior work on coupled compositional optimization – notably approaches like those by Wang et al. (2022) and Jiang et al. (2022) – by providing a single-loop alternative to the previously more common nested-loop frameworks. 
The main motivation is the dual formulation (2) of the primal problem (1), inspired by the duality result of Levy et al. (2020). This duality applies to both GDRO and partial AUC. The applications in GDRO and partial AUC are in line with the growing interest in robust and fair optimization problems. Essential References Not Discussed: No Other Strengths And Weaknesses: No Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to read our paper and greatly appreciate your valuable feedback.
Summary: The paper presents ALEXR, a novel stochastic algorithm for convex Finite-sum Coupled Compositional Optimization (cFCCO). It reformulates cFCCO as a convex-concave min-max problem and introduces a single-loop primal-dual block-coordinate stochastic approach. ALEXR applies mirror ascent for the dual variable and proximal gradient descent for the primal variable. The authors establish convergence rates under various smoothness conditions and derive lower complexity bounds, proving ALEXR’s near-optimality. Experiments on GDRO and pAUC maximization highlight its strong performance. ## update after rebuttal The authors' response addressed my concern. I’m keeping my score—not because the paper lacks merit, but because there's no option for 3.5. I recommend acceptance overall. Claims And Evidence: The claims made in the paper are generally well-supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem at hand. The authors focus on cFCCO problems, which have applications in machine learning tasks such as GDRO and pAUC maximization. The use of benchmark datasets like Adult and CelebA for GDRO, and Covtype, Cardiomegaly, Lung-mass, and Higgs for pAUC maximization, aligns with the problem domains. The evaluation metrics, such as test accuracy and pAUC, are relevant and provide meaningful insights into the performance of the algorithms. Theoretical Claims: The theoretical claims in the paper are supported by detailed proofs provided in the appendices. I didn't carefully check the proofs in the appendix, but they appear to be right based on my previous understanding of the existing literature. Experimental Designs Or Analyses: The experimental designs and analyses are sound and valid. The authors compare ALEXR against several baseline algorithms on GDRO and pAUC maximization tasks, using appropriate datasets and evaluation metrics.
The results show that ALEXR outperforms the baselines in most cases, demonstrating its effectiveness. However, the paper could benefit from more extensive ablation studies to understand the impact of different hyperparameters and algorithmic components on the performance of ALEXR. Supplementary Material: I didn't carefully review all proofs in the supplementary material. The proofs appear to be correct and well-structured, with clear explanations of the key steps and assumptions. But I did check additional experimental results and details provided in Section G. Relation To Broader Scientific Literature: The key contributions of the paper are well-aligned with the broader scientific literature on stochastic optimization and machine learning. The authors build on prior work in convex stochastic compositional optimization, coupled compositional optimization, and primal-dual methods for empirical risk minimization. They extend these ideas to the cFCCO setting and propose a novel algorithm that improves upon existing methods in terms of oracle complexity and applicability to non-smooth problems. Essential References Not Discussed: Not that I am aware of Other Strengths And Weaknesses: ### Strengths 1. **Originality:** The proposed ALEXR algorithm is a novel single-loop stochastic method for convex finite-sum coupled compositional optimization (cFCCO). It improves upon prior methods by removing the nested inner loop that make previous approaches computationally complex. The algorithm incorporates a primal-dual block-coordinate approach, motivated by the literature. 2. **Significance:** The study has theoretical and practical relevance, proving lower bounds that confirm the near-optimality of ALEXR. It expands the application of cFCCO to non-smooth problems, including GDRO and partial AUC maximization. The algorithm demonstrates superior performance in real-world ML applications. 3. 
**Clarity:** The paper is well-written. The comparisons to previous algorithms are well-structured in tables and thoroughly discussed. ### Weaknesses 1. While ALEXR performs well in theoretical benchmarks, the paper does not discuss practical constraints such as hyperparameter sensitivity and tuning difficulty. 2. There is no in-depth ablation study on how different components of ALEXR (e.g., choice of $\psi$ functions, step size schedules) affect performance. Other Comments Or Suggestions: 1. Some hyperparameter settings are not explicitly stated (e.g., specific learning rates, batch sizes for all algorithms). A table summarizing the experimental setup would improve reproducibility. 2. It would be helpful to present and discuss runtime performance across different algorithms, not just convergence speed in terms of oracle complexity. Questions For Authors: 1. The paper provides strong theoretical results, but there is limited discussion on the sensitivity of ALEXR to hyperparameter choices (e.g., extrapolation parameter $\theta$, step sizes, and choice of $\psi_i$). How sensitive is ALEXR’s performance to those choices? 2. How does the wall-clock runtime of ALEXR compare to baselines like SOX and MSVR? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to read our paper and greatly appreciate your valuable feedback.

> **Q1:** The paper provides strong theoretical results, but there is limited discussion on the sensitivity of ALEXR to hyperparameter choices. How sensitive is ALEXR’s performance to those choices?

**A:** ALEXR has three hyperparameters: $\eta$, $\tau$, and $\theta$. Although $\eta$ and $\tau$ need to be tuned, the sensitivity analysis below shows that ALEXR is not highly sensitive to small variations of their values. Fixing $\theta = 0.1$ seems to work well across all of our experiments and datasets. Thus, the tuning effort of ALEXR is similar to that of the baselines SOX, MSVR, and OOA.

### i) Sensitivity to step sizes

Note that in ALEXR, $1/\eta$ is the step size for the primal variable $x$, while $1/\tau$ is the step size for the dual variable $y$. We sweep over different values of the step sizes in the Group DRO experiments. The top-tier choices of step sizes are highlighted.
(Adult Dataset, $\alpha=0.1$)

| Primal/dual step sizes | $\frac{1}{\eta}=$ 0.02 | 0.05 | 0.1 | 0.2 | 0.5 | 1.0 |
|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| **$\frac{1}{\tau}=$ 0.02** | 0.776 | 0.804 | 0.789 | 0.782 | 0.770 | 0.773 |
| **0.05** | 0.739 | 0.716 | 0.721 | 0.703 | 0.719 | 0.737 |
| **0.1** | 0.694 | 0.691 | 0.682 | 0.688 | 0.704 | 0.704 |
| **0.2** | **0.681** | **0.675** | **0.678** | 0.687 | 0.684 | 0.707 |
| **0.5** | **0.677** | **0.678** | **0.680** | **0.677** | 0.689 | 0.762 |
| **1.0** | **0.679** | **0.679** | **0.676** | **0.680** | 0.695 | 1.074 |

(CelebA Dataset, $\alpha=0.1$, '-' refers to divergence)

| Primal/dual step sizes | $\frac{1}{\eta}=$ 0.002 | 0.005 | 0.01 | 0.02 | 0.05 | 0.1 |
|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| **$\frac{1}{\tau}=$ 0.002** | 0.501 | 0.501 | 0.501 | 0.500 | 0.506 | 0.670 |
| **0.005** | 0.497 | 0.487 | 0.490 | 0.486 | 0.493 | 0.721 |
| **0.01** | 0.486 | 0.482 | 0.477 | 0.477 | 0.481 | 0.785 |
| **0.02** | 0.479 | **0.470** | **0.468** | **0.467** | **0.470** | - |
| **0.05** | 0.474 | **0.467** | **0.461** | **0.459** | 0.484 | - |
| **0.1** | 0.475 | **0.467** | **0.464** | **0.464** | 0.495 | - |

As shown in the results above, while the step sizes require tuning for each dataset, they are **not** highly sensitive to small variations.

### ii) Choice of $\psi_i$

Since the GDRO problem with the CVaR divergence has a non-smooth $f_i$, we report the experimental results of ALEXR with the quadratic choice $\psi_i(u)=\frac{1}{2}\|u\|_2^2$, as discussed in Line 199 of our paper. Next, we empirically compare the results under the quadratic choice with those under another choice $\psi_i=f_i^*$. We separately tune the step sizes for the choice $\psi_i=f_i^*$.
| Choice of $\psi_i$ | Adult ($\alpha=0.1$) | Adult ($\alpha=0.15$) | CelebA ($\alpha=0.1$) | CelebA ($\alpha=0.15$) |
|:-----:|:-----:|:-----:|:-----:|:-----:|
| $\psi_i=f_i^*$ | 0.7165 | 0.6866 | 0.5139 | 0.4746 |
| **Quadratic** (chosen in our paper) | 0.6763 | 0.6646 | 0.4593 | 0.4521 |

The results show that $\psi_i(u)=\frac{1}{2}\|u\|_2^2$ is indeed a better choice than $\psi_i=f_i^*$ for the non-smooth GDRO problem.

> **Q2:** It would be helpful to present and discuss runtime performance across different algorithms, not just convergence speed in terms of oracle complexity. How does the wall-clock runtime of ALEXR compare to baselines like SOX and MSVR?

**A:** Below, we report the total wall-clock runtime for the GDRO experiment on the larger dataset CelebA.

| Algorithms | OOA | SOX | MSVR | ALEXR (Ours) |
|:-----:|:-----:|:-----:|:-----:|:-----:|
| Total Runtime (s) | 45.31 | 44.56 | 47.88 | 45.89 |

The results above show that the runtime of ALEXR is comparable to the baselines while achieving a faster convergence rate.

> **Q3:** Some hyperparameter settings are not explicitly stated (e.g., specific learning rates, batch sizes for all algorithms). A table summarizing the experimental setup would improve reproducibility.

**A:** Thank you for the suggestion!
The tuned algorithm-specific hyperparameter settings for the GDRO experiments are:

| | SGD | SGD-UW | OOA | BSGD | SOX | MSVR | ALEXR |
|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
| **Adult** | lr=0.1 | lr=0.1 | lr=1.0, lr\_dual=0.001 | lr=0.01 | lr=0.01, $\gamma$=0.1 | lr=0.01, $\gamma$=0.9 | $\frac{1}{\eta}$=0.1, $\frac{1}{\tau}$=1.0, $\theta$=0.1 |
| **CelebA** | lr=0.05 | lr=0.05 | lr=0.01, lr\_dual=0.05 | lr=0.01 | lr=0.01, $\gamma$=0.5 | lr=0.01, $\gamma$=0.5 | $\frac{1}{\eta}$=0.02, $\frac{1}{\tau}$=0.05, $\theta$=0.1 |

The shared settings for all algorithms in the GDRO experiments are:

| | Batch sizes | \#Iterations | Quantile $\alpha$ |
|:-----:|:-----:|:-----:|:-----:|
| **Adult** | 64 (B=8 from each S=8) | 2500 | 10\%, 15\% |
| **CelebA** | 64 (B=8 from each S=8) | 15000 | 10\%, 15\% |

Due to character limitations, we will also include settings for the pAUC experiments in the next version of our draft.
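As a side note on the CVaR-divergence GDRO objective referenced in this rebuttal, here is a minimal numeric sketch (an editorial illustration, not code from the paper), assuming the standard empirical definition of CVaR at level $\alpha$ as the average of the worst $\alpha$-fraction of losses:

```python
# Illustrative sketch (not code from the paper): the empirical CVaR at level
# alpha is the average of the worst alpha-fraction of losses, which is the
# objective targeted by the CVaR-divergence GDRO experiments discussed above.
# (For simplicity, alpha * len(losses) is truncated to an integer here.)
def cvar(losses, alpha):
    k = max(1, int(alpha * len(losses)))   # number of worst losses kept
    worst = sorted(losses, reverse=True)[:k]
    return sum(worst) / k

losses = [0.2, 1.5, 0.7, 3.0, 0.1, 0.9, 2.2, 0.4, 0.6, 1.1]
print(cvar(losses, alpha=0.1))  # -> 3.0 (only the single worst loss)
print(cvar(losses, alpha=0.5))  # average of the 5 worst losses
```

At small $\alpha$ (10% or 15% as in the experiments above) the objective concentrates on the hardest examples, which is what makes the outer function $f_i$ non-smooth.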
Summary: This paper studies a regularized convex finite-sum coupled compositional optimization (cFCCO) problem: $\min F(x), \text{ where } F(x) := \frac{1}{n}\sum_{i = 1}^n f_i(g_i(x)) + r(x)$. Here, all functions ($f_i, g_i, r, F$) are convex, and each inner function $g_i(x) = E_{\xi_i \sim P_i} [g_i(x; \xi_i)]$ is defined with respect to a distinct distribution and is accessed via stochastic oracle queries. The authors also assume that $g_i, f_i$ are Lipschitz continuous (and possibly smooth), that when $g_i$ is nonlinear $f_i$ is monotonically non-decreasing, and that the variances of the zeroth-order and first-order stochastic oracles are bounded. This kind of problem finds applications in group distributionally robust optimization and learning with imbalanced data, which their experiments later study. The authors reformulate the problem into a convex-concave min-max problem using the conjugacy of $f_i$ and propose a single-loop primal-dual algorithm, namely ALEXR, which can be viewed as a randomized extrapolated coordinate algorithm on the dual side and a proximal stochastic gradient method on the primal side. Convergence analysis and nonasymptotic guarantees are provided for the proposed algorithm, under both strongly convex and convex settings, with improved dependence on the number of components $n$ and the strong convexity parameter $\mu$. The authors also include certain worst-case examples to show the tightness of their complexity results. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I do not have time to check the proofs in the appendix, but the theoretical claims in the main body look reasonable to me. Experimental Designs Or Analyses: This paper provides quite substantial experiments. Supplementary Material: n/a Relation To Broader Scientific Literature: This work should relate to prior work on (stochastic) compositional optimization, primal-dual algorithms, and coordinate algorithms.
Essential References Not Discussed: - It would be helpful to cite the seminal works [Chambolle & Pock (2011); Chambolle et al. (2018)] when the authors introduce the min-max reformulation for the original minimization problem. - It is helpful to discuss the prior work on cyclic coordinate methods, such as Song & Diakonikolas, 2021. - For the min-max reformulation and the perspective of coordinates methods on the dual side, it is helpful to cite the prior work on shuffled SGD. Chambolle, A. and Pock, T. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2011. Chambolle, A., Ehrhardt, M. J., Richtárik, P., and Schonlieb, C.-B. Stochastic primal-dual hybrid gradient algorithm with arbitrary sampling and imaging applications. SIAM Journal on Optimization, 28(4):2783–2808, 2018. Chaobing Song and Jelena Diakonikolas. Cyclic coordinate dual averaging with extrapolation. SIAM Journal on Optimization, 33(4):2935–2961, 2023. doi: 10.1137/22M1470104. Xufeng Cai, Cheuk Yin Lin, and Jelena Diakonikolas. Tighter convergence bounds for shuffled SGD via primal-dual perspective. In Proc. NeurIPS’24, 2024. Other Strengths And Weaknesses: **Strengths** - It is interesting to see the viewpoint and the analysis of block coordinate updates on the dual side, though it is natural from the min-max reformulation and the stochastic design on the primal side. - The convergence analysis is substantial under convex settings, including both smooth and nonsmooth cases. The authors also include lower bound results. **Weaknesses** - Assumption 4 looks strong to me: the authors assume the bounded variance for both zeroth- and first-order oracles. - It seems that the analysis mainly relies on the Lipschitzness of the functions, and the smoothness only helps improve the batch size requirements. 
Other Comments Or Suggestions: - What is the notation $u_t$ appearing in Section 3.1-3.2 and in the proofs in the appendix? It seems that the authors have refactored the writeup but were not careful to unify the notation. - In Eq. (5), it seems that there should be $L(x, \bar y_{t + 1}) - L(x_{t + 1}, y)$ instead of $L(x_{t + 1}, y) - L(x, \bar y_{t + 1})$ on the right-hand side, based on the discussion and Lemma 5 in the appendix. ## Update after rebuttal I appreciate the authors' responses, and they addressed my concerns. I find this paper interesting from the perspective of nested stochastic optimization or bilevel optimization, though the complexity results appear to be a bit hard to parse. Possibly the dependence on parameters other than $\epsilon$ can be improved. I also suggest: 1. Make the notation more clearly defined, such as $u_t$. 2. Have the batch sizes explicitly stated in the main body and the comparison table. Overall, I recommend acceptance. Questions For Authors: - Could the authors explain why they need the extrapolation (Line 6 of Algorithm 1) on the dual side? - I do not quite get how the authors derive $B, S = O(1)$ from the main body. Could the authors explain their choice of $B$ and $S$? - For the challenge the authors discussed in Eq. (5), it is usual that the analysis of the algorithm with full updates can be adapted, because here one only has randomized coordinate updates to deal with (in contrast to cyclic coordinate methods). Could the authors explain the additional technical challenges here? - For Theorem 2, what are the step size constraints for the algorithm for smooth $g_i$ and $f_i$? Do they still require $O(1/\epsilon)$ small step sizes, or do they only depend on the smoothness constants? - What are the assumptions (cf. Assumption 4 here) that the prior work on stochastic compositional optimization made? Do they also require the bounded variance for both zeroth- and first-order oracles?
- What is the key challenge of removing the monotonicity assumption of $f_i$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to read our paper and greatly appreciate your valuable feedback. > **Q1:** Assumption 4 looks strong to me: the authors assume the bounded variance for both zeroth- and first-order oracles. What are the assumption (cf Assumption 4 here) the prior work on stochastic compositional optimization made? Do they also require the bounded variance for both zeroth- and first-order oracles? **A:** All prior works on stochastic compositional optimization make the same or stronger assumptions. For example, the seminal works [1, 2] assume both of the following hold: (1) The variance of the zeroth-order oracle is bounded, which is the same as ours. (2) The first-order oracle is either almost surely bounded or has a bounded 2nd moment, which are stronger than the bounded variance assumption in our work. Recent works such as SOX and MSVR all assume the same conditions as ours. [1] Wang et al., 2017. Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions. Math. Program. [2] Wang et al., 2017. Accelerating stochastic composition optimization. JMLR. > **Q2:** It seems that the analysis mainly relies on the Lipschitzness of the functions, and the smoothness only helps improve the batch size requirements. **A:** That is partially true. First, the smoothness of $f_i$ also allows us to derive the optimal complexity of $O(1/\epsilon)$ for strongly convex problems, which is **better** than $O(1/\epsilon^2)$ for non-smooth $f_i$. While it is true that the smoothness of the inner functions $g_i$ only affects the batch size scaling in iteration complexity, this is expected, as standard SGD exhibits a similar distinction between smooth and non-smooth problems. > **Q3:** What is the notation $u_t$ appearing in Section 3.1-3.2 and in the proofs in the appendix? Seems that the authors have refactored the writeup but are not careful to unify the notation. 
**A:** Sorry for the confusion, but $u_t$ is not a notation inconsistency. Eq. (3) in our paper shows that if we set $\psi_i=f_i^*$, Lines 4–8 of our ALEXR algorithm can be rewritten in a form similar to the previous algorithms SOX and MSVR. To be specific, the updated $y_{t+1}^{(i)}$ can be expressed as the gradient $\nabla f_i$ evaluated at a moving-average estimator $u_{t+1}^{(i)} = \frac{\tau}{1+\tau}u_t^{(i)} + \frac{1}{1+\tau}\tilde{g}_t^{(i)}$. The form in (3) based on $u_t$ facilitates the discussion of the relationship to SOX and MSVR in Section 3.2.

> **Q4:** In Eq. (5), it seems that there should be $L(x,\bar{y}\_{t+1}) - L(x\_{t+1},y)$ instead of $L(x\_{t+1},y) - L(x,\bar{y}\_{t+1})$ on the right-hand side, based on the discussion and Lemma 5 in the appendix.

**A:** Thank you for pointing out this typo! We will fix it in the next version of our draft.

> **Q5:** Could the authors explain why they need the extrapolation (Line 6 of Algorithm 1) on the dual side?

**A:** The extrapolation allows us to enjoy the batch size scaling in the iteration complexity by leveraging the smoothness of $g_i$. Without extrapolation (i.e., $\theta=0$), there is an additional term $\Gamma_{t+1}= \frac{1}{n} \sum_{i=1}^n (g_i(x_{t+1}) - g_i(x_t))^\top (y_*^{(i)}-y_{t+1}^{(i)})$ on the R.H.S. of (18) in Lemma 6. Directly bounding this term leads to a worse rate when $g_i$ is smooth. With extrapolation, we can form a telescoping sum of $\Gamma_t$ and cancel it out. A similar idea is used in Lemma 10 for the merely convex case. We have the results in the full version of our paper and will include the discussions in the revision.

> **Q6:** I do not quite get how the authors derive $B,S=O(1)$ from the main body. Could the authors explain their choice of $B$ and $S$?

**A:** $B,S=O(1)$ in Table 1 means that our algorithm only requires a constant mini-batch size to converge (cf.
the impractically large batch size of $O(\epsilon^{-1})$ in the previous work BSGD) and enjoys a parallel speedup using mini-batches.

> **Q7:** For the challenge the authors discussed in Eq. (5), it is usual that the analysis of the algorithm with full updates can be adapted, because here one only has randomized coordinate updates to deal with (in contrast to cyclic coordinate methods). Could the authors explain the additional technical challenges here?

**A:** There are two additional technical challenges here:
- First, our algorithm includes the randomized coordinate stochastic updates for the dual variable, which cause the **dependence issue** discussed above Theorem 2, making it difficult to directly apply standard convergence analyses used in full-update methods.
- The second challenge comes from the **stochastic** mirror ascent update for the sampled $y_i$, which makes it more involved to derive a batch-size-scaled convergence rate when the inner functions are smooth.

> **Q8:** Suggestions on citing prior works.

**A:** Thank you! We will add these references in the next version of our draft.
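To make the moving-average reading of Eq. (3) above concrete, here is a minimal numeric sketch (an editorial illustration; the constants, noise model, and target value are made up, not from the paper): the recursion $u_{t+1} = \frac{\tau}{1+\tau}u_t + \frac{1}{1+\tau}\tilde{g}_t$ is an exponential moving average that tracks the inner value estimated by the stochastic oracle.

```python
import random

# Minimal numeric sketch of the moving-average estimator discussed above:
# u_{t+1} = tau/(1+tau) * u_t + 1/(1+tau) * g_tilde_t. The constants and
# noise model below are illustrative, not from the paper.
random.seed(0)
tau = 4.0          # dual step-size parameter (hypothetical value)
g_true = 1.0       # fixed inner value g_i(x) that the oracle estimates
u = 0.0
for t in range(2000):
    g_tilde = g_true + random.gauss(0.0, 0.5)  # stochastic zeroth-order oracle
    u = (tau / (1.0 + tau)) * u + (1.0 / (1.0 + tau)) * g_tilde

# u is an exponential moving average that tracks g_i(x) up to oracle noise
assert abs(u - g_true) < 0.5
```

Larger $\tau$ averages over more past oracle queries (lower variance, slower tracking), which matches the role of $\tau$ as the dual step-size parameter in the rebuttal above.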
Summary: This work studies convex FCCO problems. By reformulating them into a convex-concave min-max problem, it proposes the ALEXR algorithm, which incorporates coordinate descent and SAPD. Convergence guarantees are provided, and a lower bound analysis shows that the complexity of the proposed algorithm is near-optimal. Claims And Evidence: The claims come with convincing evidence in general. Here are some questions or comments: 1. For the lower bound part, there seems to be no clear definition of the oracle. Regarding the layer-wise structure, you need to access $\nabla f_i$ and $\nabla g_i$ separately, and you further need to compute the proximal operator. I expect you can explicitly state the definition of the oracle, with the corresponding input and query output, similar to Zhang & Lan (2020) as you cited. 2. The lower bound result $\Omega(\max(S/\mu,n)\epsilon^{-1})$ looks weird to me. I think a lower bound should be problem-intrinsic and depend on the function behavior, so the dependence on $n$ and $\mu$ (intrinsic problem parameters) looks fine to me; but $S$ should be a user-specified parameter: if $\mu$ is small enough, then you can choose $S$ to obtain a varying lower bound, which looks strange to me. 3. The algorithm for the nonsmooth case works only for simple $f$ whose proximal mapping can be computed. Even though the authors provided some examples for illustration, this restriction may still hinder the applicability of the algorithm to general nonsmooth functions, compared to existing works. 4. What are the potential obstacles if you extend your cFCCO problem from the finite-sum to the stochastic setting ($\frac{1}{n}\sum_{i=1}^nf_i$ to $\mathbb{E}_if_i$), or did you intentionally choose the finite-sum problem in this study? With that, I think the work provides some interesting results for cFCCO, while the novel contributions should be further clarified to enhance their significance. I am open to further discussion. Methods And Evaluation Criteria: Make sense in general.
Theoretical Claims: See above Experimental Designs Or Analyses: See above Supplementary Material: See above Relation To Broader Scientific Literature: This work will advance the theoretical understanding of the algorithms for coupled compositional optimization problem. Essential References Not Discussed: No Other Strengths And Weaknesses: / Other Comments Or Suggestions: / Questions For Authors: / Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to read our paper and greatly appreciate your valuable feedback. > **Q1:** For the lower bound part, there seems to be no clear definition of the oracle, regarding the layer-wise structure, you need to access $\nabla f_i$ and $\nabla g_i$ separately, also you further need to compute the proximal operator, I expect you can explicitly state the definition of the oracle, with the corresponding input and query output, similar to Zhang & Lan (2020) as you cited. **A:** The oracle complexity in lower bound corresponds to the total number of calls of $g_i(x;\zeta^{(i)})$ and $\nabla g_i(x;\tilde{\zeta}^{(i)})$ required by any algorithm within the abstract scheme to achieve an $\epsilon$-accurate solution. In addition, we assume the proximal mapping of $f_i^*$ can be computed when $f_i$ is non-smooth similar to Zhang & Lan (2020). Nevertheless, the number of proximal mappings of $f_i^*$ is bounded by the number of calls of $g_i(x;\zeta^{(i)})$. We will include this clarification in the next version of our draft. > **Q2:** The lower bound result $\Omega(\max(S/\mu,n)\epsilon^{-1})$ looks weird to me, I think lower bound should be problem-intrinsic and depends on the function behavior, so the dependence on $n$ and $\mu$ (intrinsic problem parameters) looks fine to me; but $S$ should be a user-specified parameter, if $\mu$ is small enough, then you can choose $S$ to have a varing lower bound, which looks weird to me. **A:** Thanks for your insightful question. While the lower bound naturally depends on important problem parameters such as $n$ and $\mu$, it is also tied to an abstract update scheme that describes how the space of the iterate evolves across the iterations. Please note that $S$ is a parameter of our abstract scheme presented in pg.6, which covers the typical setting $S=1$ for lower bound analysis of randomized coordinate algorithms (e.g., Ch. 5.1.4 in [1]). 
We intended to be general so that the abstract scheme covers our ALEXR and some baselines listed in Table 1. Our lower bound $\Omega(\max(S/\mu,n)\epsilon^{-1})$ characterizes the different behaviors of the abstract scheme in two different configurations of the problem-dependent parameters $n$, $\mu$: - Case I ($\frac{n}{S}>\frac{1}{\mu}$): In this case, the problem's hardness primarily arises from the high dimension of the dual variable $y$. Here, the block-coordinate update of $y$ makes the term $\frac{n}{S\epsilon}$ dominate the iteration complexity, resulting in an oracle complexity bound of $\Omega(n\epsilon^{-1})$. Increasing $S$ means that more coordinates of $y$ are updated per iteration, thereby reducing the number of required iterations to find an $\epsilon$-accurate solution of the whole problem. - Case II ($\frac{n}{S}\leq \frac{1}{\mu}$): In this case, the main hardness of the problem is from the ill-conditioning of the stochastic proximal update of the primal variable $x$. Thus, increasing $S$ (i.e., updating more coordinates of the dual variable $y$) does not improve the leading term $\frac{1}{\mu\epsilon}$ of the iteration complexity, leading to a higher oracle complexity. > **Q3:** The algorithm for the nonsmooth case works only for simple $f$ for which the proximal mapping can be computed. Even though the authors provided some examples for illustration, the restriction may still hinder the applicability of the algorithm to general nonsmooth functions, compared to existing works. **A:** We agree with the reviewer. However, existing works on cFCCO that only require computing the gradient of $f_i$ either focus on smooth problems (e.g., SOX and MSVR) or require a gigantic batch size to converge (e.g., BSGD). All of them have suboptimal complexities. Our work is the first to study the more challenging non-smooth cFCCO problem and achieve the optimal rates. 
In cases where the proximal mapping of $f_i^*$ is not easily computed, our algorithm can be combined with an approach that solves the proximal sub-problems inexactly (e.g., [2]). Then, the convergence guarantee can still be established. > **Q4:** What are the potential obstacles if you extend your cFCCO problem from the finite-sum to the stochastic problem ($\frac{1}{n}\sum_{i=1}^n f_i$ to $\mathbb{E}_i f_i$), or did you intentionally choose the finite-sum problem in the study? **A:** If there is an infinite number of $f_i$, our reformulation and algorithm cannot be applied directly, because the dual variable $y$ becomes infinite-dimensional. Hu et al. (2020) considered a more general setting that covers the stochastic problem, but their algorithm requires a large batch size to converge (in both finite-sum and stochastic settings). Given the broad applicability of the finite-sum structure in machine learning, we believe this is an important class to study. **Refs:** [1] Lan, 2020. First-order and Stochastic Optimization Methods for Machine Learning. [2] Wang et al., 2017. Inexact proximal stochastic gradient method for convex composite optimization. --- Rebuttal Comment 1.1: Comment: Thank you for the reply, I will keep the score.
Hadamard Representations: Augmenting Hyperbolic Tangents in RL
Reject
Summary: This paper addresses the issue of "dying neurons" in reinforcement learning, focusing particularly on continuously differentiable activation functions like hyperbolic tangent (tanh). The authors demonstrate that the dying neuron phenomenon is not exclusive to ReLU activations but also affects tanh, where saturation turns weights into "hidden biases" rather than pruning them as in ReLU. The paper proposes a "Hadamard representation" (HR) that defines a hidden layer as the Hadamard product of two separately parameterized activation vectors. Experiments on various Atari games using DQN, PPO, and PQN show that tanh with HR leads to significant performance improvements, reduced dead neurons, and increased effective rank compared to standard tanh and ReLU activations. Claims And Evidence: The core claims regarding reduced dead neurons and improved performance when using Hadamard representations with tanh are supported by the experimental evidence presented. The theoretical analysis of why Hadamard products help with tanh but not with ReLU or sigmoid is sound and verified by the experimental results shown in Figure 14. However, the claim that this is the best approach to address dying neurons lacks comparative evidence against other existing solutions like layer normalization, which is only briefly explored in one ablation experiment. Methods And Evaluation Criteria: The methods proposed are logical for addressing the stated problem. The evaluation on multiple reinforcement learning algorithms (DQN, PPO, PQN) across Atari environments is appropriate for benchmarking in RL. The metrics used (dying neuron rate, effective rank, and performance) directly measure the phenomena being studied. However, the evaluation would be stronger if it included more comparative baselines from the literature that specifically address neuron saturation and plasticity (such as DMC). 
Theoretical Claims: I have checked the theoretical derivations in Section 4, particularly the product saturation probability analysis and Theorem 4.1. The mathematical reasoning appears sound, where the authors show that the probability of dying neurons is reduced from p to p² when using Hadamard representation with tanh. The proof that weights connected to dead tanh neurons become biases (Theorem 4.1) is valid and provides a useful insight into why dying tanh neurons can be more problematic than dying ReLUs. Experimental Designs Or Analyses: The experimental design is generally sound. The authors test their approach across multiple RL algorithms and environments, use appropriate metrics, and include ablation studies. The kernel density estimation approach for detecting saturated neurons is reasonable. However, I found the learning rate sensitivity experiment for the larger 1024-dimensional network in Figure 13(d) to be incomplete, as it doesn't fully explore whether the performance gains could be achieved through simpler hyperparameter tuning alone. Supplementary Material: No Relation To Broader Scientific Literature: This work connects to literature on network capacity in RL, particularly the dying ReLU problem, but expands it to continuously differentiable activations. Essential References Not Discussed: The paper lacks sufficient discussion of layer normalization (Ba et al., 2016) as a potential solution for the dying neuron problem. While it's mentioned briefly in Figure 8(a), a more thorough comparison is needed given that normalization techniques are widely used to address activation saturation. The authors should also discuss more recent work on maintaining plasticity in RL networks, such as weight reinitialization techniques or specialized regularization approaches that target the same problem but may be simpler to implement. 
Other Strengths And Weaknesses: The paper provides a novel perspective on dying neurons in tanh activations and offers a conceptually simple solution with the Hadamard representation. Other Comments Or Suggestions: The paper would benefit from clearer exposition in some sections, particularly in explaining the practical implementation details of HR. Questions For Authors: 1. Have you conducted experiments combining Hadamard representations with layer normalization? Since layer normalization can also mitigate neuron saturation, it's important to understand whether your approach provides complementary benefits or if the simpler normalization approach alone achieves similar results. 2. The paper focuses primarily on Atari environments. Have you tested the approach in continuous control tasks or other RL domains where the state representation is different? This would help establish the generality of your findings beyond Atari. Code Of Conduct: Affirmed. Overall Recommendation: 2
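For intuition, the p → p² reduction verified above can be checked with a quick Monte Carlo sketch (illustrative code, not from the paper; it assumes the two tanh factors of a Hadamard unit saturate independently, each with probability p, and that the unit's output freezes only when both saturate):

```python
import random

random.seed(0)
p = 0.3            # per-unit saturation probability (hypothetical value)
trials = 200_000

single_dead = 0    # a plain tanh unit dies whenever it saturates
product_dead = 0   # a Hadamard unit freezes only when BOTH factors saturate

for _ in range(trials):
    a = random.random() < p
    b = random.random() < p
    single_dead += a
    product_dead += (a and b)

print(single_dead / trials)    # close to p = 0.3
print(product_dead / trials)   # close to p**2 = 0.09
```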
Rebuttal 1: Rebuttal: We appreciate Reviewer KgES’s detailed review and appreciation of our work’s experiments and theoretical claims. Your suggestions are valuable, and we hope to address them here: 1. **Layer Normalization (LN):** We now see that we could have emphasized the effect of LN with respect to dying neurons and effective rank more strongly. We have therefore run experiments examining the effect of LN on dead neurons and the effective rank (ER), similar to Fig. 7a,b. These experiments further show that a reduction in dead neurons does not always correlate with an increase in a layer's ER. As stated in [1], LN enforces a 'simplicity bias' throughout the layer, generally favoring lower-frequency, less complex solutions. This could further explain the performance discrepancy we observe between the Hadamard Representation (HR) and other regularizing techniques that force neuron resets or impose specific distributions. We will update Figures 7a,b in the revised manuscript to include dead neurons and ER during training with LN. Additionally, in the corresponding Experiments sections, we will further highlight the differences between LN and HR with respect to dead neurons, ER, and their ratio. We will also highlight this difference in the introduction.

| Activation | Effective Rank (ER) | Dead Neurons |
|------------|---------------------|--------------|
| Tanh (HR)  | **428.3**           | 0.297        |
| Layernorm  | 347.5               | **0.227**    |
| Tanh       | 363.4               | 0.393        |
| Sigmoid    | 376.8               | 0.449        |
| ReLU       | 245.6               | 0.618        |

- **Combining LN & HR:** You are intuitively correct to suggest combining them, and we overlooked mentioning this in the manuscript. We would therefore like to point out that the baseline of the 51-Atari Environment PQN [4] experiments already uses LN. We implemented HR on top of this baseline, meaning it combines LN with HR (Layer Output -> LN -> Nonlinear (Tanh) -> HR). Thus, they can very well be used together. 
As said before, this was not clarified by us in the manuscript, and we will add a detailed explanation of the PQN implementation in the final Experiments subsection. We will also update the Figure 1 caption in the revised manuscript to explicitly show that the PQN baselines are Layer-Normalized. - **Continuous Control Tasks:** We have now conducted experiments applying HR to state-based continuous control (Mujoco) with PPO. Preliminary results indicate that both HR and LN perform similarly to the baseline. We attribute this to: 1) A negligible occurrence of dead neurons in the Baseline, and 2) The low-dimensional state observation and resulting encoder structure. We hypothesize that encoders compressing high-dimensional input spaces (e.g., Atari, pixels) benefit more from HR due to its increased representation capacity, as measured by the ER. Recent related work [2] also shows a strong simplicity bias preference in state-based RL. However, we will add an Appendix section discussing these results and explaining the differences between encoders in low-dimensional state-based RL and high-dimensional pixel-based RL. Additionally, regarding environment diversity, we also have strong HR results in another pixel-based (50x50 pixels) toy RL environment that was used early in our HR research. For completeness, we will include this in another Appendix section. - **Literature:** We identified a recent related paper [3] investigating the causes of plasticity loss in Deep Value-Based RL. They found that combining regularizations yields good results, as standalone L2 norm or Batchnorm underperforms in Atari compared to LN alone. We will include this paper in our related work and discuss its relevance with our research in the revised manuscript. If there is any other research missing, please let us know. 
- **Implementation and Hyperparameters:** We thank the Reviewer for highlighting this and will provide a clearer overview of the precise HR implementation at the beginning of the *Experiments* section. We will also make sure to emphasize the possibility of combining HR with LN. Furthermore, we would like to point out that no hyperparameter tuning was used for any of the HR implementations. We only used a learning rate sweep in the ablation for the 1024 latent dimension baseline (Fig. 8a, sweep seen in Fig. 13d), to make the 1024 dimensional baseline stronger, and add to the notion that HR does not receive its benefits from a mere parameter increase. Finally, we are grateful for your input, and believe that the revisions based on your review will enhance the paper’s scope and clarity. [1] Teney et al, 2024: Neural Redshift: Random Networks are not Random Functions. CVPR 2024 [2] Lee et al, 2024: SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning. [3] Lyle et al, 2024: Disentangling the Causes of Plasticity Loss in Neural Networks. CoLLas 2024 [4] Gallici et al, 2025: Simplifying Deep Temporal Difference Learning.
Summary: The paper demonstrated through experiments that activation functions such as tanh and sigmoid suffer from dying neurons at a scale comparable to that of ReLUs in RL settings. A Hadamard product with a carry gate is used to mitigate the dying neuron issue. It was shown that the Hadamard representation method effectively reduces the dying neuron rate. Claims And Evidence: Yes Methods And Evaluation Criteria: Make sense to me. Theoretical Claims: Looks correct to me. Experimental Designs Or Analyses: Looks reasonable to me. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper discussed the dying neuron effect for different types of activation functions in RL settings, bringing more insight into why ReLU works better than other activation functions. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: The paper conducted a lot of experiments to support the discussion and provided evidence for why ReLU is preferred over other activation functions in RL settings. In addition, a Hadamard-product-based method was proposed to mitigate the dying neuron effect for the tanh activation function. The paper is well written. Weaknesses: This is not a weakness but a question. How different is the Hadamard representation method in the paper from the idea of using a product of hidden layers and a 'gate' proposed in (Srivastava et al., 2015) for supervised learning? Other Comments Or Suggestions: N/A Questions For Authors: See the strengths and weaknesses part. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer BWA7 for the review, and for acknowledging our experimental efforts and discussion around the dying neuron effect. We are happy to clarify your question about Srivastava et al. [1]. While both approaches use products of hidden layers, they differ in purpose and design. Highway Networks use a learnable gate, T(x), in H(x) = F(x) · T(x) + x · (1 - T(x)), with x as the input and T(x) often sigmoid-activated, aimed at improving gradient flow in deep supervised learning. Our Hadamard Representation (HR) defines z(x) = z^enc(x) · z^(x), with z^enc(x) and z^(x) as independent Tanh-activated layers from the same input, with no carry gate or residual. HR uses Tanh’s properties to reduce dead neurons in RL and to boost effective rank. Highway Networks focus on preserving gradients throughout deep networks by using learnable, partial residual connections. In other words, the gate learns in what proportion the input travels directly to the next layer instead of passing through the layer's weights and activation. Additionally, following the Reviewer's question, we have run experiments testing a Highway Network variant, following the Highway Networks paper and using a sigmoid gate in PQN. Interestingly, it appears to perform slightly better than the ReLU baseline. However, it does not show the same performance benefits as the Hadamard Representation, although further research into these architectures would be interesting future work.

| Activation Function | Median Human-Normalized Score (51 Games) |
|---------------------|------------------------------------------|
| ReLU                | 1.057                                    |
| ReLU (Highway)      | 1.154                                    |
| Tanh                | 0.597                                    |
| Tanh (HR)           | **1.340**                                |

To further implement changes according to the Reviewer's question, a clearer overview of the distinction between Highway Networks and the Hadamard Representation will be added to our revised manuscript in the corresponding section (Line 197). 
We will also add a new Appendix section, showing the Highway Networks' result on PQN and comparing it with the Hadamard Representation. [1] Srivastava et al, 2015. Highway Networks.
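To make the architectural distinction concrete, here is a minimal NumPy sketch of the two constructions discussed above (illustrative only; the widths, weight names, and initialization are placeholders, not the papers' actual implementations):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # hidden width (arbitrary for the demo)
x = rng.standard_normal(d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Highway layer (Srivastava et al., 2015): a learned sigmoid gate T(x)
# blends a transform F(x) with a residual copy of the input x itself.
Wf = rng.standard_normal((d, d))
Wt = rng.standard_normal((d, d))
F = np.tanh(Wf @ x)
T = sigmoid(Wt @ x)
highway = F * T + x * (1.0 - T)

# Hadamard Representation: elementwise product of two independently
# parameterized tanh layers computed from the same input; no gate,
# no residual connection.
W1 = rng.standard_normal((d, d))
W2 = rng.standard_normal((d, d))
hr = np.tanh(W1 @ x) * np.tanh(W2 @ x)

# Unlike the highway output, the HR output is always bounded in [-1, 1].
assert np.all(np.abs(hr) <= 1.0)
```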
Summary: The paper is about developing new strategies to mitigate dead neurons that are prominent in typical reinforcement learning settings. The authors propose Hadamard representations, which use two hidden layers and an activation function. Experiments were done on Atari games using DQN, PPO, and PQN (a parallel Q-learning algorithm). The proposed method showed fewer dead neurons when compared with methods that use other types of activation functions such as ReLU, Continual ReLU, Tanh, and Sigmoid. While not directly contributing to better performance, the use of Hadamard representations also leads to a better effective rank of the representations. Claims And Evidence: Yes, the authors did sufficient work to support their claims. Beyond the analysis of the performance of the agents, the authors also presented studies relating to dead neurons. Methods And Evaluation Criteria: Yes, the authors chose the complex setting of Atari games to evaluate their hypothesis. Theoretical Claims: Yes, I have verified that they are correct to the best of my knowledge. Experimental Designs Or Analyses: Yes, the experimental designs are sound. Supplementary Material: Only briefly, since I was familiar with the topic discussed in the paper. Most of the concepts in the paper are covered sufficiently, and there isn't a lot of reference from the main paper to the supplementary material that required much attention. Relation To Broader Scientific Literature: It relates to the current work on improving continual learnability of deep RL. Many current studies have looked at dead neurons and introduce various types of activation functions to ensure better gradient propagation to the shallower layers. Here, the authors use a combination of layer and tanh activations to show how dead neurons can be reduced. Therefore, this piece of work aligns well with the body of work exploring representations and activation functions in reducing dead neurons. 
Essential References Not Discussed: None. Other Strengths And Weaknesses: # Strength 1. Paper is well written and clear. Made it easy to read and understand the paper. 2. Relevant literature was discussed. # Weakness 1. Only evaluated on the Atari benchmark and discrete actions Other Comments Or Suggestions: Line 143: "Proof. Let us consider a set of neurons αj i and forward connected weights wjαi in layer z_j." Should it be layer j instead of layer z_j? Questions For Authors: I only have one question, which is to see some analysis done for continuous action benchmarks such as MuJoCo. It would be interesting to see how state observations and the use of the tanh activation function in the actor network may or may not lead to a different set of results. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We’re grateful for Reviewer X4xE’s positive comments on our paper! - You’re correct about the typo on Line 143. It should be "layer j," or it could then also be called "the hidden layer z^j", and we will fix it in the revision. We thank the Reviewer for pointing it out! - We have now run preliminary experiments applying an HR to state-based continuous control (Mujoco) with PPO. Preliminary results show that both the HR and LN have similar performance to the baseline. We credit this to two reasons: 1 - A near-zero occurrence of dead neurons in the baseline, and 2 - the different encoder structure: We believe that an encoder that compresses high-dimensional input spaces (Atari, pixels, etc.) benefits more from the Hadamard Representation due to the added representation capacity - measured through Effective Rank (ER). We also see that recent related work [1] shows a preference in state-based RL for a simplicity bias [2]. We will add a section to our Appendix showing the results and explaining differences between encoders in low-dimensional state-based RL and high-dimensional pixel-based RL. - Additionally, for completeness and to add another environment different from Atari, we also have strong results using HR in a 50 x 50 greyscale pixel toy RL environment, which we will add to the Appendix. These results were acquired in the early stages of our research as a predecessor to the Atari domain. [1] Lee et al, 2024: SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning. [2] Teney et al, 2024: Neural Redshift: Random Networks are not Random Functions. CVPR 2024 --- Rebuttal Comment 1.1: Comment: Thank you to the authors for running the additional experiments on Mujoco. It was mainly to ensure that similar phenomena are also observed in the continuous action setting and that the proposed method does not hurt. Nonetheless, these additional studies will allow this work to be more comprehensive, and the studies can be added to the appendix. 
Having read the reviews from the other reviewers, I will keep my score as it is.
Scaling Laws for Upcycling Mixture-of-Experts Language Models
Accept (poster)
Summary: This paper studies the computationally efficient training of large language models (LLMs) through upcycling where smaller pretrained dense models are utilized as initial checkpoints to train larger Mixture-of-Experts (MoE) models. Given that training large-scale language models from scratch demands considerable computational resources, the authors explore empirical scaling laws related to upcycling dense checkpoints into MoE models which remains relatively unexplored. Through experiments involving models of varying sizes (up to 7B parameters) and datasets containing hundreds of billions of tokens, the authors establish empirical scaling laws that characterize how model performance (measured by cross-entropy loss) depends on factors such as dataset size, number of tokens used in dense pretraining (D1), tokens used in upcycled MoE training (D2), model size, and model sparsity (ratio of total parameters to actively-used parameters). More specifically, a novel multiplicative scaling law is proposed, which describes the MoE model’s cross-entropy loss as a function of both the dense pretrained tokens (D1) and additional MoE training tokens (D2). This scaling law incorporates a previously unidentified interaction term between the dense and MoE datasets, highlighting diminishing efficiency of upcycling as the initial dense training budget (the sunk cost) increases. The authors also identify that the performance of upcycled MoE models consistently improves with increased sparsity and number of active parameters. The paper offers explicit guidance, suggesting scenarios under which upcycling is beneficial or not. For instance, they derive a threshold token count (D*) as a function of dense model size, providing a practical rule to decide whether upcycling is computationally advantageous. Claims And Evidence: Yes, overall, the claims made in the submission are supported by experimental evidence. 
Empirical scaling laws for the interaction between dense pretraining tokens (D_1), upcycled MoE tokens (D_2), and model configurations are convincingly supported by experimental results. Methods And Evaluation Criteria: Yes, the methods and evaluation criteria make sense. Theoretical Claims: NA, the paper does not provide explicit theoretical proofs for its claims. Experimental Designs Or Analyses: The experimental designs and analyses are sound and valid. Supplementary Material: Yes, the supplementary material was briefly reviewed. Relation To Broader Scientific Literature: The scaling laws derived from this work can impact Mixture-of-Experts (MoE) models in diverse domains. However, it is likely limited to only upcycled MoE models. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: + The paper addresses a relatively under-explored topic: the scaling laws specific to "upcycling" dense pretrained models into Mixture-of-Experts (MoE) architectures. + The writing is clear, well-organized and easy to follow. + The provided scaling laws and guidelines offer practical insights for training large models efficiently, which is highly relevant to the community given the computational expense of training language models. Weaknesses: - Limited Generalizability to Larger Models: The experiments are limited to models of size up to 7B parameters, it remains unclear if the results extend to significantly larger MoE models (e.g., 70B+ parameters). - Lack of Theoretical Explanation - Scope seems limited to only upcycled MoE models. Other Comments Or Suggestions: Discussing the similarities and differences between the scaling laws in these different settings can make the contribution more impactful and of interest to a wider audience. Figure 1 (left): it is not easy to see the cross-entropy values; it might be clearer to include a color scale for these values. 
While scaling laws for upcycling MoE models are novel, there is previous work on related applications such as transfer learning. Questions For Authors: How do the scaling laws here compare with other scaling laws in related settings such as transfer learning? Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and positive evaluation of our work. We especially appreciate the **recognition of its novelty and relevance, as well as the clarity of our presentation**. We also appreciate the constructive suggestions. Let us clarify some of the questions and comments raised in the following. # Question on Relevance to transfer learning and other scaling laws We have briefly discussed connections to transfer learning (and cited relevant two-stage training methods like model growth and pretraining/SFT) in **Section 7**, but we agree that this deserves a more in-depth discussion, which we describe below. - Prior work (e.g., Mikami et al 2022) on transfer learning has proposed scaling laws using multiplicative or additive forms involving $D_1$ (pretraining data) and $D_2$ (fine-tuning data). - However, as far as we know, there has been no attempt to incorporate an interaction term such as ours, namely $D_1\log D_2$. We believe this interaction is **novel and meaningful**: it is both empirically validated and theoretically motivated by our derivation in **Section 4.1** (see also **Appendix C.3**). - While our focus is on upcycling MoE models, we expect the interaction term to also be applicable in transfer learning settings. For example, transfer learning literature has observed *ossification* effects (Hernandez et al 2021), where models pretrained on large datasets can become resistant to adaptation during fine-tuning. - The diminishing returns we observe from increasing $D_1$ align with this notion. Our interaction term $D_1 \log D_2$ captures precisely such diminishing gains and may offer a useful framework for modeling ossification quantitatively in transfer learning scaling laws (and other two-stage training methods). We will revise the discussion to emphasize this connection. 
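One schematic way to see the role of the interaction term (an illustrative sketch only; the exact fitted form and coefficients are given in the paper, and $a$, $\alpha$, $\beta$, $\gamma$ here are hypothetical): since a multiplicative law is additive in log-loss, the interaction enters as a cross term,

$$\log \mathcal{L}(D_1, D_2) \approx a - \alpha \log D_1 - \beta \log D_2 + \gamma\, D_1 \log D_2,$$

so the effective slope with respect to $\log D_2$ is $-\beta + \gamma D_1$: a larger sunk cost $D_1$ (with $\gamma > 0$) flattens the gain from additional upcycled tokens, which is exactly the diminishing-returns behavior an ossification-style effect would predict.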
# Comment on Limited Generalizability to Larger Models - We acknowledge the concern about generalizability to larger models; however, we note that this limitation is common across empirical studies of scaling laws, including highly influential ones. - Training and evaluating models at the 70B+ scale requires orders of magnitude more computational resources, which is currently infeasible in an academic setting. We estimate this to require 5,000 times more FLOPs (70x larger model and 70x more tokens) than what we have run (see Appendix A.10), amounting to an additional cost of 25 million USD (even assuming 1 USD per GPU hour). - Despite this, we provide empirical evidence supporting the robustness of our findings: in **Figure 6**, we show that scaling behavior observed in sub-0.5B parameter models can **reliably predict** the performance of larger models (up to 1B parameters). - This suggests that the scaling laws identified are stable across model sizes, and we expect the trends to extend to even larger models, consistent with patterns observed in prior work (e.g., Chinchilla paper). # Comment on theoretical explanation - While we indeed have not included formal theoretical proofs, we emphasize that the proposed scaling law is not purely empirical. Its functional form is guided by well-motivated principles, as detailed in **Section 4.1**, and was **noted by two other Reviewers to be a reasonable theoretical contribution to empirical scaling law studies**. - Our work also follows the tradition of impactful scaling law work (e.g., OpenAI, Chinchilla paper) that focused on empirical observations and practical utility, without extensive theoretical justification. - That being said, we agree that theoretical understanding is important, and although it is not the primary focus of this paper, we provide related references in **Section 7**. 
# Comment on Limitation to upcycled models We respectfully disagree with the characterization that our work is limited in scope due to its focus on upcycled MoE models. - In practice, both MoE architectures and upcycling strategies have become central components of modern LLM development. Recent state-of-the-art models, including DeepSeek, Qwen, Skywork-MoE, and Mixtral, adopt MoE architectures while also leveraging dense-to-sparse upcycling. This growing trend highlights that upcycling is not just a research curiosity but a practical and widely adopted technique in the current LLM landscape. Our findings are therefore both timely and relevant, offering insight into how/when upcycling can be efficient. - Although our experiments specifically focus on upcycling into MoE models, the core insights, such as the interaction between dense and upcycled training budgets, are useful for formulating two-stage training regimes more broadly, including transfer learning as mentioned above, and potentially model growth and pretraining-SFT framework as cited in the main text. We hope these address your concern. Please let us know if further clarification would be helpful.
Summary: This work investigates the scaling behavior of upcycling dense LLMs into mixture-of-experts architectures. Through extensive experiments, the authors design and fit scaling laws that describe how language modeling performance depends on dataset size and MoE configuration, including sparsity and number of experts. They find that while upcycling can reduce initial losses and accelerate convergence, its efficiency diminishes as the size of the dense pretrained model and the upcycled dataset size increase. Beyond a certain computational budget, from-scratch MoE training becomes more effective than upcycling. The study provides a quantitative framework for evaluating when and how to adopt upcycling and offers guidance on scaling dataset size and model configuration for efficient pretraining. Claims And Evidence: The claims in this work, specifically the empirically determined scaling laws, are backed up by good experimental evidence. Methods And Evaluation Criteria: The use of a Llama-like architecture makes sense for evaluating scaling laws, as does the use of SlimPajama, a dataset composed of a number of commonly used subdomains. Theoretical Claims: I’m not an expert in scaling laws, but the functional forms of the scaling laws they fit for both dense and MoE models make sense at a high level. Experimental Designs Or Analyses: I think the experimental design (train dense models at various sizes with varying token budgets, then upcycle to MoEs with varying token budgets and various numbers of experts and sparsity) is very appropriate. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The experiments in this work are interesting, but also very narrow. The authors implement only the most basic form of upcycling. If the authors explored other methods of initializing the upcycled MoE, this work could be of interest to a wider community. Essential References Not Discussed: None that I am familiar with. 
Other Strengths And Weaknesses:

Strengths:
- This work provides extensive experimentation and demonstrates the quality of their fitted scaling laws for multiple settings.
- The exploration of multiple functional forms makes the results more sound.

Weaknesses:
- While this work discusses a threshold at which it is better to train an MoE from scratch, there is no clear guidance on how to predict or calculate this threshold in practice for new settings. This area could benefit from more exploration and a deeper analysis.

Other Comments Or Suggestions:
- I think Figure 3 can be made clearer. At the moment, it appears that only 2 of the lines (D2 = 9.23) have the full training curve. For example, presumably, the red line (D1=9.23, D2=4.61) has been trained for 4.61 billion tokens after upcycling, so why does the curve only start around 4.1 billion?
- It would be very interesting to see how the scaling laws of different text domains are similar/different. Since the SlimPajama dataset is composed of a number of domains, this should theoretically be possible with your dataset. In particular, I think it would be very interesting to see how the mixture of pre-training data impacts the downstream performance on each individual domain.

Questions For Authors:
- This finding, “Increasing D1 (sunk cost) reduces the initial losses of the upcycled MoE but results in slower training progress with D2 (upcycled MoE tokens),” sounds like the initial parameterization of the upcycled MoE is already in a local minimum. Are there possible techniques that you may consider to account for this finding? For example, would adjusting the initialization of each expert in the upcycled model allow the experts to diverge quicker and improve the scaling law?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for the thoughtful and positive evaluation of our work. We are especially grateful for the **recognition of our experimental design, the quality of the fitted scaling laws, and the clarity of our empirical methodology**. We also appreciate the constructive suggestions. Let us clarify some of the questions and comments raised in the following.

# Question on initial parameterization

We appreciate the reviewer’s insightful observation on whether modifying initialization can improve performance, and indeed we have studied this and mentioned it in **Appendix A.6**. Let us elaborate on it further:

- To investigate whether this effect could be mitigated, we experimented with several initialization modifications for the MoE experts. These included (i) adding Gaussian noise to the expert weights of the upcycled model, (ii) partially randomizing the MLP weights, and (iii) applying low-rank approximations to encourage expert divergence. However, across all these approaches, we observed no significant improvement in training dynamics or final performance.
- These results are consistent with findings from prior work, including Komatsuzaki et al. (2022), He et al. (2022), and Muennighoff et al. (2023), who similarly explored various re-initialization or weight-perturbation strategies in MoE settings, with limited success.

Overall, our experiments support the conclusion that the vanilla upcycling strategy (without additional modification) is the most effective approach currently known.

# Comment on Figure 3

The reason some training curves begin from an intermediate number of tokens (e.g., around 4.1B tokens) stems from our use of the Warmup-Stable-Decay (WSD) learning rate schedule combined with a checkpoint reuse strategy for efficiency (explained in **Section 3.1**).
- Specifically, for runs with smaller $D_2$ budgets (e.g., $D_2$ = 4.61B), we initialize from an intermediate checkpoint of a longer run ($D_2$ = 9.23B) and continue pretraining from that point to reach the desired total token count, without needing to retrain from scratch for every token budget.
- Therefore, the curve that appears to start around 4.1B tokens means that it has already undergone 4.1B tokens of training along the “main branch”, and the red line shown corresponds to additional continued pretraining to reach a total of 4.61B tokens.

# Comment on different text domains

Thank you for the insightful suggestion. We agree that understanding how scaling laws vary across text domains is an important direction. In fact, we already have some results on this in **Appendix B**, where we replicate our experiments on two additional datasets, Japanese and code, to test the generality of our findings. Let us elaborate:

- We observe that the scaling relations introduced in the main text hold consistently, regardless of dataset.
- The Japanese dataset has higher validation loss (harder task), while the code dataset results in lower loss, making it a relatively easier task in terms of cross-entropy loss.
- Interestingly, the Japanese dataset is harder to saturate with increasing $D_1$, meaning upcycled training remains effective. In contrast, the code dataset saturates more quickly, making upcycling less beneficial.

This is promising evidence that the core scaling behavior we identify is robust across diverse domains. However, the more fine-grained question of how mixtures of pretraining data impact respective downstream performances is an orthogonal direction. While interesting, it is beyond the scope of our current work. For readers interested in downstream impacts, we do include results in **Appendix A.9 and Table 7**, where we evaluate downstream performance on various tasks for models trained with SlimPajama.
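For concreteness, the Warmup-Stable-Decay schedule that makes this checkpoint reuse possible can be sketched as follows. This is a minimal illustration with hypothetical phase fractions and step counts, not the paper's actual hyperparameters:

```python
def wsd_lr(step, total_steps, peak_lr, warmup_frac=0.01, decay_frac=0.1):
    """Warmup-Stable-Decay (WSD): linear warmup to peak_lr, a long
    constant plateau, then a linear decay to zero at the end of training.
    Because the plateau is flat, a shorter run can branch off a longer
    run's plateau checkpoint and only redo its own final decay phase."""
    warmup_end = max(int(total_steps * warmup_frac), 1)
    decay_start = int(total_steps * (1 - decay_frac))
    if step < warmup_end:
        return peak_lr * step / warmup_end          # warmup
    if step < decay_start:
        return peak_lr                              # stable plateau
    return peak_lr * (total_steps - step) / max(total_steps - decay_start, 1)
```

Under such a schedule, a run with a smaller token budget shares its warmup and plateau with the longer "main branch" run, so only the final decay segment needs to be trained separately — which is why a curve can appear to start partway through the token axis.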
# On analyzing the threshold

- In **Section 5.1**, we define the threshold $D^*$ as the token count at which training a model from scratch matches the performance of an upcycled MoE with the same total token budget, using the derived scaling laws with a fixed model configuration. We solve this equation numerically and also provide an analytic approximation (Equation 2) to aid practical use.
- We also note in the text that covering all possible configurations and settings would require **exponentially more compute**, which is infeasible in academic environments. As such, we focus on a widely used MoE configuration (Mixtral) to make the analysis tractable.
- Finally, we offer guidance on how to apply this threshold in practice, highlighting the need to balance model size, compute, and token budgets at the beginning of **Section 5 and Section 7**. Once the configuration is set, one can run the scaling law experiments and use the procedure mentioned above to get the threshold.

We will revise the text to make these points clearer. We hope these address your concerns. Please let us know if further clarification would be helpful.
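As an illustration of the numerical procedure above, the sketch below equates two toy scaling-law curves and bisects for the crossing point. The functional forms and coefficients here are hypothetical stand-ins, not the paper's fitted laws:

```python
def l_scratch(d):
    # Hypothetical power-law fit: loss of a from-scratch MoE after d tokens.
    return 2.0 * d ** -0.15 + 1.5

def l_upcycled(d1, d2):
    # Hypothetical fit: an upcycled MoE starts lower (head start from the
    # d1 sunk dense tokens) but improves more slowly in d2.
    return 2.0 * (d2 + 0.5 * d1) ** -0.12 + 1.5

def threshold(d1, lo=1.0, hi=1e6, iters=80):
    """Bisect for D* where from-scratch training at budget D matches the
    upcycled model after D additional tokens (toy coefficients only)."""
    f = lambda d: l_scratch(d) - l_upcycled(d1, d)
    assert f(lo) > 0 > f(hi)  # upcycling wins early, from-scratch wins late
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2
```

For this toy setting, `threshold(10.0)` returns the token count at which the two loss curves cross; past that point, training from scratch is the better use of compute.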
Summary: This paper investigates the scaling laws for upcycling pretrained dense language models (LLMs) into sparse Mixture-of-Experts (MoE) architectures. By conducting extensive experiments with models up to 1B dense and 7B MoE, the authors identified scaling laws which describe how the cross-entropy loss depends on dataset size and model configuration. This paper also indicates that upcycling is more efficient than training from scratch under certain conditions.

Claims And Evidence:
1. In section 4.1, the derivation of the scaling law for dataset sizes is convincing. The authors first set some a priori requirements for the functional form of the scaling law (Requirements 1 and 2, which are reasonable), and then empirically fit the functional form with experiment results.
2. In section 4.2, the hypothesized functional form (Equation 11) requires stronger justification. It is not clear why the relationship between the loss and model configuration should follow the form of Equation 11. The paper does not provide theoretical motivation or ablation studies comparing alternative formulations.
3. In section 5.2, the authors claimed that larger pretrained models require disproportionately more tokens for efficient upcycling, and that upcycling is inefficient relative to from-scratch training when considering compute optimality. These claims are supported by the derived scaling law. However, concrete experiments would significantly strengthen these claims.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem of understanding upcycling efficiency and scaling laws for MoE models. For example, using the WSD LR schedule can reduce computation overhead, and its reliability is verified in the appendix.

Theoretical Claims: This work adopts an empirical approach, which aligns with the practical approaches of empirical scaling law research. Some theoretical analyses seem reasonable.
Experimental Designs Or Analyses: The experimental designs and analyses are largely sound for the studied scope (models up to 1B dense / 7B MoE).

Supplementary Material: Yes, the authors provided scripts for reproduction.

Relation To Broader Scientific Literature: This paper appropriately cites related work on MoE architectures, scaling laws, and upcycling.

Essential References Not Discussed: No.

Other Strengths And Weaknesses:

Strengths:
1. First systematic study of upcycling scaling laws for MoE models.
2. This work provides practical insights for MoE model training.
3. Code is available.

Weaknesses:
1. Some claims may need empirical support. See item 3 in Claims And Evidence.

Other Comments Or Suggestions:
1. Equation 12: correct the left-hand side to L(D1, D2, N1).
2. Figure 1: replace the 3D plot with 2D slices (fixed sparsity/parameters) for better readability.

Questions For Authors: See weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for the thoughtful and positive evaluation of our work. We especially appreciate the **recognition of our empirical approach, the soundness of our experimental design, and the validation of our scaling law derivation in Section 4.1**. We also thank the reviewer for acknowledging the practical value of techniques such as the WSD learning rate schedule. The comments regarding the justification of Equation 11 and the desire for more concrete experiments around compute optimality are very helpful, and we address them in detail below.

# Comment on Equation 11

We acknowledge that a derivation and ablation study of Equation 11 were lacking, an oversight on our part. We appreciate the opportunity to address it.

- During the rebuttal period, we conducted a principled analysis to derive and validate the appropriate functional form, similar to what is done in Section 4.1.
- Recall that we wish to understand the cross-entropy loss as a function of sparsity, defined as $P = N_{\rm total}/N_2$, and the number of active parameters, $N_2$.
- Starting from the power-law ansatz, we require that the loss satisfies $L(P, N_2) = L_{P}(N_2) = A N_2^{-\beta_1} + E$ for fixed $P$, and $L(P, N_2) = L_{N_2}(P) = A P^{-\beta_2} + E$ for fixed $N_2$. This is reasonable as $P$, when fixing $N_2$, is proportional to the total number of model parameters, which we expect to satisfy the power-law ansatz. Analogously, $N_2$ corresponds to the number of dense model parameters, which should satisfy the power-law ansatz as well.
- We then consider, as before, both additive and multiplicative functional forms, with and without interaction, satisfying the above requirements, and evaluate them using leave-one-out RMS error.
We obtain:

| multiplicative (with interaction) | multiplicative (without interaction) | additive (with interaction) | additive (without interaction) |
|------------------|----------------|------------------|----------------|
| 0.0414 | 0.0351 | 0.0341 | 0.0322 |

i.e., the **additive form without interaction** provides the best fit to the data. We will revise the paper to include the corrected form, along with a derivation and ablation comparison of alternative models.

# Comment on more experiments

- We agree that additional experiments on compute optimality could further strengthen the claims in Section 5.2. However, this would require sweeping over many configurations with comparable FLOP budgets, which is not feasible under our current computational constraints. Such experiments would require multiple times the total GPU budget we already spent (estimated in Appendix A.10), amounting to over 10,000 USD even under conservative assumptions (e.g., 1 USD per GPU hour).
- Moreover, while compute optimality is important, it is not the primary focus of this work. Our main goal is to understand how upcycling performance depends on data and model sizes. That said, our formulation naturally supports compute analysis, and we use it in Section 5.2 to identify regimes where upcycling becomes less compute-efficient than training from scratch. Note that our findings are robust and extrapolatable: in **Figure 6**, we show that scaling behavior observed in sub-0.5B parameter models can **reliably predict** the performance of larger models (up to 1B parameters).
- Our approach is also in line with prior work, e.g., Krajewski et al. (2024) derived a data–model size scaling law and used it to predict FLOP-optimal behavior, rather than performing exhaustive FLOP-based sweeps, emphasizing interpretable scaling relationships under realistic compute budgets.

We will further correct typos and figure presentation as suggested in the revised version. We hope these address your concerns.
Please let us know if further clarification would be helpful.
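The leave-one-out evaluation described in the Equation 11 discussion can be sketched as follows. The candidate forms and the brute-force grid fit are illustrative stand-ins for the actual fitting procedure; the data here are synthesized from the additive form, so that family should win:

```python
import math
from itertools import product

def loo_rmse(points, model, grid):
    """Leave-one-out RMS error: hold out each (x, y) point, fit the model
    family on the rest (brute-force least squares over `grid`), and score
    the prediction on the held-out point."""
    sq_errs = []
    for i, (x, y) in enumerate(points):
        train = points[:i] + points[i + 1:]
        best = min(grid, key=lambda th: sum((model(th, xt) - yt) ** 2
                                            for xt, yt in train))
        sq_errs.append((model(best, x) - y) ** 2)
    return math.sqrt(sum(sq_errs) / len(sq_errs))

# Candidate forms for L(P, N2); theta = (A, beta2, beta1, E), x = (P, N2).
additive = lambda th, x: th[0] * x[0] ** -th[1] + th[0] * x[1] ** -th[2] + th[3]
multiplicative = lambda th, x: th[0] * x[0] ** -th[1] * x[1] ** -th[2] + th[3]

true_theta = (1.0, 0.5, 0.3, 0.2)
xs = [(p, n2) for p in (2.0, 4.0, 8.0) for n2 in (10.0, 100.0)]
data = [(x, additive(true_theta, x)) for x in xs]
grid = list(product((0.5, 1.0), (0.3, 0.5), (0.1, 0.3), (0.0, 0.2)))
```

On data generated from the additive form, `loo_rmse(data, additive, grid)` recovers the generating parameters and scores near zero, while the multiplicative family cannot fit the data exactly — mirroring the style of the comparison in the table above, though not its actual numbers.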
Equivalence is All: A Unified View for Self-supervised Graph Learning
Accept (oral)
Summary: This paper proposes a novel self-supervised graph learning framework grounded in the principle of node equivalence, which unifies structural (automorphic) and attribute-based equivalence classes to learn robust node representations. The work is well-motivated, offering a principled unification of structural and attribute-based node relationships, backed by theoretical insights, scalable approximations, and comprehensive empirical validation.

Claims And Evidence: The claims are well-motivated and largely supported by theoretical and empirical evidence.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate and well-aligned with the paper’s goals. The core innovations, unified equivalence classes and scalable approximations, are tested on standard benchmarks.

Theoretical Claims: I reviewed the theoretical claims, including proofs in the main paper.

Experimental Designs Or Analyses: The experimental designs and results are largely sound.

Supplementary Material: I reviewed all the supplementary material.

Relation To Broader Scientific Literature: The paper sits at the intersection of graph theory and SSL, advancing these areas by providing a unified equivalence framework that subsumes existing SSL methods; introducing scalable approximations to bridge classical graph isomorphism concepts with modern deep learning; and offering actionable insights to improve GNNs and transformers via symmetry preservation. This work aligns with the broader trend toward principled graph SSL (e.g., invariance theories in GCL) but stands out by rigorously formalizing equivalence as a first-class citizen in representation learning.

Essential References Not Discussed: The paper makes valuable contributions but omits some references; for example, [1] formalizes equivariance (e.g., group-equivariant networks) in convolutional neural networks. Citing these would strengthen the paper’s positioning. Refs: [1] Kondor, R., & Trivedi, S.
On the generalization of equivariance and convolution in neural networks to the action of compact groups. ICML 2018.

Other Strengths And Weaknesses:

Strengths:
S1: The paper is well-written and easy to understand. I appreciate the ambition behind unifying structural and attribute equivalence. It bridges graph-theoretic principles with modern representation learning, offering a fresh lens to rethink graph SSL.
S2: The method’s design is well-founded—approximating automorphic equivalence via PageRank and relaxing attribute constraints strike a pragmatic balance between rigor and scalability.
S3: The experiments are well-conducted and validate the method’s performance.

Weaknesses:
W1: The impact of hyperparameter choices (e.g., the PageRank teleportation parameter) on model performance is not sufficiently discussed. A sensitivity analysis of these parameters would strengthen the empirical section.
W2: Incomplete figure and table explanations. For example, the cycle notation in Figure 1 is not explained in the caption, making it difficult for non-expert readers to understand. In the tables, the highlighted colors (e.g., shades of green) lack a legend, leaving their meaning unclear.

Other Comments Or Suggestions: The main text inconsistently uses "automorphism equivalence" and "automorphic equivalence" (e.g., in the third paragraph of the introduction). It is recommended to standardize the terminology throughout the paper. Additionally, consider citing classic references such as [1].

Questions For Authors:
1. The magnitude difference between the Intra-class Loss and Inter-class Loss may cause optimization bias (e.g., the Intra-class Loss dominates). Would it be better to introduce a temperature coefficient or weight adjustment to balance them?

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1:

Rebuttal:

**Q1. The paper makes valuable contributions but omits some reference, such as [1] formalizes equivalence (e.g., group-equivariant networks) in convolution neural networks. Citing these would strengthen the paper’s positioning. Refs: [1] Kondor, R., & Trivedi, S. On the generalization of equivariance and convolution in neural networks to the action of compact groups. ICML 2018.**

>R1: Thank you for the valuable suggestion. We will include this reference in the revised manuscript to strengthen our positioning.

**Q2. The impact of hyperparameter choices (e.g., PageRank teleportation parameter) on model performance is not sufficiently discussed. A sensitivity analysis of these parameters would strengthen the empirical section.**

>R2: Thank you for your insightful comment. As requested, we have performed a sensitivity analysis on the Cora dataset with respect to the PageRank teleportation parameter ($\alpha$). We varied $\alpha$ from 0.1 to 0.9 and recorded the performance of our model. The results are shown in the table below:

*Table I: Sensitivity analysis of the PageRank teleportation parameter ($\alpha$) on the Cora dataset*

| $\alpha$ | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 |
| :------: | :----------: | :----------: | :----------: | :----------: | :----------: | :----------: | :----------: | :----------: | :----------: |
| Cora | 85.33 ± 0.19 | 85.35 ± 0.15 | 85.27 ± 0.20 | 85.38 ± 0.12 | 85.23 ± 0.19 | 85.36 ± 0.17 | 85.30 ± 0.09 | 85.35 ± 0.15 | 85.29 ± 0.24 |

>From the table, it is evident that our algorithm’s performance is not sensitive to the choice of $\alpha$; the accuracy remains almost consistent across all values. We will include these details in the updated empirical section of the manuscript.

**Q3. Incomplete figure and table explanations. For example, the cycle notation in Figure 1 is not explained in the caption, making it difficult for non-expert readers to understand.
In the tables, the highlighted colors (e.g., shades of green) lack a legend, leaving their meaning unclear.**

>R3: Thank you for your suggestion. We will revise the caption for Figure 1 to provide a clearer explanation. The updated caption is as follows:

>_Figure 1. An example of an automorphic equivalence. The hollow double arrows indicate the permutation of nodes in graph $\mathcal{G}$. The permutation is expressed in cycle notation (e.g., (1 2 3) indicates that node 1 maps to node 2, node 2 to node 3, and node 3 back to node 1)._

>In addition, the highlighted colors use lighter shades to represent smaller numbers. A detailed explanation will be provided in the revised manuscript.

**Q4. The main text inconsistently uses "automorphism equivalence" and "automorphic equivalence" (e.g., in the third paragraph of the introduction). It is recommended to standardize the terminology throughout the paper. Additionally, citing classic references such as [1].**

>R4: Thank you for your valuable suggestion. We will standardize the terminology throughout the paper by consistently using "automorphic equivalence." Additionally, we will include reference [1] in the revised manuscript.

**Q5. The magnitude difference between Intra-class Loss and Inter-class Loss may cause optimization bias (e.g., Intra-class Loss dominates). Would it be better to introduce a temperature coefficient or weight adjustment to balance them?**

>R5: Thank you for the insightful comment. Rather than introducing additional hyperparameters like a temperature coefficient or weight adjustments, we address the imbalance by applying a Softplus activation after the discriminator. This approach is also described in the Implementation Details section of the manuscript. Once again, thank you for your thoughtful feedback.

---

Rebuttal Comment 1.1:

Comment: Thank you for the author's response, resolving my concerns. This is an interesting and excellent paper.
---

Reply to Comment 1.1.1:

Comment: Thank you for your positive feedback and valuable review. Your input has strengthened our work.
Summary: This paper introduces a self-supervised graph learning framework that unifies automorphic equivalence (structural symmetry) and attribute equivalence (node feature similarity) into a cohesive representation learning paradigm. The work bridges structural and feature-based node similarities, offering a principled approach to self-supervised graph learning while addressing scalability and practicality challenges. Thanks to the authors for the reply; this resolved most of my concerns, and I will keep my score.

Claims And Evidence: Yes, I believe that the majority of the claims are well-supported by evidence.

Methods And Evaluation Criteria: Yes, the proposed methods are well-suited to the problem.

Theoretical Claims: Yes, I have checked the correctness of the proofs for the theoretical claims.

Experimental Designs Or Analyses: I have reviewed most of the experimental design and analysis, which appear comprehensive and reasonable. For the equivalence class matching evaluation, I recommend adding other classic metrics, such as the Rand Index.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: The paper bridges graph theory, contrastive learning, and self-supervised paradigm design, offering a fresh perspective on equivalence-driven representation learning.

Essential References Not Discussed: N/A.

Other Strengths And Weaknesses:

Strengths
1. Originality:
- The unification of automorphic equivalence and attribute equivalence into a single framework is novel and addresses a gap in graph representation learning.
- The critique of graph contrastive learning as a "degenerate" equivalence constraint offers a fresh perspective.
2. Significance:
- The work bridges graph theory and modern deep learning, providing a principled approach to self-supervised learning that aligns with real-world graph properties (e.g., social roles, molecular symmetries).
- Experiments demonstrate consistent performance gains.
3. Clarity of Writing:
- The paper is generally well-structured, with clear definitions of equivalence relations and a logical flow from motivation to experiments.
- Figures and tables enhance readability.

Weaknesses
1. Writing and Presentation:
- Inconsistent Terminology: Terms like "automorphic equivalence" and "automorphism equivalence" are used interchangeably, risking confusion.
2. Ablation:
- The choice of intersection for fusing equivalences is not justified against alternatives (e.g., union). Ablation studies comparing fusion methods are missing.
3. Minor Issues:
- The overview figure (Figure 2) lacks a clear explanation of how the encoder interacts with equivalence constraints.

Other Comments Or Suggestions: Terms like "automorphic equivalence" and "automorphism equivalence" are used interchangeably, risking confusion.

Questions For Authors: See Other Strengths And Weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal:

**Q1. I have reviewed most of the experimental design and analysis, which appear comprehensive and reasonable. For the equivalence class matching evaluation, I recommend adding other classic metrics, such as the Rand Index.**

>R1: Thank you for the valuable feedback. We have added a table below that includes the Rand Index results for our algorithm. The Rand Index is a well-established metric for evaluating matching performance, where values closer to 1 indicate a perfect match. Our results show that across nearly all datasets, the Rand Index is almost equal to 1, which further validates the performance and effectiveness of our approximate method.

*Table I: Rand Index (RI) for alignment between equivalences $\simeq_{\text{auto}}$ and $\simeq_{\text{PR}}$ on eight benchmark datasets (higher is better, 1 is perfect).*

| **Data** | **Cora** | **CiteSeer** | **PubMed** | **WikiCS** | **Amz-Comp.** | **Amz-Photo** | **Co-CS** | **Co-Phy.** |
| :--------: | :------: | :----------: | :--------: | :--------: | :-----------: | :-----------: | :-------: | :---------: |
| **RI (↑)** | 0.99948 | 0.99687 | 0.99999 | 0.99999 | 0.99999 | 0.99998 | 0.99999 | 0.99999 |

**Q2. Inconsistent Terminology: Terms like "automorphic equivalence" and "automorphism equivalence" are used interchangeably, risking confusion.**

>R2: Thank you for highlighting the inconsistency in terminology. In the revised manuscript, we will standardize the terminology by uniformly using "automorphic equivalence" throughout the paper to ensure clarity and consistency.

**Q3. The choice of intersection for fusing equivalences is not justified against alternatives (e.g., union). Ablation studies comparing fusion methods are missing.**

>R3: Thank you for the insightful suggestion. We have added a table below that presents performance results on our datasets using the union method as an alternative for fusing equivalences.
Our experiments indicate that the union approach performs markedly worse compared to the intersection method. This is largely because, as detailed in the ablation study in the appendix, emphasizing one equivalence type (either automorphic equivalence or node attributes) by using the union tends to neglect the contributions from the other. In contrast, the intersection method ensures that both types of equivalences jointly contribute, leading to a more robust and comprehensive representation of node similarity. These findings justify our choice of the intersection method for fusing equivalences.

*Table II. Ablation study results on 8 datasets. Origin represents the full model, while union represents the ablated model.*

| Model | Cora | Citeseer | Pubmed | Wikics | Computers | Photo | CS | Physics |
| :----: | :----------: | :----------: | :----------: | :----------: | :----------: | :----------: | :----------: | :-----------: |
| origin | 85.36 ± 0.15 | 74.58 ± 0.18 | 85.06 ± 0.27 | 82.15 ± 0.22 | 90.60 ± 0.19 | 94.51 ± 0.34 | 93.46 ± 0.28 | 95.91 ± 0.38 |
| union | 77.21 ± 0.22 | 64.37 ± 0.12 | 83.22 ± 0.19 | 80.96 ± 0.34 | 87.86 ± 0.25 | 93.59 ± 0.28 | 91.55 ± 0.15 | 94.93 ± 0.36 |

**Q4. The overview figure (Figure 2) lacks a clear explanation of how the encoder interacts with equivalence constraints.**

>R4: Thank you for your valuable comment. In our approach, the encoder is designed to output a representation for each node in the graph. We then apply equivalence constraints to these representations to ensure that nodes which are equivalent (according to our defined criteria) are embedded in a consistent manner. This process helps the model to better capture both structural and attribute similarities. We will update the caption of Figure 2 in the revised manuscript to include a clear explanation of how the encoder interacts with the equivalence constraints.
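For reference, the Rand Index reported in Table I of R1 above admits a simple pair-counting definition. A minimal sketch (not the authors' implementation; libraries such as scikit-learn's `rand_score` compute this more efficiently):

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Rand Index: the fraction of item pairs on which two partitions
    agree -- either both place the pair in the same class, or both place
    it in different classes. 1.0 means the partitions match perfectly."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum((labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
                for i, j in pairs)
    return agree / len(pairs)
```

Note that the metric depends only on which items share a class, not on the class labels themselves, so a relabeled but identical partition still scores 1.0.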
Summary: This paper presents a novel framework for self-supervised graph representation learning that emphasizes the importance of node equivalence. The authors propose GALE, which unifies automorphic equivalence (based on graph structure) and attribute equivalence (based on node attributes) into a single equivalence class. The framework enforces the equivalence principle, ensuring that node representations within the same equivalence class are similar while those in different classes are dissimilar. Key contributions include the introduction of approximate equivalence classes with linear time complexity to address computational challenges, an analysis of existing graph encoders' limitations regarding equivalence constraints, and the demonstration that graph contrastive learning paradigms are a degenerate form of equivalence constraints. The paper also shows that GALE outperforms state-of-the-art baselines through extensive experiments on benchmark datasets. The authors provide a comprehensive analysis of the connections between equivalence classes and existing techniques, highlighting the potential of their approach to improve the performance and effectiveness of graph representation learning models.

Claims And Evidence: The claims made in this submission are generally well-supported by clear and convincing evidence.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria in this paper are appropriate for the problem of self-supervised graph representation learning.

Theoretical Claims: The theoretical claims made in the paper are logically sound.

Experimental Designs Or Analyses: The experimental designs and analyses in this paper are generally sound and well-executed.

Supplementary Material: I have read all of it.

Relation To Broader Scientific Literature: The paper is related to prior work on graph representation learning, social network analysis, and self-supervised learning.
Essential References Not Discussed: The paper mentions connections to other fields such as social network analysis. However, it does not cite recent work that could provide additional context. For example, the paper by Santo Fortunato titled "Community detection in graphs" provides a comprehensive review of community detection methods in graphs.

Other Strengths And Weaknesses:

Pros:
1. The paper presents a unified framework for incorporating node equivalence into self-supervised graph representation learning. This combination of automorphic and attribute equivalence offers an interesting perspective on improving graph representation learning.
2. The proposed framework addresses a gap in existing graph representation learning methods by explicitly considering node equivalence. This is a contribution to the field.
3. The paper is well-written and well-structured, making it easy to follow the authors' reasoning and methodology.
4. The experiments are capable of validating the paper's conclusions.

Cons:
1. The paper only discusses the over-smoothing issue in MPNNs but does not mention existing mitigation techniques (e.g., residual connections). It would be helpful to discuss the relationship between equivalence constraints and these techniques.
2. Although the introduction of approximate equivalence classes aims to address the computational complexity, a more in-depth discussion of the trade-offs between accuracy and efficiency would be valuable.

Other Comments Or Suggestions: In the introduction, when referring to automorphic equivalence, the paper states that “permuting all nodes within the same equivalence class preserves the graph’s edge relations.” However, strict graph automorphism involves a global permutation (an isomorphic mapping of the entire graph), rather than only permuting nodes within an equivalence class. This should be corrected to: "There exists a global permutation that maps nodes within the same equivalence class to each other."
Questions For Authors:
1. The paper only discusses the over-smoothing issue in MPNNs but does not mention existing mitigation techniques (e.g., residual connections). It would be helpful to discuss the relationship between equivalence constraints and these techniques.
2. Although the introduction of approximate equivalence classes aims to address the computational complexity, a more in-depth discussion of the trade-offs between accuracy and efficiency would be valuable.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1. The paper mentions connections to other fields such as social network analysis. However, it does not cite recent work that could provide additional context. For example, the paper by Santo Fortunato titled "Community detection in graphs" provides a comprehensive review of community detection methods in graphs.** >R1: Thank you for the helpful suggestion. We will include this reference "Community detection in graphs" in the revised version. **Q2. The paper only discusses the over-smoothing issue in MPNNs but does not mention existing mitigation techniques (e.g., residual connections). It would be helpful to discuss the relationship between equivalence constraints and these techniques.** >R2: Thank you for the suggestion. While residual connections help with gradient propagation, they do not explicitly distinguish between equivalent and non-equivalent nodes. Similarly, some methods dynamically alter neighbors in deeper GCNs to alleviate over-smoothing, but they also lack an explicit mechanism to differentiate between nodes and often require careful tuning. In contrast, our GALE approach explicitly enforces this distinction, making representations semantically meaningful. We will add this discussion to clarify the relationship between equivalence constraints and existing techniques in the revised version. **Q3. Although the introduction of approximate equivalence classes aims to address the computational complexity, a more in-depth discussion of the trade-offs between accuracy and efficiency would be valuable.** >R3: Thank you for the feedback. Our experimental results (see Tables 5 and 6) show that the approximate method may incur a slight precision drop compared to the exact method. However, it consistently outperforms current state-of-the-art methods, and in some datasets, it even surpasses the exact algorithm. 
This demonstrates that the efficiency gains come with only minimal trade-offs in accuracy, underscoring the effectiveness of our approach. **Q4. In the introduction, when referring to automorphic equivalence, the paper states that “permuting all nodes within the same equivalence class preserves the graph’s edge relations” However, strict graph automorphism involves a global permutation (an isomorphic mapping of the entire graph), rather than only permuting nodes within an equivalence class. This should be corrected to: "There exists a global permutation that maps nodes within the same equivalence class to each other."** >R4: Thank you for the insightful comment. We appreciate your insight and will incorporate this change in the revised version of the paper. Thanks again.
Summary: The paper introduces GALE, a self-supervised graph learning framework that unifies automorphic and attribute equivalence into a single node equivalence concept, enforcing intra-class similarity and inter-class dissimilarity through a novel loss function. The paper claims that by explicitly modeling and enforcing node equivalence, the proposed framework provides a more principled approach to self-supervised graph learning that improves upon traditional contrastive methods. Claims And Evidence: The paper’s claims are generally well supported by a combination of theoretical exposition, algorithmic design, and extensive empirical results. Methods And Evaluation Criteria: Yes. Theoretical Claims: I have checked the theoretical claim (Theorem 4.1) and the proof process appears to be correct. Experimental Designs Or Analyses: I have reviewed the Node and Graph Classification Experiments, Ablation Studies, Evaluation of Equivalence Alignment and the Over-smoothing Analysis. The experimental designs and analyses are sound. Supplementary Material: I have reviewed the supplementary material provided with the submission. Relation To Broader Scientific Literature: The paper interweaves classical graph theory with self-supervised learning techniques and deep neural architectures. - The paper extends traditional self-supervised graph learning methods, particularly graph contrastive learning (e.g., GRACE) by challenging the standard paradigm that treats each node as an isolated equivalence class. - The idea of automorphic equivalence has long been studied in graph theory, with foundational works focusing on graph isomorphism and symmetry detection (e.g., using algorithms like Nauty and Bliss). Essential References Not Discussed: To my best knowledge, all the core references are cited correctly. Other Strengths And Weaknesses: **Strengths.** 1. 
The paper offers a new approach by unifying automorphic and attribute equivalence into a single framework, which is a very interesting perspective in self-supervised graph learning. 2. The paper provides new insights from an equivalence-class perspective, such as over-smoothing in MPNNs and limitations in current positional encoding schemes in Graph Transformers. 3. The paper is well-written. 4. Extensive experiments demonstrate the advantages of the proposed approach. **Weaknesses.** 1. Using PageRank to approximate automorphic equivalence is an efficient approach, but nodes with similar PageRank scores are not necessarily structurally symmetric. Could this have an impact on the model's performance? 2. The experimental tables (e.g., color-coded cells) may not fully convey information through text descriptions alone. It is recommended to add textual explanations in the paper, such as clarifying the meaning of "VI" values and specifying threshold ranges. 3. The paper would benefit from a discussion on the potential limitations of the model. Other Comments Or Suggestions: Some typos: - Line 75: Change "noting" to "note". - Line 252: Change "indicates" to "indicating". - Line 308 (right column): Change "a" to "an". Questions For Authors: 1. The definition of approximate attribute equivalence does not appear to satisfy transitivity (i.e., $u \simeq v$ and $v \simeq w$ do not necessarily imply $u \simeq w$). Does this mean it does not form a strict equivalence relation? If so, it is recommended to use "similarity groups" instead. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1. Using PageRank to approximate automorphic equivalence is an efficient approach, but nodes with similar PageRank scores are not necessarily structurally symmetric. Could this have impact on the model's performance?** >R1: We acknowledge that similar PageRank scores do not strictly guarantee structural symmetry. However, our experiments (see Table 2 in the paper) demonstrate that the approximate equivalence obtained via PageRank aligns almost perfectly with exact automorphic equivalence—indicated by Variation of Information (VI) values that are nearly zero (with 0 representing perfect alignment). This empirical evidence shows that instances where similar PageRank scores do not correspond to true structural symmetry are extremely rare in practice. Consequently, the impact on the overall model's performance is negligible, validating the effectiveness of our approximation approach. **Q2. The experimental tables (e.g., color-coded cells) may not fully convey information through text descriptions alone. It is recommended to add textual explanations in the paper, such as clarifying the meaning of "VI" values and specifying threshold ranges.** >R2: We appreciate the reviewer's suggestion. We will update the caption of Table 2 to the following: > _Table 2. Variation of Information (VI) between automorphic equivalence ($\simeq_{\text{auto}}$) and PageRank-based approximate equivalence ($\simeq_{\text{PR}}$) over eight benchmark datasets. Lower VI values (with 0 indicating perfect alignment) are shown in lighter colors._ **Q3. The paper would benefit from a discussion on the potential limitations of the model.** >R3: We thank the reviewer for recommending a discussion on potential limitations. While our core contribution focuses on static homogeneous graphs, we acknowledge two related limitations. 
First, GALE currently assumes a static graph setting, meaning that for dynamic graphs with evolving structures or features, the equivalence classes may change over time, necessitating incremental updates to partitions—a challenge common to equivalence-based methods. Second, our present work targets homogeneous graphs; extending our equivalence definitions to heterogeneous graphs (e.g., those with multi-typed nodes/edges) would require additional design considerations, such as type-specific automorphism constraints. We consider these aspects as important directions for our future work. **Q4. Some typos: - Line 75: Change "noting" to "note". - Line 252: Change "indicates" to "indicating". - Line 308 (right column): Change "a" to "an".** >R4: We thank the reviewer for pointing out these typographical errors. We will correct these typos in the revised version of the paper. **Q5. The definition of approximate attribute equivalence does not appear to satisfy transitivity (i.e., u≃v and v≃w do not necessarily imply u≃w). Does this mean it does not form a strict equivalence relation? If so, it is recommended to use "similarity groups" instead.** >R5: We thank the reviewer for this valuable observation. We acknowledge that the current definition of approximate attribute equivalence does not satisfy transitivity—meaning it does not form a strict equivalence relation. In light of this, we agree that referring to these groups as "similarity groups" may be more appropriate and accurately reflects their nature. We will make the necessary modifications in subsequent revisions of the paper. --- Rebuttal Comment 1.1: Comment: Thanks for the response. My main concerns have been addressed. I keep my score of 4 Accept.
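The PageRank-based approximate equivalence discussed in this exchange can be illustrated with a toy sketch: compute PageRank by power iteration, then bucket nodes whose scores agree up to a tolerance. This is a hypothetical illustration (the graph, tolerance, and bucketing scheme are assumptions), not GALE's actual implementation:

```python
def pagerank(adj, d=0.85, iters=50):
    # adj: dict node -> list of out-neighbours; plain power iteration
    n = len(adj)
    pr = {v: 1.0 / n for v in adj}
    for _ in range(iters):
        nxt = {v: (1.0 - d) / n for v in adj}
        for v, nbrs in adj.items():
            if nbrs:
                share = d * pr[v] / len(nbrs)
                for u in nbrs:
                    nxt[u] += share
            else:
                for u in adj:  # dangling node: spread mass uniformly
                    nxt[u] += d * pr[v] / n
        pr = nxt
    return pr

def approx_equivalence_classes(adj, tol=1e-6):
    # Group nodes whose PageRank scores agree up to tol (toy bucketing);
    # structurally symmetric nodes receive identical scores.
    groups = {}
    for v, score in pagerank(adj).items():
        groups.setdefault(round(score / tol), []).append(v)
    return list(groups.values())
```

On a path graph a-b-c the symmetric endpoints a and c land in one class and the center b in another, matching the intuition, though (as the reviewer notes) equal scores do not strictly guarantee structural symmetry on general graphs.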
BounDr.E: Predicting Drug-likeness via Biomedical Knowledge Alignment and EM-like One-Class Boundary Optimization
Accept (poster)
Summary: This paper presents a novel approach for estimating the drug-likeness of compounds. Drug-likeness refers to a compound's potential to become a successful drug, factoring in its synthesizability, bioavailability, and safety. Many existing drug-likeness estimators rely on basic physicochemical properties or structural patterns. Some methods utilize machine learning models trained on datasets of drug-like and non-drug-like compounds. However, representing the negative class adequately poses challenges. To address this, the authors suggest employing one-class classification, labeling drug-like compounds as positive examples. The method involves two molecular representations that are integrated into a single representation space. The structural encoder uses Morgan fingerprints, while the knowledge-based representation employs DREAMwalk, developed from knowledge graphs of drugs, targets, and diseases. Following this, a one-class decision boundary is established using an EM-like algorithm that iteratively adjusts the boundary and pushes negative examples outside it. Experimental results indicate that this approach effectively identifies drug-like molecules within large compound libraries. Furthermore, the model can be applied in a zero-shot context to detect toxic compounds. ## update after rebuttal The Authors addressed all my comments and provided additional results. I decided to increase my score to 4. Claims And Evidence: The claims are convincing. Most experiments were conducted multiple times, e.g. using 10-fold cross-validation, and the standard deviations are reported. Most of the results are also supported by statistical tests, where all comparable results are marked together instead of just the highest number. Methods And Evaluation Criteria: The proposed method presents an intriguing and novel approach for predicting drug-likeness. 
First, representations from two modalities are aligned using the innovative softened CLIP loss and geodesic mixup introduced by Oh et al. Next, a one-class classifier is trained by adjusting a hypersphere that separates drug representations from other compounds through an EM-like procedure. The evaluation of the method is rigorous, and the datasets and metrics are selected carefully to address the problem. There are a few potential flaws in the experiment design that may reduce the impact of the presented results: 1. In the time-split setup, the set of test drug-like compounds remains the same, while only the ZINC subset differs. It would be interesting to see how similar the testing drug-like compounds are to those known before the time cutoff. 2. Given the vast chemical space represented by ZINC, most sampled compounds are likely unlike known drugs. A more challenging evaluation could involve sampling ZINC specifically around known drugs, e.g. further narrowing this space to compounds with comparable weight, numbers of rotatable bonds, and hydrogen bond donors and acceptors, among other factors, to those of known drugs. 3. According to Figure 5, the class imbalance leads to a degradation in the performance of classical models such as XGB and SVM. However, it is unclear whether any class balancing strategies were utilized. The low F1-score of these models may result from their tendency to primarily predict one class when trained on imbalanced data. Theoretical Claims: I reviewed the proof for the theorem proposed in the paper. The intuition appears to be sound, but it functions more as a sketch of the proof. Some scenarios are not addressed. For instance, in Proposition 1, it is claimed that if $\mathcal{L}_\text{drug}$ decreases, the radius decreases as well. However, it is possible that all compounds except the one at maximum distance that defines the radius are moving toward the center, resulting in a decrease. 
Why are we confident that the farthest point is also moving toward the center? Depending on the selection of function $f$ and the joint optimization with $\mathcal{L}\_\text{out}$, this point may remain in place. Similarly, the proof of Proposition 2 does not describe some important cases. The proof is based on the vague term "greater contribution" of points for which $d(x)<\nu$. However, these points may also not move while multiple points for which $d(x) \geq \nu$ move towards the radius. In that case, the assumption $\mathcal{L}^{(t_2)}\_\text{out} < \mathcal{L}^{(t_1)}\_\text{out}$ is not contradicted. Please clarify if my understanding is correct. Perhaps these cases could be discussed in the text. Experimental Designs Or Analyses: The experimental design is adequate to the posed research questions. Supplementary Material: I went through all the supplementary material, but I might have missed some details during the first reading. Relation To Broader Scientific Literature: Drug-likeness prediction remains an unresolved issue tied to a broader question: what defines a good drug? I believe the suggested one-class classification approach is sensible, considering the challenge of identifying negative examples. The results are promising and show that the proposed model can be useful in multiple setups. I am only concerned that machine learning methods may be biased towards the structures of a few known drugs and may fail to identify novel compounds that could be useful as drugs in the future (more details in Questions for Authors and Methods and Evaluation Criteria). Essential References Not Discussed: Key references have been discussed. Other Strengths And Weaknesses: Most strengths and weaknesses have been described in the other sections. Another strength of this work is the code shared along with the paper to ensure these results are reproducible. Some other potential weaknesses: 1. 
The set of 34 compounds resulting from filtering the generated molecules has not been analyzed. These molecules could be presented in the supplementary materials as qualitative results, and their similarity to known drugs could be measured. 2. The structural encoding is based on Morgan fingerprints. Other encodings using more general descriptors could be tested to see if this method is not overfitting to known chemotypes. Other Comments Or Suggestions: 1. There is a typo in line 34: "Dues to these challenges..." 2. Typo in line 609, "in the loss $\mathcal{X}\_\text{drug}$" (should be $\mathcal{L}\_\text{drug}$). Questions For Authors: 1. While the idea of using one-class classification is intriguing, I have concerns about the applicability of such methods. Machine learning techniques rely solely on known drugs, and in the future, molecules with entirely different structures may be proposed, including compounds for rare diseases that currently lack approved treatments. Ultimately, what matters for a molecule's success is its safety, efficacy, and, at the early stages of development, its synthesizability. Do you think machine learning methods will yield more robust results for novel chemical structures, or do classical filtering methods (like rules similar to the rule of 5) still need to be employed? 2. Have you compared the training and testing sets of drugs to see how similar these compounds are to one another? This comparison could resemble Figure 11 in the supplementary material, focusing specifically on the positives from the training and testing sets in the time-based split. It would also be interesting to see how many testing positives were developed for the same target or disease as those in the training set. Code Of Conduct: Affirmed. Overall Recommendation: 4
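The geodesic mixup mentioned in this review (introduced by Oh et al.) interpolates aligned embeddings along the unit hypersphere rather than linearly. A minimal spherical-linear-interpolation (slerp) sketch, assuming unit-norm embedding vectors; this is an illustrative reconstruction, not the paper's exact implementation:

```python
import math

def geodesic_mixup(u, v, lam):
    # Slerp between unit vectors u and v: walks a fraction lam of the way
    # along the great-circle arc from u to v, staying on the unit sphere.
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    theta = math.acos(dot)
    if theta < 1e-8:  # (nearly) identical directions: nothing to mix
        return list(u)
    s = math.sin(theta)
    w_u = math.sin((1.0 - lam) * theta) / s
    w_v = math.sin(lam * theta) / s
    return [w_u * a + w_v * b for a, b in zip(u, v)]
```

Unlike ordinary linear mixup, the interpolated point keeps unit norm, which is why it suits CLIP-style embedding spaces where only directions carry meaning.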
Rebuttal 1: Rebuttal: Thank you, reviewer `CLET`, for your time and thoughtful comments. We sincerely appreciate the depth of your understanding and the effort you invested in reviewing our work. Below, we provide point-by-point responses, with all referenced Tables and Figures available in the attached PDF at the anonymous repo: https://anonymous.4open.science/r/boundr_e/R/R3.pdf > **Methods And Evaluation Criteria** 1. Thank you for raising this important point. To assess redundancy between drugs approved before and after the time cutoff, we computed pairwise Tanimoto similarity between test and training drugs in the time-split setting and found a low median similarity of 0.27. As shown in Figure 3-1, the similarity distributions between the time split and the scaffold split were not significantly different (t-test p = 0.568), suggesting that the high performance is not due to scaffold-level redundancy. Figure 3-2 also shows the discrepancy between the molecular property distributions of the train and test drug sets. Further analysis (see `Questions For Authors` Q2) supports the low pharmacological redundancy. 2. We appreciate your insightful suggestion. Based on our prior analysis (Appendix 3, Figure 10), we identified PubChem as the non-drug dataset most similar to DrugBank in key molecular properties. Motivated by this, we evaluated our model using PubChem, instead of ZINC, as a negative set and observed a significant performance drop across all models (Table 3-1), confirming that property-matched negatives present a more realistic challenge. We plan to include these results and extend the evaluation to all baselines in the next version. 3. Acknowledging this important point, we conducted additional experiments using class-weighted versions of XGBoost and SVM to mitigate class imbalance. We tested various weight ratios and found that assigning a weight of 0.5 or 2.0 to the positive class yielded modest improvements in some metrics (Tables 3-2 and 3-3). 
While the improvements were not significant enough to change the overall performance ranking, this adjustment did help investigate the models' bias toward the majority class and the effect of class weighting on balanced drug-likeness predictions. > **Theoretical Claims** Thank you for carefully examining our theoretical analysis and highlighting these important points. We agree that the provided theoretical discussions are primarily intuitive sketches rather than rigorous proofs, and some scenarios indeed depend on the stochastic nature of our EM-like optimization arising from SGD-based neural network training. To *empirically* support these claims, we have now included experimental figures demonstrating the gradual shrinkage of the radius and the consistent decrease of the loss terms over iterations (Figure 3-3). We will tone down the term "proofs" in our manuscript, clarify these additional cases, and place more emphasis on empirical evidence in the revised version. > **Other Strengths And Weaknesses** 1. Thank you for highlighting this essential aspect. We have included the structures of the filtered molecules, along with the closest approved drug pair, in Figure 3-4 of the link. We also plan to integrate them into future versions of the manuscript. 2. This is an excellent suggestion. To evaluate whether our method overfits to chemotype-specific features, we replaced Morgan fingerprints with Mordred descriptors, which include 697 general physicochemical and topological descriptors. As shown in Table 3-4, this substitution led to a consistent performance drop across all models, including ours as well as other fingerprint-based baselines. This suggests that general descriptors may carry less discriminative signal for the task of drug-likeness prediction compared to substructure-based encodings. We appreciate your suggestion, which has helped broaden the scope of our evaluation. > **Questions For Authors** 1. 
Thank you for raising this critical point, which aligns with the core challenges of data-driven drug discovery. We agree that models trained on known drugs may struggle to generalize to novel modalities, like PROTACs. Our approach is intended as a robust early-stage filter for well-established chemical spaces, such as me-too drug discovery. We view classical rule-based methods as complementary, especially for assessing compounds in unexplored spaces. As the field evolves and more diverse drugs are approved, we expect machine learning approaches to become more broadly applicable. We will incorporate this essential perspective in the Discussion section. 2. We appreciate this excellent and specific suggestion. We agree that assessing similarity between training and test positives is key to evaluating generalization. Extending from Q1 of `Methods And Evaluation Criteria`, analysis on potential biological overlap show only 9.6% of test drugs shared an ATC code with any training drug, suggesting limited therapeutic redundancy. These results support that our model’s performance is not due to trivial similarity or duplication. --- Rebuttal Comment 1.1: Comment: Thank you for your response and for providing additional results. I have no further questions. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer `CLET`’s time and thoughtful evaluation throughout the review process. Thank you again for your constructive feedback and for considering our additional results, which has greatly improved the clarity and quality of our work.
Summary: This paper tackles the challenge of identifying drug-like molecules amid huge chemical libraries. Unlike previous methods that either rely on hard negative sets or purely structural filters, the authors propose BOUNDRE, an iterative one-class approach that defines a “drug-likeness” boundary around approved drugs. Two key innovations are: (1) Multi-modal embedding alignment of molecular structure and biomedical knowledge using a softened CLIP-style contrastive loss plus geodesic mixup, and (2) EM-like boundary refinement, which tightens or expands the hypersphere of drug-likeness by alternately updating the boundary parameters and the latent-space encoder. The authors demonstrate substantial gains over state-of-the-art baselines on multiple datasets, with better coverage of approved drugs and stronger exclusion of toxic or non-drug compounds. They also highlight real-world utility by applying BOUNDRE to filter AI-generated molecules, yielding a small but drug-favorable subset. Overall, the paper introduces a robust, knowledge-enriched approach for early-stage drug-likeness filtering in modern data-driven drug discovery pipelines. Claims And Evidence: Several claims with their evidence: Knowledge integration via alignment of structural embeddings with a biomedical knowledge graph is crucial - Ablation shows performance drops significantly if the knowledge alignment or geodesic mixup steps are removed. Iterative (EM-like) boundary approach consistently outperforms static classification (e.g., MLP, GCN) - visualizations of the embedding space show non-drug compounds increasingly pushed out with each iteration, and performance is stable even when negative examples are sampled from different databases. Methods And Evaluation Criteria: make sense Theoretical Claims: no theory Experimental Designs Or Analyses: it is sound. pretty extensive. 
Supplementary Material: yes Relation To Broader Scientific Literature: Builds on lines of GNN-based or VAE-based drug-likeness scoring (DeepDL, D-GCAN, DrugMetric). Connects to broad usage of one-class classification for anomaly detection, adapting it to a drug-likeness boundary context with iterative re-embedding. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strength: - Data-driven solution for drug-likeness that avoids questionable negative sets. - Iterative boundary approach is intuitive and stable. Other Comments Or Suggestions: NA Questions For Authors: Have you tried combining partial negative sets (e.g., known toxic scaffolds) to accelerate boundary contraction or offset local optima? Code Of Conduct: Affirmed. Overall Recommendation: 3
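The EM-like boundary refinement summarized in this review can be caricatured with a toy hypersphere fit: alternately re-estimate the center from the points currently inside the boundary, then shrink the radius to a distance quantile. All names and the quantile heuristic below are assumptions for illustration; BOUNDRE additionally re-embeds points with a neural encoder at each step, which this sketch omits:

```python
import math

def fit_hypersphere(points, q=0.8, iters=3):
    # EM-like toy loop: (1) re-estimate the center from points currently
    # inside the boundary, (2) set the radius to the q-quantile of their
    # distances, so the sphere tightens around the dense "drug-like" cluster.
    inside = list(points)
    dim = len(points[0])
    for _ in range(iters):
        center = tuple(sum(p[i] for p in inside) / len(inside) for i in range(dim))
        dists = sorted(math.dist(p, center) for p in inside)
        radius = dists[int(q * (len(dists) - 1))]
        inside = [p for p in points if math.dist(p, center) <= radius] or inside
    return center, radius

def is_inside(x, center, radius):
    # One-class decision: inside the hypersphere counts as "positive"
    return math.dist(x, center) <= radius
```

With a tight cluster plus one far outlier, the center drifts toward the cluster and the outlier ends up outside the boundary, mirroring how non-drug compounds are progressively pushed out in the paper's visualizations.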
Rebuttal 1: Rebuttal: ### Overall Response We sincerely thank the reviewer `KY6P` for the insightful and constructive comments. We're especially grateful for your thoughtful summary and understanding of our iterative one-class framework, as well as the recognition of our method's contribution to early-stage drug-likeness filtering. Your interpretation of the boundary refinement and multi-modal embedding alignment is spot on, and your encouragement truly motivates us. > **Questions For Authors:** Have you tried combining partial negative sets (e.g., known toxic scaffolds) to accelerate boundary contraction or offset local optima? Thank you for this excellent and thought-provoking suggestion. We agree that incorporating partial negative sets—such as known toxic scaffolds—can provide valuable guidance for boundary refinement by encouraging the model to contract more meaningfully. Motivated by your idea, we conducted additional experiments in which we introduced toxic compounds (a total of ~2,316 hepatotoxic, cardiotoxic, and carcinogenic compounds) into the non-drug training set. This adjustment aimed to influence the decision boundary more explicitly during training.

**Table 2-1. Performances of comparison models with/without toxic compounds as negatives in the scaffold-split setting.** Mean and standard deviation of 10-fold CV are provided.

| | F1 (↑) | IDR (↑) | ICR (↓) | AUROC (↑) | Avg. Precision (↑) |
|-|-|-|-|-|-|
| **BounDrE** (original) | 0.655 (0.0209) | 0.796 (0.0258) | 0.063 (0.0079) | 0.938 (0.005) | 0.590 (0.037) |
| **BounDrE** (with Tox negatives) | 0.601 (0.0390) | 0.756 (0.0369) | 0.076 (0.0149) | 0.910 (0.0124) | 0.510 (0.0495) |
| Difference | -0.054 | -0.040 | 0.013 | -0.028 | -0.080 |
| **FP-XGB** (original) | 0.602 (0.0181) | 0.941 (0.0281) | 0.118 (0.0112) | 0.972 (0.012) | 0.811 (0.081) |
| **FP-XGB** (with Tox negatives) | 0.341 (0.1664) | 0.266 (0.1599) | 0.023 (0.0060) | 0.838 (0.0666) | 0.424 (0.1573) |
| Difference | -0.261 | -0.675 | -0.095 | -0.134 | -0.387 |
| **FP-SVM** (original) | 0.597 (0.0090) | 0.951 (0.0286) | 0.122 (0.0061) | 0.971 (0.012) | 0.765 (0.101) |
| **FP-SVM** (with Tox negatives) | 0.287 (0.1958) | 0.195 (0.1815) | 0.004 (0.0019) | 0.762 (0.0906) | 0.420 (0.1737) |
| Difference | -0.310 | -0.756 | -0.118 | -0.209 | -0.345 |

Interestingly, as shown in Table 2-1, while this strategy led to a modest decrease in classification performance across all models—possibly due to increased heterogeneity in the negative class—it also resulted in significantly **faster convergence** during training (15% shorter training, from an average of 47.7 epochs to 41.1 epochs). This suggests that even noisy but biologically meaningful negatives can serve as strong regularizers in the boundary contraction process. Furthermore, while the ML classifiers (XGBoost and SVM) experienced a severe performance drop when heterogeneous toxicity compounds were included in the negative set, our BounDrE model showcased its robustness to the negative set with only a minor decline in overall metrics. Additionally, the drastic decrease of both IDR and ICR for the ML models indicates an overly tight decision boundary formed by the inclusion of toxic compounds, yielding fewer test molecules predicted as drugs. 
We plan to include these results, which support the necessity of iterative one-class boundary optimization for minimizing reliance on the negative set, in future versions. We truly appreciate your insightful question, which has opened up a promising new direction in our research.
Summary: This paper proposes a method which predicts drug-like molecules by defining a hypersphere in a latent embedding space. They show promising results at predicting which drugs are clinically approved. ### update after rebuttal The authors performed additional experiments and answered my questions. I raised my score to a 4. Claims And Evidence: Yes, the claims and evidence seem well-supported. Methods And Evaluation Criteria: Yes, they generally make sense, although I am concerned that the set of approved drugs is small, and there might be train-test set leakage that allows for "cheating" to some degree. Theoretical Claims: I did not look at the theoretical results in detail, but the claims did not clash with my knowledge about the EM algorithm. Experimental Designs Or Analyses: Generally, yes. I had only two concerns: 1. I would like to see all fingerprint methods fit using _count_ fingerprints instead of binary fingerprints, since this reduces the amount of information loss and tends to improve performance in my experience. In `rdkit`'s fingerprint generator I believe you can just specify `use_counts=True`. 2. I would be curious about "distance to nearest approved drug in the training set" as a method, ie $d_{nn}(x) = \min_{x'\in\text{train drugs}} d(x,x')$. The distance function $d$ could be Tanimoto distance between ECFP fingerprints (count fingerprints, no compression, use rdkit's sparse fingerprint object). The method would predict "drug" if $d_{nn}(x) < \gamma$, otherwise not a drug. The method would have a single hyperparameter to tune ($\gamma$), which you could fit on the validation set. I suggest this because it would very clearly show how much structural similarity there is between test and training set (this is not clear from the time split). Supplementary Material: Yes, I skimmed through the entire supplementary material. 
Relation To Broader Scientific Literature: The proposed algorithm has many parts, none of which are individually particularly novel, but the combination and application to drug-likeness is novel as far as I can tell. Essential References Not Discussed: none that I am aware of Other Strengths And Weaknesses: Overall this method is very complicated, but it is explained well and seems to work well for both time splits and scaffold splits. The paper is very thorough and contains a lot of extra experiments in the appendix - well done! Other Comments Or Suggestions: I am almost worried that the results are "too good to be true", since predicting approved drugs without knowing what the drug should treat should be very hard. I am suspicious that high performance is only possible due to spurious correlations in the dataset. Some possible ideas for correlations are: - "me too drugs": ie clusters of very similar drugs developed by different companies for the same disease - families of drugs which are very similar (eg different antibiotics containing rare substructures) - Since ZINC is used as the reference set of "non-drug" molecules, it is possible that this is not a challenging reference class. As far as I understand, ZINC has a relatively narrow range of molecular weights / logP / etc. Maybe any molecule with a very "unusual" feature is very easy to classify as not belonging to ZINC. I suggested a nearest neighbor experiment above which I think could provide some insight into this, and hopefully would not be very much effort to implement. Questions For Authors: - Did you do any analysis of the physical property distribution of ZINC vs approved drugs? Eg molecular weight, logP, atom type frequency - Did you retrain all the baseline methods on your exact evaluation dataset? Code Of Conduct: Affirmed. Overall Recommendation: 4
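The nearest-neighbor baseline suggested in this review can be sketched directly. Assuming count fingerprints are represented as sparse dicts mapping feature id to count (a stand-in for rdkit's sparse fingerprint objects), and using similarity with a threshold gamma rather than distance:

```python
def tanimoto(a, b):
    # a, b: sparse count fingerprints as dicts {feature_id: count};
    # count-Tanimoto = sum of per-feature minima over sum of maxima
    common = sum(min(a[k], b[k]) for k in a.keys() & b.keys())
    total = sum(a.values()) + sum(b.values()) - common
    return common / total if total else 0.0

def nn_similarity(x, train_drugs):
    # similarity to the nearest approved drug in the training set
    return max(tanimoto(x, d) for d in train_drugs)

def predict_drug(x, train_drugs, gamma):
    # predict "drug" iff the nearest-neighbour similarity exceeds gamma;
    # gamma is the single hyperparameter, tuned on a validation set
    return nn_similarity(x, train_drugs) >= gamma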
Rebuttal 1: Rebuttal: We would like to thank reviewer `Ha6n` for the time invested in reaching such a deep understanding of our work, and for the insightful comments. The comments make clear how much time and effort you have taken to understand the intentions of our study. Below, we provide point-by-point responses, with all referenced Tables and Figures available in the attached PDF at the blind repo: https://anonymous.4open.science/r/boundr_e/R/R1.pdf > **Experimental Designs Or Analyses** 1. Thank you for suggesting the use of count-based fingerprints (FPs) (via `use_counts=True`). We additionally ran experiments under this setting for all our baseline and FP-ML methods. Interestingly, as shown in Table 1-1 of the attached link, the performance consistently dropped compared to binary fingerprints for all models. Visualization of the FP spaces reveals that approved drugs are more compactly clustered in the binary FP space than in the count-based space (Figure 1-1). While this result contrasts with prior expectations, we have not yet been able to fully explore the cause due to the tight rebuttal deadline. We appreciate your insightful suggestion and plan to further investigate potential optimizations or alternative settings in future work. 2. Thank you for this excellent suggestion. We implemented the suggested 1-nearest-neighbor baseline (`Sim-cutoff`), classifying test compounds as drug-like if their maximum Tanimoto similarity to any training-set drug exceeded a tuned threshold γ, using count-based sparse Morgan fingerprints. Following best practices, we used similarity rather than distance since Tanimoto’s metric properties are debated [1]. As shown in Table 1-2, `Sim-cutoff` performed reasonably well, indicating the influence of structural similarity in classification. However, a noticeable performance gap remains compared to our method.
We further tested with PubChem as a more challenging negative set and observed a notable performance drop, supporting your point that ZINC-based evaluations may overestimate generalization compared to the PubChem dataset (more discussion under `Questions For Authors` Q1). > **Other Comments Or Suggestions** We appreciate your thoughtful concerns about potential spurious correlations and the possibility that some results may appear “too good to be true.” In response, we conducted additional analyses: **(1) Dataset Redundancy Analysis:** To quantitatively assess potential overlap in therapeutic targets, we analyzed the pharmacological class overlap using ATC codes. The results show that only 9.6% of test drugs share an ATC code with any training drug, indicating limited therapeutic redundancy. Additionally, the distribution of molecular properties differs significantly (t-test p-val < 0.05) between training and test drugs (Figure 1-2), suggesting low structural redundancy as well. **(2) Nearest Neighbor Experiments:** Following your helpful suggestion, we implemented a `Sim-cutoff` model and also a KNN classifier using count-based ECFP. Both yielded limited performance across splits (Table 1-3), suggesting that this problem cannot be solved by compound similarity alone. Interestingly, they performed slightly better on the scaffold split, implying that longitudinal evolution in drug design introduces unique challenges not captured by chemical similarity alone. **(3) ZINC vs. PubChem Background Set:** Following your valid concern, and to test against a more drug-like background, we evaluated our model on PubChem, a dataset with more diverse molecular properties than ZINC. Despite the harder setting, our model remained robust (Table 1-4), supporting its generalization beyond simple structural-similarity-based models. > **Questions For Authors** 1. Thank you for this valuable suggestion.
We did analyze the physical property distributions between ZINC and approved drugs (DrugBank), as previously presented in Appendix C.5 and Figure 10. However, we recognize that the appendix was dense and not easy to navigate, and we plan to improve its organization in future revisions. The analysis revealed clear distributional differences between DrugBank and ZINC compounds in terms of molecular weight, logP, and atom-type frequencies. These distinctions likely contribute to the strong performance differences, as you noted. To address this, we evaluated our model using PubChem as a more drug-like negative set. As expected, performance dropped to more realistic levels across all models (Table 1-4). These results support your concern that ZINC is a relatively easier negative set, and we plan to include this analysis in the next revision. 2. Yes, we retrained all baseline models using our exact evaluation framework to ensure a fair and consistent comparison. All methods were re-implemented or adapted to use the same training/test splits and molecular features (e.g., ECFP). We made every effort to align training conditions across models to support a meaningful evaluation. [1] Surendran, A, et al. "Is Tanimoto a metric?" bioRxiv (2025). --- Rebuttal Comment 1.1: Comment: Thank you for the thorough and thoughtful response. I am surprised that count fingerprints worked worse than non-count fingerprints... Because these concerns are addressed, I am happy to raise my score a bit. In the next version of the manuscript, I think it would be more appropriate to emphasize the PubChem results alongside ZINC, since it is a more reasonable negative set. Another potential choice would be ChEMBL. --- On a separate subject: you stated > Tanimoto’s metric properties are debated [1] There is no "debate".
For count fingerprints, the way rdkit computes Tanimoto similarity is using "min-max" distance $$1-\frac{\sum_i \min(x_i, y_i)}{\sum_i\max(x_i, y_i)}$$ This _is_ a valid metric (and I suggest you use their implementation since it is fast). The "other" version using dot products is not a metric and I suggest you don't use it in any of your experiments (except with binary fingerprints where the min-max and dot product versions are equivalent). --- Reply to Comment 1.1.1: Comment: We sincerely appreciate reviewer `Ha6n`’s thoughtful follow-up and willingness to raise their score. We agree that the observation regarding count-based versus non-count-based fingerprints is interesting, and will elaborate further on this point in future work. Thank you again for suggesting such an interesting direction. Regarding dataset emphasis, we fully agree with the reviewer’s suggestion that PubChem serves as a more appropriate negative sampling set than ZINC, due to its chemical diversity and relevance. In the revised version, we will place greater emphasis on the PubChem results and adjust the discussion to reflect its significance. Finally, thank you for the clarification regarding the Tanimoto similarity metric. We acknowledge that the RDKit implementation using the min–max formulation for count-based fingerprints is a valid metric, and will ensure that the RDKit implementation is used consistently throughout our experiments. Thank you again for your detailed and constructive feedback, which has greatly improved the clarity and quality of our work.
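The reviewer's distinction between the two Tanimoto variants can be checked numerically; the following sketch uses toy vectors (not RDKit's implementation) to show that the min-max and dot-product forms coincide on binary fingerprints but generally differ on count fingerprints, which is exactly the point made in the comment above.

```python
# Min-max Tanimoto similarity (whose complement is a valid metric) versus the
# dot-product variant. For binary vectors the two coincide; for count vectors
# they generally differ. Toy vectors only; not RDKit's implementation.

def tanimoto_minmax(x, y):
    return sum(map(min, x, y)) / sum(map(max, x, y))

def tanimoto_dot(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (sum(a * a for a in x) + sum(b * b for b in y) - dot)

binary_x, binary_y = [1, 0, 1, 1], [1, 1, 0, 1]
count_x, count_y = [2, 0, 3, 1], [1, 2, 0, 1]

# Equivalent on binary fingerprints:
print(tanimoto_minmax(binary_x, binary_y) == tanimoto_dot(binary_x, binary_y))  # True
# Generally different on count fingerprints:
print(tanimoto_minmax(count_x, count_y), tanimoto_dot(count_x, count_y))
```

On the binary pair both forms give 0.5, while on the count pair min-max gives 0.25 and the dot-product form gives a different value, illustrating why the choice matters once `use_counts` is enabled.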
ETTA: Elucidating the Design Space of Text-to-Audio Models
Accept (poster)
Summary: This paper presents a high-quality audio-caption dataset containing 1.35M pairs, named AF-Synthetic. The dataset is built using the state-of-the-art (SoTA) audio-language model Audio Flamingo. In addition, the paper introduces a text-to-audio system based on a Diffusion Transformer (DiT) based Latent Diffusion Model. The experimental results demonstrate that the proposed system, named Elucidated Text-To-Audio (ETTA), achieves SoTA performance on both natural audio and music. Claims And Evidence: The authors claim the effectiveness of the proposed dataset by demonstrating the SoTA performance of the ETTA system trained on AF-Synthetic. However, the paper lacks sufficient experiments directly comparing the dataset's contribution, such as showing how other SoTA models, such as Tango and AudioLDM, perform when trained with AF-Synthetic. Methods And Evaluation Criteria: The main weakness of the paper is that using an audio-language model such as Audio Flamingo to generate the captions makes the results read more like conversation than normal captions. Most of the captions begin with “the audio consists”, “the audio features”, or “There is a”, which differs from real-world cases. Theoretical Claims: None Experimental Designs Or Analyses: The paper lacks experiments illustrating the effectiveness of the AF-Synthetic dataset on other systems. In addition, as an audio-language dataset, experiments that present the contribution of the dataset on other tasks, such as audio retrieval and audio captioning, would further demonstrate the dataset's usefulness. Supplementary Material: The paper presents some demos to illustrate the performance. However, it could use more demos in the music domain.
Relation To Broader Scientific Literature: The paper mainly provides a larger-scale audio dataset, which is useful for audio-related tasks Essential References Not Discussed: None Other Strengths And Weaknesses: The proposed system, named ETTA, is more like a model applying Stable Audio as the backbone; not many novel techniques are proposed in this system. Other Comments Or Suggestions: Please see questions. Questions For Authors: Overall, this is a high-quality paper and the idea of creating an audio-caption dataset is interesting. I am happy to change the score if the authors can address the following questions or experiments. Question 1: What is the performance of other text-to-audio systems trained on AF-Synthetic? Such as AudioLDM, Tango, Make-An-Audio. Question 2: What is the difference between ETTA and Stable Audio (aside from the DiT module)? Question 3: Why choose the current CLAP threshold, and would the dataset improve if the threshold were raised? Question 4: Did the audio-language format of the captions lead to some problems? Is there any way to avoid such structure and re-caption into a more formal structure? Question 5: Why does ETTA not outperform previous models in some MOS metrics? Question 6: Could you also provide some demos on music? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your supportive review. We address your concerns as follows: **Q: The paper lacks sufficient experiments directly comparing the dataset's contribution, such as showing how other SoTA models, such as Tango and AudioLDM, perform when trained with AF-Synthetic. What is the performance of other text-to-audio systems trained on AF-Synthetic? Such as AudioLDM, Tango, Make-An-Audio.** A: We acknowledge your concern; the current main results (Tables 2 and 3) are structured to represent prior works in their original form by using the official results and/or their public checkpoints. Stable Audio Open is our most modern and direct baseline, and we did perform the ablation study on the AF-Synthetic dataset’s direct impact while keeping the baseline model identical in Table 4 (+AF-Synthetic). Compared to the public Stable Audio Open checkpoint, AF-Synthetic provides significant improvements from the training dataset alone. In our preliminary study, we also made our best effort to train previous models (other than Stable Audio Open) using AF-Synthetic, but except for Stable Audio Open we faced difficulties using their original training recipes (e.g. model not converging, loss instabilities, etc.). We speculate these models would need non-trivial optimization of the training recipe to accommodate significantly larger-scale data such as AF-Synthetic. **Q: The main weakness of the paper is that using an audio-language model such as Audio Flamingo to generate the captions makes the results read more like conversation than normal captions. Most of the captions begin with “the audio consists”, “the audio features”, or “There is a”, which differs from real-world cases. + Did the audio-language format of the captions lead to some problems? Is there any way to avoid such structure and re-caption into a more formal structure?** A: Thank you for asking this important question.
While AF-Synthetic captions mostly start with a conversational prefix from the audio language model (Audio Flamingo), we empirically find that the model is robust at generating good-quality samples from plain captions, as evidenced by our main benchmarks that use the original captions (Tables 2, 3, and 22) along with the OOD imaginative prompts on the demo webpage. We also tried using an external LLM (Nemotron-340B [1]) to rephrase and shorten the AF-Synthetic captions to be more caption-like (similar to AudioCaps). We found ETTA has similar quality when trained on this rephrased dataset. [1] Adler, Bo, et al. "Nemotron-4 340b technical report." arXiv preprint arXiv:2406.11704 (2024). **Q: What is the difference between ETTA and Stable Audio (aside from the DiT module)?** A: Other than the improved DiT backbone, ETTA uses an improved VAE, a better training objective (OT-CFM w/ logit-normal t-sampling), and is trained on AF-Synthetic. **Q: Why choose the current CLAP threshold, and would the dataset improve if the threshold were raised?** A: The CLAP threshold >= 0.45 is set following a suggestion in existing work [2]. Raising the CLAP threshold would retain a higher-quality subset at the cost of a smaller dataset. [2] Kong, Zhifeng, et al. "Improving text-to-audio models with synthetic captions." arXiv preprint arXiv:2406.15487 (2024). **Q: Why does ETTA not outperform previous models in some MOS metrics?** A: For music, Stable Audio Open is trained on high-quality music from the Freesound and Free Music Archive (FMA) datasets, but many of these samples are not available as of today. In contrast, ETTA is not trained on any studio-quality music datasets (not even MusicCaps). As a result, ETTA has a slightly worse OVL (which only measures audio quality and does not consider text) on MusicCaps and SongDescriber.
While only trained on synthetic captions, ETTA outperforms Stable Audio Open on the REL (audio-text relevance) score on AudioCaps and MusicCaps (Tables 2-3), and matches it on the OOD test set SongDescriber (Table 22). **Q: Could you also provide some demos on music?** A: We have added several MusicCaps benchmark samples to the updated demo webpage. Link: https://anonymous.4open.science/r/etta_demo-778A/index.md --- Rebuttal Comment 1.1: Comment: The authors have explained and answered most of my concerns. I have raised my score; good luck with the paper.
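The caption-filtering rule discussed in this thread (keep only captions whose CLAP audio-text similarity is at least 0.45, and discard the audio entirely if no caption passes) can be sketched as follows. This is a minimal illustration: the scored captions are toy values standing in for a real CLAP model, and the function name is hypothetical.

```python
# Sketch of the CLAP-based caption filtering described in the rebuttal:
# retain captions whose CLAP score clears the threshold (0.45 per the paper),
# and signal that the audio should be discarded if no caption passes.
# Toy (caption, score) pairs stand in for real CLAP similarities.

CLAP_THRESHOLD = 0.45

def filter_captions(scored_captions, threshold=CLAP_THRESHOLD):
    """scored_captions: list of (caption, clap_score) pairs.
    Returns the kept captions, or None to signal the audio is discarded."""
    kept = [caption for caption, score in scored_captions if score >= threshold]
    return kept if kept else None

print(filter_captions([("a dog barks", 0.61), ("rain falls", 0.30)]))  # ['a dog barks']
print(filter_captions([("static noise", 0.12)]))  # None -> discard the audio
```

Raising `threshold` retains a higher-quality subset at the cost of a smaller dataset, which is the trade-off the rebuttal describes.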
Summary: This paper proposes ETTA, a state-of-the-art text-to-audio model trained on public data. Its innovations include: - A new dataset called AF-Synthetic that follows the audio captioning pipeline from AF-AudioSet but scales up to million-scale. This is done by captioning or re-captioning audio in AudioCaps, AudioSet, VGGSound, WavCaps, and LAION-630K datasets. - A latent diffusion/flow matching model that builds on Stable Audio and makes several architectural changes. - An analysis on the scalability with respect to model size and training data size. - An ablation study on diffusion sampler and steps (NFEs). Claims And Evidence: Yes in general, but I have minor questions about some details. - The authors claim that "We find the 1.44B model with depth=24 and width=1536 leads to an optimal balance between model size and quality." However, according to Table 6, the 2.08B 36-layer model seems to produce better results. So why is the balance achieved by the 1.44B model optimal? - The authors use experiments to show that an excessively high CFG scale is suboptimal in terms of FD, claiming the compromised diversity as the reason for such a suboptimality. I found this under-motivated, as CFG could suffer FD penalties via other mechanisms, such as distortion caused by CFG's extrapolation. Methods And Evaluation Criteria: Yes in general. I would like to point out several minor things. - The authors use CLAP score to show the superiority of AF-Synthetic captions. While I generally trust CLAP and am convinced about the quality of the proposed data captions, I believe it would benefit from a small-scale human listening test to verify AF-Synthetic's superiority. That is, for a small set of audio pieces shared between AF-Synthetic and AudioCaps or WavCaps or Laion-630K, I suggest letting human raters compare the captions in AF-Synthetics and those in the source dataset. 
- It would also be nice to include AudioCaps, WavCaps, and Laion-630K in Figure 1 (can be random subsets of these datasets). - When evaluating the distributional similarity between AF-Synthetic captions and other caption datasets, the authors considered a CLAP-score-based max-similarity metric. An alternative and potentially more straightforward metric would be Frechet embedding distance. That is, we can use a similar approach to computing FAD using a CLAP backbone, except now we use CLAP text embeddings instead of audio embeddings. A smaller distance between two caption sets means more similar captions. - The authors make several architectural changes to the DiT model (AdaLN, GELU, etc.) How were these modifications decided? Were they added all at once, or one by one? If they were added to the baseline DiT one by one, which modification made the largest impact? - The authors compare flow-matching models with $v$-prediction diffusion models. To my knowledge, score-prediction diffusion models are also very popular and are known to produce high performance. Has it been considered to compare with them? Theoretical Claims: N/A. Experimental Designs Or Analyses: Yes. They are sound. Supplementary Material: I read the ablation study between text encoders. Haven't got time to read the rest. Relation To Broader Scientific Literature: See summary. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Overall, this paper is comprehensive, thorough, well-written, and worth accepting.** It shares important insights into various dimensions of TTA model design. I had a good laugh when I listened to the demo. **As mentioned above, several things that can be improved are:** - Stronger motivations and explanations for architectural changes. - Human evaluation on the captions of AF-synthetic. - (Stretch goal) compare with a score-prediction diffusion setup. 
This paper is already quite comprehensive, and I am aware that training a new model from scratch could be time-consuming and infeasible to complete within the rebuttal time frame. So this is optional. **If the above can be addressed, I recommend accepting this paper.** Other Comments Or Suggestions: - There are a lot of interesting results buried in the appendix. It would be nice to briefly mention some of them in the main paper body. - I found it interesting that FLAN-T5-Large is not universally better than T5-Base. Not asking for additional experiments, but do you think this is a "quirk" of this particular model, or is it something likely shared across different types of conditional diffusion/flow matching models? - In Table 2, there are several models comparable with (and even better than) "ETTA (AudioCaps only)" in some metrics. For example, Make-An-Audio 2, TANGO-AF&AC-FT-AC, TANGO2, and GenAU-L all seem strong. Are these models comparable with "ETTA (AudioCaps only)"? Were they trained on additional data? Would be nice to add an "extra training data" column to that table. Questions For Authors: - Is there any chance that the AF-Synthetic dataset can be open-sourced? - What does "Finally, we also sub-sample long audio segments except for music and speech" mean in Section 3.1? - What does "We switch from prepending to AdaLN timestep embedding and apply AdaLN" mean in Section 3.2? I'm also confused by the footnote "in our preliminary study using stable-audio-tools with its vanilla implementation, switching from prepending to AdaLN resulted in worse results." So AdaLN didn't work with vanilla implementation but worked for ETTA? Why?
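The Frechet embedding distance the reviewer suggests for comparing caption sets can be sketched under a diagonal-covariance simplification (an assumption made here to keep the example dependency-free; the general form needs a matrix square root of the covariance product). With diagonal covariances the distance reduces to the squared mean difference plus the squared standard-deviation difference per dimension.

```python
# Frechet distance between two embedding sets, assuming diagonal covariances
# (a simplification of the full formula, which requires a matrix sqrt):
#   FD = sum_d (mu1_d - mu2_d)^2 + (sigma1_d - sigma2_d)^2
# Toy 2-d vectors below stand in for CLAP text embeddings.

import math

def mean_std(vectors):
    n, dim = len(vectors), len(vectors[0])
    mu = [sum(v[d] for v in vectors) / n for d in range(dim)]
    sigma = [math.sqrt(sum((v[d] - mu[d]) ** 2 for v in vectors) / n)
             for d in range(dim)]
    return mu, sigma

def frechet_diag(set_a, set_b):
    mu_a, sd_a = mean_std(set_a)
    mu_b, sd_b = mean_std(set_b)
    return sum((m1 - m2) ** 2 + (s1 - s2) ** 2
               for m1, m2, s1, s2 in zip(mu_a, mu_b, sd_a, sd_b))

caps_a = [[0.1, 0.9], [0.2, 0.8]]  # toy "CLAP embeddings" of caption set A
caps_b = [[0.1, 0.9], [0.2, 0.8]]  # identical set B
print(frechet_diag(caps_a, caps_b))  # 0.0 for identical caption sets
```

A smaller distance between two caption sets would indicate more similar caption distributions, which is the comparison the reviewer has in mind.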
Rebuttal 1: Rebuttal: Thank you for your supportive review. We address your questions as follows: **Q: according to Table 6, the 2.08B 36-layer model seems to produce better results. So why is the balance achieved by the 1.44B model optimal?** A: We observed improvements for the deeper model (2.08B), but the gap narrows with CFG (Table 16 vs. 17). The subjective quality has been similar in our preliminary study. For this work we consider 1.44B a good balance between quality and speed, but we also expect that larger models will become better as we scale up the synthetic dataset. **Q: CFG could suffer FD penalties via other mechanisms, such as distortion caused by CFG's extrapolation.** A: This is a very good point. We also notice oversmoothing and/or distortion when CFG is set too high. This can be another source of its impact on FD. We will update the description accordingly to reflect this. **Q: I suggest letting human raters compare the captions in AF-Synthetic and those in the source dataset + An alternative and potentially more straightforward metric would be Frechet embedding distance.** A: Thank you for your suggestion to evaluate the subjective quality of the synthetic captions from AF-Synthetic. We also agree CLAP does provide a good proxy that quantifies the caption quality, as evidenced by ETTA’s generation results when using AF-Synthetic as the training dataset. We are happy to conduct human evaluation of AF-Synthetic captions. Since the dataset is very large, we will randomly sample a subset of captions and report human results in the final version. The purpose of the max-sim metric is to assess the extent to which we generate novel captions in AF-Synthetic. A max-sim close to 1.0 indicates that generated captions are copied from the source captions, which is not desired. We show in Figure 2 that max-sim is not close to 1.0, suggesting the generated captions are novel and diverse (see Tables 9 and 10 for some qualitative examples).
**Q: It would also be nice to include AudioCaps, WavCaps, and Laion-630K.** A: Nice suggestion. We will include the full results in the paper. **Q: The authors make several architectural changes to the DiT model (AdaLN, GELU, etc.) How were these modifications decided? Were they added all at once, or one by one? + AdaLN didn't work with vanilla implementation but worked for ETTA? Why?** A: The ETTA-DiT modifications were made all at once, inspired by recent best practices. We conjecture that adding AdaLN to both the self-attention & cross-attention input with unbounded gating provides stronger conditioning signals. We provide training loss curves for Stable Audio-DiT vs. ETTA-DiT (Figure-Rebuttal-1), link: https://anonymous.4open.science/r/etta_demo-778A/index.md . ETTA-DiT provides better convergence whereas Stable Audio Open-DiT plateaus early. **Q: To my knowledge, score-prediction diffusion models are also very popular and are known to produce high performance. Has it been considered to compare with them?** We did not consider score (epsilon) prediction diffusion but used velocity (v) prediction in this work, as v-prediction has been the default choice of our baseline (Stable Audio Open). To the best of the authors' knowledge, v-prediction has been favored by practitioners due to its better stability during training. We also find that OT-CFM provides even better stability. **Q: Make-An-Audio 2, TANGO-AF&AC-FT-AC, TANGO2, and GenAU-L all seem strong. Are these models comparable with "ETTA (AudioCaps only)"? Were they trained on additional data?** A: Baseline models use their own official results and/or the public checkpoints trained with different datasets. ETTA (AudioCaps only) in Table 2 also illustrates the improved results from fine-tuning ETTA on AudioCaps, resuming from ETTA pre-trained with AF-Synthetic. We’ll add the extra training data column to the main tables.
**Q: Is there any chance that the AF-Synthetic dataset can be open-sourced?** A: We will release the model code and data preparation methods to reproduce the results of the ETTA models. **Q: I found it interesting that FLAN-T5-Large is not universally better than T5-Base.** A: Thank you for mentioning this. While we are not drawing a conclusive claim, it suggests that an optimal choice may also depend on other factors (i.e. “quirks”). We also speculate that another possible reason is the difference in text embedding variance between the encoders [1]. [1] Xie, Enze, et al. "Sana: Efficient high-resolution image synthesis with linear diffusion transformers." arXiv preprint arXiv:2410.10629 (2024). **Q: What does "Finally, we also sub-sample long audio segments except for music and speech" mean in Section 3.1?** A: For long sound (non-music or speech) data, we found most of them to be homogeneous (e.g. an hour of continuous engine sound). For these samples, we sub-sample a few ten-second segments instead of adding the entire 360 segments to AF-Synthetic. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the response. My questions have been resolved, and I believe the paper is worth accepting.
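The max-sim novelty metric described in the rebuttal above (how close each generated caption's embedding is to its nearest source caption, with values near 1.0 indicating copying) might be computed along these lines. This is a sketch with cosine similarity on toy embedding vectors, not the actual CLAP encoder.

```python
# Sketch of the max-sim check from the rebuttal: for each generated caption
# embedding, take the maximum cosine similarity to any source-caption
# embedding. Values close to 1.0 suggest the caption was copied from the
# source set. Toy 2-d vectors stand in for CLAP text embeddings.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def max_sim(generated, sources):
    return [max(cosine(g, s) for s in sources) for g in generated]

sources = [[1.0, 0.0], [0.0, 1.0]]
generated = [[1.0, 0.0],   # identical to a source -> max-sim of 1.0
             [1.0, 1.0]]   # novel direction -> noticeably lower max-sim
print(max_sim(generated, sources))
```

A histogram of these per-caption values concentrated well below 1.0 is what the rebuttal points to in Figure 2 as evidence that the generated captions are novel.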
Summary: This paper explores the design space affecting text-to-audio generation models. Specifically, the authors analyze the effects of dataset quality and scale, architectural and training/inference design choices, and sampling methods during inference. For this purpose, a new large-scale synthetic dataset, AF-Synthetic, is constructed, and several architectural ablations are performed based on existing text-to-audio models. Extensive experiments indicate that both dataset size and quality are important, with quality having more impact. Additionally, improving DiT and applying OT-CFM result in more stable training and improved outcomes. Claims And Evidence: - The authors claim that the proposed AF-Synthetic significantly improves text-to-audio generation quality. While Table 4 shows improvement for the existing Stable Audio Open trained on AF-Synthetic, Table 5 indicates that performance with AF-AudioSet is comparable or even superior. Similarly, Table 20 suggests using AF-AudioSet is generally preferable. Although the reviewer acknowledges the authors' efforts in building a large dataset, it remains challenging to conclude that AF-Synthetic is crucial for model improvement. - The authors also argue that their proposed design choices effectively enhance model performance. Tables 5, 12, and 13 demonstrate improvements with AF-Synthetic. However, simply adding components does not fully support the effectiveness of each design choice, as results fluctuate. Additional analyses—such as human perception studies, loss curves, or convergence times—would better illustrate each component's impact. Methods And Evaluation Criteria: - The rationale behind adding each component is unclear. Additionally, comparisons seem unfair when datasets differ. For instance, the authors should provide ablations modifying each design choice while training on the same dataset as in Table 4.
Currently, all models trained with the new large-scale dataset potentially mask the true effects of added modules relative to the original Stable Audio Open. Theoretical Claims: No theoretical claims are made in the main paper. Experimental Designs Or Analyses: - Table 4 makes it difficult to interpret the contributions of proposed components, indicating that dataset scale has the largest effect, while other components yield mixed results. Clarifying effects via human evaluations, additional datasets, or deeper analyses would help. Tables 12 and 13 similarly show mixed results, suggesting dataset choice as the most influential factor. - Why are the results of Stable Audio Open different between Table 3 and Table 4? Are these different models? - In Table 5, AF-AudioSet achieves comparable performance with only one-tenth of AF-Synthetic's data, suggesting dataset quality outweighs dataset scale. This observation questions the necessity of AF-Synthetic compared to AF-AudioSet. - Comparing baseline models versus ETTA using the same training dataset (e.g., all trained on AF-Synthetic or AudioCaps) would better validate ETTA's design choices. - The reason for not reporting some human scores in Tables 2 and 3 is unclear. Table 2 reports human scores for both ETTA and ETTA-FT-AC-100k, while Table 3 only includes ETTA. Supplementary Material: The reviewer has reviewed all supplementary material. Concerns and questions are detailed in the "Claims and Evidence" and "Experimental Designs or Analyses" sections. Relation To Broader Scientific Literature: No relation to broader scientific literature is identified. Essential References Not Discussed: N/A Other Strengths And Weaknesses: - Limitations have not been discussed in this paper. Other Comments Or Suggestions: - In Section 3.1, "Table 2" should be denoted as "Figure 2." 
Questions For Authors: - In Section 3.1, when generating synthetic captions, if all generated captions have CLAP similarity below the threshold, is the audio discarded? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review. We address your concerns as follows: **Q: Table 5 indicates that performance with AF-AudioSet is comparable or even superior… It remains challenging to conclude that AF-Synthetic is crucial for model improvement… This observation questions the necessity of AF-Synthetic compared to AF-AudioSet.** A: We showed that the scaled-up AF-Synthetic dataset yields much better overall results for out-of-distribution data (Table 23). Note that AudioCaps and MusicCaps are considered in-distribution as both are derived from AudioSet. This means that using larger-scale training data (AF-Synthetic) provides better OOD generalizability. AF-Synthetic brings other practical benefits during training as well: we found that training ETTA using AF-AudioSet diverged after 250k training steps, whereas AF-Synthetic provides stable training up to 1M steps without instabilities. This means AF-Synthetic, thanks to its scale, helps ETTA converge better. We added the training loss curves of AF-AudioSet vs. AF-Synthetic (Figure-Rebuttal-3) along with the generated samples using the OOD challenging captions to the demo webpage, to better showcase the necessity of AF-Synthetic for generalization. Link: https://anonymous.4open.science/r/etta_demo-778A/index.md **Q: The rationale behind adding each component is unclear… Additional analyses—such as human perception studies, loss curves, or convergence times—would better illustrate each component's impact.** A: To address this concern, we added loss curves to the updated demo webpage to better illustrate the design choices: https://anonymous.4open.science/r/etta_demo-778A/index.md Figure-Rebuttal-1: Stable Audio-DiT vs. ETTA-DiT, under the same v-diffusion objective and AF-Synthetic training dataset for both. We find that Stable Audio-DiT plateaus around 300K steps and starts to diverge early. ETTA-DiT continues to improve its quality with better loss for the same training steps.
This shows clear benefits of the ETTA-DiT architecture. Figure-Rebuttal-2: v-diffusion vs. OT-CFM training objective, under the same ETTA-DiT architecture and AF-Synthetic dataset for both. Note that the loss scale of OT-CFM differs from v-diffusion. For prolonged training (e.g. over 500K steps), v-diffusion starts to be unstable, whereas OT-CFM provides better stability up to 1M steps. This shows practical advantages of OT-CFM over v-diffusion and is a motivation to use it for ETTA. Figure-Rebuttal-3: AF-AudioSet vs. AF-Synthetic training dataset, under the same OT-CFM objective and ETTA-DiT architecture. AF-AudioSet quickly diverges around 250K steps and is unable to continue its training, whereas AF-Synthetic provides better convergence with continued improvements up to 1M steps. We believe these additional loss plots better illustrate the rationale behind each of the design choices of ETTA. **Q: Why are the results of Stable Audio Open different between Table 3 and Table 4? Are these different models?** A: Table 4 turned off the classifier-free guidance (CFG) for all configs as an ablation study, including the original Stable Audio Open, so the difference compared to Table 3 (which uses the baselines' default CFG scales) is expected. **Q: Table 2 reports human scores for both ETTA and ETTA-FT-AC-100k, while Table 3 only includes ETTA.** A: Thank you for mentioning this; we indeed measured ETTA-FT-AC-100k on MusicCaps as well: OVL: 3.30 ± 0.10, REL: 3.44 ± 0.12. This shows that fine-tuning ETTA on AudioCaps (general sound) degrades the music generation quality in subjective evaluation, as expected. We will improve the writing to avoid confusion. **Q: In Section 3.1, when generating synthetic captions, if all generated captions have CLAP similarity below the threshold, is the audio discarded?** A: Yes, the audio is discarded in that case.
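The classifier-free guidance ablation discussed in this thread (Table 4 turns CFG off; an earlier review notes that excessive scales can distort outputs) rests on the standard linear combination of conditional and unconditional predictions. A minimal numeric sketch, with toy scalar "predictions" standing in for actual model outputs:

```python
# Standard classifier-free guidance combination:
#   guided = uncond + scale * (cond - uncond)
# scale = 1.0 recovers the plain conditional prediction (guidance off in
# effect), while larger scales extrapolate past it, which can oversmooth or
# distort outputs. Toy scalars stand in for real model predictions.

def cfg(cond, uncond, scale):
    return uncond + scale * (cond - uncond)

cond, uncond = 2.0, 1.0
print(cfg(cond, uncond, 1.0))  # 2.0: identical to the conditional prediction
print(cfg(cond, uncond, 4.0))  # 5.0: extrapolates well beyond it
```

This makes concrete why Table 4's "CFG off" setting isolates the model and dataset effects, and why very high scales can push predictions outside the range the model was trained on.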
Summary: The paper provides an extensive analysis of the design choices of TTA models, achieving much superior quantitative performance to baselines across most metrics. The authors provided extensive results showing the superiority and generalization of their method, as well as extensive ablations justifying their choices. Additionally, the paper presents a large-scale synthetic dataset whose effectiveness in TTA generation the authors have verified. Claims And Evidence: While the paper does not propose any novel methodological components, the authors supported all of their claims on the best practices to train TTA models with convincing experiments. Methods And Evaluation Criteria: The authors evaluated their design choices using standard TTA evaluation protocols on ambient sounds (AudioCaps) and music generation (MusicCaps), reporting metrics that show the superiority of their method in sound generation as well as text-audio alignment. Theoretical Claims: The paper does not present any theoretical claims. Experimental Designs Or Analyses: The authors' analyses and experimental designs are sound. Supplementary Material: I listened to the generated audio on the supplementary website and skimmed through the supplementary materials. Relation To Broader Scientific Literature: While the paper does not propose any novel methodological components, the authors have done an extensive study of the design choices of TTA generation, showing superior performance. Such studies are missing from most prior work and are very useful to the community. Essential References Not Discussed: The authors have discussed the most essential references in TTA and dataset creation. Other Strengths And Weaknesses: In summary, the following are the main strengths and weaknesses of the paper **Strengths** - The authors provided an extensive analysis of the design choices of TTA generation - The paper presents significantly improved performance in TTA generation compared with the open-source models. 
- The authors presented a large-scale synthetic dataset which will be useful to the community if released. - The paper is well written. **Weaknesses** - Despite providing many analyses of the design choices of TTA, the paper lacks any significant novel contribution to TTA. - Some claims in the paper are not well supported. The authors claim that AF-Synthetic is the first million-size synthetic caption dataset, while AutoCap [1] has open-sourced a dataset of 40+ million captions. - While the authors have presented significant quantitative improvements, their subjective evaluation is close to or underperforms baselines (e.g. Table 3, Table 22). - The authors have not promised the release of their checkpoints or code. Considering that the paper is focused on building a high-quality audio generation model by adopting best practices, this could largely impact the level of contribution of the paper. - The paper lacks a clear discussion of the differences between the dataset collection pipelines of AF-AudioSet and AF-Synthetic. Other Comments Or Suggestions: The conclusions of the paper could be summarized better in the experiment sections, leaving details on hyperparameters to the supplementary material. This would make the technical details of the paper easier to grasp. Questions For Authors: - How much of the initial dataset is filtered out by CLAP and the other filters? - AF-AudioSet performance is comparable with AF-Synthetic in Table 5, despite its smaller size, while AF-Synthetic achieves better performance in Table 23. I am wondering what the performance of AF-Synthetic would be at a similar scale. Have the authors performed any experiments showing the impact of data quantity on generation quality? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your supportive review and for appreciating our large-scale extensive study of the design choices of TTA to reach state-of-the-art results. **Q: AutoCap has open-sourced a dataset of 40+ million captions.** A: Thank you for mentioning the status of this concurrent work. We will discuss this concurrent work in the final version of the paper. **Q: Subjective evaluation is close to or underperforms baselines (e.g. Table 3, Table 22).** A: For music, Stable Audio Open is trained on high-quality music from the Freesound and Free Music Archive (FMA) datasets, but many of these samples are not available as of today. In contrast, ETTA is not trained on any studio-quality music datasets (not even MusicCaps). As a result, ETTA has a slightly worse OVL (which only measures audio quality and does not consider text) on MusicCaps and SongDescriber. While only trained on synthetic captions, ETTA outperforms Stable Audio Open on the REL (audio-text relevance) score on AudioCaps and MusicCaps (Table 2-3), and matches it on the OOD test set SongDescriber (Table 22). **Q: The authors have not promised the release of their checkpoints or code.** A: We will release the model code and data preparation methods to reproduce the results of the ETTA models. **Q: The paper lacks a clear discussion on the difference between the dataset collection pipelines of AF-AudioSet and AF-Synthetic.** A: We scaled up the data collection method proposed in AF-AudioSet [1], and added several filtering methods (Section 3.1 and Appendix C.1). [1] Kong, Zhifeng, et al. "Improving text-to-audio models with synthetic captions." arXiv preprint arXiv:2406.15487 (2024). **Q: How much of the initial dataset is filtered with CLAP and others?** A: The total filtering rate is about 73.5% (acceptance rate 26.5%) under a CLAP threshold of 0.45 and the other filtering methods. **Q: AF-AudioSet performance is comparable with AF-Synthetic in Table 5, despite having a smaller size. 
While AF-Synthetic achieves better performance in Table 23.** A: This is a great question. First of all, we find that the scaled-up AF-Synthetic dataset shows much better results on out-of-distribution data (Table 23); note that AudioCaps and MusicCaps are considered in-distribution, as both are derived from AudioSet. AF-Synthetic brings other practical benefits during training as well: we found that training ETTA using AF-AudioSet diverged after 250k training steps, whereas AF-Synthetic provides stable training up to 1M steps without instabilities. We added the training loss curves of AF-AudioSet vs. AF-Synthetic, along with samples generated from the OOD challenging captions, to the demo webpage to better showcase the necessity of AF-Synthetic for generalization. Link: https://anonymous.4open.science/r/etta_demo-778A/index.md
Demystifying MPNNs: Message Passing as Merely Efficient Matrix Multiplication
Reject
Summary: This paper investigates the role of different aggregation and graph types on the performance of GNNs. The authors state several connections between different connectivity patterns and the density of the adjacency matrix under increased power iterations. They argue that gradient decay is a key issue for GNNs because performance when using A^k instead of k layers of message passing does not decrease as much with increasing k. Even when k-1 linear transformations are applied to the output of a GNN aggregating with A^k, the performance severely decreases. They also study the effect of different normalizations of the adjacency matrix. ## update after rebuttal The rebuttal did not result in any changes to my review, apart from my having missed the availability of the code. The authors seemingly do not understand the UAT, which is used incorrectly in central proofs of this paper. Claims And Evidence: The proofs for Lemmas 2.7, 2.8, and 2.9 are unconvincing. The UAT is used to remove a non-linearity, which does not seem correct to me. Methods And Evaluation Criteria: It is argued that gradient issues might cause the poor performance of some of the considered methods, but there are no experiments or theoretical insights that specifically investigate this issue. Performance during optimization would be insightful; similarly, the actual observed gradient values. It is also argued that over-smoothing is not a concern for the power method A^k. However, there are also no experiments supporting this claim. With the over-smoothing phenomenon, the performance can remain stable with depth as well. This can be evaluated using a suitable metric, e.g., the rank-one distance (ROD) [1]. To me, it seems more like these are two separate issues. For A^k, we observe only over-smoothing, while for GCN, we observe over-smoothing plus gradient issues. --- [1] Roth, Simplifying the Theory on Over-Smoothing, arxiv, 2024. Theoretical Claims: As above, the proofs for Lemmas 2.7, 2.8, and 2.9 are unconvincing. 
The UAT is used to remove a non-linearity, which does not seem correct to me. Experimental Designs Or Analyses: See Methods And Evaluation Criteria Supplementary Material: I reviewed the proofs. I have concerns about Lemmas 2.7, 2.8, 2.9. Relation To Broader Scientific Literature: There is no related work part. Section 3 presents some basic graph theoretical properties as Lemmas that can be found in introductory literature, e.g., [2]. Literature on over-smoothing and vanishing / exploding gradient is not discussed. --- [2] Gallagher, Discrete stochastic processes, Journal of the Operational Research Society, 1997. Essential References Not Discussed: Basic literature on graph theory is not mentioned for properties in Section 3. Other Strengths And Weaknesses: I like the distinction between over-smoothing and gradient issues. However, it lacks depth, as there are no theoretical insights into these differences, and empirical evaluations are not precise enough to draw any conclusions about these issues. There is no code provided. Other Comments Or Suggestions: The paper lacks mathematical preciseness in many parts. E.g. in Lemma 3.8 it is written "the connections present in A_m are identical to those in A_h". A_m = 0 \Leftrightarrow A_h = 0. Similarly in many other statements. This would make it a lot clearer. Questions For Authors: * How does the UAT allow you to remove the non-linearities for Lemmas 2.7, 2.8, 2.9? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Dear Reviewer 5Agy, Thank you for your thoughtful feedback. We address your concerns below to clarify potential misunderstandings and reaffirm the validity of our work. 1. On UAT and Nested Non-linearities (Lemmas 2.7, 2.8, and 2.9) You asked how the Universal Approximation Theorem (UAT) justifies removing nested non-linearities in Lemmas 2.7, 2.8, and 2.9. This step follows directly from UAT’s core principle: a sufficiently expressive neural network can approximate any continuous function [1]. Specifically, UAT permits replacing nested non-linearities with a single non-linearity over a transformed input, a result well-supported in the literature [2,3]. This approach aligns with prior applications of UAT in Graph Neural Networks (GNNs) [4] and empirical evidence showing that removing inner non-linearities enhances GNN performance [5]. While Reviewer tXTD confirmed the correctness of our proof, stating, "I checked the Appendix. I think the claims are correct," we value your perspective, particularly your recognition of our distinction between over-smoothing and gradient issues—an insight Reviewer tXTD overlooked. Given these differing viewpoints, we respectfully suggest the review process account for this variation in expertise. Should you desire further detail, we are happy to elaborate in the appendix. 2. Code Availability We regret any confusion about our code’s availability. Contrary to the impression that no code was provided, we specified its location in the Introduction, just before the Notation and Definitions section: "The code for the experiments conducted in this paper is available at https://anonymous.4open.science/status/demystifyB30E." This anonymous GitHub repository offers a one-click script with detailed configurations to replicate all figures and tables. 
We apologize if its placement outside the abstract led to oversight, recognizing that unavailable code could understandably raise doubts, especially since our findings challenge prevailing assumptions. Given your expertise in evaluating our distinction between over-smoothing and gradient issues—a critical insight we believe you are uniquely positioned to champion—we kindly invite you to reconsider our work in light of this clarification. Your perspective could prove invaluable in persuading other reviewers of this key contribution. 3. On Graph Theory and Spectral Methods You noted an absence of graph theory literature in our review. While graph theory often assumes uniform node features, our focus in message-passing neural networks (MPNNs) emphasizes feature heterogeneity, which we believe limits the relevance of traditional graph-theoretic approaches here. Additionally, we contend that spectral methods (e.g., ChebNet, MagNet) are, in practice, message-passing techniques. For example, in a separate paper, we show MagNet equates to GraphSAGE with incidence normalization, suggesting a common misconception about spectral methods’ distinctiveness. We plan to explore this further in future work as additional evidence accumulates. 4. Words vs. Mathematical Expression (Lemma 3.8) You suggested that Lemma 3.8’s statement, "the connections present in A_m are identical to those in A_h," lacks mathematical precision, proposing "A_m = 0 ⇔ A_h = 0" instead. We believe this interpretation may not fully capture our intent. Lemma 3.8 asserts that A_m(i,j) and A_h(i,j) share identical connectivity patterns—i.e., A_m(i,j) is non-zero if and only if A_h(i,j) is non-zero—not that the matrices are identically zero or non-zero overall. We chose a verbal description for readability, but we assert it preserves clarity and accuracy. 
If you prefer a compact mathematical form (e.g., A_m(i,j) ≠ 0 ⇔ A_h(i,j) ≠ 0 for all i, j), we are happy to revise the text, despite the slight increase in space. 5. Closing Remarks We believe our paper advances MPNN research through rigorous experiments and reproducible code, offering clear guidance on novel concepts. We apologize for any initial confusion—particularly regarding code availability and UAT application—and hope this response resolves your concerns. Your insights are invaluable, and we are prepared to make adjustments, such as adding proofs or refining expressions, to align with your expectations. Thank you for your time and consideration. References [1] Hornik, K., et al. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2(5), 359–366. [2] Lu, Z., et al. (2017). The expressive power of neural networks: A view from the width. NeurIPS. [3] Hanin, B., & Sellke, M. (2017). Approximating continuous functions by ReLU nets of minimal width. arXiv:1710.11278. [4] Xu, K., et al. (2019). How Powerful are Graph Neural Networks? ICLR. [5] Wu, F., et al. (2019). Simplifying Graph Convolutional Networks. ICML. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed rebuttal and for clarifying the availability of their implementation. However, there are still too many other issues with this work for me to change my score. As a final remark to "1. On UAT and Nested Non-linearities (Lemmas 2.7, 2.8, and 2.9)": UAT holds for MLPs, not linear transformations as used in this work. LMqE identified the same issue with the proofs. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 5Agy, Thank you for your continued feedback. We appreciate the opportunity to address your concern that our work relies solely on linear transformations. We believe there may be a misunderstanding here: our model explicitly incorporates a non-linearity at the final layer, as detailed in Section 2 and Appendix A.2. 
This non-linearity, applied after the linear transformations, ensures that our architecture aligns with the Universal Approximation Theorem (UAT), which we invoke in Lemmas 2.7, 2.8, and 2.9 to justify replacing nested non-linearities with a single, sufficiently expressive non-linear layer. This approach is consistent with established theory [1,2] and practical simplifications in GNNs [4,5]. We hope this clarifies that our framework is not limited to linear transformations alone, and we will revise the manuscript to make the presence and role of the final non-linearity more explicit to avoid further confusion. References [1] Hornik, K., et al. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2(5), 359–366. [2] Lu, Z., et al. (2017). The expressive power of neural networks: A view from the width. NeurIPS. [3] Hanin, B., & Sellke, M. (2017). Approximating continuous functions by ReLU nets of minimal width. arXiv:1710.11278. [4] Xu, K., et al. (2019). How Powerful are Graph Neural Networks? ICLR. [5] Wu, F., et al. (2019). Simplifying Graph Convolutional Networks. ICML.
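The simplification under dispute can at least be checked numerically in the fully linear part: when the inner layers are linear and only the final layer applies a non-linearity, k rounds of propagation collapse exactly into a single layer on A^k with a merged weight matrix (the SGC-style setting the rebuttal cites as [5]). The sketch below is our own illustration of that algebraic collapse, not the paper's code, and it does not by itself settle the UAT question the reviewers raise.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 4
A = (rng.random((n, n)) < 0.4).astype(float)   # random directed adjacency
X = rng.standard_normal((n, d))                # node features
W1 = rng.standard_normal((d, d))
W2 = rng.standard_normal((d, d))
relu = lambda z: np.maximum(z, 0)

# Two message-passing layers with linear transforms and one final non-linearity:
two_layer = relu(A @ (A @ X @ W1) @ W2)
# These collapse exactly to a single layer on A^2 with the merged weight W1 @ W2:
one_layer = relu(np.linalg.matrix_power(A, 2) @ X @ (W1 @ W2))
assert np.allclose(two_layer, one_layer)
```

The disagreement in this thread is thus not about this identity, but about whether replacing an *inner non-linearity* by such a merged linear map preserves expressive power.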
Summary: This paper studies the message passing mechanism commonly used in GNNs. It investigates how a k-layer GNN can be empirically approximated by a single-layer GNN with a k-order adjacency matrix. It further studies the influence of loop structures in the graph. It then examines whether node features are necessary to perform a node classification task on different types of graphs. It discusses that the degeneration of deeper GNNs may be attributed to gradient-descent issues rather than over-smoothing. Claims And Evidence: - Line 130 is not accurate. The k-order nodes are not necessarily the k-hop nodes (in terms of shortest-path distance). Therefore, low-order information can still be kept; e.g., 1-order information is kept in the 3-order matrix of an undirected graph (also discussed in Lemma 3.3). Methods And Evaluation Criteria: - The datasets chosen are of concern. They are all tiny graph datasets by the standards of modern GNNs. Any performance gain or theoretical analysis verified only on these small datasets is no longer convincing in the recent literature. - The graph datasets chosen in Figure 3 are not convincing. Figure 3 claims that without explicit graph preprocessing, the over-smoothing issue can be less severe. The experiment shows that the performance of deeper GNNs does not degenerate at all. However, the datasets are known to be heterophilic, and such graphs suffer much less from over-smoothing than homophilic graphs. Experiments on homophilic graphs are necessary to support this claim. Theoretical Claims: - The proof of Lemma 2.7 is wrong here. A one-layer neural network cannot universally approximate. The simplification in the proof is not rigorous at all. - Almost all the theoretical contributions are well known in the graph theory community. It takes too much space to discuss these known results in the paper. Experimental Designs Or Analyses: See the 'Methods And Evaluation Criteria' part. Supplementary Material: The proof section. 
Relation To Broader Scientific Literature: It broadens the understanding of when/why/how deeper GNNs can perform well on various types of graphs. Essential References Not Discussed: The k-hop GNN is almost identical to SGC in [1], which is not properly cited in the paper. [1] Simplifying Graph Convolutional Networks. Other Strengths And Weaknesses: NA Other Comments Or Suggestions: Typos: - Lemma2.3 p-kop —> k-hop Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Dear Reviewer LMqE, Thank you for your time and detailed feedback. We appreciate your comments and would like to address your concerns as follows: Rebuttal 1. Definition of k-hop Neighbors You noted that "Line 130 is not accurate." In our paper (Definition 2.1), we define k-hop neighbors as nodes reachable in exactly k steps, not by shortest-path distance that you added. With self-loops added, a 1-hop neighbor can also be a k-hop neighbor, so our claim holds. Rebuttal 2. Dataset Choice You mentioned that the small datasets used in our analysis are not convincing for modern GNNs, and performance gains on these datasets are no longer credible in recent literature. We respectfully disagree with this generalization. Our goal is to analyze fundamental theoretical behaviors of GNNs, specifically the influence of gradient descent. Small datasets are sufficient to reveal key theoretical insights. In scientific exploration, simple experiments often uncover fundamental principles. For example, Newton used a slope and a ball to illustrate the second law of motion. Should he have used landslides or large-scale geological movements to validate acceleration? Similarly, our choice of datasets is appropriate for demonstrating the mechanisms we investigate. Rebuttal 3. Experiments on Homophilic Datasets You pointed out that the datasets in Figure 3 are not convincing, but this is due to the lower over-smoothing in heterophilic graphs. You suggest that experiments on homophilic graphs are needed to support this claim. In our paper, we conducted experiments on Chameleon and Squirrel to demonstrate a case of non-degradation in performance over deeper layers, showing that not adding self-loops could prevent over-gathering of multi-hop information. This performance is better compared to adding self-loops on these datasets. You assumed that homophilic graphs do not exhibit this characteristic, attributing the cause to heterophily, which is incorrect. 
The reason citation graphs, typically homophilic, do not exhibit this behavior is that they lack multi-node loops—earlier papers cannot cite later ones—leading to performance degradation in deeper layers. As a result, many nodes lack distant neighbors (e.g., 200-hop connections). In contrast, multi-node loops in Chameleon and Squirrel allow nodes to continuously aggregate information, even as the number of layers increases. This effect is independent of whether a graph is homophilic or heterophilic. To further clarify, we conducted additional experiments on the homophilic dataset Telegram, which contains loops. As shown in the README https://anonymous.4open.science/status/demystifyB30E, even after 400 layers, performance does not degrade significantly. This directly contradicts your assumption that homophily or heterophily is the cause. We acknowledge that due to the complexity of the situation, we focused on demonstrating non-degradation in performance without explicitly discussing the effect of multi-node loops. We apologize for not emphasizing this point earlier. However, we have fully demonstrated our key claim. While good performance is similar, bad performance can be influenced by numerous factors. It's not feasible to list and explain every possible assumption in advance, and we hope our rebuttal clarifies our position. Rebuttal 4. No contribution You claimed: "Almost all the theoretical contributions are well-known in the graph theory community. It takes too much space to discuss these known results in the paper." However, we believe that fundamental misconceptions persist regarding these concepts. Take, for example, your incorrect assumption in Rebuttal 3. Your feedback suggests a need for more explanation than we originally provided. We encourage you to refer to our rebuttal to Reviewer unW8, where they outlined our three main contributions, and we explained the subtle but important ways in which we have contributed to the field. 
Additionally, Reviewer 5Agy acknowledged our distinction between over-smoothing and gradient-related issues. While some explanations may seem tedious, they are essential for clarity, especially for readers less familiar with these topics. As reflected in other reviews and rebuttals, there are still misunderstandings. As long as our claims are valid, we ask that you don't dismiss our work simply because you believe you're already familiar with these concepts. We present unexpected insights, and while our claims may seem basic, they are crucial for a solid understanding of MPNNs. A closer reading of our work will highlight the novel contributions we are making. 5. Other Points Your comparison of k-hop GNN to SGC is insightful, as SGC provides strong experimental support for our use of the Universal Approximation Theorem (UAT), which Reviewers 5Agy and unW8 don't accept. We hope this clarification helps address your concerns, and we appreciate your engagement with our work. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the response. I wonder if there is any comment on the proof of Lemma2.7. --- Reply to Comment 1.1.1: Comment: Thank you, Reviewer LMqE, for your thoughtful comments. We appreciate the opportunity to address your concerns, which we also discussed in our rebuttal to Reviewer 5Agy. In addition to that explanation, we’d like to elaborate further. The Universal Approximation Theorem (UAT) states that a single hidden layer with enough neurons and a non-linear activation (e.g., sigmoid or ReLU) can approximate any continuous function. In Lemma 2.7, we simplify a nested non-linearity model by removing the inner non-linearity. This demonstrates that both the nested non-linearity model and the single non-linearity model can approximate the same continuous function for node classification. 
The equality in Lemma 2.7 reflects the equivalence of the models in terms of their approximation capability, rather than the exact mathematical equivalence of the formulations. We will clarify this distinction in our camera-ready version. Regarding the condition of sufficient neurons: Our model uses 64 neurons and performs well, but tests with as few as 10 neurons also show good results. This suggests that even 10 neurons provide sufficient width for the task, in accordance with the UAT. While removing the inner non-linearity doesn't eliminate its necessity, it allows for a valid approximation in a single-layer model. We'd be glad to elaborate further if needed. Thank you again for your valuable feedback!
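The reachability point from Rebuttal 1 above—that with self-loops a 1-hop neighbor is also a k-hop neighbor, whereas without loops A^k retains only walks of length exactly k—can be checked numerically. The sketch below is our own illustration on a toy directed path graph, not code from the paper.

```python
import numpy as np

# Directed path graph 0 -> 1 -> 2 -> 3, with no self-loops and no cycles.
A = np.zeros((4, 4))
for i in range(3):
    A[i, i + 1] = 1.0

# Without self-loops, A^3 counts only walks of length exactly 3,
# so 1-hop and 2-hop information from node 0 is absent:
A3 = np.linalg.matrix_power(A, 3)
assert A3[0, 1] == 0 and A3[0, 2] == 0 and A3[0, 3] != 0

# With self-loops, a length-3 walk may "wait" at a node, so every
# j-hop neighbor with j <= 3 also appears as a 3-hop neighbor:
AI3 = np.linalg.matrix_power(A + np.eye(4), 3)
assert AI3[0, 1] != 0 and AI3[0, 2] != 0 and AI3[0, 3] != 0
```

The same pattern follows from the binomial expansion (A + I)^k = Σ_j C(k, j) A^j, since A and I commute: the self-loop matrix mixes all lower powers of A into the k-th propagation step.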
Summary: This paper presents a comprehensive analysis of GNN behavior through several fundamental aspects. - (Contribution 1) The authors establish that k-layer Message Passing Neural Networks efficiently aggregate k-hop neighborhood information through iterative computation - (Contribution 2) The authors analyze how different loop structures influence neighborhood computation. - (Contribution 3) The authors examine behavior across structure-feature hybrid and structure-only tasks. ## update after rebuttal Thanks for the authors' rebuttal. I would like to keep my original evaluations due to the poor presentation in the original draft. Claims And Evidence: The definition of $W$ in Lemma 2.7 is unclear. The derivation from Equation (6) to Equation (5) in Appendix is unclear. Methods And Evaluation Criteria: The experiments on large-scale datasets are missing. Theoretical Claims: The definition of $W$ in Lemma 2.7 is unclear. The derivation from Equation (6) to Equation (5) in Appendix is unclear. Experimental Designs Or Analyses: The experiments on large-scale datasets are missing. Supplementary Material: I have reviewed Appendix A in the supplementary material. Relation To Broader Scientific Literature: The contributions of this paper are as follows. - (Contribution 1) The authors establish that k-layer Message Passing Neural Networks efficiently aggregate k-hop neighborhood information through iterative computation - (Contribution 2) The authors analyze how different loop structures influence neighborhood computation. - (Contribution 3) The authors examine behavior across structure-feature hybrid and structure-only tasks. Contribution 1 has been proposed in Section 5.1 in [Ref1]. Contributions 2 and 3 of this paper beyond the research presented in [Ref2] and [Ref3] is unclear. --- [Ref1] Graph Representation Learning. https://www.cs.mcgill.ca/~wlh/grl_book/files/GRL_Book.pdf [Ref2] A New Perspective on the Effects of Spectrum in Graph Neural Networks.ICML 2022. 
[Ref3] Edge directionality improves learning on heterophilic graphs. Learning on Graphs Conference 2024. Essential References Not Discussed: The cited references are sufficient. Other Strengths And Weaknesses: Weaknesses: 1. The presentation needs significant improvement. There are at least five topics in this paper according to the Abstract, but none of them are explored in sufficient depth. 2. The derivations in the main text are trivial. I suggest moving the derivations to the Appendix and focusing on the key results in the main text. 3. There is a large white space on Page 6. Other Comments Or Suggestions: "neibors" in Figure 1 should be "neighbors". Questions For Authors: See Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Dear Reviewer unW8, We sincerely appreciate your time and effort in reviewing our manuscript and providing valuable feedback. Below, we provide a point-by-point response addressing your comments and concerns. 1. Novelty Relative to Prior Work about Contribution 1 You stated that "Contribution 1 has been proposed in Section 5.1 in [Ref1]." While Ref[1] presents general statements that may appear similar to our contributions, it lacks the depth and specificity of our analysis: *After k iterations, node embeddings contain information about their k-hop neighborhoods. *Embeddings may include both degree and feature information. However, due to Ref[1]'s generality, these conclusions might be incorrect in some specific cases, as revealed by our findings. In contrast, our work rigorously extends this understanding by: *Examining the impact of loops (e.g., adding self-loops or converting to an undirected graph). While node embeddings after k iterations contain information about k-hop neighborhoods, lower-hop neighborhoods are also incorporated when loops are introduced—an important nuance often overlooked in prior work. *Demonstrating the importance of one-hot features. One-hot features preserve neighboring feature information, whereas summing general features can lead to information distortion. The claim in Ref[1] that embeddings include feature information does not hold universally, particularly when node features are not one-hot but general features. *Showing that under row normalization with uniform feature settings, degree information is not learned by GNN models. This aspect is not explored in Ref[1], making its general claim that embeddings inherently contain degree information incorrect in such cases. 2. Novelty Relative to Prior Work about Contributions 2 and 3 You stated that Contributions 2 and 3 are unclear beyond the research presented in [Ref2] and [Ref3]. 
However, these references do not cover the same aspects as our work: Ref[2] examines self-loops through their effect on the adjacency matrix spectrum, showing that they push the bounded spectrum closer to 1. In contrast, we study self-loops from a spatial perspective, analyzing their impact on feature propagation. Self-loops allow a node's own features to mix with its neighbors', which may cause multi-hop neighborhood features to coexist in the final-layer representations, leading to over-smoothing. Thus, while Ref[2] focuses on spectral analysis, our contribution provides a spatial perspective, making the two analyses fundamentally different. Ref[3] does not discuss loop structures at all, and the absence of self-loops makes their model suboptimal on homophilic datasets. Additionally, Ref[3] does not consider uniform feature settings, which are central to our analysis of structure-only tasks. In sum, Ref[3] neither addresses loops (Contribution 2) nor considers uniform features (Contribution 3)—both of which we rigorously analyze in our paper. Given this, we find it unclear why our contributions beyond Ref[3] are in question. 3. Clarification on Theoretical Foundations (Universal Approximation Theorem) Please refer to our response to Reviewer 5Agy, bullet point 1, for a detailed explanation. 4. Other issues You mention that "The definition of W in Lemma 2.7 is unclear," but we defined W as the weight matrix in Remark 2.4. As to the request for large-scale experiments, we answered this question in our rebuttal to Reviewer LMqE, part 2. On Formatting and White Space (Page 6) You noted a large white space on Page 6. This is an artifact of LaTeX's automated formatting, which we use to adhere to the 8-page limit. While we've employed barriers and positioning controls for figures and tables, precise placement is constrained to avoid exceeding the page restriction. We are open to suggestions for optimizing the layout within these bounds. 
On Derivations in the Main Text You suggested that the derivations in the main text are trivial and should be moved to the appendix to emphasize key results. We appreciate this guidance and are open to relocating parts of the text. However, we note that preferences vary among readers: what may appear straightforward to you could be critical for reviewers or readers with differing expertise, potentially preventing fundamental misunderstandings of our approach. Retaining these derivations in the main text has not, in our view, detracted from the paper’s core contributions, but we are willing to adjust their placement to better highlight the key results if you feel this would enhance clarity. You mentioned that our paper covers "at least five topics" but lacks depth. However, Reviewer tXTD states: "I see the importance of each of these claims in isolation, and the experiments seem to support these claims." We prioritize clarity and conciseness due to space limitations. If you feel any areas need more detail, we’d be happy to provide further elaboration. Best Regards, Authors
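The row-normalization point above (that with row-normalized aggregation and uniform features, degree information cannot be learned) can be illustrated numerically. A minimal sketch, assuming a toy undirected graph with self-loops, uniform scalar features, and plain propagation H ← Â_rn H with no learned weights:

```python
import numpy as np

# Toy undirected graph with very different node degrees (a star plus one extra edge).
A = np.array([
    [0, 1, 1, 1, 1],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)
A_loop = A + np.eye(5)                              # add self-loops
A_rn = A_loop / A_loop.sum(axis=1, keepdims=True)   # row-normalize: every row sums to 1

H = np.ones((5, 1))                                 # uniform node features
for _ in range(10):                                 # propagate H <- A_rn @ H
    H = A_rn @ H

# Each row of a row-stochastic matrix averages values that are all equal,
# so uniform features stay uniform: embeddings carry no degree information.
print(H.ravel())   # all entries remain 1.0 despite degrees 5 vs. 2
```

With sum aggregation instead of row-normalized means, the same loop would produce degree-dependent values, which is the distinction the rebuttal draws.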
Summary: The ideas in this paper have merit and are interesting. A multi-layer MPNN with k layers and adjacency matrix A is roughly equivalent to a single-layer MPNN using the adjacency matrix A^k, which essentially means that intermediate information is disregarded. The authors also present some analysis related to self-loops and correctly identify gradient-related issues. Claims And Evidence: I think that the main issue with this paper is that the claims are not concise or clear enough. The paper is written in a quite disconnected fashion, and it is not exactly clear what the main messages or the main points of each section are. While I see the importance of each of these claims in isolation, and the experiments seem to support these claims, the paper needs significant effort before it can be accepted at a top conference like ICML. I think the authors might have also missed a few related works in a similar direction, which I point to in a later section and which might provide alternative explanations for some of the claims made in the paper. Methods And Evaluation Criteria: The authors use standard datasets from the literature. I believe that the empirical results are satisfactory. Theoretical Claims: Yes, I checked the Appendix. I think the claims are correct, but they are sometimes somewhat trivial, especially those with just bullet points as proof. Experimental Designs Or Analyses: Yes, I did not find issues. Supplementary Material: I checked the proofs in the supplementary material. Relation To Broader Scientific Literature: The results will be good for clarifying some of the misconceptions that people might have about MPNNs and oversmoothing, as well as some of the tasks used in the community. The overall intention of the paper is good, but it needs significant work before being published in a venue like ICML.
Essential References Not Discussed: I think that the authors make an interesting point with regard to the relationship between GNN performance and gradients. This is not the author's fault, but they should take a look at a recent paper I recently came across [1], which provides a much more concrete characterization of how gradients affect MPNN performance. Perhaps some of the findings in this paper (for example, the role of symmetrization and self loops) can be explained from that lens as well. [1] Arroyo, Álvaro, et al. "On Vanishing Gradients, Over-Smoothing, and Over-Squashing in GNNs: Bridging Recurrent and Graph Learning." arXiv preprint arXiv:2502.10818 (2025). Other Strengths And Weaknesses: I have commented on this. Other Comments Or Suggestions: None. Questions For Authors: No more questions Code Of Conduct: Affirmed. Overall Recommendation: 1
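The summary's observation that a k-layer MPNN with adjacency A roughly matches a single layer with A^k can be verified directly in the linear case. A minimal sketch, assuming linear layers with no nonlinearity or per-layer weights and a shared propagation matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, f, k = 6, 4, 3
A = rng.random((n, n))        # propagation matrix (e.g., a normalized adjacency)
X = rng.random((n, f))        # node features

# k linear message-passing layers: H <- A @ H
H = X
for _ in range(k):
    H = A @ H

# A single layer with A^k reaches the same embeddings, so the
# intermediate per-layer representations are discarded.
H_single = np.linalg.matrix_power(A, k) @ X
print(np.allclose(H, H_single))  # True
```

With nonlinear activations or distinct per-layer weights the equality becomes approximate, which is presumably the "roughly equivalent" qualifier in the summary.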
Rebuttal 1: Rebuttal: Dear Reviewer tXTD, Thank you for your thoughtful review of our paper and for recognizing the correctness and importance of each of our individual claims and the validity of our experimental results. We appreciate the effort you’ve invested and are grateful for the opportunity to address your concerns. While we respect your perspective, we believe some points in your evaluation may not fully capture the intent and contributions of our work. We hope the following clarifications will address these issues. 1. Synthesizing the Paper’s Contributions Holistically We appreciate your feedback noting that the paper feels "disconnected" and that the main messages of each section are unclear, making it challenging to synthesize our contributions holistically. We regret that our intent was not fully conveyed and would like to clarify the paper’s structure and purpose. Our work delivers a comprehensive analysis of multi-layer Message-Passing Neural Networks (MPNNs), spanning theoretical and practical perspectives. Section 2 lays the groundwork for MPNNs, followed by an exploration of key performance factors: input graph characteristics (e.g., presence or absence of node features in Section 4), preprocessing techniques (e.g., adding self-loops and converting to undirected graphs in Section 3), normalization strategies, and the challenges of deep layers (e.g., over-smoothing). Each section confronts potential misconceptions, forming an indispensable part of the narrative. For a conference like ICML, omitting any of them risks leaving reviewer biases unaddressed, which could lead to rejection based on assumptions not explicitly countered in the text. Though the topics may appear wide-ranging, they are integral to our central thesis: addressing MPNN limitations requires a multifaceted perspective. The interplay of input graph, preprocessing, normalization, and depth weaves a unified story of MPNN behavior.
Reviewer unW8 recognized this, describing our work as "a comprehensive analysis of GNN behavior through several fundamental aspects." For researchers steeped in MPNNs, these issues are daily touchstones that effortlessly link the sections. To those less familiar, the discussion might seem disjointed: like a rich tapestry of a shared endeavor, it resonates deeply with those who recognize its patches, yet appears as scattered patches to newcomers or to keen observers who have not yet gotten their hands dirty. We are impressed by how much you grasped in a short time, despite not being immersed in this domain. Unfortunately, although other reviewers see the cohesion of our paper, Reviewers unW8 and 5Agy reject it mainly because of their lack of knowledge of the UAT, unlike you. We respectfully request that you reconsider the paper in light of this clarification and refer to Reviewer unW8's evaluation on this point. 2. Representation of Related Work and Our Contributions We appreciate your concern that we may have overlooked relevant prior work, notably Arroyo et al. (2025), which you suggest offers a superior explanation. Upon review, we note that Arroyo et al. (2025) attributes over-smoothing solely to gradient vanishing, supported by an extensive proof and countermeasures tailored to gradient-descent-based methods. They state: "Contrary to common consensus, which explains over-smoothing by showing that the signal is projected into the 1-dimensional kernel of the graph Laplacian, we instead describe over-smoothing as occurring due to the contractive nature of GNNs and their inputs converging in norm to exactly 0." While Arroyo et al.'s gradient-focused explanation is insightful, we argue that over-smoothing stems from both gradient vanishing and feature propagation, such as loop-influenced aggregation. Our work delivers a broader perspective than their narrower lens.
Reviewer 5Agy underscored this strength, stating: "I like the distinction between over-smoothing and gradient issues," which we take as recognition of our holistic approach. By tackling both the similarity of node features under propagation and the decay of gradients in deeper layers, our analysis offers a more complete picture of over-smoothing in MPNNs. Conclusion Because we were forced to write a very compact paper on complex topics, we could not explicitly state or restate points as a more conventional paper might, which may have led to misunderstandings; the paper's contributions and quality, however, stand undiminished. We respectfully suggest that our paper's key contributions may not have been fully recognized in your review. The challenge you noted in synthesizing our work, alongside this mischaracterization of related efforts, might have obscured the novelty and significance of our approach. We invite you to reconsider our contributions in light of these clarifications. Thank you for your time and thoughtful consideration. Best regards, Authors
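The feature-propagation side of over-smoothing discussed above can be sketched numerically. A minimal illustration, assuming GCN-style symmetric normalization with self-loops and no learned weights: on a connected graph, repeated propagation drives all node embeddings onto a single direction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # path graph
A_tilde = A + np.eye(n)                                       # add self-loops
d = A_tilde.sum(axis=1)
A_hat = A_tilde / np.sqrt(np.outer(d, d))                     # D^-1/2 (A+I) D^-1/2

H = rng.standard_normal((n, 3))
for _ in range(200):          # deep propagation, no weights or nonlinearity
    H = A_hat @ H

# All node rows collapse toward multiples of one vector (scaled by sqrt(d_i)),
# so pairwise cosine similarity approaches 1: over-smoothing.
Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
cos = Hn @ Hn.T
print(cos.min())   # close to 1.0
```

This is the propagation mechanism (independent of any vanishing gradient) that the rebuttal argues complements the gradient-based explanation of Arroyo et al.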
CLOVER: Cross-Layer Orthogonal Vectors Pruning
Accept (poster)
Summary: This paper proposes a method dubbed CLOVER to address the memory bottleneck in large language models during inference. Specifically, CLOVER performs singular value decomposition (SVD) on the Query-Key and Value-Output parameter matrices in the attention layer, thereby orthogonalizing the vectors within attention heads to reduce linear redundancy. This enables efficient pruning and parameter-efficient fine-tuning. Experimental results demonstrate that CLOVER can remove more redundancy than the vanilla pruning method and achieves performance improvements with parameter-efficient but full-rank fine-tuning while avoiding the issue of intrusive dimensions. Claims And Evidence: Yes. The claim in the paper about cross-layer decomposition is mathematically sound and has experimental validation. Methods And Evaluation Criteria: Yes. The proposed method facilitates the aggregation of the principal components in the parameter space, exposing redundant components. Additionally, it enables full-rank parameter updates, demonstrating practical significance. Theoretical Claims: Yes. I checked the SVD, the intrusive dimensions problem, etc., and the paper discusses them correctly. Experimental Designs Or Analyses: Yes. The experimental design of the paper is generally well-founded. Specifically, the paper validates the effectiveness of CLOVER for pruning on GPT-2-XL and demonstrates its efficiency for PEFT across the LLaMA series models and multiple inference tasks. Furthermore, more fundamental analysis is provided by observing the parameter and feature space rank, projection distribution, spectral variations, etc. However, some settings in the experiments are insufficient or may be confusing; please refer to Other Strengths And Weaknesses. Supplementary Material: Yes. The supplementary materials provide a more detailed method description, experimental settings, and analysis, and the core code is provided, making the paper more complete and convincing.
Relation To Broader Scientific Literature: The proposed CLOVER demonstrates applicability in both pruning and PEFT. In the context of pruning, existing structured pruning methods suffer from significant performance degradation. CLOVER effectively reduces redundant dimensions by concentrating the principal components in the parameter space. However, the paper claims only to introduce a novel perspective and does not benchmark its performance against SOTA approaches, which **does not fully convince me**. Regarding PEFT, most existing methods operate within individual layers and incur substantial cost. In contrast, CLOVER employs cross-layer low-rank decomposition to achieve full-rank fine-tuning with reduced cost and mitigates the issue of intrusive dimensions. Essential References Not Discussed: The paper provides a sufficiently comprehensive discussion of related work. Other Strengths And Weaknesses: This paper provides a new perspective on discovering redundancy and improving PEFT performance, which demonstrates application value. The writing is clear and fluent, conforming to fundamental academic writing standards. However, there are still some weaknesses. 1. The paper claims to address the memory bottleneck during inference. However, the experiments section does not present any memory-related results to validate the effectiveness of the proposed method. Furthermore, the method is not compared with any recent pruning approaches (such as LLM-Pruner[1], SliceGPT[2], FLAP[3], UKMP[4]) in the targeted inference stage. While a study offering a novel perspective may be tolerated even if it does not achieve optimal performance, it should at least present comparative results in the experiments or demonstrate the effectiveness of combining this method with existing SOTA pruning techniques. 2. The figures in the paper are somewhat difficult to interpret. For example, in Figure 4, what values are being ranked in "Top," "Next," and "Bottom"?
In Figure 5, what do the horizontal and vertical axes represent? 3. The quantitative validation of CLOVER on pruning is only conducted on GPT-2-XL. Since Figure 2 already demonstrates that the method effectively exposes redundancy across various models, reporting pruning results on these models as well would better showcase the effectiveness of the proposed method. [1] LLM-Pruner: On the structural pruning of large language models. NeurIPS 2023. [2] SliceGPT: Compress large language models by deleting rows and columns. ICLR 2024. [3] Fluctuation-Based Adaptive Structured Pruning for Large Language Models. AAAI 2024. [4] Unified Knowledge Maintenance Pruning and Progressive Recovery with Weight Recalling for Large Vision-Language Models. AAAI 2025. Other Comments Or Suggestions: There are some typos, such as "without need introduce" in line 115 and "We presents" in line 181. Questions For Authors: 1. In Figure 2, why is the product of the norms of $W_Q$ and $W_K$ used as the vertical axis? I understand that this is consistent with the CLOVER method, but is this metric also used as an importance criterion in the vanilla pruning method? Based on my understanding of the pruning field, L2-norm-based pruning typically utilizes the norm of either $W_Q$ or $W_K$ individually, or their sum, rather than their product. Therefore, I have concerns regarding the correctness of the vanilla pruning method used for comparison. 2. Does the experiment in Section 4.4 maintain the parallelizability of multi-head attention? Specifically, does the $W_{QK}$ pruning remove the same number of dimensions for each head?
Rebuttal 1: Rebuttal: **Q1: Benchmark the effectiveness of the proposed method.** **A1:** For the pruning experiments, we evaluated the effectiveness of CLOVER on the WikiText2 evaluation set (batch size 32, max sequence length 1600) and measured GPT-2-XL inference speed at different pruning ratios on an A100-PCIE-40GB GPU. The results are summarized in the table below:

| **Pruning Ratio** | **Vanilla Perplexity** | **CLOVER Perplexity** | **Median Time per Batch (s/batch)** | **Throughput (token/s)** | **Latency (ms/token)** |
| --- | --- | --- | --- | --- | --- |
| 0% | 14.78 | 14.78 | 0.0635 | 503.67 | 1.99 |
| 12.5% | 16.38 | 15.77 | 0.0572 | 559.14 | 1.79 |
| 25.0% | 17.07 | 16.05 | 0.0504 | 634.83 | 1.58 |
| 37.5% | 18.14 | 16.48 | 0.0505 | 633.06 | 1.58 |
| 50.0% | 19.02 | 17.13 | 0.0412 | 777.09 | 1.29 |
| 62.5% | 21.44 | 18.40 | 0.0463 | 690.89 | 1.45 |
| 75.0% | 27.22 | 20.99 | 0.0390 | 820.40 | 1.22 |

The results clearly demonstrate that pruning the head dimension significantly accelerates model inference. Notably, CLOVER pruning 50% of the head dimensions outperforms vanilla pruning at 37.5%, generating 144 more tokens per second while maintaining a lower perplexity. For the PEFT benchmarking experiment, please refer to **A2 for Reviewer DDfo**. **Q2: Conduct pruning on models other than GPT-2-XL and combine CLOVER with existing SOTA pruning techniques.** **A2:** Thank you for providing a list of state-of-the-art pruning research. In Section 4.4, we also applied pruning to Whisper-large-v3 in addition to GPT-2-XL. Due to time constraints, we only compared CLOVER with SliceGPT during the rebuttal period. **In the camera-ready version, we will cite them all and compare CLOVER with those methods.** As mentioned in **A3 for Reviewer jpx5**, we expanded our experiments by applying CLOVER pruning to Deepseek and OPT models.
Early results indicate that CLOVER achieves a 5% pruning ratio improvement on these models, and we plan to refine these experiments further. Additionally, CLOVER supports pruning in models such as ViT and DiT, and we will continue to compare CLOVER with pruning methods in these domains. **Q3: L2-norm-based pruning typically utilizes the norm of either W_q or W_k individually, or their sum, rather than their product.** **A3:** You are correct that many L2-norm-based pruning methods use the norm of either W_q or W_k individually or their sum. However, there are cases where the product of both norms is used. For example: 1. When pruning the output of one layer affects the input of the next, considering both layers together may yield better results, and in such cases, the product of the two layers is used [1]. 2. In the attention layer, the attention score is influenced by both Q and K, which is why their product is considered in pruning [2]. **Q4: Does the experiment in Section 4.4 maintain the parallelizability of multi-head attention? Specifically, does it remove the same number of dimensions for each head?** **A4:** Yes, all pruning in this paper maintains the parallelizability of multi-head attention. In Section 4.1, the pruning ratio is consistent across all layers, with each attention head having the same dimension. In Section 4.4, to achieve higher pruning rates while maintaining a training-free approach, we allow for different pruning ratios for each layer while ensuring that the number of dimensions removed is the same for each attention head within the same layer. This design ensures that inference can proceed in parallel. **Q5: Figure annotations and typo corrections.** Thank you for pointing out these issues. In the camera-ready version, we will revise the figure captions for clarity and make necessary corrections to improve the overall expression. 
**Q5.1: In Figure 4, what values are being ranked in "Top," "Next," and "Bottom"?** **A5.1:** As explained in [Lines 375-377, left] and [Lines 362-364, right], after performing SVD on the model, the singular vectors are grouped by the magnitude of their singular values and denoted "Top-256," "Next-256," and "Bottom-256." We found that only 10% of the feature components are projected onto the singular vectors corresponding to the Top-256 singular values. **Q5.2: In Figure 5, what do the horizontal and vertical axes represent?** **A5.2:** In Figure 5, the X-axis represents the index of the feature values of $\Delta W$, while the Y-axis represents the magnitude of these feature values. We hope this clarifies your questions and concerns. We greatly appreciate your effort and look forward to your further feedback on our response. **References:** [1] Fluctuation-based Adaptive Structured Pruning for Large Language Models, AAAI 2024. [2] Round and Round We Go! What makes Rotary Positional Encodings useful?, ICLR 2025.
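The Q-K orthogonalization that these answers refer to can be sketched as follows. This is a minimal numpy illustration under stated assumptions, not the authors' exact implementation: per-head matrices `W_q`, `W_k` of shape `D x d` are random stand-ins, the product $W_Q W_K^\top$ is factored with SVD, the square roots of the singular values are folded into the new orthogonal Q-K vectors, and pruning truncates to the top-`r` singular directions:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, r = 64, 16, 8                # hidden size, head dim, retained rank (illustrative)

W_q = rng.standard_normal((D, d))
W_k = rng.standard_normal((D, d))
M = W_q @ W_k.T                    # attention logits use x^T W_q W_k^T y; rank(M) <= d

U, S, Vt = np.linalg.svd(M)        # orthogonalize the Q-K pair
W_q_new = U[:, :d] * np.sqrt(S[:d])       # exact re-parameterization at rank d
W_k_new = Vt[:d, :].T * np.sqrt(S[:d])
print(np.allclose(M, W_q_new @ W_k_new.T))  # True: the logits are unchanged

# Pruning: keep only the top-r orthogonal directions in this head.
W_q_pruned = W_q_new[:, :r]
W_k_pruned = W_k_new[:, :r]
err = np.linalg.norm(M - W_q_pruned @ W_k_pruned.T) / np.linalg.norm(M)
print(err)  # best rank-r relative error (Eckart-Young)
```

Because every head in a layer is truncated to the same `r`, the pruned heads keep identical shapes and multi-head attention remains parallelizable, as stated in A4.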
Summary: The paper addresses the memory increase in large language models due to KV caching and applies SVD to the pairs of Q-K and V-O matrices. After an SVD decomposition, the new representation can be used for efficient pruning or for fine-tuning. The authors include experiments on both these fronts, showing that the proposed method allows for higher pruning ratios at the same performance compared to vanilla pruning, and improvements over various PEFT methods. Claims And Evidence: Yes. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the fine-tuning part. For pruning, the only baseline is vanilla pruning. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The experiments are in general sound. An issue is with Section 4.4, where the results are not convincing. The more important aspect is the lack of baselines from the pruning literature in Section 4.1 and Table 1. Supplementary Material: Appendices A.1 and A.2. Relation To Broader Scientific Literature: The paper discusses connections with the PEFT and LLM compression literature and uses standard (foundation) models for evaluation. Essential References Not Discussed: I believe that all essential references are included. Other Strengths And Weaknesses: ## Strengths 1. The paper addresses two important issues in the literature. 2. The idea to treat the Q-K and V-O matrices as pairs is interesting. The writing on the method is clear and the method itself intuitive (some other parts are not clear \-\> see weaknesses). 3. The experimental validation contains multiple different settings to test the method's effectiveness. ## Weaknesses 1. The paper is not clear at some points. 1. How is Section 4.5 a fair comparison given that LoRA is by design low-rank? 2. See questions for other instances. 2. The hyperparameter selection seems not thorough: only one learning rate per method and one seed (?).
Another example is "We adjust the learning rate from 6e-4 to 6e-3 and remove weight decay, while keeping other hyperparameters consistent with the other two methods." 3. The pruning literature is vast, but only vanilla pruning is used for comparison. Other Comments Or Suggestions: 1. Abstract: "these values are reintegrated…count" can be rephrased for clarity. 2. In Figure 2, the y-axis can be the singular value itself with the x-axis the index. The current presentation is confusing. 3. Section 4.4 presents a single example (e.g., Figure 3 adds no value) and it seems like cherry picking. IMO it should be moved to the appendix or removed altogether. 4. Conclusion: CLVOER instead of CLOVER. Questions For Authors: 1. What are the "series" and "parallel" baselines? 2. L128 (2nd column): "Instead..overhead". Is the comparison here fair? $D$ corresponds to all heads but is compared with a single head? 3. What is CLOVER with the cross superscript? It has not been introduced in the manuscript. 4. Can you explain the vast performance difference compared to LoRA and DoRA? Are these methods properly tuned? 5. L262 "Due to the non-linear…matrix": can you clarify this sentence? 6. What is the x-axis of Figure 1d? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: Is the comparison in Section 4.5 fair, considering that LoRA is by design low-rank?** **A1:** As noted in **A2 for Reviewer DDfo**, CLOVER and LoRA have an identical number of trainable parameters. CLOVER offers slight improvements in both training time and GPU memory consumption compared to LoRA. The original paper also presents comparisons with HiRA, a full-rank update method. **Q2.1: Are the experiments conducted with a single learning rate per method and a single seed?** **A2.1:** The results for LoRA and DoRA presented in Table 2 are taken directly from the original DoRA paper, where the hyperparameters are carefully tuned. Similarly, the results for HiRA are cited from its original publication. In contrast, we introduce new experimental results for PiSSA and CLOVER, both of which are optimized with the best learning rates and aligned hyperparameters. Specifically, PiSSA achieves optimal performance at a learning rate of 2e-5, while CLOVER performs best at 1e-4, as shown in the table below:

| **Method** | **Learning Rate** | **Acc** |
| --- | --- | --- |
| PiSSA | 1e-4 | 80.1 |
|  | 5e-5 | 82.9 |
|  | 3e-5 | 84.1 |
|  | 2e-5 | **84.5** |
|  | 1e-5 | 83.6 |
| CLOVER | 5e-4 | 79.0 |
|  | 2e-4 | 83.9 |
|  | 1e-4 | **85.2** |
|  | 5e-5 | 84.3 |
|  | 2e-5 | 82.8 |

In the original paper, PiSSA and CLOVER were trained using the default seed (42). Following your suggestion, we tested multiple seeds (40, 41, and 42). We will conduct these experiments on additional models and include the updates in the camera-ready version.

| **Model** | **Method** | **Avg.** |
| --- | --- | --- |
| LLaMA-7B | PiSSA | 82.7±0.06 |
|  | CLOVER | 83.3±0.35 |
| LLaMA2-7B | PiSSA | 84.3±0.26 |
|  | CLOVER | 84.9±0.23 |

**Q2.2: Why was the learning rate of CLOVER† adjusted from 6e-4 to 6e-3?** **A2.2:** Both vanilla pruning and CLOVER employ full-parameter fine-tuning with the same learning rate, while CLOVER† fine-tunes only the singular values.
Generally, PEFT methods converge more slowly than full-parameter methods. To speed up convergence, we increased the learning rate and removed weight decay for CLOVER†. **Q3: How does CLOVER compare to SOTA methods?** **A3:** Please refer to **A3 for Reviewer jpx5**. **Q4: What does the y-axis represent in Figure 2?** **A4:** After applying CLOVER, we multiply the square root of the singular values by each singular vector and calculate their L2 norms. The result is equivalent to using the singular values directly, so we adopt this uniform representation for better comparison. **Q5: Section 4.4 presents a single example, which might seem like cherry-picking.** **A5:** The example is not cherry-picked; it is an official sample provided with the Whisper-large-v3 model [1]. Due to significant linear redundancies in the attention heads of the Whisper model, CLOVER demonstrates substantial effectiveness, which we aimed to highlight clearly in the paper. We plan to systematically evaluate the Whisper model and, as suggested, move this case study to the supplementary materials. **Q6: What are the "series" and "parallel" baselines?** **A6:** "Series" and "parallel" are two standard baselines commonly used in parameter-efficient fine-tuning for NLP tasks. The series adapter [2] inserts a trainable module sequentially after the Attention and MLP layers, while the parallel adapter [3] integrates a trainable module in parallel with the Attention and MLP layers. **Q7: Why is D compared with a single head in the experiments?** **A7:** The comparison is fair because we compare the reduced dimensions within each attention head, not the entire set of heads against a single head. This process reduces the dimension of each original attention head from $D \times d$ to $D \times r$, where $D$ is the hidden size, $d$ is the attention head dimension, and $r$ represents the attention head rank, with $r \leq d \ll D$. **Q8: What is CLOVER†?** **A8:** Please refer to A2 for Reviewer jpx5.
**Q9: Are LoRA and DoRA properly tuned?** **A9:** Please refer to A2.1. **Q10: What does L262 mean by "Due to the non-linear…matrix"?** **A10:** Matrix merging and decomposition are linear operations, and the equivalence preserved by orthogonalization relies on this linearity. However, RoPE encodings vary with token position, introducing non-linearity that prevents cross-layer merging and decomposition. To address this, when fine-tuning Q-K vectors with RoPE, we perform the orthogonal decomposition within a single layer rather than across layers. **Q11: What does the x-axis represent in Figure 1d?** **A11:** It represents the pruning ratio. Thank you for your thoughtful and constructive feedback, which has significantly improved our paper. Please feel free to reach out if you have any further questions or concerns. [1] https://huggingface.co/openai/whisper-large-v3 [2] Parameter-Efficient Transfer Learning for NLP, ICML 2019. [3] Towards a Unified View of Parameter-Efficient Transfer Learning, ICLR 2022.
Summary: The manuscript introduces CLOVER, which orthogonalizes the Query, Key, Value, and Output vectors in the attention layers, aiming to reduce computational overhead, thereby guiding pruning and serving as a basis for effective fine-tuning. Specifically, it treats pairs of attention matrices as low-rank decompositions using Singular Value Decomposition (SVD). CLOVER applies SVD to Query-Key (Q-K) and Value-Output (V-O) pairs, allowing the extracted singular values to guide pruning or act as trainable parameters for efficient fine-tuning. Experiments demonstrate superior performance compared with other state-of-the-art methods such as LoRA, DoRA, and PiSSA in PEFT tasks. Claims And Evidence: This paper overclaims the contributions of the CLOVER framework to cache optimization during the inference phase. The authors attribute the challenges in the development of existing models to "memory and communication bottlenecks" [L051-054 (Left)]. However, it is unclear how the proposed method reduces memory load in the inference phase. Methods And Evaluation Criteria: The descriptions of CLOVER and CLOVER† are somewhat confusing. In [L141-143 (Right)], the authors describe CLOVER fine-tuning as "We freeze the matrices $U^{QK}_h[:, :r]$ and $V^{QK}_h[:, :r]$, and only fine-tune the singular values $S^{QK}_h[:r, :r]$." Besides, in [L190-193 (Right)], the authors clarify CLOVER† as "In the CLOVER† case, after pruning, S is not immediately merged into the U and V matrices but is used for parameter-efficient fine-tuning, with the merging occurring afterward." The two descriptions appear to be the same, which raises questions about the difference between CLOVER and CLOVER†. Theoretical Claims: I have checked the correctness of the derivations for the theoretical claims in the Multi-Head Self-Attention Setup, Cross-Layer Merging, and Cross-Layer Orthogonal Decomposition. All the formulations are correct.
Experimental Designs Or Analyses: The authors only compare the performance of CLOVER with the vanilla method in the context of pruning, without conducting comparisons with other state-of-the-art methods. First, they do not explain the specific setting of the "vanilla" method. Second, it is insufficient to demonstrate the effectiveness of CLOVER by merely comparing with the vanilla method. It is recommended that the authors supplement their findings with recent baselines. Supplementary Material: I have reviewed the Appendix, including the cross-layer orthogonal vectors in the value and output layers, the hyperparameter setup, detailed information about the datasets, and so on. Relation To Broader Scientific Literature: This paper presents a promising method for model pruning and parameter-efficient fine-tuning by orthogonalizing the Query, Key, Value, and Output vectors in the attention layers with SVD. Essential References Not Discussed: There are no related works essential to understanding the key contributions of this paper that are not currently cited. Other Strengths And Weaknesses: Strengths: 1. Efficient Pruning: From a theoretical perspective, the method of matrix compression using SVD is novel and effective. It removes redundant vectors while keeping critical features intact, making pruning more effective. 2. Parameter-Efficient Fine-tuning: a) From a performance perspective, CLOVER surpasses LoRA, DoRA, HiRA, and PiSSA on commonsense tasks, achieving up to 7.6% higher accuracy with equal or fewer trainable parameters. b) Compared to other PEFT methods, CLOVER still has some advantages. For example, CLOVER alleviates the intrusive dimensions problem identified in LoRA. Weaknesses: See "Claims And Evidence," "Methods And Evaluation Criteria," and "Experimental Designs Or Analyses." Other Comments Or Suggestions: The abstract of this paper in the submitted PDF is inconsistent with that submitted to the ICML system. Questions For Authors: Please clarify the weaknesses above.
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful and constructive feedback. We have carefully addressed each of your concerns and will incorporate your valuable suggestions into the camera-ready version of the paper. **Q1: Clarify how the proposed method reduces memory load during inference.** **A1:** Autoregressive token generation in LLMs involves frequent access to the previous key-value (KV) cache, which can significantly hinder inference speed [1]. CLOVER addresses this issue by reducing the dimensionality of attention heads, thereby reducing the KV cache size and effectively lowering memory usage during inference. As noted in **A1 to Reviewer AvA1**, this reduction in KV cache size contributes to more efficient processing during inference. **Q2: Clarify the confusing descriptions of CLOVER and CLOVER†.** **A2:** Thank you for pointing out the confusion between CLOVER and CLOVER†. We acknowledge this naming conflict, especially in pruning experiments where both techniques are used together. To resolve this, we will clearly redefine the terms in the camera-ready version: • **CLOVER:** After pruning the less important components based on singular value magnitudes, **only the singular values are fine-tuned**, which are then merged back into singular vectors (this corresponds to the original **CLOVER†**). • **CLOVER†:** After pruning the less important components, they are directly merged into singular vectors, followed by **full-parameter fine-tuning** (this corresponds to the original **CLOVER**). This adjustment will ensure clarity and consistency across both the methodology and experimental sections. In this context, CLOVER† serves as an ablation study, validating the effectiveness of the orthogonal initialization method in comparison to vanilla pruning. **Q3: Setting of Vanilla Pruning and comparison with SoTA methods.** Vanilla pruning differs from CLOVER primarily in that it does not perform orthogonalization. 
Instead, we directly prune the head dimension based on the vector norm (Lines 182-183 right, 275-293 left). CLOVER is orthogonal to existing pruning methods and can be effectively combined with techniques like SliceGPT [2]. In response to your suggestion, we compared the combination of CLOVER and SliceGPT with SliceGPT alone on both OPT and Deepseek models, reporting the perplexity results on WikiText2, as shown in the table below:

| **Method** | **Attention Pruning Ratio** | **MLP Pruning Ratio** | **OPT 6.7B** | **Deepseek V2-Lite** |
| --- | --- | --- | --- | --- |
| Baseline | 0% | 0% | 10.85 | 6.31 |
| SliceGPT | 25% | 25% | 11.90 | 8.65 |
| CLOVER | **30%** | 25% | **11.89** | **8.53** |

The baseline represents the original model with no pruning. CLOVER achieves performance comparable to SliceGPT, despite pruning 30% of attention parameters compared to SliceGPT's 25%. This demonstrates the simplicity and effectiveness of CLOVER. It is important to note that most current pruning techniques for Deepseek primarily focus on reducing the number of MoE experts [3] and token selection [4]. There is limited research on pruning attention heads, attention head dimensions, or latent dimensions. CLOVER facilitates pruning these components in Deepseek's Multi-head Latent Attention (MLA), significantly reducing the KV cache overhead. Additionally, CLOVER utilizes a small number of dominant singular values from the Query-Key matrices to dynamically identify important tokens for full computation, enhancing token selection accuracy, especially in long-context tasks when combined with methods like MoBA [4]. Moreover, by rotating Q-K and V-O pairs in attention heads, CLOVER minimizes outliers in weights and activations [5], which is advantageous for reducing quantization errors. We are excited to continue exploring CLOVER to optimize the efficient deployment of Deepseek models. 
**Q4: Address the inconsistency in the abstract between the submitted PDF and the ICML submission system.** **A4:** Thank you for bringing this to our attention. We will promptly update the abstract in the ICML submission system to ensure that it is consistent with the content in the submitted PDF. We hope this addresses your comments and concerns effectively. We look forward to your feedback and the opportunity to further improve our work. **References:** [1] Keep the Cost Down: A Review on Methods to Optimize LLM’s KV Cache Consumption, COLM 2024. [2] SliceGPT: Compress Large Language Models by Deleting Rows and Columns, ICLR 2024. [3] MoE-Pruner: Pruning Mixture-of-Experts Large Language Models Using Hints from Its Router, arXiv preprint. [4] MoBA: Mixture of Block Attention for Long-Context LLMs, arXiv preprint. [5] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models, arXiv preprint.
Summary: Decoder-only models face memory issues during inference as the key/value cache grows. This paper introduces CLOVER to address this by treating attention layers as low-rank decompositions. CLOVER applies SVD to the Q-K and V-O pairs in each attention head. The resulting singular values can guide pruning or be used for efficient fine-tuning without increasing the model's parameter count. Experiments on models like GPT-2 XL, Whisper-Large-v3, and LLaMA-3 show that CLOVER improves pruning efficiency. For example, it can prune more vectors in GPT-2 XL with less performance degradation. In fine-tuning, CLOVER outperforms state-of-the-art methods such as LoRA, DoRA, HiRA, and PiSSA on commonsense reasoning tasks. It can achieve full-rank updates and better performance with equal or fewer trainable parameters. Although CLOVER has limitations when dealing with certain non-linear operations, it can be combined with other pruning methods and has potential for future applications, like being combined with quantization methods. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Mainly for applications, no proof included. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: Not all. See weakness 1. Essential References Not Discussed: Yes. Other Strengths And Weaknesses: **Strengths:** 1. This paper is well organized; motivations and method details can be clearly understood. 2. The proposed method can achieve better accuracy with similar or fewer parameters. 3. The paper provides sufficient technical details and implementation parameters, and its fine-tuning tasks cover as many domains as possible, e.g., NLP, CV. **Weaknesses:** 1. The domain of this paper is unclear. From my understanding, the proposed method can only be used for models that contain attention mechanisms (please correct me if I am wrong), not all linear-layer models. 
Although experiments on Stable Diffusion are included, this is most likely the attention part between the text encoder and U-Net, rather than the core component of SD. So, first of all, should the paper even consider including a claim like, for example, a xxx method for large language models? Second, this paper does not seem to explicitly state that this is a PEFT method, while the baselines are almost all PEFT methods. This is important in my opinion, because pruning and efficient fine-tuning are distinct; at the least, they should consider the unique characteristics of training and fine-tuning, respectively. Therefore, in summary, two points are unclear: a. whether it can only be used for LLMs; b. whether CLOVER is a PEFT method. 2. I will use the PEFT framing to understand CLOVER. As for the PEFT method, the authors show the performance with similar or fewer parameters, which is certainly correct. However, they should also consider showing the GPU memory consumption and running time compared to LoRA. For example, the time to run one epoch with the same batch size and max length. 3. Full-rank adaptation may be intuitively reasonable, but, to my knowledge, few papers have rigorously demonstrated this. So I do not recommend the authors make such STRONG claims. Moreover, full rank may also lead to information redundancy. The authors can consider whether this is one of the main arguments of CLOVER. If so, please elaborate on it. If not, please hide it. 4. Could the authors please provide a comparison of GPU memory consumption between CLOVER and LoRA? No need for complete fine-tuning, just provide the GPU memory data; for example, you can use the smallest batch size to perform several steps. Other Comments Or Suggestions: N.A. Questions For Authors: N.A. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: **Q1. Clarification of Domain** **Q1.1: Whether CLOVER is a PEFT method.** **A1.1:** CLOVER is a straightforward yet effective re-initialization method that benefits **both pruning and PEFT**. Pruning and PEFT are closely interconnected, as both aim to achieve efficient training and inference under resource constraints. Typically, pruning requires a recovery process, and PEFT methods are commonly applied to minimize training overhead and enhance efficiency, particularly in data-scarce scenarios [1]. While many previous pruning methods primarily focus on the pruning phase and directly apply LoRA in the recovery phase [1, 2, 3, 4], **CLOVER uniquely addresses both tasks simultaneously**. It not only enhances pruning rates (compared with vanilla pruning and SliceGPT; refer to **A3 for Reviewer jpx5**) but also improves PEFT training effectiveness. We believe this dual approach provides valuable insights for efficient model deployment. **Q1.2: Whether CLOVER can only be used for LLMs.** **A1.2:** Indeed, our proposed CLOVER method targets the attention component. However, since **attention mechanisms are widely used across many domains beyond LLMs** (such as Vision Transformers (ViT) for image classification, detection, segmentation, DiT for image generation, and Automatic Speech Recognition, as well as Text-to-Speech), CLOVER has potential and value in a wide range of applications. Additionally, in Section 4.4, we provide experimental evidence using the **speech recognition** model Whisper, where CLOVER achieves parameter reductions of 56.01% in the Q-K pair and 36.82% in the V-O pair without the need for further fine-tuning. CLOVER's flexibility allows it to integrate with other techniques, extending its applicability to other models. **Q2. 
GPU Memory Consumption and Runtime Comparison with LoRA** **A2:** We conducted experiments on the LLaMA-2-7B model using commonsense data, trained for 3 epochs with the following hyperparameters: model_max_length = 1024, per_device_train_batch_size = 2, gradient_accumulation_steps = 2, num_gpus = 4, executed on 4 × A800-80G GPUs, with seed = 42. The comparison of memory consumption and runtime is as follows:

| **Method** | **Trainable Parameters** | **Total Parameters** | **Max Allocated Memory** | **Runtime** |
| --- | --- | --- | --- | --- |
| LoRA | 56,098,816 | 6,794,514,432 | 110.84 GB | 2:42:37 |
| CLOVER | 56,098,816 | 6,794,514,432 | 104.75 GB | 2:22:47 |

The results demonstrate that CLOVER consumes less GPU memory and exhibits a shorter training runtime than LoRA. We attribute this to CLOVER being applied between two layers, whereas LoRA operates in parallel with the main branch. This enables sequential computation, eliminating the need to retain the input features of the main branch. **Q3. Rigor in Demonstrating Full-Rank Adaptation's Advantages and Information Redundancy** **A3:** In Section 4.5, we emphasize the importance of full-rank adaptation by demonstrating that only 6% of the orthogonal components from randomly selected commonsense samples project onto low-rank vectors, leading to inefficient data utilization. Full-rank adaptation resolves this issue by fully utilizing the data's orthogonal projection components. Sections 4.6 and 4.7 visualize the matrices fine-tuned by various methods, showing that CLOVER's effects closely resemble full-parameter fine-tuning, whereas LoRA, constrained to low-rank updates, introduces intrusive dimensions. Several studies [5, 6, 7, 8] have rigorously compared full-rank and low-rank adaptation, consistently concluding that full-rank updates outperform low-rank methods. 
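The low-rank-projection argument above has a simple geometric core: a rank-r subspace of a d-dimensional feature space can only "see" about r/d of the energy of directions unrelated to it. The numpy sketch below illustrates this with random features and a random orthonormal subspace (hypothetical sizes; the paper's 6% figure comes from real commonsense samples and actual adapter directions, not this simulation):

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, n = 4096, 16, 100                    # hypothetical hidden size, adapter rank, samples

# Random orthonormal basis for a rank-r update subspace (LoRA-like).
Q, _ = np.linalg.qr(rng.normal(size=(d, r)))

x = rng.normal(size=(n, d))                # stand-in for sampled hidden features
proj = x @ Q                               # coordinates of each sample inside the subspace
frac = (proj ** 2).sum() / (x ** 2).sum()  # share of feature energy the subspace captures

# For directions unrelated to the data this concentrates near r/d (~0.4% here):
# almost all of the signal lies orthogonal to the low-rank update subspace.
assert 0 < frac < 0.05
```

A full-rank update, by construction, has no such orthogonal complement to discard, which is the intuition behind the Section 4.5 measurement.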
Regarding the concern about information redundancy, we believe this issue is more closely related to the number of trainable parameters rather than the direction of adaptation itself. Since CLOVER maintains parameter efficiency, it mitigates redundancy effectively while offering greater flexibility and learning capacity. **Q4. Same as Q2** We hope that these responses address your concerns regarding CLOVER, and we look forward to further discussing any additional points you may have. ### Reference: [1] LLM-Pruner: On the Structural Pruning of Large Language Models, NeurIPS 2023. [2] SlimGPT: Layer-wise Structured Pruning for Large Language Models, NeurIPS 2024. [3] Pruner-Zero: Evolving Symbolic Pruning Metric from Scratch for Large Language Models, ICML 2024. [4] SliceGPT: Compress Large Language Models by Deleting Rows and Columns, ICLR 2024. [5] HiRA: Parameter-Efficient Hadamard High-Rank Adaptation for Large Language Models, ICLR 2025. [6] GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection, ICML 2024. [7] QuanTA: Efficient High-Rank Fine-Tuning of LLMs with Quantum-Informed Tensor Adaptation, NeurIPS 2024. [8] Delta-LoRA: Fine-Tuning High-Rank Parameters with the Delta of Low-Rank Matrices, arXiv preprint. --- Rebuttal Comment 1.1: Comment: Thanks for the reply, I still have some questions, which prevent me from improving the score at the moment. **[Q1.1].** The author's explanation of pruning and PEFT is strictly correct. But it seems to have avoided the key issues: 1. Pruning and fine-tuning have different purposes and scenarios. Generally speaking, both are for the pursuit of Efficient AI, that is, training and inference efficiency, but the former is mostly for compressing models and improving inference speed (of course, some can also improve training speed). The latter is more oriented to fine-tuning speed and GPU consumption. Furthermore, from an experimental point of view, the authors do not consider any specific pruning baselines. 
Although I am not very familiar with pruning, there should be no lack of existing advanced pruning methods for LLMs or general linear layers (please correct me if I am wrong). As for the scenario, the former is for recovery, and the latter is for amplifying and updating downstream-related knowledge. These differences are also, in my opinion, the reason why there are few joint studies of the two. This is not a problem of the ability of the method itself (for example, there are also adaptations of LoRA to pretraining or pruning), but because it is difficult to fully analyze the motivation and experimental effects of your method in both fields within the length of a single paper. For example, the submitted version of this paper did not even consider introducing any experimental results related to efficiency. Although this was added in the rebuttal, the efficiency results are still incomplete. Compared with the performance (acc.) part, the efficiency analysis of this paper can even be ignored. In summary, I do not doubt CLOVER's ability to handle two tasks at the same time. I just don't think this article has clearly described the motivation and experimental results of the method for both scenarios. Therefore, I still think that this article could be written in a clearer and more concise way. **[Q1.2].** I think your argument can convince me, but I firmly believe that technology will continue to improve and develop. I still suggest that you add a limitation section, which will not damage the innovation of the method, but will make it clearer. **[Q3].** Thanks to the authors for providing literature evidence. However, after careful inspection, I did not find comprehensive and rigorous evidence. For example, in HiRA, only Figure 2 provides an experimental comparison on the commonsense reasoning task. However, there is no empirical evidence in other scenarios and tasks. 
For example, the experimental results in DyLoRA [R1] are on the GLUE benchmark, and there is no strong correlation between rank and performance. As another example, in NoRM [R2], the authors observed that increasing rank would introduce hallucination noise, which may lead to worse results. Here, I do not doubt that high rank will bring benefits. But my intuition is that it is not certain, because it may mean multiple redundancy. Moreover, objectively speaking, there is no paper that comprehensively and empirically analyzes its impact. So this is obviously a very STRONG conclusion. So I still keep my point of view: if the author insists on this statement, please verify it; otherwise, a simpler way is to delete the part of the statement. If it is deleted, I personally think that the readability of the article has been improved. [R1] DyLoRA: Parameter-Efficient Tuning of Pretrained Models using Dynamic Search-Free Low Rank Adaptation. EACL 2023. [R2] Fine-tuning with Reserved Majority for Noise Reduction. ICLR 2025. **My conclusion:** I appreciate the author's response and efforts. But the comments I mentioned are all to make the article more concise, more focused and enhance the readability of this article. Proposing generalist models that can handle several tasks is certainly a good thing and it is the trend. But if people just claim that the method can do this, I don’t think it will be beneficial to the community. For example, if the Efficient AI community follows this way, there will be very few researchers who will really analyze the essence of pruning and PEFT separately. --- Reply to Comment 1.1.1: Comment: Dear Reviewer DDfo, Thank you very much for your sincere suggestions. **[A1.1].** Honestly, we had intense discussions about whether to accept this suggestion. 
Some authors believe that **CLOVER is a novel re-initialization method that benefits pruning, fine-tuning, and even quantization, with potential applications in other areas.** As a next step, we plan to use this cross-head rotation re-initialization method to reduce quantization errors in low-bit attention computation, following SmoothQuant [1] and SpinQuant [2]. However, this would make the paper significantly longer, which might not be ideal for a conference paper. Therefore, we ultimately decided to **accept your suggestion**. In the camera-ready version, we will restructure the paper, making **pruning** the main focus. We will incorporate the inference efficiency experiments conducted during the rebuttal period [**A1 for Reviewer AvA1**] into the main text. Since pruning includes both compression and fine-tuning phases, the improvements in the fine-tuning phase, relative to methods like LoRA, will be presented as an ablation study. More detailed advantages of CLOVER will be fully explored in a new paper or merged into a journal article. It is worth mentioning that during the rebuttal period, we compared CLOVER with the previous state-of-the-art pruning method, **SliceGPT**. Please refer to A3 for Reviewer jpx5, where **CLOVER prunes at a larger ratio while maintaining or lowering perplexity on the OPT and Deepseek models.** **[A1.2].** Thank you for acknowledging CLOVER's generality. In fact, we have already discussed the limitations of CLOVER in the **Conclusion and Limitations Section** of the original paper (lines 408-425). In the camera-ready version, we will ensure that this section is clear and comprehensive. **[A3].** Thank you for your suggestion. Indeed, as the rank increases, the fine-tuning performance on downstream tasks improves gradually, as shown in Table 6, Table 15, and Table 18 in LoRA [3], especially when the rank is relatively small. 
However, this improvement in fitting capability comes at the cost of forgetting more knowledge [4] and introducing more noise [5]. **Both low-rank and full-rank have their advantages and disadvantages, and hardly any paper has definitively proven one to be better than the other.** We will revise the wording in the paper to present full-rank updates as one of the benefits of CLOVER, rather than the sole reason for its superiority over low-rank methods. Once again, thank you for your thoughtful reply. We hope our response has addressed your concerns, and we look forward to hearing your further feedback. We remain open to continued discussions on any aspect of the paper. [1] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models, ICML 2023. [2] SpinQuant: LLM Quantization with Learned Rotations, ICLR 2025. [3] LoRA: Low-Rank Adaptation of Large Language Models, ICLR 2022. [4] LoRA Learns Less and Forgets Less, TMLR 2024. [5] Fine-tuning with Reserved Majority for Noise Reduction, ICLR 2025.
Flexible Tails for Normalizing Flows
Accept (poster)
Summary: This paper addresses learning heavy tailed distributions with generative models and normalizing flows in particular. Similar to the previous approach COMET, it first transforms the tails of the input data to be light tailed and then applies a classical normalizing flow. Claims And Evidence: Let's span this by claimed contributions: - "Illustrate the problem of using extreme inputs": I somewhat agree, but it is unclear how bad it is -- for the normal/m_normal/g_normal approaches, what is the success rate of training a model? Table 1 only says that for heavy tails, at least one out of ten runs fails. - "We introduce ... TTF ... and prove it converts Gaussian tail to heavy tails with tunable tail weights" -- yes. - "We demonstrate improved empirical results for density estimation ... and VI ... compared to standard NFs, and other NF methods for heavy tails." -- I think there are some important comparisons missing, see below. # Update: Addressed I am convinced that the method improves the quality of learning heavy tails with normalizing flows. Methods And Evaluation Criteria: Datasets: Artificial targets make sense, cannot judge for the real-world data. Metrics: - KL: Correct me if I'm wrong, but the KL divergence = NLL is more sensitive to body than tail behavior due to the vanishing weight of these regions. Please also report the learned tails (which presumably even makes sense for the cases where the tails are learned and not fixed using the known values, given that there could still be some offset produced by the Lipschitz transformation). - hat k: In the appendix, the authors write " Yao et al. (2018) argue that k>0.7 corresponds to a very poor approximation, and k<0.5 indicates a good fit.", but they then proceed to mark values already below 0.7 in Table 3 as bold, which is inconsistent. Also, more details on this metric would be crucial instead of the three lines of text in l. 
349-351: How well do these metrics capture tails? # Update: Addressed Main argument: KL is indeed sensitive to extreme values, which I agree with. Theoretical Claims: I did not check proofs, but the statements seem plausible. The authors argue in the Related Work + Appendix B.2 that COMET is not in the Fréchet domain of attraction -- why is this important? COMET performs pretty well experimentally, and also I would assume the tail behavior can be rich, so why pick this particular instantiation? To my understanding, COMET could just as well fulfill some other class of heavy-tailedness. # Update: Largely addressed The authors argue that COMET learns a class of less heavy-tailed distributions. I think this argument and the fact that they outperform COMET is good enough to motivate their method. Experimental Designs Or Analyses: In terms of density estimation, Table 2 merges several benchmarks from previous papers. In terms of variational inference, it would be good to have a comparison to an established benchmark instead of a new artificial one. # No update Supplementary Material: I scanned the supplements and had a look at B.2, B.3, D, E, F, G, H, and I. Relation To Broader Scientific Literature: Learning heavy tails correctly is an important consideration for designing generative models, which can help them be successful in a broader set of applications. Like COMET, the method indicates that feeding heavy-tailed data into a neural network may be detrimental to performance; instead, the heavy tails should be converted first. Essential References Not Discussed: Seems like nothing is missed. Other Strengths And Weaknesses: No additional comments. Other Comments Or Suggestions: One idea to strengthen the paper is by showing that a mixture of Gaussians with a finite number of components as a latent distribution does not have an influence on the tail behavior of the learned distribution. 
It seems like this is an easy proof -- the asymptotic behavior in any direction is determined by the largest standard deviation component only, and it is light tailed. I read [Draxler et al. 2024] differently: From their Limitations section: "Secondly, it is unclear how the convergence metric [in] Section 5.2 is related to convergence in the loss used in practice, the KL divergence given in Equation (3)." So while I agree that they consider the full space (which contains the tails), their theory is not about KL convergence (they only have experimental evidence for KL). Given that their statements apply to any continuous distribution with finite second moment, it would be interesting to characterize what class of r is needed to convert any heavy into light tails. l. 56: z should be x. # Update: Found common ground The authors will clarify the statements regarding universality, they might add the proof I suggested regarding the tails of GMMs being always light. Questions For Authors: What is the success rate of normal/g_normal/m_normal in Table 1? How relevant is the Fréchet basin of attraction? What tails can/cannot be explained by it? Is this an argument for why TTF should be preferred over COMET? Given that a heavy-to-light transformation should be applied on the data end: For sub-Gaussian tails, do the authors think that adapting the tails on the latent end is beneficial? How well do NLL, ESS and k capture tails? Code Of Conduct: Affirmed. Overall Recommendation: 4
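The reviewer's observation that a finite Gaussian mixture's tail is governed solely by its largest-standard-deviation component (and is therefore light) is easy to verify numerically. A stdlib-only sketch with an arbitrary, hypothetical 3-component mixture: far in the tail, the mixture log-density is indistinguishable from the widest component's Gaussian log-density, i.e. it decays quadratically rather than at a heavy-tailed rate:

```python
import math

def gmm_logpdf(x, weights, mus, sigmas):
    # Log-density of a 1-D Gaussian mixture via a manual log-sum-exp.
    terms = [math.log(w) - math.log(s * math.sqrt(2 * math.pi))
             - 0.5 * ((x - m) / s) ** 2
             for w, m, s in zip(weights, mus, sigmas)]
    mx = max(terms)
    return mx + math.log(sum(math.exp(t - mx) for t in terms))

weights = [0.5, 0.3, 0.2]   # hypothetical 3-component mixture
mus     = [-1.0, 0.0, 2.0]
sigmas  = [0.5, 1.0, 3.0]   # the sigma = 3 component dominates the tail

for x in (20.0, 40.0, 80.0):
    lp = gmm_logpdf(x, weights, mus, sigmas)
    # Asymptotic form: just the widest component, log p(x) ~ -x^2 / (2 * 3^2) + const.
    widest = (math.log(weights[2]) - math.log(sigmas[2] * math.sqrt(2 * math.pi))
              - 0.5 * ((x - mus[2]) / sigmas[2]) ** 2)
    assert abs(lp - widest) < 1e-6   # other components are negligible out here
```

So a GMM latent distribution cannot by itself introduce heavy tails, consistent with the suggested proof.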
Rebuttal 1: Rebuttal: Thanks for the detailed review which has provided some helpful new insights into our results. ## Success rate Failure rates for normal, m_normal, g_normal in Table 1 were 100% for $\nu=0.5$, and in the range 10-20% for $\nu=1$. For $\nu=1$, even in runs which did converge for these 3 methods, the best loss (NLL of 49.62) was significantly worse than results for other methods. These results confirm that all the methods not specifically designed to permit GPD tails become increasingly unstable as the data becomes more heavy tailed. (On checking our code, we note "did not converge" is not as we defined it in the paper. In fact it means at least one run had a final loss above 1e5, corresponding to extremely unstable optimisation. We'll update this in the paper.) ## NLL metric NLL has some sensitivity to both body and tail behaviour. For instance under a $N(0,1)$ density estimate, a single observed value of 10 would contribute a very large amount to NLL. We have concentrated on NLL as it's the main metric in the past literature on this topic. For instance Table 2 contains a direct comparison to log likelihood values from a previous paper. We agree it's desirable to explore metrics more specialised to tail behaviour in future e.g. see our reply to reviewer mY4v. ## Learned tails A rough plot of learned tail parameters for the experiment of Table 7 with $d=5$ is at [this link](https://pasteboard.co/oKRznj9WA3UL.png). The y-axis shows $1/\lambda$, with the target $\nu$ value shown as a horizontal line. The two would need to be equal for a good tail fit if our transformation were applied to a $N(0,I)$ random variable. As you suggest there is an offset due to the Lipschitz transformation. This is an interesting finding - thanks for suggesting this plot! It suggests a potential advantage to jointly learning the Lipschitz transform and the tail parameter. We'll write this up as an appendix. 
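The severity of the heavy-tailed inputs behind the failure rates discussed above is easy to see by simulation. A stdlib-only sketch (the sampler and sample sizes are illustrative, not the paper's experimental setup): with $\nu = 0.5$, individual Student-t draws can be orders of magnitude larger than any Gaussian draw, so a single observation can dominate the NLL and the gradients a flow sees during training:

```python
import math
import random

random.seed(0)

def student_t(nu):
    # T = Z / sqrt(V / nu) with Z ~ N(0, 1) and V ~ chi-square(nu);
    # chi-square(nu) is Gamma(shape = nu / 2, scale = 2).
    z = random.gauss(0.0, 1.0)
    v = random.gammavariate(nu / 2.0, 2.0)
    return z / math.sqrt(v / nu)

n = 20000
heavy = [student_t(0.5) for _ in range(n)]          # nu = 0.5: very heavy tails
light = [random.gauss(0.0, 1.0) for _ in range(n)]  # Gaussian reference

max_heavy = max(abs(t) for t in heavy)
max_light = max(abs(z) for z in light)

# The largest heavy-tailed draw dwarfs the largest Gaussian draw; feeding such
# extreme values into a standard flow is what destabilizes optimisation.
assert max_heavy > 100 * max_light
```

This is the regime where the 100% failure rate for the normal-tailed methods occurs, and why transforming the tails before the neural network sees the data matters.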
## $\hat{k}$ threshold Table 3 highlights the threshold $\hat{k} < 0.7$ since Yao et al recommend this as a cut-off for a variational approximation to be useful. Thanks for pointing out that this isn't clear from our current discussion of Yao et al. We'll extend this. ## How well do VI metrics capture tails? ESS and $\hat{k}$ are general VI metrics. Informally we expect both are sensitive to tail behaviour. This is because both are based on importance weights $w(x) = p_{\text{target}}(x)/q_{\text{approx}}(x)$. When $p_\text{target}(x)$ is heavier tailed than $q_{\text{approx}}(x)$, this typically results in a few very large $w$ values leading to poor ESS and $\hat{k}$. We agree that specialised VI metrics for tail behaviour would be useful, but we are not aware of any in the related literature. ## Fréchet domain of attraction The Fréchet domain of attraction captures power-law (Pareto) tails and closely related tail behaviour. Our theory shows that COMET flows cannot represent such tails, although they can produce milder heavy tails similar to log-normal. This is an argument in support of TTF: it allows a much wider range of heavy tails than COMET flows. We'll add a sentence to summarise this in the main paper. ## VI benchmark We've chosen our VI example as it allows the tail weight to be varied. We found that established benchmarks didn't easily allow exploration of heavier tails, such as those corresponding to $\nu = 0.5$. We agree investigating established benchmarks is a useful future step. ## Mixture of Gaussians The suggested result should indeed strengthen the paper and be straightforward to prove, so we'll add it to an appendix. ## Draxler et al Thanks for pointing out that the theory of Draxler et al doesn't use KL convergence. Instead their metric is based on the effect of extending a sequence of coupling layers by one element. It's not obvious how to extend this to our setting where the sequence instead terminates in our TTF transform. 
So we'll remove our comments in D.4 on the possibility of extending the work of Draxler et al, and instead just note that it exists. This doesn't affect our main argument in Appendix D on extending universality under weak convergence. ## Line 56 Thanks for highlighting this. We do mean z not x. Our presentation describes the "generative direction" of normalizing flows i.e. a function converting z to x. We do this simply to be concrete and consistent, and it does not affect our results. Note z can be extreme when a heavy tailed base distribution is used. We'll add a clarifying comment about "generative direction" around line 56 in the paper, as the current version doesn't mention this until line 139. ## Sub-Gaussian tails For heavy tails, we motivate making a transformation at the data end to avoid providing extreme input to a neural network. For sub-Gaussian tails this argument does not apply, so we expect that adapting the tail at either end will work well. Which is most beneficial likely depends on which is easier to implement in practice. --- Rebuttal Comment 1.1: Comment: I thank the authors for their answers. I am happy with most answers, but ask the authors for some more comments below: > NLL metric I agree that a single "outlier" can have a significantly lower likelihood than body samples. I guess it is just a general problem for tail estimation that there is not a lot of data in the first place by construction. > Learned tails Thanks for the plot, but I am confused about what it shows! If I read it right, the predicted degrees of freedom is consistently off from the true value, right? How do you measure that exactly? Do you pick a direction and measure how the modeled log prob evolves? Similarly, can't one also use the modeled log prob to analyze the tail behavior also for the VI problems? > Fréchet domain of attraction So every distribution that COMET learns can be represented via TTF? 
> Draxler et al [and other universality work] Their argument goes from data to latent space, I think, so the TTF transform is applied in the beginning. This implies that their theory applies once the tails of the input distribution are finite (so that the variance is bounded and their assumptions are fulfilled). So this should provide a reasonable universal construction, although with the caveat of the the convergence metric. --- Reply to Comment 1.1.1: Comment: Thanks for the helpful comments. ## Learned tails I think we misinterpreted your original question slightly! The plot reflects our learned tail transformation **parameters** rather than the resulting tail **shapes**, which are hard to obtain. More explanation follows. "How do you measure that exactly? Do you pick a direction and measure how the modeled log prob evolves?" The plot shows the learned parameters for our tail transformation (after a transformation so they are comparable to degrees of freedom). The directions these parameters represent are aligned to each dimension in the original data. Since this example has the same tail shape for each dimension, we've simply aggregated all the parameters in each plot. "Similarly, can't one also use the modeled log prob to analyze the tail behavior also for the VI problems?" It would be very desirable to learn the tail shapes from (A) the fitted log prob in a density estimation context or (B) the log prob up to proportionality in a VI context. However it turns out this is not an easy problem. As we mention in our paper's conclusion, one possible approach for VI is "static analysis of probabilistic programs" (Liang et al 2024). "Thanks for the plot, but I am confused about what it shows! If I read it right, the predicted degrees of freedom is consistently off from the true value, right?" Yes the predicted degrees of freedom appear consistently off. 
This is because the plot shows the resulting degrees of freedom **if our transformation were applied to $N(0,1)$ tails**. However in reality the $N(0,1)$ base distribution is first transformed through some standard normalising flow layers. This means the tails are no longer $N(0,1)$ when our transformation is applied. Our theory shows this can alter the resulting tail shape. In particular Theorem B.9 shows that applying our transformation to $N(\mu, \sigma^2)$ tails produces a tail shape that depends on $\sigma^2$ as well as our tail transformation parameter. The plot is interesting because it confirms the optimal tail transformation parameters don't exactly match the known tail shapes. It appears they must be varied slightly due to the effect of the other normalising flow layers, as predicted by the theory. This suggests that ideally it's better to learn the tail parameters (as in our TTF method) rather than fixing them in advance (as in our TTFfix method). Despite this TTFfix is often competitive in practice, and can be easier to train. This is a helpful finding which we'll add to the paper's discussion. ## Fréchet domain of attraction "So every distribution that COMET learns can be represented via TTF?" This isn't covered by our theory. We prove that TTF can capture power-law like heavy tails (i.e. the Fréchet domain of attraction) but COMET cannot. The extreme value theory reviewed in our paper suggests this is the most important class of heavy tails. We haven't proved that TTF can exactly match the lighter tails generated by COMET (e.g. log-normal tails). To prove this we'd need to show that there's a suitable Lipschitz $T$ such that $R_\text{TTF} \circ T$ results in tails in the COMET class. ## Draxler et al Thanks for help on this point! The fact that their argument goes from data to latent space is very helpful. 
As you say, if our transformation does produce finite first and second moments, then we can indeed apply a form of the Draxler et al argument to show universality. This means we just need a small update to the appendix to correct its sketch argument. We may be able to provide a full proof for a camera ready version of the paper, but can't 100% commit to this as a rigorous asymptotic argument might have some complexities.
Summary: The paper introduces an invertible transformation to overcome difficulties that exist when fitting normalising flow models to heavy-tailed data distributions. The transformation is to be used at the tip of a normalising flow "chain", handling the heavy tails of the data and enabling normalising flows to be fit to heavy-tailed data without restricting the flow to a specific architecture. ## Update after rebuttal No changes after the rebuttal, as no further comments were made by the authors and the interactions with other reviewers did not change my evaluation of the paper. Claims And Evidence: The claims made are supported by clear and convincing evidence. Methods And Evaluation Criteria: The evaluation criteria are sensible and use reasonable baselines and data sets. The experimental settings cover both controlled scenarios and real data, allowing a clearer picture to be made. Theoretical Claims: The theoretical claims were verified insofar as covered by Appendix A.1, where I found no issues. Experimental Designs Or Analyses: The experimental settings and analyses are valid, though I have not verified the contents in the Appendix. Supplementary Material: I have only reviewed Appendix A.1. Relation To Broader Scientific Literature: The paper introduces a transformation to be applied to normalising flows to handle heavy-tailed data, thus relating closely to existing methods that address the same problem. Because it applies to most normalising flows that could be applied in tandem with the proposed method, it naturally relates to most of them. Essential References Not Discussed: I have not identified any lacking references. Other Strengths And Weaknesses: **Strengths** The paper is very well-written, with an easy-to-follow structure. The method itself is quite simple to implement and understand, consisting of a simple transformation parameterised also in a straightforward manner, and is thoroughly studied by the authors.
I also appreciate the authors discussed important limitations of their work. **Weaknesses** Given how much the authors rely on the theory and some of the notable aspects (such as universality) being so fundamental, I do wonder why the paper is organised with so much didactic introduction to already existing work. I do believe this point is rather subjective, so I consider this only a very minor weakness. Other Comments Or Suggestions: No other suggestions or comments at this point. Questions For Authors: No questions at this point. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the positive review of our paper. We appreciate you taking the time to read it!
Summary: The paper proposes a way to enhance normalising flow models by incorporating ideas from the Extreme Value Theory literature for representing distributions with heavy tails. The authors propose adding a new invertible layer after the traditional flow layers that is not a Lipschitz transformation and hence provides more flexibility. ## update after rebuttal I am still not convinced of the added value of the universality result and if the method can be applied to more classes of flow-based models, hence I will keep my score. Claims And Evidence: Yes, the evidence is convincing. Methods And Evaluation Criteria: The evaluation follows the commonly used negative log-likelihood metric. Theoretical Claims: Yes, the theoretical claims appear valid. Experimental Designs Or Analyses: Yes! Supplementary Material: Some sections such as Universality, Implementation, and Related Work Relation To Broader Scientific Literature: The prior works often consider using heavy tail distributions such as T-distributions (or Gaussian mixtures, generalised Gaussian), whereas the authors propose a more flexible approach and substantiate their claims empirically. Essential References Not Discussed: NA Other Strengths And Weaknesses: Taking motivation from the Extreme Value Theory literature was an ingenious idea Other Comments Or Suggestions: NA Questions For Authors: I add most of my questions and concerns here. 1. Normalising flows are great models with attractive properties such as density estimation. Their real power appears to be in the form of Flow-Matching, though, where they have rivalled Diffusion models. The implicit modelling approach only requires learning the associated vector field instead of an explicit mapping. Now my question is whether the problem of poor modelling of heavy tailed distributions also appears in flow matching. Since flow matching does not constrain the architecture, I doubt it.
If it is still an issue, how could the proposed methods be applied to non-discrete versions of Normalising Flows? 2. If I understood correctly, the authors show that the additional layer does not change the existing universality proofs. In that case, I question the theoretical value of the TTF transformation, given that it doesn't relax any assumption on the modelled class in the universality proof. 3. Whilst the density estimation results are better for the proposed method, I believe log-likelihood alone is not enough to show the power of the proposed additional layer; metrics like MMD or TVD could shed more light on the efficacy of the method. 4. Could the authors show how the proposed layer helps if the base distribution is chosen to be the T-distribution? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for the helpful review. We respond to each of your questions below, and invite you to increase your score if you think we have addressed the points fully. ## Flow-Matching Thanks for raising this interesting question: does it remain challenging to fit heavy tailed observations in Flow-Matching? We expect similar problems would occur here and in other related methods e.g. diffusion models and continuous normalizing flows. The reason is as follows. All these methods involve learning a vector field $u(x,t)$ controlling a differential equation. The vector field is taken to be a neural network. In a successful generative model we will sometimes have $x \approx x_i$ for any given training data item $x_i$. So if there are extreme values in the training data, the neural network will need to be able to cope with such input. A key argument of our paper is that neural networks are poor at this task. A plausible fix is to use our tail transform as a final discrete layer following a Flow-Matching generative model. Either joint training or a two-stage approach could be tried, similarly to the methods in our paper for discrete normalizing flows. Empirical investigation is beyond the scope of our current paper, but is a very interesting avenue for future work. We will add a paragraph to our discussion section commenting on this topic. ## Universality You ask a natural question about the theoretical value of the TTF transformation given that it does not improve existing universality properties. We do comment on this in the paper, but it may not be easy to find as it's in the appendices (Appendix D.3). Here's an extended version of our argument. A normalising flow defines a set of probability densities depending on the capacity of the flow (e.g. number of knots in a spline flow, number of neural network parameters etc). Roughly speaking, universality means that under the limit of large capacity the set contains all probability densities. 
We expect improved tail behaviour modelling means that it's easy to get good approximations from lower capacity. This is because only a small number of parameters are required to produce heavy tails. In contrast a spline flow with a Gaussian base distribution (for example) would need a large or infinite number of knots to modify a Gaussian tail to become heavier. In summary, universality properties are about the limit of infinite capacity. We expect the benefits of our approach occur for any given finite capacity. Further theoretical developments would be needed to formalise this argument mathematically, but it is consistent with our empirical findings. We'll add a brief summary of the above to the main paper's section on universality. ## Metrics Thanks for the suggestion of MMD and TVD as metrics. We have concentrated on test log likelihood as it's the main metric in the past literature on this topic. For instance Table 2 contains a direct comparison to log likelihood values from a previous paper. We've not been able to repeat our experiments to report additional metrics within the review period, but we will explore this in future work. We note care may be needed applying MMD in higher dimensional examples: https://arxiv.org/abs/1406.2083. ## Combining new layer and T-distribution for base Thanks for the intriguing suggestion of combining these approaches i.e. (1) a Students T base (2) our final TTF layer. In the paper we present these as two competing methods, and didn't consider making a combination. We have run some experiments on combining the methods. For the setting of Table 1 with $\nu=0.5$, we get a scaled NLL of 4.17 with standard error of 0.01. This is better than the heavy tailed base distribution methods, but not as good as the TTF methods. In another setting - $d=5$, $\nu=0.5$ - we find similar results. The combined method gets a scaled NLL of 3.40 with standard error of 0.02. 
Again this is better than the heavy tailed base distribution methods, but not as good as the TTF methods (see Table 7 in the appendices). However the differences between the methods are smaller now. Further experiments would be needed to explore why the combined method has worse performance. --- Rebuttal Comment 1.1: Comment: Thanks for writing the rebuttal. Flow Matching: You wrote, "In a successful generative model ....... So if there are extreme values in the training data, the neural network....." Please correct me if I am wrong here, your argument that this issue arises cos of the Lipschitz-constrained properties in the flow models, rather than any neural networks themselves. When learning the vector field, in my opinion, there are no constraints on the neural networks. The transformation map is realised by solving the ODE, which begs the question if there exists no vector field for which the ODE solution can exactly map between a heavy tail distribution and a Gaussian (or some other source). Universality: Your point elaborated on what I meant. As you put it, "We expect improved tail behaviour modelling means that it's easy to get good approximations from lower capacity.", this is empirical and some theory based on it would have been more valuable, instead of showing that the existing Universality arguments are still applicable. Indeed, most flow-based papers only report the likelihood (or maybe FID when images look prettier), but that does not mean we should rely on one evaluation metric (although we can have a primary metric, such as likelihood here). But I understand that the rebuttal period is usually hectic and it's not always possible to run all the experiments suggested by all the reviewers. Thanks for analysing the T-distribution combination idea. Indeed, the worse results were unexpected; I am curious as to why this has happened, but I won't push this point further as the combination is not what the paper discusses.
--- Reply to Comment 1.1.1: Comment: Thanks for your comments, see below for our response. As before, we invite you to increase your score if you think we have addressed your points sufficiently. ## Flow Matching You write: "your argument that this issue arises cos of the Lipschitz-constrained properties in the flow models... The transformation map is realised by solving the ODE, which begs the question if there exists no vector field for which the ODE solution can exactly map between a heavy tail distribution and a Gaussian (or some other source)" This is indeed a crucial theoretical issue: under what conditions (e.g. Lipschitz) does a vector field exist which can produce heavy tails under flow matching or similar methods? However our response was about another issue: the difficulty of learning a vector field even when it does exist. Consider an extreme data point $x^*$ e.g. in scalar data $x^*=1000$. Consider a generative model based on a vector field $u(x,t)$. That is, we solve an ODE $x'_t = u(x_t, t)$ over $t \in [0,1]$ where $x_0$ is sampled from $N(0,1)$. Suppose this vector field can generate $x^*$. Then for $t \approx 1$ the resulting ODE trajectory will have $x_t \approx x^*$. When solving the ODE we must calculate $u(x_t, t)$. So we must evaluate the neural network $u$ for extreme input $x_t$. In the paper we argue that training neural networks with extreme inputs is difficult. This motivates using a transformation - such as the one we propose - to transform the training data so that it no longer has heavy tails. ## Universality You ask about theoretical results on the benefits of more flexible tail modelling under finite capacity. We'd argue this is available from existing results in the literature and our paper.
Here is a summary, which we'll put in Section 3.5 (Universality) of our paper: "It's been proved that some NFs have a universality property: 'the flow can learn any target density to any required precision given sufficient capacity and data' (Kobyzev et al., 2020). In Appendix D we show that many NF universality results are preserved when the TTF transformation is added as a final layer. As we've already seen, the situation under bounded capacity is different. Standard NFs cannot produce heavy tailed distributions (Jaini et al 2020, reviewed as Theorem 1.2 above). However adding our transformation does permit these (see Section 3.2). So theoretically our method improves the set of distributions which can be modeled under bounded capacity without sacrificing expressiveness in the limit of infinite capacity. The next section shows this is reflected by improved empirical performance modelling heavy tailed data." Your questions have been very helpful in developing this summary. The universality material is one of the most recent additions to the paper, and it's been useful to improve how we present and interpret it.
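As a brief aside for readers following this thread: the light-tail preservation fact underlying the whole discussion admits a short sketch. This is our own illustration of the standard argument behind the Jaini et al. result cited above, not text from the paper.

```latex
% If T is L-Lipschitz and Z ~ N(0,1), then |T(z)| <= |T(0)| + L|z|,
% so for any t > |T(0)|:
P\big(|T(Z)| > t\big)
  \le P\!\left(|Z| > \frac{t - |T(0)|}{L}\right)
  \le 2\exp\!\left(-\frac{\big(t - |T(0)|\big)^2}{2L^2}\right).
```

So a Lipschitz map of a Gaussian base keeps sub-Gaussian tails and can never land in the Fréchet (power-law) domain of attraction, which is why a final layer such as TTF must be non-Lipschitz.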
Summary: Normalizing Flows have been shown to be difficult to make work when the data/density to be represented has heavy tails, something common to see to at least some degree in tasks such as density estimation and Variational inference. Neural networks are known to have difficulty in converging during training where heavy tails are involved. Also some recent work has shown that existing standard NF algorithms with light tailed Gaussians as base distribution do not work well, which serves as a motivation for this work. Some recent work has suggested using heavy tailed distributions as base distributions and then passing them through a Lipschitz transformation; this work suggests doing it the other way, using a standard light tailed density as the base distribution while making the transformation non-Lipschitz so that the resulting distribution becomes heavy tailed. The transformation function to induce heavy tails has to be both invertible and differentiable for this to work, for which the authors employ the ecdf function. The experimental results show some improvements over the other algorithms based on the other approach on real world datasets but marginal improvements on synthetic datasets. Claims And Evidence: I think the paper has done a good job at supporting their claims with experiments. I particularly liked how sections in Appendix B3 and D are provided by the authors to elaborate and explain the claims which they used as motivation in the paper. Section D supports the previous claim made on universality by the work of Kobyzev et al. The appendix is so well done and covers the practical and finer optimization details quite adequately, which practitioners especially need, since it is common to encounter numerical overflow and underflow issues when working with heavy-tailed data and datasets containing outliers.
Methods And Evaluation Criteria: The authors used both real and simulated datasets in the experiments and had a proper cross validation set up to compare with the baselines. Theoretical Claims: I found the theoretical claims well done and supported by arguments using theory from previous recent work, especially that of Jaini et al. (2020). Experimental Designs Or Analyses: I found the analysis adequate and supporting the claims made by the paper. I have also felt in my first hand experience that working with heavy tailed base distributions can be more tricky in VI and NF, especially in higher dimensions and with challenging posteriors. I have also felt that fixing the tail parameters results in more stable optimization than optimizing them along with other parameters, and this work also makes similar conclusions based on their experimental results. Supplementary Material: I read many parts of the appendix, but did not go through all the sections of the appendix. Relation To Broader Scientific Literature: The work uses many recently proposed algorithms as baselines, which means that there is still interest in the community for this nature and direction of work. Essential References Not Discussed: I am moderately familiar with the literature on flow methods and extreme value theory, and I think the authors cover all the most noteworthy papers that I know of. Other Strengths And Weaknesses: 1. I think the paper is a really nice read: it introduces researchers who work with standard normalizing flows or standard VI using light tailed Gaussians to a lot of interesting concepts and methods in EVT, normalizing flows and tail-index estimation, like modelling the tails and estimating the shape parameter of the GPD. 2. The appendix is well written and the paper makes an effort to elaborate whenever it touches a claim in the appendix. 3.
The empirical results are mostly supportive; it is good to see that in some cases where the target is quite difficult, standard methods with heavy tailed base distributions can fail to converge, as shown in Table 1. 4. The paper lists its limitations in a quite transparent and matter-of-fact way, something which is missing in a lot of papers. 5. For a user who is really concerned about accuracy, and maybe dealing with smaller datasets, the results are encouraging and Table 3 clearly shows that. Weaknesses 1. I would have hoped for a bit better results, meaning a higher degree of improvement in Table 2. Other Comments Or Suggestions: 1. I have personally seen left and right tails in the literature in place of lower and upper; maybe you can mention that somewhere to make it clearer. 2. Can you find some real world dataset where you can show a bigger improvement in results in Table 2? I would still like to accept this work as it is so well written and explained. Questions For Authors: Listed above already. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the positive review of our paper. We appreciate you taking the time to read it! ## Left/right tails This is a great suggestion. We will update the paper to include terminology around left/right tails. ## Additional experiments Unfortunately each experiment on real world data took several weeks to set up and run, including the time to find suitable datasets and perform preliminary data cleaning. So we aren't able to add any more datasets at this stage in the review process. We argue that our method does show reasonable improvement for real datasets. We summarise the argument below. Table 2 shows our method provides improvements for all experimental datasets. In line with our synthetic experiments these gains are modest for lower dimensional datasets (Insurance, Fama 5), but larger for higher dimensional datasets (CLIMDEX and S&P returns). In particular for the highest dimensional dataset, CLIMDEX, we reach a NLL value of -2214. This is a significant improvement on competing methods, which provided values of -2113 (gTAF), -2121 (mTAF), -2118 (COMET), and had only a modest advantage on the value of -2102 from using no tail method.
LightGTS: A Lightweight General Time Series Forecasting Model
Accept (poster)
Summary: This paper presents LightGTS, a lightweight base model for time-series forecasting, along with Periodical Patch Embedding to adapt to intrinsic periodic differences across datasets and Periodical Parallel Decoding to prevent error accumulation. LightGTS achieves a 10 to 100 times size reduction, using only 4 million parameters. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: should add more similar works into the comparison Supplementary Material: yes Relation To Broader Scientific Literature: the main contribution of this paper is very similar to that of prior work which also puts forward an adaptive patching technique, such as Tiny Time Mixers (TTMs) Essential References Not Discussed: n/a Other Strengths And Weaknesses: pros: 1. The proposed techniques are plug-and-play and can be combined with many time series forecasting models. 2. The model achieves very good performance despite its extremely lightweight design. cons: 1. Although the idea of training a lightweight temporal foundation model is quite interesting, there have been some cutting-edge works that made similar attempts, such as Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series. Moreover, the main contribution of this paper is very similar to that of that article, which also puts forward an adaptive patching technique. I think the authors should have more discussion of the relevant literature and add corresponding experimental comparisons. 2. The article's expression and description need some revision. Last word of the first sentence of the abstract should perhaps be 'pretraining' instead of 'pertaining'. It might be helpful to include a diagram illustrating the differences between the Periodical Patching and Periodical Parallel Decoding techniques and traditional practices. This will better help readers understand.
Other Comments Or Suggestions: n/a Questions For Authors: see concerns above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: Discussions and experimental comparisons with Tiny Time Mixers (TTMs).**

A1: Thank you for mentioning TTMs, which are also lightweight TSFMs like LightGTS. We will differentiate LightGTS from TTMs in the following two aspects:

- **Flexibility**: TTMs have fixed input and output formats, which imposes limitations in downstream applications. In contrast, LightGTS supports flexible input and output configurations.
- **Adaptive Patching**: While TTMs employ adaptive patching through CV-inspired patch merging techniques to capture multi-scale features, they remain constrained by predefined patch sizes. LightGTS, however, leverages periodical patching that adaptively segments time series based on the intrinsic periods. This approach enables LightGTS to achieve unified modeling across datasets with varying scales.

Based on your suggestions, we add a comparison experiment with TTMs. The following table shows the average MSE in the zero-shot setting; LightGTS outperforms TTMs on most datasets.

| Model | ETTh1 | ETTh2 | ETTm1 | ETTm2 | weather | electricity |
| --- | --- | --- | --- | --- | --- | --- |
| LightGTS-mini | **0.388** | 0.348 | **0.327** | **0.247** | **0.208** | 0.213 |
| TTMs-Advanced | 0.4 | **0.333** | 0.362 | 0.252 | 0.231 | **0.192** |

**Q2: The article's expression and description need some revision.**

A2: Thank you for your detailed suggestions; we will correct the errors you mentioned in the revised paper.

**Q3: Need a diagram illustrating the differences between the Periodical Patching and Periodical Parallel Decoding techniques and traditional practices.**

A3:
- As shown in Figure 2 (a), the key difference of Periodical Patching from other patching techniques is its ability to adaptively perform consistent modeling on multivariate time series datasets with varying scales.
- The differences between Periodical Parallel Decoding and other decoding techniques are shown in the table below.
| Decoding techniques | Flexible Input/Output | High Inference Efficiency | Pre-train/Downstream Consistency |
| --- | --- | --- | --- |
| Auto-regressive Decoding | √ | X | √ |
| MLP Decoding | X | √ | √ |
| Masked Auto-encoders | √ | √ | X |
| Periodical Parallel Decoding | √ | √ | √ |
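To make the periodical patching idea above concrete, here is a minimal sketch (our illustration, not the LightGTS implementation; the helper name is ours): a series is segmented into non-overlapping patches whose length equals the detected intrinsic period, so one token spans exactly one cycle regardless of the dataset's sampling scale.

```python
import numpy as np

def periodical_patch(x, period):
    # Segment a 1-D series into non-overlapping patches of length `period`,
    # dropping any ragged tail that does not fill a whole cycle.
    n = (len(x) // period) * period
    return x[:n].reshape(-1, period)

# Two days of hourly data with a daily cycle of 24 steps:
x = np.sin(2 * np.pi * np.arange(48) / 24)
patches = periodical_patch(x, period=24)
print(patches.shape)  # → (2, 24)
```

With an hourly dataset the period (and hence patch size) would be 24, while a daily dataset with a weekly cycle would use 7, which is how the same model can treat differently sampled datasets consistently.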
Summary: The paper introduces LightGTS, a lightweight time series forecasting model leveraging consistent periodical modeling. It proposes periodical tokenization, which adaptively splits time series into patches aligned with intrinsic periods to handle varying scales, and periodical parallel decoding, which leverages historical tokens to improve forecasting while avoiding autoregressive errors. Experiments show that LightGTS achieves SOTA performance on 9 benchmarks in zero/full-shot settings with <5M parameters, significantly improving efficiency over existing foundation models. Claims And Evidence: The claims regarding improved generalization and handling diverse scales through periodic tokenization are supported by case studies (e.g., Figure 2b) and multi-dataset benchmark experiments. However, the assumption of 'consistent periodic patterns' relies on accurately detecting intrinsic periods, yet the validation of this detection remains unclear. While the authors provide a comparison in Appendix Figure 6, the distinction between LightGTS and LightGTS-FFT is not well explained. Additionally, results indicate that the proposed method underperforms on strongly periodic data (e.g., Electricity) but excels when periodicity is less pronounced. This contradicts the claims and explanations in Appendix B.1. Methods And Evaluation Criteria: Periodical tokenization aligns with time series' cyclical inductive bias, addressing fixed tokenization's limitations. Parallel decoding avoids error accumulation, a known issue in autoregressive models. Evaluation on 9 benchmarks is reasonable. Theoretical Claims: The author provides a brief theoretical analysis in Sec. 3.2, focusing on explaining the resize transformation of the projection layer weights. Experimental Designs Or Analyses: The experiments span diverse datasets with a well-reasoned design, including zero/full-shot settings and hyper-parameter studies.
However, the zero-shot evaluation requires further validation for generalizability to unseen scales and periods, such as an ablation study on cycle-length estimation. Supplementary Material: Yes. All. Relation To Broader Scientific Literature: LightGTS builds on time series foundation models by addressing fixed tokenization limitations. It aligns with recent trends in adaptive tokenization but uniquely integrates periodicity. The decoding approach relates to non-autoregressive methods in NLP/TSF, though periodicity integration is novel. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: Novel integration of periodicity into tokenization/decoding, strong empirical results, and practical efficiency. Weaknesses: Cycle-length detection ablation is unclear; limited discussion of failure modes (e.g., non-periodic series). Other Comments Or Suggestions: Clarify the detection and effectiveness of intrinsic periods across datasets. ## update after rebuttal The concerns raised have been addressed. I will maintain the original acceptance score. Questions For Authors: Cycle detection: How are intrinsic periods determined for datasets with unknown/noisy periodicities? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: The zero-shot evaluation requires further validation for generalizability to unseen scales and periods, such as an ablation study on cycle-length estimation.**

A1: To further evaluate the generalizability of LightGTS, we add experiments on Chronos Benchmark II, which contains 27 evaluation datasets with different scales and periods. As shown in the table below, LightGTS shows outstanding performance, second only to Moirai-large, whose training corpus overlap is much higher. In addition, LightGTS also exhibits superior efficiency compared to other foundation models.

| Model | Moirai-large | Chronos-large | TimesFM | LightGTS-mini (ours) | Seasonal Naive |
| --- | --- | --- | --- | --- | --- |
| Average Relative Error | **0.791** | 0.824 | 0.882 | 0.819 | 1.000 |
| Median inference time (s) | 14.254 | 26.827 | 0.396 | 1.390 | **0.096** |
| Training corpus overlap (%) | 81.5% | 0% | 14.8% | 31% | 0% |

**Q2: Cycle-length detection ablation is unclear; limited discussion of failure modes (e.g., non-periodic series).**

A2: Actually, for datasets with little periodicity, LightGTS's trend predictions are insensitive to the selection of patch size. Take the exchange dataset, which exhibits little periodicity, as an example: the performance of LightGTS in the zero-shot setting remains stable when the patch size is set to 1, 4, 6, 8, 16 and 24, respectively. In the paper, we use the FFT-extracted period: 6.

| Patch size | 1 | 4 | 6 (fft-extracted) | 8 | 16 | 24 |
| --- | --- | --- | --- | --- | --- | --- |
| Exchange-MSE | 0.347 | 0.348 | 0.347 | 0.353 | 0.351 | 0.354 |

**Q3: Clarify the detection and effectiveness of intrinsic periods across datasets. The periods determined for datasets with unknown/noisy periodicities?**

A3: For datasets with prior knowledge of the period, we use the known period. For datasets where the period is not known, we use the Fast Fourier Transform (FFT) to extract the period, which we have discussed in Appendix B.1.
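A minimal sketch of FFT-based period extraction as described above (our own illustration; the extraction actually used may differ in details such as how multiple peaks or ties are handled):

```python
import numpy as np

def fft_period(x):
    # Estimate the dominant period of a 1-D series: take the FFT of the
    # de-meaned series and convert the strongest frequency into a period.
    spec = np.abs(np.fft.rfft(x - x.mean()))
    spec[0] = 0.0  # ignore any residual DC component
    k = int(np.argmax(spec))  # dominant frequency index (cycles per series)
    return len(x) // k

# A month of hourly data with a daily cycle: the estimate should be 24.
t = np.arange(24 * 30)
x = np.sin(2 * np.pi * t / 24)
print(fft_period(x))  # → 24
```

On data with two strong cycles (such as the 24-hour and 168-hour cycles of the traffic and electricity datasets) one would additionally pick among the top peaks, e.g. preferring the smaller period as done above.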
In Table 8, we give the cycle lengths of the evaluation datasets. For the Traffic and Electricity datasets, which have two cycle lengths, 24 (one day) and 168 (one week), we choose the smaller cycle length 24 as the patch size. For the Exchange dataset, whose periodicity is insignificant, we use FFT to extract a cycle length of 6.

In Table 4 of the paper, we perform an ablation study of periodical patching on seven datasets. After replacing periodical patching with fixed patching, the model's performance decreases significantly, validating the effectiveness of periodical patching.

---

Rebuttal Comment 1.1: Comment: The concerns raised have been addressed. I will maintain the original acceptance score.
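The FFT-based period extraction mentioned in A3 is not spelled out in this thread; the following is a minimal, stdlib-only sketch of the idea (a plain DFT scan rather than an actual fast transform, for brevity), with `fft_period` being a hypothetical helper name, not the authors' code:

```python
import cmath
import math

def fft_period(series):
    # Estimate the dominant cycle length of a 1-D series from its
    # discrete Fourier spectrum: pick the non-DC frequency bin with
    # the largest amplitude and invert it to a period.  A direct DFT
    # scan is used here for simplicity; numpy.fft would be the fast way.
    n = len(series)
    mean = sum(series) / n
    x = [v - mean for v in series]            # remove the DC offset
    best_k, best_amp = 1, -1.0
    for k in range(1, n // 2 + 1):            # skip the zero-frequency bin
        coef = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        if abs(coef) > best_amp:
            best_k, best_amp = k, abs(coef)
    return round(n / best_k)

# A series whose dominant period is 24, plus a weaker period-7 component:
series = [math.cos(2 * math.pi * t / 24) + 0.2 * math.cos(2 * math.pi * t / 7)
          for t in range(240)]
print(fft_period(series))   # -> 24
```

On this toy signal the bin at frequency 10/240 dominates, so the recovered cycle length is 240/10 = 24, matching the daily period used as the patch size in the rebuttal.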
Summary: In this paper, the authors propose a lightweight pretrained TSF model with a new tokenization technique. With the proposed periodical tokenization method, the authors claim that one can naturally deal with time series of different granularity and periodicity. In addition, it significantly reduces the number of model parameters for pretraining while achieving SOTA performance on one benchmark dataset.

Claims And Evidence: No. The authors claim that they propose a general time series forecasting model. However, the proposed periodical patching and periodical parallel decoding, the core components of the proposed LightGTS, are only designed for periodic time series data. In addition, they only evaluate on datasets with strong periodicity (the long-term TSF benchmark). Therefore, it is not clear how the proposed method performs on non-periodic time series, which are prevalent in the real world as well. Therefore, in my opinion, it is not appropriate to claim that this is a general TSF model.

Methods And Evaluation Criteria: They only evaluated datasets with strong periodicity (the long-term TSF benchmark). Larger benchmark datasets should be used for evaluating the pretrained TSF model, e.g., GIFT-Eval.

Theoretical Claims: No, there are no theoretical claims in the paper.

Experimental Designs Or Analyses: The choice of evaluation datasets should be improved. The experimental settings are incomplete.

Supplementary Material: No

Relation To Broader Scientific Literature: The paper proposes a new tokenization method for time series datasets utilizing periodical information. The idea seems to be novel and technically sound.

Essential References Not Discussed: Lightweight time series models [1] [2] or even non-parametric models [3] that use periodical information to improve forecasting should be discussed.

[1] Lin, S., Lin, W., Wu, W., Chen, H., & Yang, J. (2024). SparseTSF: Modeling long-term time series forecasting with 1k parameters. arXiv preprint arXiv:2405.00946.
[2] Lin, S., Lin, W., Hu, X., Wu, W., Mo, R., & Zhong, H. (2024). CycleNet: Enhancing time series forecasting through modeling periodic patterns. Advances in Neural Information Processing Systems, 37, 106315-106345.

[3] He, X., Li, Y., Tan, J., Wu, B., & Li, F. (2023). OneShotSTL: One-Shot Seasonal-Trend Decomposition For Online Time Series Anomaly Detection And Forecasting. VLDB Endowment, 16, 06, 1399-1412.

Other Strengths And Weaknesses: Strengths: 1. The proposed periodical patching and projection layer look novel and interesting. They seem to be a good way to handle datasets with different granularity and periodicity.

Weaknesses: 1. How the method would handle periodic and non-periodic datasets together is not clear. 2. Again, the evaluation datasets are not enough. 3. Specific experimental settings should be given. For instance, which period is used for training and evaluation? How long is the context length? 4. Some related works are missing.

Other Comments Or Suggestions: Page 1, line 12, pertaining -> pretraining

Questions For Authors: 1. How would the proposed model perform trend forecasting? Does it go back to point patches? Then the model size would be significantly affected if there are many non-periodic time series in the pretraining datasets, right? Empirically, how does the proposed model perform on such datasets? 2. How would the proposed method deal with different cycle lengths? For instance, there are daily and weekly periods in traffic data. How did the authors pick them?

Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: Larger benchmark datasets should be used for evaluating the pretrained TSF model.**

A1: We use Chronos Benchmark II to further evaluate the effectiveness of LightGTS. As shown in the table below, LightGTS shows outstanding performance, second only to Moirai-large, whose training corpus overlap is much higher. In addition, LightGTS also exhibits superior efficiency compared to other foundation models.

| Model | Moirai-large | Chronos-large | TimesFM | LightGTS_mini(ours) | Seasonal Naive |
| --- | --- | --- | --- | --- | --- |
| Average Relative Error | **0.791** | 0.824 | 0.882 | 0.819 | 1.000 |
| Median inference time (s) | 14.254 | 26.827 | 0.396 | 1.390 | **0.096** |
| Training corpus overlap (%) | 81.5% | 0% | 14.8% | 31% | 0% |

**Q2: Light-weight time series models [1] [2] or even non-parametric models [3] that use periodical information to improve forecasting should be discussed.**

A2:
- CycleNet, SparseTSF, and OneShotSTL all propose explicit periodicity decomposition methods to enhance the accuracy and lightweight nature of time series forecasting models. However, LightGTS not only explicitly models periodicity through periodical patching but also adapts to multivariate time series pretraining via the flex resizing technique. This allows the model to leverage the scale-invariant periodicity inductive bias in multivariate time series, achieving strong zero-shot performance with fewer parameters.
- Experiment (MSE):

| Datasets | LightGTS_mini | SparseTSF | CycleNet | OneShotSTL |
| --- | --- | --- | --- | --- |
| ETTm2 | **0.239** | 0.251 | 0.244 | 0.255 |
| Electricity | **0.156** | 0.162 | 0.159 | 0.167 |
| Traffic | **0.393** | 0.404 | 0.397 | 0.403 |

**Q3: Specific experimental settings should be given. For instance, which period is used for training and evaluation? How long is the context length?**

A3:
- For datasets with prior knowledge of the period, we use the known period.
For datasets where the period is not known, we use the Fast Fourier Transform (FFT) to extract the period.
- We use a variable context length, keeping the number of tokens at 10. For example, for a dataset with a period of 24, we set the context length to 240.

**Q4: How would the proposed model perform trend forecasting? Does it go back to point patch? Then the model size would be significantly affected, if there are many non-periodic time series in the pretraining datasets, right? Empirically, how does the proposed model perform on such datasets?**

A4: Actually, for datasets with little periodicity, the prediction of trends in LightGTS is insensitive to the selection of patch size, and the model size is indeed affected by the selection of patch size. Take the Exchange dataset, which exhibits little periodicity, as an example: the performance of LightGTS in the zero-shot setting remains stable when the patch size is set to 1, 4, 6, 8, 16, or 24. In the paper, we use the FFT-extracted period of 6.

| Patch size | 1 (**point patch**) | 4 | 6 (fft-extracted) | 8 | 16 | 24 |
| --- | --- | --- | --- | --- | --- | --- |
| Exchange-MSE | 0.347 | 0.348 | 0.347 | 0.353 | 0.351 | 0.354 |

**Q5: How to deal with different cycle lengths.**

A5: For the case where a time series is known to have different cycle lengths, we pick the smallest cycle as the patch size. For example, the Traffic and Electricity datasets both have two cycle lengths, 24 (one day) and 168 (one week), and we choose 24 as the patch size. For the case where the cycle length is not well known, FFT can be used for cycle-length extraction, which we discuss in Appendix B.1.

---

Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. Many of my concerns have been addressed. But my major concern remains: the core components of the proposed LightGTS are only designed for periodic time series data, so why should it work for datasets without periodicity? The authors should explain this observation.
The authors provide results only on one dataset without periodicity (Exchange), with prediction lengths from 96 to 720 (more than 2 years); the authors should evaluate on more datasets without periodicity in a more realistic scenario.

---

Reply to Comment 1.1.1: Comment: Thank you for your thoughtful questions and valuable feedback. We appreciate the opportunity to clarify and refine our responses.
- Our design of periodical patching enables the model to naturally adapt to time series with different granularity and periodicity. However, this does not imply that we focus only on periodic information. By computing the attention between patch tokens, the model also learns the overall trend of the time series, which is key to making LightGTS work for datasets without periodicity. Moreover, our non-autoregressive decoder avoids cumulative errors, thus enabling more accurate prediction of trends.
- Experiments: Actually, the Chronos Benchmark II presented in Q1 includes 27 datasets, and 10 of them are datasets without periodicity. Based on your suggestion, we supplement the evaluation on the following three datasets without periodicity at four prediction lengths (24, 36, 48, and 60) in the zero-shot and full-shot settings. Our experimental setup follows TFB [1], and since these datasets are of daily granularity, predicting values up to two months ahead is consistent with practical application scenarios.
zero-shot setting:

| Models | LightGTS | TimesFM | Timer | MOIRAI |
| --- | --- | --- | --- | --- |
| Metrics | MSE / MAE | MSE / MAE | MSE / MAE | MSE / MAE |
| nn5 | **0.768/ 0.599** | 0.780/ 0.605 | 1.260/ 0.878 | 0.786/ 0.623 |
| wike2000 | 597.043/ 1.358 | **475.582/ 1.077** | 605.070/ 1.428 | 512.875/ 1.209 |
| NASDAQ | **0.885/ 0.681** | 1.034/ 0.687 | 0.890/ 0.688 | 1.045/ 0.701 |

full-shot setting:

| Models | LightGTS | PatchTST | iTransformer | FITS | TimeMixer | Pathformer | PDF |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Metrics | MSE / MAE | MSE / MAE | MSE / MAE | MSE / MAE | MSE / MAE | MSE / MAE | MSE / MAE |
| nn5 | **0.648/ 0.543** | 0.692/ 0.594 | 0.660/ 0.550 | 0.811/ 0.653 | 0.656/ 0.556 | 0.698/ 0.582 | 0.657/ 0.554 |
| wike2000 | **511.017/ 1.128** | 513.966/ 1.140 | 545.647/ 1.189 | 732.541/ 1.486 | 518.679/ 1.251 | 788.620/ 1.527 | 543.558/ 1.170 |
| NASDAQ | **0.882/ 0.677** | 0.972/ 0.721 | 0.944/ 0.683 | 1.043/ 0.774 | 1.005/ 0.731 | 1.014/ 0.726 | 1.021/ 0.724 |

[1] Qiu, Xiangfei, et al. "TFB: Towards comprehensive and fair benchmarking of time series forecasting methods." arXiv preprint arXiv:2403.20150 (2024).
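The periodical patching scheme discussed in this thread (patch size set to the cycle length, with a fixed budget of 10 tokens determining the context length, per A3) can be sketched as follows; `periodical_patch` is a hypothetical helper name for illustration, not the authors' code:

```python
def periodical_patch(series, period, num_tokens=10):
    # Segment the most recent `num_tokens * period` observations into
    # `num_tokens` non-overlapping patches of length `period`, so that
    # each token covers exactly one cycle of the series.
    needed = num_tokens * period
    if len(series) < needed:
        raise ValueError("series shorter than the required context length")
    context = series[-needed:]
    return [context[i * period:(i + 1) * period] for i in range(num_tokens)]

# For an hourly series with a daily cycle (period 24), the context
# length becomes 10 * 24 = 240, as described in the rebuttal above.
patches = periodical_patch(list(range(300)), period=24)
print(len(patches), len(patches[0]))   # -> 10 24
```

Each patch then spans one full cycle, which is what lets the same token budget cover series of different sampling frequencies.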
Summary: The paper proposes a general purpose pretrained time series forecasting model called LightGTS. The model is an encoder-decoder transformer operating on patches of time series observations. However, unlike existing works which operate on fixed patch lengths, LightGTS uses a dynamically adjusted patch length which is aligned to the seasonality of the time series. The authors argue that this enables better modeling of time series data compared to an arbitrarily selected fixed patch length. Empirical results have been reported on 9 time series datasets.

Claims And Evidence: The central claim of the paper is that LightGTS is a general purpose pretrained time series model which is more accurate and lightweight compared to existing pretrained models. The authors argue that "existing TSFMs largely depends on massive pre-training data and large model parameters".

**Existing models depend on large scale training and large number of parameters**: Although this is true for most existing models, in the end, LightGTS is also pretrained on a large-scale multi-source corpus. However, the claim about a smaller number of parameters is correct.

**LightGTS is more accurate compared to existing TSFMs**: This claim is not supported by the evaluation conducted in the paper. In particular, the evaluation is too limited to draw meaningful conclusions. The experiment design also suffers from serious flaws that have previously been discussed in the community. Note that these flaws are not unique to LightGTS and have also existed in previous works. However, with the rapid progress in the field and the existence of better benchmarks, it is imperative that newer works do a better job at evaluation, especially when the claim involves a general purpose pretrained model. Please see **Methods And Evaluation Criteria** for details.
**Fixed patch embedding does not transfer well to new seasonalities**: The experiment in Fig 2 demonstrates this to a certain degree and I intuitively agree with this assertion. However, more evidence is needed showing that this is an issue in existing patch-based pretrained models such as Chronos-Bolt and TimesFM. I also have questions about Fig 2. Please refer to **Questions for Authors**. Methods And Evaluation Criteria: The authors propose a reasonable architecture for a general purpose time series model. I particularly find the contribution of periodical patching quite interesting. I would encourage the authors to set the stage better for their approach, e.g., by discussing Moirai better. Moirai uses multiple patch sizes and a patch selection mechanism with a similar motivation to LightGTS. Periodical patching is a natural and elegant way to address the limitation of fixed patch sizes. The breadth of experiments is severely limited for a general purpose model. Only 9 datasets have been studied, which is not enough to justify the title of a general purpose model. Moreover, 5 of these datasets belong to the same domain with 4 being essentially the same dataset (ETTh1, ETTh2, ETTm1, ETTm2). This infamous "ETT long term forecasting benchmark" is often criticized for its flaws such as limited domain coverage and the practice of forecasting at unreasonable horizons (e.g., 720 days into the future for exchange rate or oil temperature of a transformer at a specific hour months into the future). Every new model somehow beats this benchmark; however, there is still barely any absolute progress, only an illusion of it. Please refer to the talk (and paper) from Christoph Bergmeir [1, 2] where he discusses the limitation of this benchmark and current evaluation practices. A _very recent_ position paper [3] also conducted a comprehensive evaluation of models on this benchmark showing that there's no obvious winner. 
One (not so difficult) way to improve the quality of evaluation is to include results on better benchmarks that have been proposed recently in the context of pretrained time series models.
- Chronos Benchmark II: This benchmark includes 27 datasets (42, if you include Benchmark I) providing comprehensive coverage over domains, frequencies and other properties. Please refer to https://github.com/autogluon/fev for details on how to use the benchmark.
- GIFT-Eval: This benchmark includes 90+ tasks across multiple datasets and domains. It also provides a leaderboard of existing pretrained models. Please refer to https://github.com/SalesforceAIResearch/gift-eval.

[1] https://neurips.cc/virtual/2024/workshop/84712#collapse108471

[2] Hewamalage, Hansika, Klaus Ackermann, and Christoph Bergmeir. "Forecast evaluation for data scientists: common pitfalls and best practices." Data Mining and Knowledge Discovery 37.2 (2023): 788-832.

[3] Brigato, Lorenzo, et al. "Position: There are no Champions in Long-Term Time Series Forecasting." arXiv preprint arXiv:2502.14045 (2025).

Theoretical Claims: The paper does not make theoretical contributions. Also, **Theorem 3.1** is not a theorem per se but more of a remark.

Experimental Designs Or Analyses: Please see **Methods and Evaluation Criteria**.

Supplementary Material: Yes, I referred to parts of the supplementary material to check detailed results.

Relation To Broader Scientific Literature: The paper makes a contribution to the rapidly evolving area of pretrained time series forecasting models. The idea of periodical patching is promising and addresses the issue of fixed patch sizes in previous models. The evaluation is limited and can be improved by the inclusion of better benchmarks.

Essential References Not Discussed: Some works such as TinyTimeMixers have made similar claims as LightGTS and should be discussed as part of related work.
Other Strengths And Weaknesses: LightGTS is only a point forecaster, whereas most existing pretrained models (Chronos, Moirai, TimesFM, Chronos-Bolt) support probabilistic forecasting. Uncertainty quantification is a critical feature for downstream decision making based on a forecast.

Other Comments Or Suggestions: The authors use new terms for known quantities in time series forecasting. For example, _scale_ is used to refer to _frequency_ and _cycle length_ is used to refer to _seasonality_. I would encourage the authors to use existing terms to better align with the literature.

Questions For Authors: - For the experiment in Fig 2, did the authors only train the model on the ETTh1 dataset? If yes, I find it very surprising that it transfers to very different types of seasonal patterns such as those in the solar dataset. Is the claim here that pretraining on only a single seasonality could somehow deliver a pretrained model? Out of curiosity, how does the model trained on only a single seasonality perform quantitatively? Why does the pretraining corpus contain diverse seasonalities and how were these selected?

Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1: Improve the quality of evaluation**

A1: We use the Chronos Benchmark II you mentioned to further evaluate the generalizability of LightGTS. As shown in the table below, LightGTS shows outstanding performance, second only to Moirai-large, whose training corpus overlap is much higher. In addition, LightGTS also exhibits superior efficiency compared to other foundation models.

| Model | Moirai-large | Chronos-large | TimesFM | LightGTS_mini(ours) | Seasonal Naive |
| --- | --- | --- | --- | --- | --- |
| Average Relative Error | **0.791** | 0.824 | 0.882 | 0.819 | 1.000 |
| Median inference time (s) | 14.254 | 26.827 | 0.396 | 1.390 | **0.096** |
| Training corpus overlap (%) | 81.5% | 0% | 14.8% | 31% | 0% |

**Q2: Periodical Patching vs. Multi-Patching**

A2: While MOIRAI's predefined patch sizes based on sampling frequency offer a partial solution for consistent modeling across different frequencies, they are still fixed and lack flexibility in certain scenarios. In contrast, periodical patching adaptively divides patches according to scale-invariant periodicity, enabling more flexible and unified modeling for datasets with varying frequencies.

**Q3: Some works such as TinyTimeMixers have made similar claims as LightGTS and should be discussed as part of related work.**

A3: Thank you for mentioning TTMs, which are also lightweight TSFMs like LightGTS. We differentiate LightGTS from TTMs in the following two aspects:
- **Flexibility**: TTMs have fixed input and output formats, which imposes limitations in downstream applications. In contrast, LightGTS supports flexible input and output configurations.
- **Adaptive Patching**: While TTMs employ adaptive patching through CV-inspired patch merging techniques to capture multi-scale features, they remain constrained by predefined patch sizes. LightGTS, however, leverages periodical patching that adaptively segments time series based on their intrinsic periods.
This approach enables LightGTS to achieve unified modeling across datasets with varying scales. We will update this in the related work of the revised paper.

**Q4: LightGTS is only a point forecaster, whereas most existing pretrained models (Chronos, Moirai, TimesFM, Chronos-Bolt) support probabilistic forecasting. Uncertainty quantification is a critical feature for downstream decision making based on a forecast.**

A4: We are developing an enhanced version of LightGTS with a quantile regression pretraining task to support probabilistic forecasting (**uncertainty quantification**), which will be released in future updates.

**Q5: Replace non-standard terms (e.g., "scale" for frequency, "cycle length" for seasonality) with established conventions.**

A5: We sincerely appreciate your feedback regarding terminology alignment. In the revised manuscript, we will systematically replace non-standard terms with established terminology from the time series literature.

**Q6: Was the model only trained on the ETTh1 dataset? How does the model perform quantitatively when trained on a single seasonality? Why does the pretraining corpus include diverse seasonalities, and how were they selected?**

A6:
- Yes, and we found that pre-training on a single-seasonality dataset can enable effective transfer learning.
- **Experiment:** LightGTS-single was pre-trained on ETTh1 and directly tested on downstream datasets in a zero-shot setting. As shown in the table below, LightGTS-single achieved strong transfer performance on datasets with a single seasonality (e.g., Solar), while its performance was less optimal on datasets with multiple seasonalities (e.g., Electricity). By pre-training on multi-seasonality datasets, LightGTS-mini significantly improved its performance on datasets with multiple seasonalities.
| Datasets | ETTh2 | ETTm2 | Weather | Solar | Electricity |
| --- | --- | --- | --- | --- | --- |
| Metrics | MSE/MAE | MSE/MAE | MSE/MAE | MSE/MAE | MSE/MAE |
| PatchTST | 0.351/0.395 | 0.256/0.314 | 0.224/0.261 | 0.200/0.284 | 0.171/0.270 |
| LightGTS-single | 0.357/0.381 | 0.251/0.314 | 0.232/0.274 | 0.227/0.302 | 0.246/0.358 |
| LightGTS-mini | 0.359/0.396 | 0.250/0.318 | 0.210/0.258 | 0.196/0.269 | 0.214/0.306 |

- **Pre-training corpus selection:** We did not explicitly filter datasets based on seasonality. Instead, we collected datasets from diverse domains (e.g., energy, weather, traffic), assuming that time-series data from different domains inherently contain distinct seasonal patterns. This approach ensures broader generalizability while retaining domain-specific characteristics.

---

Rebuttal Comment 1.1: Comment: Thank you! This is one of the best rebuttals I have read today: straight to the point. I truly appreciate your efforts on expanding the evaluation. As such, I am more confident updating my score to 4. Thank you also for clarifying the other points that I had raised. A small note, but this is more of a request: In the final version of the paper, could you also add evaluations on other benchmarks such as GIFT-Eval? This will have a positive cascading effect on the community and hopefully everyone will start doing better evaluations. It may also be a good idea to include more recent models such as Chronos-Bolt, TabPFN-TS and TimesFM-2.0. Note that these models fall within the concurrent work guidelines for ICML, but it may be a good idea to include them for completeness.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer 3dom, Thank you sincerely for your thoughtful feedback and constructive suggestions. We're deeply encouraged by your recognition of our rebuttal efforts and are committed to implementing your recommendations in the final version.
Specifically: - **Expanded Evaluations**: We will include additional benchmarks like GIFT-Eval to strengthen the empirical analysis, aiming to promote more comprehensive evaluations in the time-series community. - **Model Comparisons**: We will incorporate evaluations of recent models such as Chronos-Bolt, TabPFN-TS, and TimesFM-2.0 where feasible. Your insights have been invaluable in refining this work, and we fully agree that rigorous benchmarking benefits the broader research ecosystem. Best regards, Authors
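The MSE/MAE pairs quoted throughout the tables in these reviews and rebuttals are the standard pointwise error metrics; as a minimal, self-contained reminder of what is being reported (`mse_mae` is an illustrative helper, not the authors' evaluation code):

```python
def mse_mae(y_true, y_pred):
    # Mean squared error and mean absolute error between a forecast
    # and the ground truth, averaged over all forecast points.
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n
    mae = sum(abs(e) for e in errors) / n
    return mse, mae

print(mse_mae([1.0, 2.0, 3.0], [1.0, 2.5, 2.0]))   # -> (0.4166666666666667, 0.5)
```

In the multivariate setting these averages are additionally taken over all variables and test windows, which is why a single MSE/MAE pair summarizes each dataset.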
Settling the Maximin Share Fairness for Scheduling among Groups of Machines
Accept (poster)
Summary: The paper studies a variant of the fair scheduling problem. There are groups of machines and tasks which need to be scheduled on these machines. The paper focuses on the fairness objective of (group) maximin share (GMMS) and shows that a 2-approximation to GMMS can be computed in polynomial time. The paper additionally improves upon a result in the literature by showing that a 4/3-approximation to GMMS exists when the agents in each group are homogeneous. The paper as a whole is purely algorithmic and theoretical.

## update after rebuttal

Thank you for the rebuttal! I hope you take the improvement suggestions seriously!

Claims And Evidence: The paper is purely theoretical and all claims are backed up by (for the most part understandable) proofs. All claims and evidence seem to be present and non-problematic.

Methods And Evaluation Criteria: N/A due to being a purely theoretical paper.

Theoretical Claims: I checked the proofs; however, I could not fully follow the proof of Theorem 3.2. For the most part the proofs are understandable; however, I believe that the authors could have put more work into making them easier to digest. In particular, I would have been a fan of (even more) visualization in the proofs. For instance, the relegation of the illustration in Theorem 3.1 does not really help the proof (and I would have preferred it to be reversed). I think, in general, the paper would have been nicer if the authors had removed one of the proofs and instead focused more on extending/explaining the other two.

Experimental Designs Or Analyses: N/A

Supplementary Material: I looked at the example in Appendix A and confirmed that the proof of Theorem 4.4 is essentially the standard one.

Relation To Broader Scientific Literature: The paper is part of the well established literature on (indivisible) fair division. Specifically, it builds upon the work of Li et al.
(NeurIPS 2023) in studying MMS for a group scheduling problem, answers one of their open questions, and improves upon their obtained bound. On a more meta level, it contributes to the understanding of MMS for non-additive fair division instances and contributes a non-trivial non-additive model in which constant-factor MMS approximations can be achieved. The results and studied fairness notions are very "standard" for the fair division literature and are a generalization of commonly studied objectives in the multi-machine scheduling setting.

Essential References Not Discussed: I cannot think of any.

Other Strengths And Weaknesses: In general, I like this paper. The question studied is mostly natural (one might question the relevance to the ML literature; however, ICML has opened up to more non-ML papers in recent years and has included a game theory track this year, so I believe this is okay). However, as argued above, I believe that the authors could have improved the clarity of their writing throughout the work. In particular, as this is not a paper on ML, I think that, to be properly appreciated by non-ML readers, the paper should have included more motivation for its setting and additional guidance for the reader. I am not really a fan of not including any examples at all; just a single example after the model definition greatly improves the readability and clarity in my experience.
I find this whole sentence construction to be quite confusing (in particular the "can be viewed as" parts) "MMS fair allocation of chores and job scheduling problem" -> "MMS fair allocation of chores and the job scheduling problem" I prefer the use of \citet for inline citations. "does not admit better than n approximation" -> "does not admit a better than n approximation" "Each group i contains" -> This should probably be each group G_i I am not the biggest fan of just starting new sections with a theorem. A bit of guidance to the reader is always nicer. I find it confusing that you call the groups large, medium, and small, but abbreviate them as H, D, L with L not being large Questions For Authors: I have no questions at the moment Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the appreciation of the paper and the detailed and constructive comments. We will carefully revise the paper following these suggestions. *Comment*: On the theoretical proofs. *Response*: We thank the reviewer for the suggestion of restructuring the proofs. We will focus more on explaining and visualizing the proof ideas in the main body (including adding examples and figures), possibly putting some proofs in the appendix. *Comment:* On the motivations and examples. *Response*: We thank the reviewer for the suggestion of including more motivation in the introduction and one example after the definition. We will follow this suggestion to revise the paper. In particular, we will expand the motivating example in the introduction to motivate our setting (e.g., a group’s workday can conclude when all members complete their assigned duties) and add one concrete example after the definition of GMMS. We will also include a figure to illustrate the partition and the notations in GMMS. *Comment*: On the other comments. *Response*: - We will rephrase the sentence "There are two layers of objects, ..., with the objective of completing these jobs as early as possible" and the corresponding paragraph. Basically, we want to explain how the items are first allocated to groups and how the group will further distribute the items among its members. We will avoid using ambiguous words like "can be viewed as". - We will use \citet for inline citations. - We will add guidance at the start of each (sub)section. - Regarding the abbreviations of large, medium, and small groups, we used H and L to represent high and low costs. We understand the ambiguity and will revise this part accordingly. - We will also address the other grammar issues and carefully revise the writing in the revised version.
Summary: This paper addresses the maximin share (MMS) fairness problem in the context of job scheduling among groups of machines. The paper builds upon the work of Li et al. (NeurIPS 2023), which considered MMS fairness for groups of identical or related machines but left open the case where machines within a group are unrelated. The authors provide a polynomial-time algorithm that computes a 2-approximate MMS allocation in the case of heterogeneous machines using linear programming techniques. The paper proves that no algorithm can achieve better than a (2 - 1/n)-approximate MMS allocation. When machines within each group are identical, the paper improves the approximation ratio from 2 to 4/3.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: I did not check all the proofs thoroughly, but I checked the high-level ideas and they seem correct.

Experimental Designs Or Analyses: N/A

Supplementary Material: Some of the proofs

Relation To Broader Scientific Literature: Very related

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: Strengths: -The paper is well-written and closes an open question that was left from a previous work. -The results are clean, with clear proofs and tight bounds. -Also, nice linear programming techniques are used that make the proofs quite novel. -The problem is well-motivated and well-defined.

Other Comments Or Suggestions: No

Questions For Authors: Right column, line 118: Do you mean all k-partitions of S? Right column, line 127: Is this correct: "of her bundle with the other n−1 groups."? Also, in the equation below, is the n in MMS(Sk,n,ci,k) correct?

Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and the constructive comments. In the following, we answer your questions. We will carefully address these questions in the revised paper. *Question*: On line 118, all $k$-partitions of $S$. *Answer*: Yes, it is a typo, which should be all $k$-partitions of $S$. We will fix this in the revised version. *Question*: On line 127, $n-1$ groups and $n$ in $MMS(S_k,n,c_{i,k})$. *Answer*: The two statements are indeed correct but might be a little unclear. To define $GMMS_i$ for group $G_i$, we first partition all items into $g_i$ bundles, where the $k$-th bundle $S_k$ is further allocated to $n$ (imaginary) agents, and each of them has the same valuation as $a_{i,k}$'s (like the classic definition of MMS with no groups). We will carefully refine the statements and clarify this definition in the revised version. If it helps, we will include a figure to illustrate the partition and these notations.
Summary: The paper discusses the problem of group maximin share job allocation, where groups need to be assigned sets of jobs which are distributed to machines within each group so as to minimize the largest makespan within that group. In the heterogeneous setting, where machines within a group can have different cost functions, the authors show a lower bound of 2-1/n on the approximation ratio using a hard instance, and an upper bound of 2 on the MMS approximation ratio by constructing a polynomial-time allocation algorithm. Further, in the homogeneous case, they reduce the previous upper bound of 2-MMS to 4/3-MMS (when the Group MMS allocations are known) and to a (4/3+$\epsilon$)-MMS relaxation computable in polynomial time. Post-rebuttal comment: The authors' rebuttal addresses some of my concerns, and as long as they fix the typos and minor errors, I am in favor of accepting this paper. Claims And Evidence: The paper presents detailed proofs of the three claims. Methods And Evaluation Criteria: Yes Theoretical Claims: I checked the proofs for the three theoretical claims, and was convinced of their correctness. I was not able to completely follow the final proof for Theorem 4.1 (4/3 GMMS allocation upper bound), but I can see how it holds. Experimental Designs Or Analyses: NA Supplementary Material: I read the appendix in the main text Relation To Broader Scientific Literature: The paper pushes the frontier of group job allocation, presenting new algorithms and improved upper and lower bounds for the problem setting of maximin share fairness. The designed counterexample proves a good lower bound for heterogeneous utilities, which has not been seen before. Essential References Not Discussed: The paper has good related work Other Strengths And Weaknesses: One general comment I have about the paper is the slight confusion the term Maximin Share induces.
In typical fair division literature, MMS refers to maximizing the worst-off agent's utility, but in this case, the problem is minimizing the maximum makespan, bringing it closer to a minimax setting. There is also some inconsistency within the paper with how approximation ratios are presented. In most of the paper, approximation ratios larger than 1 are used, but in the related work, some of the values mentioned are below 1 (suggesting they are better than optimal?). This seems like a mistake, and I would encourage the authors to review the text and make this consistent. This is also present in section 4.1: the title mentions a 3/4 GMMS algorithm, while the theorem states 4/3 GMMS. Another note: in the Introduction section, the message isn't clearly conveyed. It talks about indivisible jobs vs data points, machines, features and atomic agents. It is talking about too many different kinds of things, which sends a confusing message. Since the paper deals with the job scheduling problem, sticking with one single narrative would be better for readers trying to understand the contributions of the paper. Other Comments Or Suggestions: NA Questions For Authors: No questions Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the appreciation of the results and the insightful comments. We will carefully review the paper to refine the presentation, including the introduction and the technical proofs. Below, we address your specific comments. *Comment*: On the term of Maximin Share. *Response*: The reviewer is correct that MMS should actually refer to the "minimax share". However, to be consistent with the allocation of goods (i.e., maximizing the worst-off agent's utility), the use of "maximin share" forms a convention in the literature of allocating chores (i.e., minimizing the maximum makespan). This is partially because the very first several works on this topic used "non-positive utilities" instead of "non-negative costs" to describe the valuation functions. Recent works on chores found it is more convenient to use non-negative costs, and maximin share is used for consistency. We will carefully explain the notations in the revised version and would be happy to make changes based on the reviewer's advice. *Comment*: On the range of the approximation ratio. *Response*: We thank the reviewer for pointing out this issue. It is a typo in the title of Section 4.1, where the approximation ratio should be 4/3. To define the approximation ratios, we adopt the following rule. - For the problem of goods (corresponding to maximization objectives), the approximation ratio $\alpha$ is defined to be smaller than 1, and $\alpha$-approximation means the algorithm can guarantee at least $\alpha$ fraction of the optimal utility; - For the problem of chores (corresponding to minimization objectives), the approximation ratio $\alpha$ is defined to be greater than 1, and $\alpha$-approximation means the algorithm can guarantee no more than $\alpha$ times the optimal cost. We will carefully review the paper and ensure consistency in approximation ratios. *Comment*: On the presentation of the introduction. 
*Response*: We thank the reviewer for the suggestion of sticking with the job scheduling problem. We will reorganize the introduction and avoid talking about different kinds of things.
Summary: This paper considers a fair resource allocation problem called Group Maximin Share (GMMS). There are two layers of allocation: at the first layer, there are $m$ items to be allocated to $n$ groups. Then, once items are allocated to each group $G_i$, they are further allocated to the $g_i$ agents in $G_i$. Each agent has a cost for each item, and the agent's cost on the allocation they receive is the makespan of the items; the cost of the group is the maximum makespan among the agents. The maximin share $\textsf{GMMS}_i$ of the group $G_i$ is computed as follows (Lemma 2.3) in the paper: consider all possible partitions of $m$ items into $n$ parts. Consider the part $S$ that, when allocated to group $i$, has the worst cost. Here, cost is referring to the second layer, where it is the minimum maximum makespan among all possible allocations of $S$ to the $g_i$ agents in $G_i$. So the problem can be written as two successive min-max problems. The fairness goal is to find an allocation (at both layers) such that the cost of any group (computed as the maximum of the agents' makespans) is at most $\alpha \cdot \textsf{GMMS}_i$. The authors show that for the heterogeneous case (agent's costs within a group are potentially different), there is an algorithm yielding $\alpha = 2$, and that this is asymptotically tight ($2-1/n$). They also show that for homogeneous costs, there is an algorithm yielding $\alpha = 4/3$. ## update after rebuttal: As my concerns were mainly about clarity of writing, I maintain my evaluation. Claims And Evidence: Full proofs are provided. Methods And Evaluation Criteria: No experiments are provided; this is a pure theory paper. Theoretical Claims: I did not check proofs in any detailed manner. Experimental Designs Or Analyses: Not applicable. Supplementary Material: I did not read the supplementary material. Relation To Broader Scientific Literature: The most closely related literature is the work of Li et al.
(2023), where the problem was introduced. They showed an $O(\min\{n, \log m\})$-GMMS for the heterogeneous case and a 2-GMMS for the homogeneous case. The present work improves the first factor to 2 and the second factor to 4/3. Essential References Not Discussed: The most relevant paper, Li et al. (2023), is discussed, along with a number of references on MMS more broadly. I did not notice any important omissions for understanding the paper. Other Strengths And Weaknesses: This paper makes a worthy contribution of nearly matching upper and lower bounds for the heterogeneous case. The idea behind the proof of the 2-GMMS is nice: the authors compute a fractional solution to the LP, and then construct an auxiliary graph where edges are labelled with the fractional allocations. Then the solution is rounded to an integral one by sending allocations through the graph. While the 4/3-GMMS result for the homogeneous case is not proven to be tight, it is a notable improvement over previous work. The main weakness of the paper is that the writing is unclear, particularly in the first half and with respect to both related work and the model at hand. Several problems are discussed without defining the relevant terminology, which makes it difficult to contextualize the present results. There is also an abuse of language, as approximation ratio is used to describe the fairness factor $\alpha$, but this is not how approximation ratio is used in classic approximation algorithms (while the term is also used in the regular sense to discuss some related works, as I understand). Other Comments Or Suggestions: - pg.5: referring to $k$th copy of $G_i$ is confusing language - Sometimes ratios are listed as the reciprocal of what they should be. - In Theorem 4.4, it would be clearer to say something like an exact allocation. - It would be good to define what related machines means in this context Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's supportive review and constructive suggestions. We will carefully address the typos and polish the presentation. *Comment*: On the terminologies in related work and the model. *Response*: We thank the reviewer for pointing out this issue. We will thoroughly examine the paper and improve the writing. Specifically, we will focus on providing clear definitions for the terms used and expanding on the discussed models (e.g., related machines). *Comment*: On the definition of approximation ratios. *Response*: We thank the reviewer for raising this issue, and we agree that the approximation ratio is classically used in optimization problems to measure the multiplicative factor between efficient and optimal solutions. However, in the literature of MMS fair allocation of indivisible items, it becomes the convention to use the term "approximation ratio" to describe the best possible achievable factor of MMS. We thought it might be better to be consistent with the literature when preparing the submission, but we also understand the potential ambiguity. Perhaps "approximation factor" and "competitive ratio" make more sense. We are open to adjusting this terminology based on the reviewer's advice. *Comment*: On pg.5, the kth copy of $G_i$. *Response*: $k$ refers to the $k$-th bundle in the $n$-partition of group $G_i$. We will rephrase this sentence to make it clear. *Comment*: Sometimes ratios are listed as the reciprocal. *Response*: We will check the ratios carefully and make them consistent and accurate. *Comment*: On Theorem 4.4. *Response*: Thanks for the suggestion. We will mention "exact allocation" in the description of the theorem. *Comment*: On the definition of related machines. *Response*: Thanks for the suggestion. We will include the definition of related machines the first time we mention it in the paper. We will also review other places to ensure the clarity of terminologies and models.
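To ground the two-layer min-max definition discussed in this thread, here is a toy brute-force sketch in Python. This is our own illustration following the reviewer's summary of $\textsf{GMMS}_i$ (partition the items into $n$ parts; a part's cost for group $i$ is the optimal makespan over allocations to its machines); all function and variable names are hypothetical, and the exponential enumeration is only feasible for tiny instances.

```python
from itertools import product

def min_makespan(bundle, costs):
    # Min over all allocations of `bundle` to the group's machines of the
    # maximum machine load; costs[j][item] = cost of item on machine j.
    g = len(costs)
    best = float("inf")
    for assign in product(range(g), repeat=len(bundle)):
        loads = [0] * g
        for item, machine in zip(bundle, assign):
            loads[machine] += costs[machine][item]
        best = min(best, max(loads))
    return best

def group_share(m, n, costs):
    # Min over all n-partitions of the m items of the worst bundle's optimal
    # makespan for this group (brute force over label assignments).
    best = float("inf")
    for labels in product(range(n), repeat=m):
        bundles = [[i for i in range(m) if labels[i] == k] for k in range(n)]
        best = min(best, max(min_makespan(b, costs) for b in bundles))
    return best

# Four unit-cost jobs, n = 2 groups, a group of two identical machines:
# the balanced 2+2 partition gives each bundle makespan 1, so the share is 1.
print(group_share(4, 2, [[1, 1, 1, 1], [1, 1, 1, 1]]))  # -> 1
```

An allocation is then $\alpha$-GMMS for this group if its realized makespan is at most $\alpha$ times this share; the paper's algorithms achieve $\alpha = 2$ (heterogeneous) and $\alpha = 4/3$ (homogeneous) without such enumeration.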
Calibrated Physics-Informed Uncertainty Quantification
Accept (poster)
Summary: The paper focuses on uncertainty quantification in physics-informed models via conformal prediction. It uses a neural-network-based surrogate (specifically FNO) as the base model and provides uncertainty via marginal and joint CP. The main innovation when compared to previous methods (Gopakumar et al. 2024a) is the new score function that does not require simulator data ($Y_i$) and instead uses the surrogate itself to determine the error in the PDE. The method is tested on standard problems as well as a more advanced plasma modelling example following (Gopakumar et al. 2024a). Claims And Evidence: The claims (as listed in lines 27-29, column 2) are supported. Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense for the given problem though I have some concerns over the methodology itself (see questions below). Some benchmark examples and comparisons are provided in the supplement. Theoretical Claims: I did not check every step in the theoretical results (Supplement A) but I believe the results follow from standard CP results as long as the residuals are exchangeable. Experimental Designs Or Analyses: The design of experiments closely follows previous studies, e.g. (Gopakumar et al. 2024a) and similar literature. Supplementary Material: I reviewed part C and checked the other results in parts D-N only lightly. Relation To Broader Scientific Literature: Yes, the key contributions are related to the broader literature. Table 2 in Supplement C provides a high-level overview of how this paper fits within the broader literature, and some discussion on relevant literature in both physics-informed ML and CP is given throughout the paper. Essential References Not Discussed: Not that I can tell. Other Strengths And Weaknesses: Could you discuss the results in Tables 3-5 in more detail? What is the Evaluation time, particularly in the context of the two CP-based methods?
As far as I understand, the CP-AER method will need to evaluate the FNO and run the simulator once for each CP sample, while CP-PRE does not require the simulator but requires the various differential operators to be applied to the FNO. I assume the evaluation of the FNO is very fast, and I can see how for the examples in your paper the simulator evaluations may also be very fast as long as an optimised solver appropriate for the given problem is used. However, I do not have a good intuition of how expensive the evaluations of the differential operators are when applied to the FNO, but I would guess they are not much better than the simulator since they are ultimately performing the same calculations and the simulator may be highly optimised (e.g. FEM using high-performance linear algebra libraries). Can you comment more on this and hence the resulting evaluation times? Can you then provide some guidance on when CP-AER is preferred over CP-PRE? The results shown in the main part of the paper are hard to interpret. For example, looking at Fig. 4 or Fig. 8, it is not obvious whether the method does well or poorly, or how the CP uncertainty estimates could be interpreted. This is further obscured by the fact that the predictions of the model are not in the original parameter/observation space but in the residual space, as also pointed out by the authors. A broader discussion of the results, their interpretation and the implications would make the Experimental section stronger. I would also suggest replacing some of the figures in the main part of the paper with the model comparison results that are currently given in Supplement C, e.g. Table 2 and Tables 3-5. I believe these offer more insight than the visual representations of the solutions. Other Comments Or Suggestions: The paper is overall well-written. Line 368: "Feel" should be "fail". Questions For Authors: The primary area for this paper is Applications->Chemistry, Physics, and Earth Sciences.
However, the problems shown in the paper are closely aligned with examples used in existing literature and do not solve a new problem (as far as I can tell). Is there a particular reason you chose this primary area? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: # Evaluation times for CP-AER and CP-PRE: Thank you for highlighting the need for clarity regarding computational costs. We've updated the table to separately report calibration times for both methods:

| PDE | UQ | L2 (ID) | Coverage (ID) | L2 (OOD) | Coverage (OOD) | Train time (hr) | Eval. time (s) | Cal. time (s) |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Wave | CP-AER | 1.76e-05 $\pm$ 4.40e-07 | 95.70 $\pm$ 0.21 | 2.46e-03 $\pm$ 1.41e-05 | 95.59 $\pm$ 0.14 | 0:38 | 22 | 2000 |
| | **CP-PRE (Ours)** | 1.78e-05 $\pm$ 4.61e-07 | 95.52 $\pm$ 0.21 | 2.46e-03 $\pm$ 1.25e-05 | 95.39 $\pm$ 0.12 | 0:38 | 22 | **10** |
| NS | CP-AER | 1.05e-04 $\pm$ 6.58e-06 | 95.56 $\pm$ 0.40 | 3.66e-03 $\pm$ 2.81e-05 | 95.54 $\pm$ 0.15 | 3:22 | 25 | 20000 |
| | **CP-PRE (Ours)** | 1.07e-04 $\pm$ 5.18e-06 | 95.44 $\pm$ 0.22 | 3.70e-03 $\pm$ 4.23e-05 | 95.57 $\pm$ 0.14 | 3:22 | 25 | **100** |
| MHD | CP-AER | 2.20e-03 $\pm$ 4.38e-05 | 95.61 $\pm$ 0.26 | 4.69e-02 $\pm$ 8.18e-04 | 95.60 $\pm$ 0.27 | 5:00 | 40 | 30000 |
| | **CP-PRE (Ours)** | 2.20e-03 $\pm$ 4.96e-03 | 95.54 $\pm$ 0.18 | 4.71e-02 $\pm$ 1.06e-03 | 95.67 $\pm$ 0.22 | 5:00 | 40 | **400** |

Evaluation (Eval.) time refers to the time taken to evaluate the FNO. Calibration (Cal.) times refer to the time for data generation (CP-AER) of 1000 simulations or residual estimation (CP-PRE) over 1000 predictions. All evaluation and calibration times are measured on a standard laptop, while the training is done on a single A100 GPU. CP-AER requires substantial computational resources for calibration data generation, often through complex finite volume/element simulations that demand domain-specific expertise.
CP-PRE leverages finite difference stencils as convolutional kernels, enabling GPU-parallelized computation through ML libraries with simultaneous space-time evaluation and cross-domain transferability. Our additive operator structure balances computational efficiency with statistical sufficiency (Appendix D). *** *** # Guidance on when CP-AER is preferred over CP-PRE: Thank you for this important question. Each method offers distinct advantages depending on the application context.

CP-AER is preferred when:
- Physics knowledge is incomplete or uncertain
- Bounds on physical vector fields are specifically needed
- Sufficient computational budget exists for calibration data
- The system has unknown or complex physics that cannot be expressed in a residual form

CP-PRE is advantageous when:
- Calibration data is prohibitively expensive
- Complete physics knowledge is available to get bounds on conservative variables
- Computational efficiency is critical

The primary limitation of CP-AER is its dependence on calibration data, which becomes expensive as simulation complexity grows. Meanwhile, CP-PRE only requires sufficient physics knowledge to formulate the equality constraint needed for the residual calculation. *** *** # Broader discussion of the results and their implications: We thank the reviewer for this suggestion, and upon much deliberation we agree that it is more informative to demonstrate coverage with the tables rather than with the figures of the residuals and the bounds. The tables (2-5) help illustrate our idea both qualitatively and quantitatively without needing any domain knowledge. The figures showing the bounds over the physical fields are indicative of the conservation laws associated with the problem and might not provide clarity to the uninitiated reader. We have explained the meaning of the figures in detail in our response to the same query raised by reviewer MQ8g (under utility of method). We have restructured the paper to abide by your suggestion.
Thank you for your valuable advice. *** *** # Primary area as Applications->Chemistry, Physics, and Earth Sciences: Our primary area selection reflects the paper's core contribution to physics and computational science applications. While using established benchmarks, our framework addresses a significant gap in scientific computing by providing statistically guaranteed, physics-informed uncertainty quantification for neural PDE surrogates. Sections 5.4-5.5 demonstrate applications to fusion plasma modelling and tokamak magnetic equilibrium, showing how our method enhances reliability in high-consequence scientific domains with rigorous uncertainty bounds. --- Rebuttal Comment 1.1: Comment: Thank you for the response. While I appreciate the clarifications, the utility of the proposed methodology appears limited (requiring good knowledge of the physics of the problem and ability to implement data driven solvers while offering only minor improvements on existing methods). --- Reply to Comment 1.1.1: Comment: ## UQ for downstream deployment of Neural PDE solvers We respectfully disagree that our method has limited utility. **Knowing the underlying physics is precisely the scenario where our method provides unique value.** Neural PDE solvers (like traditional numerical solvers) are specifically designed for scenarios where the governing equations are known—whether they're physics-informed neural networks [1], neural operators handling families of solutions [2, 3], or foundation models for physics [4, 5, 6]. The critical gap in this field is not solving PDEs but quantifying the trustworthiness of these solutions with statistical guarantees (as illustrated in figure 1 and explained further in the rebuttal to reviewer MQ8g under the big-picture). 
Our PRE-CP framework addresses this by providing **model-agnostic calibration** that works as a post-hoc measure on any trained neural PDE solver without architectural modifications, while delivering the first **physics-informed coverage guarantees** for neural PDEs essential for high-stakes applications; it requires **no ground truth data for calibration** (applicable when training data is unavailable/proprietary), and offers **decision-enabling information** through a joint-CP formulation that determines when predictions likely violate physical laws, creating a principled basis for choosing between neural solvers and traditional numerical methods. In domains like climate prediction, fusion plasma control, or aerospace design, neural PDE solvers could provide 1000× speedups but aren't deployed due to uncertainty concerns. Our method enables these applications with calibrated uncertainty bounds that respect physical laws. The reviewer mentions "minor improvements over existing methods," but we emphasise that no existing method provides statistically valid, physics-informed uncertainty quantification for neural PDEs without requiring data, model modifications, or sampling. **In PDE-based applications, the governing physics equations are known a priori: the exact forward problem setting where our method excels**. As neural solvers become increasingly accessible as black-box tools (such as emerging foundation models), our approach uniquely bridges this critical gap, adding solution reliability to uncalibrated neural PDEs.
The framework works in any scenario where the forward model can be expressed in the standard canonical form: $$ \mathcal{D}(u) - b = 0$$ where $u$ is the model prediction, $\mathcal{D}$ is a differential or algebraic operator governing the dynamics, and $b$ is a non-homogeneous term such as a function or a constant. This extends our applications beyond PDEs to ODEs and algebraic equations found in control problems [7], chemical reactions [8], biological systems [9], and financial scenarios [10]. We deliberately focus on PDEs as they represent the most comprehensive and challenging case: multi-dimensional domains with complex spatio-temporal dependencies and unique computational challenges. Success with PDEs implicitly validates applicability to simpler systems. We're currently extending our PRE-CP framework as an acquisition function within an active learning pipeline, as data generation from complex simulation codes is computationally intensive. This method could provide a data-free, model-agnostic, physics-informed approach to sampling training data from expensive simulation codes [11]. --- **References** [1] Raissi, M., et al. (2019). Physics-informed neural networks. Journal of Computational Physics, 378, 686-707. [2] Kovachki, N., et al. (2023). Neural Operator: Learning Maps Between Function Spaces. JMLR, 24(89), 1-97. [3] Lu, L., et al. (2021). Learning nonlinear operators via DeepONet. Nature Machine Intelligence, 3(3), 218-229. [4] McCabe, M., et al. (2024). Multiple Physics Pretraining for Physical Surrogate Models. arXiv:2310.02994. [5] Rahman, M. A., et al. (2024). Pretraining Codomain Attention Neural Operators for Multiphysics PDEs. arXiv:2403.12553. [6] Herde, M., et al. (2024). Poseidon: Efficient Foundation Models for PDEs. arXiv:2405.19101. [7] Jiang Y, Jiang ZP. (2014). Robust adaptive dynamic programming and feedback stabilization. IEEE Trans Neural Netw Learn Syst, 25(5), 882-93. [8] Thöni, A. C. M., et al. (2025).
Modelling Chemical Reaction Networks using Neural ODEs. arXiv:2502.19397. [9] Wang, S., et al. (2019). Massive computational acceleration using neural networks to emulate mechanism-based biological models. Nature Communications, 10, 4354. [10] Liu, S., et al. (2019). Pricing Options and Computing Implied Volatilities using Neural Networks. Risks, 7(1), 16. [11] Musekamp, D., et al. (2025). Active Learning for Neural PDE Solvers. Proc. ICLR 2025.
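To make the calibration step in the discussion above concrete, here is a minimal split-conformal sketch in NumPy. This is our own illustration, not the authors' code: the calibration scores stand in for absolute PDE-residual values, and the threshold is the standard finite-sample-corrected empirical quantile.

```python
import numpy as np

def conformal_threshold(scores, alpha):
    # Split-CP threshold: the ceil((n+1)(1-alpha))/n empirical quantile
    # of the calibration nonconformity scores (capped at the max score).
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

rng = np.random.default_rng(0)
cal_scores = np.abs(rng.normal(size=1000))   # stand-in for |PDE residual| values
qhat = conformal_threshold(cal_scores, alpha=0.05)

# A new prediction would be flagged as physically inconsistent if its
# residual score exceeds qhat; under exchangeability, roughly 95% of
# fresh scores fall below the threshold.
test_scores = np.abs(rng.normal(size=1000))
coverage = np.mean(test_scores <= qhat)
```

The same quantile step underlies both the marginal and joint formulations; only the way the residual field is reduced to a score changes.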
Summary: This paper presents a method for estimating uncertainties in neural PDE solvers without requiring labeled data. The authors propose PRE-CP, which combines PDE residuals and conformal prediction. By using the PDE’s own equations as the reference, the method calibrates each model’s physical errors directly. They show that PRE-CP works with standard neural PDE setups (such as wave, Navier-Stokes, and magnetohydrodynamics) and also demonstrate applications to fusion research. Their results indicate that PRE-CP can uncover locations in the model’s predictions where the solution fails to follow the underlying physics, offering a statistically valid way to determine when to trust or question the model’s output. ## update after rebuttal Rebuttal acknowledged. Claims And Evidence: The key claim is that using a physics-informed residual as a nonconformity score leads to valid coverage guarantees. The paper supports this with theoretical arguments and empirical validation on multiple PDEs. Methods And Evaluation Criteria: The authors employ standard PDE examples (wave, Navier-Stokes, MHD) and then apply the approach to practical fusion applications. The authors use recognized PDE solvers for reference. They also describe how to generate PDE residuals efficiently via finite difference stencils. Theoretical Claims: The proof sketches in the appendices appear coherent. A deeper check would require verifying each step of their derivation regarding exchangeability assumptions for PDE initial conditions, but no immediate flaws are evident. Experimental Designs Or Analyses: The experiments compare coverage results across wave, Navier-Stokes, and magnetohydrodynamics PDEs, and also show real-world scenarios. The analyses use marginal and joint coverage and show how coverage is empirically measured. The experimental protocols appear logically consistent, with error bars and coverage curves displayed clearly.
Supplementary Material: I did not review the supplementary code in detail. However, it would be helpful to include a brief README file. Relation To Broader Scientific Literature: This work expands conformal prediction into PDE-solving contexts by incorporating PDE residuals. It aligns with recent research on physics-informed methods for PDE-based modeling (e.g., PINNs, neural operators). Unlike many UQ approaches requiring labels or Bayesian sampling, this paper uses physics-based residuals, filling a gap in data-free calibration methods. It also relates to prior CP frameworks for high-dimensional data, extending them to spatio-temporal PDE outputs. Essential References Not Discussed: Most relevant PDE-based operator learning work is cited (Fourier Neural Operators, Physics-Informed Neural Networks), and the paper mentions conformal prediction references for spatio-temporal data. Other Strengths And Weaknesses: Strengths: + It provides both marginal and joint coverage formulations, giving users the option to pinpoint local errors or reject predictions at a global scale. + It provides coverage guarantees without labeled data. + The appendices are thorough, showing proofs for the theoretical aspects and comparisons across multiple UQ methods and PDE scenarios. Weaknesses: + The approach requires knowing the PDE precisely, so it is not suitable if the physical model is partially unknown or has uncertain terms. + The residual-based method can be sensitive to discretization, especially in the temporal domain, potentially inflating error bars in coarser meshes. + Initial and boundary conditions, while addressed, could receive a clearer explanation in the main text for completeness. Other Comments Or Suggestions: + README for Code: Including a short README in the supplementary code would help others replicate the experiments. + Extended Discretization Analysis: More discussion about how changing resolution in space/time affects the residual estimates would be helpful. 
Questions For Authors: + If certain PDE terms or coefficients are unknown, could this approach handle partially specified physics by defining an approximate residual operator? + How large is the overhead when evaluating PDE residuals for high-resolution spatio-temporal domains, and are there workarounds to reduce it without losing coverage guarantees? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # README for Code: Thank you for pointing out this error that happened during anonymisation and giving us a chance to rectify it. For the purpose of the review, we are providing an abridged README below.

## Installation

```bash
pip install -r requirements.txt
```

## Quick Start

Run standalone experiments (no data or pre-trained models needed):

```bash
# Marginal bounds for 1D advection
python -m Marginal.Advection_Residuals_CP
# Joint bounds for 1D advection
python -m Joint.Advection_Residuals_CP
```

## PRE Estimation Example

```python
from ConvOps_2d import ConvOperator

# Define operators for the PDE: ∂u/∂t - α(∂²u/∂x² + ∂²u/∂y²) - βu = 0
D_t = ConvOperator(domain='t', order=1)             # time derivative
D_xx_yy = ConvOperator(domain=('x', 'y'), order=2)  # Laplacian
D_identity = ConvOperator()                         # identity operator

# Combine operators with coefficients
alpha, beta = 1.0, 0.5
D = ConvOperator()
D.kernel = D_t.kernel - alpha * D_xx_yy.kernel - beta * D_identity.kernel

# Estimate PRE from model predictions
PRE = D(model(X))
```

## Advanced Experiments

For Navier-Stokes, MHD, and other experiments:
1. Generate data: Run scripts in `Neural_PDE/Numerical_Solvers/`
2. Train models: Use scripts in `Neural_PDE/Expts/`
3. Run uncertainty estimation: See scripts in `Marginal/` and `Joint/`

## Repository Structure

- `Joint/`, `Marginal/`: Conformal prediction implementations
- `Neural_PDE/`: Neural PDE solver implementations
- `Utils/`: Utility functions
- `Other_UQ/`: Bayesian Deep Learning benchmarks

*** *** # Discussion on discretisation: We've addressed the impact of spatial/temporal resolution on residual estimates in Appendix D.1, but agree this merits further discussion. The discretisation in PRE-CP stems from the neural-PDE solver itself. With pre-trained models, we typically have limited control over this aspect. The convolutional kernels (and corresponding finite difference stencils) adopt the discretisation present in the predicted data.
As shown in Figure 10 for the 1D Advection equation, coarser discretisation leads to inflated error bounds compared to finer discretisation implementations. However, PRE-CP consistently delivers guaranteed coverage regardless of resolution. Even with coarser discretisation, PRE-CP remains valuable by: 1. Providing statistical identification of poorer fit regions (marginal formulation) 2. Highlighting physically inconsistent predictions (joint formulation) 3. Enabling relative assessment of physical inconsistencies across predictions While residuals may be inflated with coarser discretisation, the corresponding bounds reflect this inflation, preserving the relative information about physical inconsistency across a series of predictions. We're currently leveraging this property to develop an active learning pipeline for neural PDEs, using PRE-CP formulation as an acquisition function. *** *** # Unknown PDE terms: Thank you for this insightful question. Our method fundamentally relies on formulating the nonconformity score as an equality constraint through the residual formulation, as demonstrated in the Theorem in Appendix A. This approach allows us to perform conformal prediction without requiring data. When certain PDE terms are unknown, the equality constraint may not be fully satisfied, potentially limiting PRE-CP's direct applicability. However, PRE-CP doesn't necessarily require complete knowledge of the entire PDE family. As shown in our Navier-Stokes and MHD equation examples, we can still derive meaningful bounds using only one conservation law (continuity or momentum) without explicitly incorporating all equations in the family. It's worth noting that our work primarily targets neural PDE surrogates for forward modelling where the PDE terms are known. This focus enables us to provide guaranteed uncertainty quantification for these specific applications. 
*** *** # Computational Overheads: The computational overhead for PDE residual evaluation is relatively low due to our use of highly optimised convolutional operations. The cost hierarchy in our workflow is: 1. Running full simulations (e.g., 350 core hours for Tokamak plasma modelling) 2. Training neural PDE models (e.g., 6 hours on a single A100 for FNO) 3. Model inference (e.g., 90 seconds on a standard laptop) 4. Residual estimation (less than 5 seconds) We've optimised our ConvOps library to minimise computational costs through: 1. Additive kernels for linear PDE components, reducing the number of required convolutions (detailed in Appendix D) 2. Support for both spectral and direct convolutions, providing speedups for high-resolution grids [1] These optimisations maintain the physics residual equality constraint and prediction exchangeability, preserving our coverage guarantees. [1] Rippel, O., Snoek, J., & Adams, R. P. (2015). Spectral Representations for Convolutional Neural Networks. arXiv preprint arXiv:1506.03767. Retrieved from https://arxiv.org/abs/1506.03767
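As a sanity check of point 2 (our own numpy sketch, not the ConvOps implementation), the identity below shows why spectral evaluation is an exact drop-in for direct convolution: a linear convolution can be computed in Fourier space after zero-padding, at O(N log N) cost on large grids:

```python
import numpy as np

def direct_conv(u, k):
    # Direct linear convolution: O(N * M) operations.
    return np.convolve(u, k, mode="full")

def spectral_conv(u, k):
    # The same linear convolution computed in Fourier space after
    # zero-padding to the full output length.
    n = len(u) + len(k) - 1
    return np.fft.irfft(np.fft.rfft(u, n) * np.fft.rfft(k, n), n)

u = np.random.default_rng(1).normal(size=4096)
k = np.array([1.0, -2.0, 1.0])  # an (unscaled) second-derivative stencil
assert np.allclose(direct_conv(u, k), spectral_conv(u, k))
```

Because the two evaluations agree to floating-point precision, swapping one for the other leaves the physics residual equality constraint, and hence the coverage guarantee, untouched.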
Summary: This paper proposes a model-agnostic, physics-informed conformal prediction framework that provides guaranteed uncertainty estimates independent of input data. Claims And Evidence: 1. The proposed approach is model-agnostic and physics-informed. The physics-informed aspect is evidenced in Section 4, but whether the approach is truly model-agnostic is not clearly demonstrated. 2. The proposed approach guarantees coverage bounds both marginally and jointly. This is evidenced by Theorem A.1. Methods And Evaluation Criteria: The authors utilize the physics residual error as the nonconformity score, enabling data-free prediction. The proposed approach is validated through comparisons of estimated uncertainty with PDE residuals. Theoretical Claims: The authors claim that the proposed approach guarantees coverage bounds both marginally and jointly, as formalized in Theorem A.1. Experimental Designs Or Analyses: The authors evaluate the proposed approach using: (1) Wave equations, (2) Navier-Stokes equation, (3) Magnetohydrodynamics, (4) Plasma modeling within a tokamak, (5) Magnetic equilibrium in a tokamak. The estimated uncertainty of the model is compared with the PDE residual. Supplementary Material: Code is provided in the supplementary material. The authors also include a code snippet in Section D and a quantitative analysis in Section C. Relation To Broader Scientific Literature: The study contributes to uncertainty quantification (UQ) in physics-informed machine learning and conformal prediction. The proposed method aligns with ongoing research in physics-informed neural networks (PINNs), neural PDE solvers, and uncertainty quantification in computational physics. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The paper introduces an interesting framework that extends conformal prediction to physics-informed models. Weaknesses: 1. 
From the quantitative evaluation in the supplementary material, the proposed approach does not achieve the best performance in most scenarios compared to other baselines. This raises concerns about its effectiveness. 2. The writing of the paper needs improvement, as many symbols in Section 5 are not clearly defined. Other Comments Or Suggestions: - Please highlight the best-performing approach in the quantitative tables in the supplementary material. - It may be beneficial to move these tables to the main paper, as quantitative results are important for evaluating the proposed approach. Questions For Authors: 1. Uncertainty can be broadly categorized into aleatoric and epistemic. Given that the proposed approach estimates uncertainty independent of input, how does it differ from epistemic uncertainty estimation approaches? 2. Why are validation plots and quantitative analysis provided only for the first three experiments but not for plasma modeling and magnetic equilibrium? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: # Quantitative evaluation against baselines Thank you for raising this important point. We'd like to clarify that our framework indeed demonstrates superior performance in guaranteed coverage compared to baseline methods. In Appendix C (Tables 3-5), we comprehensively compare our method (CP-PRE) against standard Bayesian approaches (BNN, MC Dropout, Deep Ensembles, SWA-G) and data-driven inductive conformal prediction using absolute error residual (CP-AER). Our results show that CP-PRE consistently provides guaranteed coverage across all experiments, including both in-distribution and out-of-distribution evaluations, with performance comparable to CP-AER. While CP-AER achieves similar coverage, it requires extensive simulation data for calibration—a significant limitation. Following Reviewer X8Gm's suggestion, we've updated our evaluation metrics to include data generation time for CP-AER and residual estimation time for CP-PRE in the revised manuscript, providing a more complete comparison of computational requirements, as shown in the rebuttal to X8Gm. We appreciate your feedback and have moved Tables 2-5 to the main text, highlighting methods that achieve estimated coverage for improved clarity. Thank you for helping us enhance the paper's readability. *** *** # Missing symbols and improving writing Thank you for your careful review of Section 5. We've thoroughly re-examined this section and have ensured all symbols are properly defined. While Reviewer X8Gm found the writing quality satisfactory, we recognise the importance of clarity for all readers. In our revised manuscript, we've added additional clarification where needed and performed a comprehensive review of all notations to ensure consistency and accessibility throughout the paper. We appreciate your feedback, as it has helped us improve the overall quality of our presentation. *** *** # Aleatoric vs Epistemic Uncertainty Thank you for raising this important conceptual question. 
The uncertainty quantified by PRE-CP aligns with conformal prediction's characterization of predictive uncertainty. From one perspective, this could be viewed as aleatoric uncertainty since we construct confidence intervals relative to a specific probability distribution (the distribution from which calibration data, i.e. initial conditions are sampled). Alternatively, it could be considered epistemic uncertainty since we model the neural network's error through confidence intervals (typically used for unknown but fixed quantities). While we believe the latter interpretation is more appropriate, we acknowledge that the traditional aleatoric/epistemic dichotomy may not be directly applicable to our framework. This distinction is most valuable when both uncertainty types coexist and require separate treatment [1]. Although we could elaborate on this in our manuscript, we felt it somewhat peripheral to our paper's focus on practical uncertainty quantification rather than fundamental statistical theory. [1] S. Ferson, and L. Ginzburg, "Different methods are needed to propagate ignorance and variability", Reliability Engineering & System Safety, Elsevier, 1996, https://doi.org/10.1016/S0951-8320(96)00071-3 *** *** # Validation plots for all experiments Thank you for this question. We've included complete validation plots and quantitative analysis for all experiments, including plasma modeling and magnetic equilibrium, in the appendices due to space constraints in the main text: • Plasma modeling coverage plots appear in Figure 29, Appendix M • Magnetic equilibrium coverage plots are shown in Figure 34, Appendix N We initially focused on presenting coverage guarantees for widely recognized test cases (Wave, Navier-Stokes, and MHD equations) where we do ablation studies in the main text. Based on your feedback, we will improve cross-referencing to these appendices to ensure readers can easily locate the complete analysis for all experiments.
Summary: The paper considers uncertainty quantification for PDEs. The authors claim that by utilising a physics-based approach they can quantify and calibrate the model's inconsistencies with the PDE. Claims And Evidence: (see Other Strengths And Weaknesses) Methods And Evaluation Criteria: (see Other Strengths And Weaknesses) Theoretical Claims: (see Other Strengths And Weaknesses) Experimental Designs Or Analyses: (see Other Strengths And Weaknesses) Supplementary Material: I did not examine it in detail; I think the main body should be able to describe the main concept well enough. Relation To Broader Scientific Literature: PDEs are important in physics. Essential References Not Discussed: . Other Strengths And Weaknesses: The authors claim that by utilising a physics-based approach they can quantify and calibrate the model's inconsistencies with the PDE. I can follow the paper at a high level, meaning (based on my understanding) they calculate the score function PRE using the stencil approximation and then apply some form of conformal prediction (CP). But I have to admit that I do not understand the big picture, i.e. why they are doing that or how these CPs can be used in practice. Maybe the reason is that I am not previously familiar with CP and cannot completely follow the idea based on their description. Probably due to size constraints, the CP section is quite short and mostly summarises key definitions and results from previous works, but it does not explain in much detail how the quantities are actually calculated or how they are used. The authors could explain more about how the approach is actually used. Also, the results show PRE (which is basically just the derivative of the solution) and derived CPs for different PDE problems. These are nice-looking figures, but again their overall meaning, or how they could help in practice, is not clear to me. Furthermore, the authors also do not compare to any previous methods. 
Other Comments Or Suggestions: The overall approach, or "big picture", could be explained better. Perhaps show an example of how the CP is calculated and then how it is used to solve a practical problem? Some kind of algorithmic description of the overall approach could also be helpful. Questions For Authors: No further questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # The big picture: We appreciate this opportunity to clarify our motivation. Our work stems from the need to make neural PDE solvers more practical for scientific modelling. Numerical PDE solvers have been essential to scientific modelling since the 1950s, enabling cost-effective simulation of complex scenarios. However, they still impose significant computational burdens in complex settings. Neural PDEs offer a promising middle ground—networks that learn physics approximations could quickly explore design spaces before running expensive numerical simulations. For this workflow to be practical (as shown in Figure 1), we must quantify neural PDE model performance with reliable error bounds. While various Bayesian methods can be applied to neural PDE solvers, they fail to guarantee coverage in high-dimensional settings. Extensions of inductive conformal prediction to spatio-temporal domains require substantial calibration data for error bounds. Our approach addresses these limitations by using PDE residuals as nonconformity scores, providing guaranteed bounds across conservative PDE variables without requiring data or sampling, functioning post-hoc without modifications. Table 2 in Appendix C qualitatively compares our method's advantages against Bayesian UQ and standard CP methods. *** *** # Utility of this method and the meaning of these figures: Our approach begins with predictions of spatio-temporal physical field variables from a neural PDE solver. We then transform these predictions to their conservative form using the differential operators that characterise the PDE. This transformation captures how well the solution satisfies conservation equations, identifying regions of physical inconsistency. The Physical Residual Error (PRE) estimates reveal regions where the predicted dynamics deviate from the ground truth. 
By applying Conformal Prediction (CP) to these PRE values, we obtain calibrated bounds across conservative variables that provide meaningful information to domain experts. The figures illustrate how conservative variables are modelled across space and time and the regions where the model struggles to understand and learn the known physics. Additionally, our PRE-CP framework serves as an indicator of model quality, as demonstrated in Appendix I. While PRE-CP guarantees coverage regardless of model fit, the width of the error bars effectively measures model quality. Figure 18 compares (a) predictions from poorly-fit (bad) and well-fit (good) models, showing that (b) error bounds for the well-fit model are substantially narrower than (c) those of the poorly-fit model. *** *** # Procedure: We thank the reviewer for this helpful suggestion. Due to page constraints, we limited our discussion of conformal prediction in the main text while referencing relevant literature for broader context. We have outlined an abridged algorithmic procedure below and have added a detailed one to the appendix. 1. Neural PDE Solver Setup: Define PDE simulator, generate data and train neural approximator 2. Physics Residual Error: Calculate $|D(\hat{f}(X))|$ using differential operators as convolutions. 3. Calibration: a. Sample ${X_1,...,X_n}$, get predictions ${\hat{f}(X_1),...,\hat{f}(X_n)}$ b. Compute PRE scores ${|D(\hat{f}(X_1))|,...,|D(\hat{f}(X_n))|}$ c. For confidence $1-\alpha$, find $\hat{q}^\alpha$ as the $\lceil(n+1)(1-\alpha)\rceil/n$ quantile from the PRE scores. 4. Application: a. For new $X_{n+1}$, calculate $|D(\hat{f}(X_{n+1}))|$ b. Valid if within $\mathbb{C}^\alpha = [-\hat{q}^\alpha, \hat{q}^\alpha]$ 5. Interpretation: Apply cell-wise for local bounds or use supremum for global bounds We appreciate your feedback and thank you for making the paper accessible to a broader audience. 
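Steps 3 and 4 of the procedure above can be sketched in a few lines (a hedged stand-in: the Gaussian scores below substitute for actual PRE values $|D(\hat{f}(X_i))|$, and the function names are ours, not from the paper):

```python
import numpy as np

def conformal_quantile(scores, alpha):
    # Step 3c: the ceil((n + 1) * (1 - alpha)) / n empirical quantile of
    # the calibration scores, clipped to 1 when n is small.
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def is_physically_consistent(score, q_hat):
    # Step 4b: a new prediction is accepted if its PRE lies in [-q_hat, q_hat].
    return abs(score) <= q_hat

rng = np.random.default_rng(0)
cal_scores = np.abs(rng.normal(size=200))   # stand-in for |D(f_hat(X_i))|
q_hat = conformal_quantile(cal_scores, alpha=0.1)
new_scores = np.abs(rng.normal(size=1000))
coverage = np.mean([is_physically_consistent(s, q_hat) for s in new_scores])
print(coverage)  # empirically close to the nominal 0.9
```

Applied cell-wise this yields local bounds (step 5); taking the supremum of the scores instead yields a single global bound.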
*** *** # Comparison to other methods: Our method's coverage performance is documented in Tables 3-5 in Appendix C, where we compare against both Bayesian UQ approaches (deep ensembles, MC dropout, stochastic-weighted averaging, Bayesian neural networks) and data-driven conformal prediction using absolute error residuals (CP-AER). Our method (CP-PRE) consistently provides guaranteed coverage across all experiments in both in-distribution and out-of-distribution evaluations, with performance comparable to CP-AER. However, it's important to note that CP-AER requires extensive simulation data for calibration, while our method does not. This difference is quantified in the tables, showing that the computational cost of evaluating PRE is only a fraction of the time needed to acquire new simulation data, as explicitly mentioned in response to reviewer X8Gm's questions. Following reviewers' suggestions, we have moved Tables 3-5 to the main paper and hope this demonstrates the advantages of our method. We apologise for the confusion that this may have caused and welcome any further clarification requests.
Counterfactual Graphical Models: Constraints and Inference
Accept (spotlight poster)
Summary: The paper presents a novel framework for counterfactual reasoning using graphical models. The paper introduces two key contributions: Ancestral Multi-World Networks (AMWN) – a new graphical representation for counterfactuals, and Counterfactual Calculus (ctf-calculus) – a set of rules for transforming counterfactual expressions. The work extends Pearl's do-calculus, allowing for more efficient and general counterfactual reasoning. AMWN is a graphical model that encodes counterfactual independence relationships. It builds upon Structural Causal Models (SCMs) and replaces traditional Twin Networks, which are computationally expensive. The paper provides an algorithm to construct AMWNs, ensuring they are both sound (correct) and complete (able to capture all relevant relationships). The model enables efficient testing of counterfactual independence using d-separation. Counterfactual Calculus (ctf-calculus) is a set of three transformation rules that generalize do-calculus to counterfactual settings: First, the consistency rule relates observations and interventions. Second, the independence rule uses d-separation to infer counterfactual independence. Third, the exclusion rule eliminates interventions that do not affect a variable. These rules allow counterfactual expressions (e.g., $P(Y_x \mid X=x')$) to be rewritten in terms of observable or interventional probabilities. The approach is computationally efficient, reducing the complexity of counterfactual queries compared to existing methods. Claims And Evidence: The paper presents several claims that can be summarized in two areas. First, it claims AMWN is a sound and complete method for counterfactual independence reasoning and improves on existing graphical models (e.g., Twin Networks, Single World Intervention Graphs). These assertions are supported by consistency and constraint definitions, nested (and unnesting) counterfactuals, exclusion operators, and a hierarchy of counterfactual relations. 
These are used for the key theorem on counterfactual d-separation (independence). Second, the paper presents the ctf-calculus as a generalization of Pearl's do-calculus (Theorem 3.1), resulting from counterfactual d-separation. Methods And Evaluation Criteria: The proposed methods are theoretical. They introduce AMWN as a new representation of counterfactual graphical models. The paper defines ctf-calculus as a set of inference rules using d-separation in the AMWN framework to determine counterfactual independence. Since the work is theoretical, the evaluation criteria consist of the proofs. This includes computational complexity analysis compared with prior methods in terms of efficiency (Twin Networks, SWIG, and Multi-Networks). Theoretical Claims: The key claims are summarized in three theorems, among other lemmas. Theorem 2.5 (Counterfactual d-Separation) proves that d-separation in AMWN is both sound and complete. Theorem 3.1 (Counterfactual Calculus Rules) formally establishes the transformation rules for counterfactual expressions. Theorem 3.2 (Soundness and Completeness of ctf-calculus for Counterfactual Identifiability) shows that ctf-calculus can fully characterize counterfactual identification. Experimental Designs Or Analyses: The paper does not include empirical experiments or simulations. Supplementary Material: No. Relation To Broader Scientific Literature: The paper extends prior work on Pearl's do-calculus and Structural Causal Models (SCMs). It improves on existing methods: compared to Balke & Pearl (1994), AMWN overcomes the incompleteness of d-separation in Twin Networks; compared to Richardson & Robins (2013), AMWN handles multiple interventions simultaneously; and compared to Shpitser & Pearl (2007), AMWN avoids the exponential graph explosion problem. The literature review is strong, but an empirical comparison with recent causal inference models (e.g., deep learning-based causal models) could be useful. **** POST REBUTTAL: Thank you to the authors for their answer. 
Their comments can be helpful in improving the paper. I will maintain my positive score for this paper. Essential References Not Discussed: I would suggest discussing the counterfactual d-separation concept in the context of other graphical-model criteria of independence. See for instance Ma et al. (2022), Vo et al. (2023), etc. Ma, Jing, et al. "Clear: Generative counterfactual explanations on graphs." Advances in Neural Information Processing Systems 35 (2022): 25895-25907. Vo, Vy, et al. "Feature-based learning for diverse and privacy-preserving counterfactual explanations." Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2023. Other Strengths And Weaknesses: Two observations on this work. First, the paper provides a general framework for counterfactual reasoning, unifying different causal inference tools. Second, the method is computationally efficient, as AMWN provides a polynomial-time method for counterfactual independence testing. Other Comments Or Suggestions: Just a suggestion to rename the heading "Any sep." in Table 1 to something more informative. Alternatively, it could be added to the caption. Questions For Authors: How does your proposed idea compare to the works listed above? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for reading our work, providing feedback, and asking questions. We refer next to the research mentioned in the review. Also, thank you for sharing the references. We describe the work as we understand it, but we would be happy to hear more about it from the reviewer. Regarding Ma et al. (2022), the paper makes assumptions on the latent variables' prior and the availability of an auxiliary variable. Based on these, they aim to identify the SCM that generates the data. They integrate the information on the SCM into the optimization process of a generative model to produce explanations. Compared to our approach, there are similarities in modeling the data-generating process as an SCM and then inferring counterfactual quantities based on the assumptions made over the SCM. On the other hand, in our framework, we make no assumptions about the distribution of the latent variables or the functional form of the mechanisms of the SCM but assume a known causal graph. This setting is called non-parametric in the literature, which motivated Pearl's 1995 Biometrika paper, where he introduced the do-calculus. Moreover, we do not attempt to identify the SCM as a whole but to identify particular (counterfactual) queries of interest in the form of independence constraints or probabilities. These are indeed the two main contributions of our paper, namely, graphical criteria and calculus for counterfactual identification. Vo et al. (2023) seems to focus on producing examples that counter the original outcome produced by a model (e.g. a classifier). Moreover, it seems the validity of the counterfactual is measured in terms of whether the outcome can be overturned by the generated example. We believe our work is not directly comparable here since it seems this work does not address the causal structure of the underlying data-generating process. 
Having said that, we would be happy to understand the reviewer's suggestions, in case it implies some subtle connection we may have missed based on our cursory reading. --- Rebuttal Comment 1.1: Comment: Thank you for the answer. I will maintain my score.
Summary: The paper studies the identification of counterfactual queries. It studies the constraints induced by the causal graph: consistency, exclusion, and independence. The paper proposes a sound and complete method for testing independencies among counterfactual variables based on constructing a simplified graph (AMWN) and then testing d-separations on the graph. These constraints then lead to a set of sound and complete rules, called counterfactual calculus, for identifying counterfactual queries. Claims And Evidence: Yes, proofs for all theorems were included in the supplementary. Methods And Evaluation Criteria: Examples were provided in the paper to illustrate how the counterfactual calculus can be applied to identify counterfactual queries. Theoretical Claims: I reviewed the proofs in Appendix B.1, B.3, B.5, and C.2 in detail and skimmed through others. They look correct and rigorous to me. Experimental Designs Or Analyses: N/A Supplementary Material: Yes. I reviewed the proofs (Sec B), Discussion and further examples (Sec C), and Frequently asked questions (Sec E). Relation To Broader Scientific Literature: The paper proposes a sound and complete method for testing independencies for general counterfactual variables, which improves upon the previous methods including [Balke & Pearl, 1994] (not complete), [Shpitser & Pearl 2007] (not efficient), and [Richardson & Robins, 2013] (restricted to a single world). Moreover, the paper proposes sound and complete rules for identifying counterfactual queries, which complement the previous algorithmic method in [Correa et al., 2021a]. As mentioned in Appendix E, this is an important contribution since the rules (counterfactual calculus) bring the potential to solve problems under more general setups. Essential References Not Discussed: I'm not aware of any missing references. Other Strengths And Weaknesses: - The paper is clearly structured and well-written. 
- The supplementary contains a huge discussion on the connection to other related frameworks. - The paper also included the examples (end of Sec. 3), which makes it easier for me to see how the rules can be applied. Other Comments Or Suggestions: - Figure 3 is a bit confusing to me since the meaning of the two columns and edges is missing. - While this was shown in the Appendix, I think it is worth mentioning in the main paper that counterfactual calculus can remove interventions (do operations) whenever they can be removed using do-calculus. Otherwise, it's unclear how it is complete for identification given the observational distributions. Typos: - Page 2: "We base our analysis on the Structural Causal Model (SCM) paradigm *(?)Ch. 7]pearl:2k.*" - Page 6, "Algorithm 1: For each *edge* $V \in \textbf{V}$" -> variable? - Supplementary Definition D.2: "conditions ??", Section E Q1: "(see (?)Sec. 1.3]bar:etal2020 for details" Questions For Authors: Could you please provide some intuitions for the (counterfactual) ancestors in Definition 2.4? Maybe include it after the definition. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for reviewing our work, providing feedback, and giving suggestions. To answer your question about the intuition for counterfactual ancestors (Definition 2.4), they are the counterfactual variables that are causally relevant to the variable in question. This extends the idea that, in graphical terms, a variable can only affect another if the former is an ancestor of the latter. When counterfactuals are involved, some of those ancestors become irrelevant (by virtue of the exclusion operator). For example, in a simple chain graph such as $X \to Z \to Y$, $X$ is an ancestor of $Y$, but it is not a (counterfactual) ancestor of $Y_z$, because under $do(Z)$, $X$ cannot affect $Y_z$. Similarly, $Z$ is an ancestor of $Y$, but for $Y_x$ the counterfactual ancestor is $Z_x$, not $X$ or $Z$. As suggested, we will include a brief discussion of this after the definition; your suggestion is appreciated. As you point out, we did not mention in the main paper that ctf-calculus subsumes do-calculus. It is suitable for any task where the latter is complete, such as identification from observational distributions. We will add Lemma C.1 (in the supplemental material) “ctf-calculus subsumes do-calculus” and a brief discussion to the main text to make this fact clear to the reader. We see your point about Figure 3, and that the meaning of the columns and edges can be confusing in the middle of so many models where elements have a well-defined meaning. In this figure, the gray boxes represent data-generating mechanisms that transform a specific unit $U=u$ into a counterfactual event over the observable variables. Each rectangle is a copy of the mechanisms of the structural causal model. Depending on the counterfactual of interest, the mechanisms share some functions (e.g., $f_z$ and $f_y$ in (a)), redefine others ($f_x$ and $f_x'$ in (a)), or contain functions that require the evaluation of a separate set of mechanisms (e.g. 
$f_x'$ in (b)) to compute a nested counterfactual. Following your suggestion, we will provide a better description of the graphical elements of the figure in the manuscript, thank you.
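Returning to the chain-graph example above, here is a textbook-style derivation (our illustration, assuming the Markovian chain $X \to Z \to Y$ with no unobserved confounding; it is not an excerpt from the paper) showing how the three kinds of constraints combine to identify a counterfactual query:

$$P(Y_x = y \mid X = x') = P(Y_x = y) = \sum_z P(Y_x = y \mid Z_x = z)\, P(Z_x = z) = \sum_z P(Y_z = y)\, P(Z_x = z) = \sum_z P(y \mid z)\, P(z \mid x).$$

The first step uses independence ($Y_x \perp X$, since no back-door path exists), the third uses exclusion and consistency ($Y_{xz} = Y_z$, and $Y_x = Y_z$ on the event $Z_x = z$) together with $Y_z \perp Z_x$, and the last rewrites the single-intervention terms as conditional probabilities in this unconfounded chain.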
Summary: The paper is focused on the graphical modelling and the (symbolic) calculus of counterfactual inferences within the framework of Pearlian structural causal models. There are two major contributions: (i) a novel graphical representation called Ancestral Multi-World Networks (AMWN), which efficiently encodes counterfactual independencies implied by causal diagrams (d-separation is complete wrt AMWNs); (ii) a new set of inference rules called "counterfactual calculus" that extend Pearl's classical do calculus to counterfactuals (also sound and complete for those queries). Claims And Evidence: All the claims about the construction and soundness of the new graphical structure and the corresponding calculus are formally proved. Methods And Evaluation Criteria: This is a theoretical paper with no experiments. Theoretical Claims: I checked the proofs of the main results but not those of the preliminary lemmas. I believe the results are correct. Experimental Designs Or Analyses: This is a theoretical paper with no experiments. Supplementary Material: I only read the supp-mat, but didn't check the proofs of the lemmas. Relation To Broader Scientific Literature: The relation with the broader scientific literature is very clear. Essential References Not Discussed: I think all the relevant references are properly cited. Other Strengths And Weaknesses: This is a significant and influential paper for counterfactual inference. The work fills a gap in the earlier literature, and I might imagine lots of applications based on the calculus presented here. Of course, some results are pretty technical, but this is expected and the authors did an excellent job in giving additional information and insights in the supplementary material. Other Comments Or Suggestions: The sentence about "transforming nested ctf into non-nested one" might be misleading, as we have a sum in the transformation. There are a few typos in the references (ex. "(?)Ch. 7]pearl:2k"). 
Moreover, in Section 2.3, the test query is conditioned on Z intervened by setting X=x' and not Z' intervened by setting X=x. Some of the material in S2 is not entirely novel. The authors should make this more explicit in their revised version. Has the lack of completeness of twin nets been explicitly stated in the original paper of Balke & Pearl? As I understand it, the idea of computing a CTF query in the twin network after the surgery might therefore lead to wrong conclusions. If so, it would be nice to emphasize this point in the paper. Questions For Authors: - Ethical Review Concerns: - Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for reading our work, pointing out typos, and providing suggestions, which we will incorporate into the manuscript. In particular, we will clarify in the paper that the unnesting corresponds to a transformation that starts with a nested counterfactual and ends with an expression involving a counterfactual with one less level of nesting. As far as we can tell, the original paper describing the Twin Network method by Balke & Pearl does not claim the method’s completeness for using d-separation to assess counterfactual independencies. However, in Shpitser & Pearl (2007), the authors discuss the incompleteness of the method to motivate multi-networks. As you point out, “the idea of computing a CTF query in the twin network after the surgery might therefore lead to wrong conclusions”. For instance, in the example in Sec. 2.3, related to Figure 5 (a,b), some distinct nodes in the Twin Network may refer to counterfactual variables that are deterministically the same. In the example, conditioning on $Z_{x'}$ is the same as conditioning on $Z$. Because the Twin Network does not capture this constraint graphically, d-separation does not capture such independencies. We have also provided further discussion on this in section C of the supplemental material but will clarify this in the main text as well. Thank you!
Summary: The paper introduces an efficient graphical construction called Ancestral Multi-world Network, which is sound and complete for interpreting counterfactual independencies from a causal diagram through d-separation. Furthermore, the authors propose the counterfactual (ctf-) calculus, which provides three transformation rules for deriving counterfactual quantities based on the constraints encoded within the diagram. Claims And Evidence: Yes, the claims presented in the paper are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes Theoretical Claims: I have checked most of the proofs. However, for Lemma 2.3, Definition 2.4, and Theorem 3.1, providing a causal diagram would greatly enhance clarity and ease of understanding. Experimental Designs Or Analyses: There are no experimental designs or analyses. Supplementary Material: Yes, parts A and B in the supplementary material have been reviewed. Relation To Broader Scientific Literature: The paper generalizes Pearl's celebrated do-calculus from interventional to counterfactual reasoning. The proposed AMWN improves upon existing frameworks, including Twin Networks, Single World Intervention Graphs, and Multi-Networks. Additionally, the three rules introduced for ctf-calculus are more general than Pearl’s do-calculus and the Potential Outcome Calculus (po-calculus). Essential References Not Discussed: Although the paper mentions the k-plet Network, it appears that no relevant references are provided. Other Strengths And Weaknesses: **Strengths**: - Compared to Twin Networks and k-plet Networks, AMWN reduces complexity by requiring fewer variables for representing counterfactual scenarios. - Extends Pearl's do-calculus effectively to counterfactual reasoning. **Weaknesses**: Many definitions in the paper overly emphasize detailed explanations of variables.
Although the paper compares AMWN to Twin Networks and SWIG, detailed experimental results quantifying performance improvements are not provided. Other Comments Or Suggestions: In definitions and lemmas, clearly explaining each variable would be helpful; additionally, providing accompanying causal diagrams would significantly enhance readability and understanding. Questions For Authors: Could you provide a simple example illustrating a scenario where your method succeeds but the approaches listed in Table 1 fail—for instance, demonstrating that the Twin network algorithm is not complete, while yours is? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for reviewing our paper. To address your question about a scenario where our method succeeds but the other approaches mentioned in Table 1 fail, let us consider the question of whether the causal graph in Figure 4(b) implies that $(Y_{xw}, W_{x'}) \perp X \mid Z_{x'}$. Figure 5(a) shows a 3-plet (triplet) network (a natural generalization of the twin network to 3 worlds) for this graph and question. The variables in the query involve three submodels: $\mathcal{M}$, $\mathcal{M}_x$, and $\mathcal{M}_{xw}$, all depicted in the network sharing explicit unobservable variables. It seems that $X$ is d-connected to $Y_{xw}$ given $Z_{x'}$, because there is an active path $X \gets Z \gets U_z \to Z_{xw} \to Y_{xw}$. However, as discussed in the manuscript (around line 262) and due to exclusion restrictions, conditioning on $Z_{x'}$ is equivalent to conditioning on $Z$, meaning the corresponding separation statement holds. This implies that the Twin Network alone leads us to infer a wrong conclusion. Although this example involves three worlds (to take advantage of the figure in the paper), the same argument could be made with only two of them. For Single World Intervention Graphs (SWIGs), conditional independence among variables present in the SWIG can be read using d-separation, while the representation itself cannot capture cross-world restrictions on the counterfactual joint distribution. For instance, the separation of $X$ and $Y_x$ given $Z$ cannot be judged using the SWIG for Figure 4(a) and intervention $X = x$ because $Z$ does not appear in the resulting graph. These examples and some discussion on Shpitser & Pearl (2007) can be found in the supplemental material, section C. The k-plet Network is a concept we use in the paper to refer to the extension of the Twin Network method to k worlds (2-plet network equals Twin Network).
In the paper, we imply that when combined with the exclusion operator, the k-plet network method is complete, but it can further be optimized, leading to the discussion of AMWNs. We added proper discussion and clarification about this point in the paper. Thank you! Also, as you pointed out, several definitions in the paper emphasize explanations about the notation and counterfactual variables. We will try to make those definitions more concise, maintaining the essence of the definition while moving additional explanations elsewhere. --- Rebuttal Comment 1.1: Comment: The authors have addressed my concerns. Hence, I will maintain my current score and lean toward accepting the paper.
CursorCore: Assist Programming through Aligning Anything
Accept (poster)
Summary: The core problem this paper addresses is that existing coding benchmarks are incongruent with human development processes. The paper argues that an effective coding assistant should be able to use various types of information available to humans to make edits, rather than simply respond to constrained prompts. To this end, the paper proposes a new benchmark, APEval, that provides a testbed for AI assistants that can utilize a richer context stream and propose/predict modifications to the code. The benchmark is created by human annotation, and based on HumanEval. The paper proposes a pipeline called Programming-Instruct that uses an LLM parameterized by a persona to simulate a human coder. SLMs trained with this pipeline outperform other SLMs and approach the performance of frontier LLMs (GPT-4o). Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: I checked the details of the experiments in Section 6. Supplementary Material: No. Relation To Broader Scientific Literature: The primary novelty of the paper is the task formulation and the dataset. To my knowledge, there are no datasets that provide the edit stream amid a more naturalistic evaluation of coding assistants. In comparison, datasets like HumanEval (which APEval is based on) and SWE-Bench focus on a more restricted setting in which a single prepared instruction is provided and the final product is evaluated. Essential References Not Discussed: No. Other Strengths And Weaknesses: The paper provides a resource of significant utility to the community (a testbed for more naturalistic coding assistants), and the evaluation of the proposed pipeline is reasonable. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your review. We sincerely appreciate your recognition of our work.
Summary: Current code LLMs typically use only the code context and, optionally, user instruction as input, without considering the code’s development history. In this paper, the authors propose training a model to integrate various types of information - particularly editing history - along with the current context and user instruction to predict future code edits. For evaluation, they introduce a new benchmark, APEval, to evaluate code capabilities using different combinations of information. For training data, they generate 219K samples via Programming-Instruct, which uses LLMs to automatically synthesize training data from GitHub commits and online judge platforms. Using the collected data, they fine-tune multiple state-of-the-art LLMs and develop the CursorCore model series, which outperforms models of comparable size. Claims And Evidence: My main concern is that the paper’s central claim - that integrating editing history into LLM training data helps models to learn more effectively from extensive editing data and ultimately become better programming assistants - was not clearly demonstrated in the evaluation. While Table-4 shows that CursorCore models outperform their unadapted counterparts in H+C and H+C+U settings, this improvement could be attributed to the unadapted models’ unfamiliarity with the format of history (H). If I understand correctly, a direct comparison between C+U and H+C+U is not possible because they involve different subsets of problems. Therefore, to show the benefit of incorporating edit history in both training and testing, it is necessary to compare the CursorCore model’s performance with and without H on the same set of problems. For example, if we construct H for the C+U problems and use the new H+C+U for prediction, would this improve CursorCore’s performance over C+U?
Methods And Evaluation Criteria: Overall, the proposed methods and evaluation criteria make sense to me, except for the following issues: **Unclear data processing** (page 5) The statement, “let the LLMs judge whether each segment of M aligns with the user's purpose through principle-driven approaches”, is ambiguous. How can the LLM determine whether a code change aligns with user intent without first inferring that intent? Does this mean the LLM only filters out simple and obvious cases, such as private information, based on predefined principles? The prompt in Table 18 suggests that the LLM is instructed to first assess user intent. If that is the case, the statement should be revised for clarity. **Limited scope of benchmark** The APEval benchmark consists mostly of simple function-level problems, as they are extended from HumanEval and contain on average 139/31/19 lines of H/C/U. Since part of the training data come from GitHub commits, I would expect the fine-tuned model to perform well beyond function-level code generation. It would be better if the benchmark included more complex tasks, such as those from DS1000 (data science tasks) or ClassEval [a] (class-level program synthesis). [a] ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation. (ICSE 2024) update after rebuttal: The authors have conducted further experiments on Zeta, which is a non-contaminated, more realistic dataset. Theoretical Claims: N/A Experimental Designs Or Analyses: Overall, the experimental design and analysis are sound and valid. Supplementary Material: N/A (no supplementary material submitted) Relation To Broader Scientific Literature: Key contribution: - Integrating code commit and edit history into code LLM training using a carefully-designed format, making a novel contribution.
Prior work primarily focuses on using a snapshot of current code and user instruction as input, while the best way to represent and utilize edit history remains understudied. - A training dataset with edit history input collected from an automated data synthesis pipeline. The large dataset (219K) requires significant computation resources to construct and can benefit future research. - Extensive evaluation across models and edit representation formats. It conducts a comprehensive evaluation across a wide range of models (including closed-source and open-source, various sizes), along with extensive ablation studies to support its technical choices (such as different representations of code changes, integrating reasoning traces, and data selection ablation). Essential References Not Discussed: The paper discusses essential related works. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: * Please ensure the correct capitalization of tool names: * Line 326 and later paragraphs: “stack” should be “Stack”, “oss-instruct” should be “OSS-Instruct”, “editpackft” should be “EditPackFT”, “evol-instruct” should be “Magicoder-Evol-Instruct” (as per the citation), “sglang” should be “SGLang”, and similar adjustments for other tool names. Questions For Authors: 1. Can you provide further empirical evidence demonstrating that adding history indeed improves the model’s ability to assist with programming? (see Claims And Evidence) 2. From Appendix J, it appears the CursorCore models trained from Qwen2.5-7B perform worse than Qwen2.5-Coder-7B-Inst on EvalPlus and HumanEvalFix. This contradicts CursorCore’s strong overall performance on APEval, which also extends HumanEval. This is surprising to me - could the authors provide further discussion? It also makes me wonder if APEval has some inherent bias. For instance, the C-only setting might be problematic, as the current code alone may not sufficiently indicate the user’s next intention. 3.
Could you please clarify what is meant by “user instruction”? For example, in the context of HumanEval completion, do you transform the docstring into user instruction, or do you leave the docstring as the code context? Also, on page 2, it mentions “User instructions ... generated as feedback based on interactions with external environments (such as a code interpreter)”. Can you clarify what type of feedback can be considered as user instructions? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your review. Please see our detailed response below:

> Claims and Evidence & Q1

Thanks for the suggestion. We can provide the results of removing H from the samples in APEval that contain H, and compare them with the results obtained without removal. This ensures a fair comparison under the same conditions. The evaluation results are as follows:

|w/o H|w/ H|
|-|-|
|35.4 (29.9)|39.0 (32.9)|

The settings are consistent with those used in Appendix I. Additionally, the results presented in Appendix I further support it, as they provide evaluation results under different window lengths of H.

> “let the LLMs judge whether each segment of M aligns with the user's purpose through principle-driven approaches” is ambiguous

Thanks for the detailed review. We did prompt the LLMs to first analyze the user's intent before making a judgment, as shown in Table 18. We have corrected this in the revised version.

> It would be better if the benchmark included more complex tasks

Including more benchmarks is certainly beneficial! However, the prompts in DS1000 and ClassEval lack historical context. Recently, a new dataset called Zeta has become available on HuggingFace (released just last month, so it was not possible to include it before the ICML deadline). Zeta is constructed from real engineering use cases, and better reflects the distribution of tasks in real-world scenarios. It includes H (though not U). We have now conducted evaluations on this benchmark, and some results are as follows:

|Model|Metric|
|-|-|
|DS-Coder-1.3B-Base|18.2|
|DS-Coder-1.3B-Inst|42.4|
|CursorCore-DS-1.3B|45.5|
|Qwen2.5-Coder-7B|51.5|
|Qwen2.5-Coder-7B-Inst|54.5|
|CursorCore-QW2.5-7B|60.6|

The reported metric is the average accuracy over all evaluation samples. We use GPT-4o to assess the correctness of the generated results based on the assertion texts. Of course, we can also evaluate DS1000 and ClassEval to assess the model's ability to leverage U and C.
We choose to evaluate the performance of CursorCore on these benchmarks using the Inline and Tab modes, as they most closely resemble the benchmarks' original formats. Some results are as follows:

||DS1000|ClassEval|
|-|-|-|
|DS-Coder-1.3B-Base|16.2|13.0|
|DS-Coder-1.3B-Inst|20.7|13.0|
|CursorCore-DS-1.3B|21.2|17.0|

The reported metrics are averaged over all samples, with class-level generation evaluated using ClassEval. All generations are performed under the greedy decoding setting. Appendix J also includes another open-domain code editing benchmark (CanItEdit). These results collectively demonstrate the strong effectiveness of CursorCore.

> Performance of Qwen2.5 and Concerns about bias in APEval (Q2)

We have included a discussion of this issue in lines 1130 to 1135. Moreover, while instruction-tuned models are effective at aligning with prompts for program synthesis and code repair, they struggle to align with various forms of contextual information, such as historical context, which is commonly encountered in programming workflows. Therefore, the strong performance of Qwen2.5-Coder-7B-Inst on the synthesis and repair tasks, coupled with its weaker performance on APEval, is expected. Regarding your concern about bias in APEval, we have already taken this into consideration during the annotation process. As described in Appendix C, we ensure that the current code is sufficient to indicate the user's next intent.
In normal conversation templates of instruct models, this is commonly labeled as "user" or "instruction." To maintain compatibility and in consideration of a previous reviewer's suggestion, we choose to use it. For HumanEval, we employed different modes, as described in lines 1093 to 1128. In the Tab mode, we leave the docstring as part of the code context; in contrast, we transform the docstring into a user instruction. This setting helps align the evaluation inputs more closely with those encountered in real-world applications. > Typo We have fixed them. Thanks for your careful review. We appreciate your review and look forward to your response! --- Rebuttal Comment 1.1: Comment: I appreciate the authors for conducting additional experiments. The new results address my concerns, and I’ve raised my score.
Summary: The paper introduces a new family of models called CursorCore, which enables handling of historical context while making code generation or assistant response predictions. The authors also propose Programming-Instruct, which is a framework designed to collect data to train CursorCore with the historical code edit context. The authors evaluate the models on APEval, which is a modified version of the popular HumanEval benchmark to incorporate historical context, and show improvements over base models. Claims And Evidence: There are many claims made in Section 2.2 such as:

> Although they can utilize C, they fail to capture H, limiting the modeling of future changes in C, and are incapable of deleting or editing code. Although user instructions and reflection information can be used through comments and assert statements, this capability is weak and unstable.
>
> Prompt engineering can integrate some of this information into existing models, but the impact is limited.

However, these are not adequately supported in the evaluations conducted in Section 6. Specifically, the baseline of prompt-engineering to incorporate historical context is missing from the evaluations. Without this, the role of the targeted data curation and model training is unclear. Further, the work focuses primarily on incorporating historical edit context into an LLM’s input. However, no empirical justification for this is provided, and it is only assumed that this is needed. In fact, experiments in Section 6 show that (H, C) which includes historical context is consistently poorer in performance compared to (C) which does not. Generally speaking, I agree that historical context would be needed under certain scenarios but it seems that not sufficient work was done to identify these cases; I don’t think that HumanEval-like evals would benefit from historical user context. Finally, it is known that HumanEval suffers from contamination due to overlap with public sources [1].
[1] Matton, Alexandre, et al. "On leakage of code generation evaluation datasets." *arXiv preprint arXiv:2407.07565* (2024). Methods And Evaluation Criteria: 1. I think APEval does not thoroughly capture the relevance of historical user edits for code generation tasks. So while the fine-tuned models are better than baseline models, it is not clear if the proposed methods are really relevant for real-world use cases. I have elaborated this concern above. 2. The method used to collect user edits from GitHub and online judges is also not reflective of the kind of data that the authors are seeking. Specifically, the authors are looking to collect incomplete code snippets (as in Figure 1) but the sources for training data collection do not contain incomplete code snippets but rather earlier versions of complete code snippets. Theoretical Claims: N/A Experimental Designs Or Analyses: Please see my other comments. Supplementary Material: I reviewed Appendix C. I think that the authors should also include their human annotation rubric and results in Section 3.1 where they discuss the use of human annotations. Relation To Broader Scientific Literature: The paper attempts to extend the traditional code completion task [1, 2, 3] by introducing historical user edit context. I think this is an interesting direction, though the treatment of this problem in this work is not very thorough as I have discussed above. [1] Chen, Mark, et al. "Evaluating large language models trained on code." *arXiv preprint arXiv:2107.03374* (2021). [2] Athiwaratkun, Ben, et al. "Multi-lingual evaluation of code generation models." *arXiv preprint arXiv:2210.14868* (2022). [3] Guo, Daya, et al. "DeepSeek-Coder: When the Large Language Model Meets Programming--The Rise of Code Intelligence." *arXiv preprint arXiv:2401.14196* (2024). Essential References Not Discussed: The inclusion of historical user context will also benefit agents for code [1, 2].
I think that the work will greatly increase in value by also discussing its contributions in this context. [1] Jimenez, Carlos E., et al. "Swe-bench: Can language models resolve real-world github issues?." *arXiv preprint arXiv:2310.06770* (2023). [2] Wang, Xingyao, et al. "Openhands: An open platform for ai software developers as generalist agents." *The Thirteenth International Conference on Learning Representations*. 2024. Other Strengths And Weaknesses: 1. The paper is mainly well-written and easy to follow. 2. The authors should consider moving away from HumanEval-style of evaluations as this does not reflect real-world use cases. In fact, I’d like to know if the code examples in Figure 3 are actually from real-world sources; I’d be surprised if they are. Other Comments Or Suggestions: Line 21 - collect → collects; evaluate → evaluates Questions For Authors: 1. Under which scenarios do you think it is absolutely necessary to include historical user context? Have you considered experiments to demonstrate the value of historical context in the input to LLMs? 2. Can you comment on the distribution gap between real-world scenarios and HumanEval-style of benchmarks? These days most code generation evaluations follow either software-engineering (SWE-Bench), repository-level coding (CrossCodeEval) or very hard programming problems (LiveCodeBench). As such HumanEval is an outdated benchmark, and you should consider moving away from this, especially given the focus on historical user context. 3. Do you think it would be useful to conduct a study on utilizing historical user context for agentic workflows? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your review. Please see our response below: > Baseline of prompt-engineering to incorporate H is missing The reviewer may have misunderstood our experimental setup. Our prompt engineering baseline does include H, as shown in Tables 19 and 20. > No empirical justification for incorporating H. Experiments show that (H, C) which includes historical context is poorer in performance compared to (C) which does not & Q1 A direct comparison between the C and H+C subsets is impossible, as they involve different problem sets. Detailed APEval annotations are in Appendix C. The H+C subset is more challenging due to ambiguous or irrelevant history, making it harder for models to leverage and resulting in lower performance. While, as reviewer mFUu suggested, we can remove H from the H-included subset for direct comparison; please see our response to mFUu. Historical information is essential in many cases. For example, during variable renaming, a model without access to edit history may mistakenly revert the name. Similarly, if a user writes and later deletes a draft, only the edit history reveals its intention. > No sufficient work to identify these cases For the training data, we do not need to explicitly label which cases require historical information. In practice, the collected historical data naturally includes both helpful and unhelpful contexts. We must train LLMs to make predictions based on such inputs containing noise. For the benchmark construction, we identify cases where historical edits are necessary to infer the intended changes, shown in Appendix C. > APEval does not thoroughly capture the relevance of historical user edits & It would not benefits from historical context As noted above, we already considered this during annotation. We suspect the reviewer may have misunderstood our approach: we did not simply add historical edits to the original code. 
Since HumanEval docstrings include rich context like functionality and test cases, doing so would offer little benefit. Instead, we removed all docstrings and only showed annotators the signature and its purpose, making it significantly harder. In this setting, historical context becomes more helpful. > Collect user edits from GitHub and OJs is not reflective of data authors are seeking The reviewer is concerned that the collected historical edits may already be “complete,” with no need for further edits. Our data collection pipeline has taken it into consideration. We adopt the following perspectives and methods: 1. For data from OJs, we retain only the user’s first correct submission and submissions preceding it. 2. As shown in Section 4.2, we use LLMs to judge whether code changes align with user intent based on historical and current context. If the current version is complete and changes are version updates (e.g., adding author information), the LLMs are designed to filter out such updates, while meaningful edits are retained. > HumanEval-style of evaluations suffers from contamination & does not reflect real-world use cases & Q2 The base models’ technical reports confirm HumanEval contamination was addressed. We also cleaned our training data (Section 5.2). Besides, our benchmark inputs were significantly modified, which further reduces contamination risk. While HumanEval-style benchmarks differ from real-world scenarios in data length (function/file vs. full repositories) and use of third-party libraries/tools, we still chose to extend this benchmark because: 1. At the time of our experiments, no public benchmarks included all contextual elements like historical edits and user instructions. Thus, we chose to extend existing benchmarks as the most efficient way to assess models on basic tasks in this paradigm; directly creating a more complex benchmark might obscure performance differences. 2.
Repository-level benchmarks (e.g., SWE-Bench) target different goals than ours (see historical context for agent). Furthermore, a newly released dataset Zeta is constructed based on real-world engineering cases with historical edits; please see our response to mFUu. Examples in Figure 3 are from a real-world source; we chose the shortest one to aid clarity and fit space constraints. > Should also include human annotation rubric/results in Section 3.1 Thanks, we agree it is important. Due to space limits, it's in the appendix for now, but we will add the detailed discussion to Section 3.1 in the camera-ready version. > Historical context for agent & Q3 Including such context could help the agent better understand user intent and adapt to coding style! While, our work focuses on single-call LLM usage with strict latency requirements, unlike agentic workflows, which involve multi-turn interactions and prioritize end-to-end performance. Integrating context into such workflows is a great direction for future work. > Typo We have fixed it. Thanks for the careful review. We appreciate your review and look forward to your response!
Isolated Causal Effects of Natural Language
Accept (poster)
Summary: Effects of the attribute of a text on an outcome can be influenced by the surrounding linguistic context around this attribute. This motivates defining an "isolated" causal effect of this attribute (the "focal" text), where the surrounding context is marginalized (the "non-focal" text). A framework for estimation of this treatment effect is introduced, with assumptions, identification, a doubly-robust estimator and a sensitivity analysis. Notably, confidence intervals for the estimated isolated effect typically include the ground-truth isolated effect, in stark contrast to the vanilla natural effects from the literature. (EDIT: updated score based on authors' rebuttal) Claims And Evidence: I find that while the framework is interesting, there is a lack of justification for *why* these isolated effects should be considered, in contrast to the natural effect. In the Amazon dataset, the estimator recovers the true isolated effect better than the naive method that estimates the natural effect: sure, but the isolated effect was defined as such in the semi-synthetic dataset and the first estimator was inherently designed to estimate it. However, it is interesting to see that the true causal effect on the SvT dataset is the result of an RCT and the estimator developed by the authors recovers it unlike the naive method. This suggests that the RCT-given causal effect is the isolated effect and not the natural effect. Thus, if I did not miss anything, I find that the authors should expand on this and include more justification (which could be a simple citation of previous literature) on why the isolated effect matches such a "ground truth" effect, unlike the natural effect. Methods And Evaluation Criteria: Yes, they seem natural. Theoretical Claims: I checked the proofs and they look correct. Experimental Designs Or Analyses: The analysis in 5.2 seems to match the practice of the Chernozhukov et al. (2024) reference in the paper.
Supplementary Material: I reviewed the entire supplementary material. Relation To Broader Scientific Literature: I am not familiar with the causal inference for text part, but to the best of my knowledge, all the relevant literature on the treatment effect estimation and OVB part is cited. One caveat however: the information loss and the confounding error from the two Clivio et al. (2024) references seem to refer to the same quantity. Essential References Not Discussed: I'm not aware of such missing references. Other Strengths And Weaknesses: Strengths: - The paper is generally clear. - The perspective is interesting. Weaknesses: - There might be a lack of novelty in the treatment effect estimation part, as I do not see how the proposed framework differs from previous work in treatment effect estimation if one posits $T' = a(X)$, $X' = a^c(X)$, $Y'(t) = Y(T'=t, X')$ and performs classical treatment effect estimation, including OVB from Chernozhukov et al. (2024), using $X'$ as covariates, $T'$ as treatment, and $Y'(t)$ as potential outcomes. Notably, if I understood correctly, $a(X)$ and $a^c(X)$ are not estimated but given, although $a^c(X)$ is too high-dimensional for direct usage in estimation; thus "covariates" $X'$ and "treatments" $T'$ are given as in any vanilla treatment effect estimation task. Other Comments Or Suggestions: - IMO $C_Y$ and $C_D$ should be defined in the main text. - Authors might include more justification on why these robustness values actually measure... robustness. Questions For Authors: I refer to my two main concerns: (1) Can you include a justification for why the proposed isolated causal effects are more appropriate than natural effects, as outlined in the "Claims and Evidence" section? (2) Can you explain how your framework differs from vanilla treatment effect estimation, as outlined in the "Other Strengths And Weaknesses" section? Code Of Conduct: Affirmed. Overall Recommendation: 4
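The classical doubly robust estimation that this review compares the framework against can be sketched on simulated data. This is a minimal illustration of the standard AIPW estimator with hypothetical variable names — not the paper's implementation:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 3))                 # stand-in for covariates X' = a^c(X)
p = 1 / (1 + np.exp(-x[:, 0]))              # treatment probability depends on covariates
t = rng.binomial(1, p)                      # stand-in for binary treatment T' = a(X)
y = 2.0 * t + x @ np.array([1.0, -0.5, 0.2]) + rng.normal(size=n)  # true effect = 2

e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]     # propensity model
m1 = LinearRegression().fit(x[t == 1], y[t == 1]).predict(x)  # outcome model E[Y|T=1,X]
m0 = LinearRegression().fit(x[t == 0], y[t == 0]).predict(x)  # outcome model E[Y|T=0,X]

# AIPW score: consistent if either the propensity or the outcome model is correct
tau = np.mean(m1 - m0 + t * (y - m1) / e - (1 - t) * (y - m0) / (1 - e))
```

With both nuisance models correctly specified, `tau` lands close to the true effect of 2; the double-robustness property is that either model alone being correct already suffices.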
Rebuttal 1: Rebuttal: Thank you for your positive remarks describing our work as interesting, theoretically correct, and largely well-justified. We address questions and concerns below. **[Why isolated effects vs. natural effects?]** To illustrate why isolated causal effects are important for language, consider the effect of misinformation in online posts on readers’ voting decisions. Texts containing this type of misinformation also often contain other attributes likely to influence voting—e.g., politically inflammatory messaging. To determine whether action needs to be taken against misinformation, we would need to isolate its effect from the effects of other correlated attributes. Similarly, in scientific applications, researchers are also frequently concerned with the isolated effect and conduct randomized clinical trials to eliminate effects of potential confounders. In our experiments on the SvT dataset to estimate effects of weight loss medication, we use text data from social media posts, which often encode additional information related to weight loss like users’ exercise habits and diet. We find—as you highlight—that the isolated causal effect estimate from SenteCon-Empath corresponds to the ground truth from real-world clinical trials, while the natural effect estimate does not. This problem, also known as *aliased treatments*, is an important problem for social scientists studying the effects of language, and prior to our work the proposed solution was to carefully design a text experiment that constructs artificial texts that are not aliased (Fong & Grimmer, 2023). Our work expands the scope of possible research by allowing scientists to use naturally occurring text to study these kinds of effects instead of relying on resource-intensive text experiments with artificial data. We will include this additional motivation in the introduction and in the discussion of the SvT results. 
**[Novelty of proposed framework]** Treatment effect estimation with doubly robust estimators is of course well established in the causal inference literature. A main contribution of our paper is to conceive of, formalize, and parameterize natural language—a complex, unstructured, and high-dimensional form of data—in a way that allows us to use classical causal frameworks like doubly robust effect estimation, which are designed for much simpler forms of data. Indeed, leveraging classical causal estimators to solve new problems in new settings is an active area of research (e.g., Dudik et al. 2011; Schnabel et al. 2016; Azizzadenesheli et al. 2019; Byrd & Lipton 2019; Kallus et al. 2022). Our work introduces the concepts of isolated causal effects and non-focal language, and we define a new problem formulation, estimand, and practical estimation framework for these causal effects. Once language has been formalized in this way, we see the ability to draw on well-studied tools that already exist for estimating robust causal effects as a strength. Likewise, once language has been parameterized as focal and non-focal components, our OVB estimators indeed naturally follow the work on bounding OVB of Chernozhukov et al. (2024), which presents modern tools for this classical problem. Our main contribution here is to draw the connection between language representation and OVB, which to our knowledge has not previously been articulated. We explore the idea that language representation is lossy specifically in a way that is detrimental to isolated causal effect estimation, and we link that information loss to OVB. We use this connection to introduce OVB-based metrics for evaluating the sensitivity of isolated effect estimates to language representation that are distinct from traditional NLP methods for measuring information loss. **[Clivio et al. papers]** We will refine the language in our discussion of these references. 
**[$C_Y$ and $C_D$]** We originally left the mathematical definitions of $C_Y$ and $C_D$ in Appendix C.4 due to space constraints but will move them to the main paper. **[Robustness value]** Robustness in causal inference refers to the stability of the effect estimate under potential errors or omissions in modeling, assumptions, and/or data. Our robustness value measures how much OVB can be present before the point estimate of the causal effect changes from the correct to the incorrect sign. A larger robustness value indicates that the effect estimate better tolerates omission of key variables, suggesting that it is stable and therefore robust. We will emphasize this intuition when we define the robustness value in Section 3.3. We hope that our response addresses any concerns you may have and that you will consider revising your score. Thank you again! --- Azizzadenesheli et al. "Regularized Learning for Domain Adaptation under Label Shifts." ICLR 2019 Dudík et al. "Doubly robust policy evaluation and learning." ICML 2011 Schnabel et al. "Recommendations as treatments: Debiasing learning and evaluation." ICML 2016 --- Rebuttal Comment 1.1: Comment: I can see that I wrote my rebuttal comment as an official comment and not a rebuttal comment. But it does not change the result: Many thanks, this addresses my concerns. I will move to a clear Accept. --- Reply to Comment 1.1.1: Comment: Thank you again for your feedback and your response to our rebuttal. We appreciate it!
Summary: The paper introduces a framework for estimating isolated causal effects of language, which focuses on how specific linguistic attributes influence external outcomes while controlling for non-focal language to mitigate OVB. It uses doubly robust estimators to ensure unbiased estimation. The authors define three metrics (fidelity, overlap, and robustness value) to assess estimation quality. Experiments on a semi-synthetic dataset and a real-world dataset show that SenteCon-based representations yield the most reliable estimates, LLM-Prompting is effective but costly, and MPNet embeddings suffer from overlap violations. The study highlights the fidelity-overlap tradeoff to show the importance of balancing representations for robust causal inference on text. Claims And Evidence: I think most claims are well-supported by theoretical and empirical evidence. This tradeoff concept is valid. But this phenomenon may be general in standard causal inference and classical machine learning, where more features can hamper overlap. They do put it into a text-specific lens. Methods And Evaluation Criteria: I think the doubly robust estimator is a good choice for this task. Generally, the methods and evaluation make sense for the problem. Theoretical Claims: I checked equations in 3.1 and 3.2 and briefly looked at Appendix A. I am not aware of any issues. Experimental Designs Or Analyses: The experiment design is sound. It is expected that using SenteCon yields the most balanced results as it is a mixture of discrete and continuous embeddings. Adding more experiments using textual embeddings from common LMs might enhance persuasiveness. Supplementary Material: No. Relation To Broader Scientific Literature: This paper builds on some recent research like codebook functions (Egami et al., 2022), natural text experiments (Fong & Grimmer, 2023), and methods that integrate textual features with causal inference (Pryzant et al., 2021). 
Unlike prior studies that estimate the effect of a focal text feature while allowing correlated text attributes to influence outcomes, this work aims to isolate a binary linguistic attribute by controlling for the nonfocal text distribution. It also extends OVB analysis in high-dimensional settings by introducing fidelity and overlap metrics. These contributions enhance the rigor of text-based causal inference by providing new evaluation metrics and sensitivity checks. Essential References Not Discussed: No. Other Strengths And Weaknesses: This is not a big issue, but I have some concerns about the overlap assumption mentioned in the problem setting. For simple tasks, such as binary intervention and low-dimensional focal representation, positivity is easier to achieve. However, this may not hold for more complex tasks, and it might be related to the scale of the data. The subsequent experiments also show that overlap is violated under certain conditions. Other Comments Or Suggestions: - Based on the framework of this paper, researchers may expect deeper discussion or more varied experiments of how some representation learning methods can be specifically designed for text tasks to balance the tradeoff (for future work). - The authors could try some dimension reduction methods for high-dimensional language embeddings, as it is a common approach for handling text embeddings. Reducing dimensionality within a dataset might capture unique non-focal features specific to the data. Questions For Authors: - What is the exact outcome model? - Did you try non-synthetic Amazon experiments? Is the original data too random? Code Of Conduct: Affirmed. Overall Recommendation: 4
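The overlap concern raised in this review can be probed with a standard propensity diagnostic. A minimal sketch on synthetic data follows — the representation and attribute here are hypothetical stand-ins, and this is not the paper's overlap metric:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
ac = rng.normal(size=(2000, 10))                       # stand-in for non-focal a^c(X)
a = rng.binomial(1, 1 / (1 + np.exp(-2 * ac[:, 0])))   # focal attribute a(X), correlated

# If a(X) is nearly predictable from a^c(X), fitted propensities pile up near 0 or 1
e = LogisticRegression().fit(ac, a).predict_proba(ac)[:, 1]
frac_extreme = np.mean((e < 0.05) | (e > 0.95))        # fraction flagging (soft) violations
```

A sizable `frac_extreme` warns that some texts almost fully determine the focal attribute — the overlap failure mode this review anticipates for more complex tasks.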
Rebuttal 1: Rebuttal: Thank you for your feedback! We appreciate your positive remarks describing our work as theoretically and empirically well-supported, experimentally sound, and enhancing the rigor of text-based causal inference. We address further comments and questions below. **[Experiments with additional LM embeddings and dimensionality reduction]** We include additional results ([anonymous link](https://naturl.link/extra-embeds)) on the SvT dataset using embeddings from 3 common LMs: BERT, RoBERTa, and MiniLM. We find that the 3 new LM embeddings improve upon the previous MPNet over all metrics but still do not match SenteCon-Empath. - The MiniLM effect point estimate is almost as close to the true effect as SenteCon-Empath, but its robustness value remains middling due to poorer overlap. The BERT and RoBERTa point estimates are only slightly better than MPNet, and their robustness values are lower than MiniLM. - BERT and RoBERTa have much better overlap than either MPNet or MiniLM, but worse fidelity. Since BERT and RoBERTa are higher-dimensional than MiniLM, this is surprising but may be due to optimization of MiniLM for sentence-level tasks in the sentence-transformers library. Interestingly, this result suggests the fidelity-overlap tradeoff involves more than just the dimensionality of the representation. We also include the results of dimensionality reduction using SVD on the MPNet and RoBERTa embeddings, which are the two most complex representations. - For both representations, dimensionality reduction improves both the robustness value and the point estimate, highlighting the potential of data-based post-processing of representations. - In fact, both point estimates move from outside the true effect confidence interval to inside the true effect confidence interval after SVD. We thank you for suggesting these experiments and look forward to including them in the paper. 
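The SVD post-processing of embeddings described above can be sketched as follows. The embedding matrix and component count are hypothetical, and scikit-learn's `TruncatedSVD` is assumed here as one standard implementation:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 768))   # stand-in for MPNet-sized sentence embeddings

svd = TruncatedSVD(n_components=32, random_state=0)
reduced = svd.fit_transform(emb)     # lower-dimensional non-focal representation
```

The reduced matrix would then replace the raw embeddings when fitting importance weights and outcome models, trading some fidelity for better overlap.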
**[Overlap assumption]** It is true that overlap violations can be a concern in language settings (D’Amour et al. 2017). As we mention in our paper, high-dimensional representations with good fidelity are prone to overlap violations, and so we rely on our OVB metrics, overlap and robustness value, to warn us when such violations may be occurring. In this paper, we assume that strict overlap holds (i.e., that there is *some* non-zero chance of each $a(X)$ condition given $a^c(X)$). In our experiments, we have one result that suggests a “soft” overlap violation (i.e., the probability of one $a(X)$ condition is very small but *not* zero given $a^c(X)$): the MPNet result on SvT, where the overlap metric is fairly large. We also have a negative overlap value for Empath on the same dataset; however, for this metric we use a doubly robust estimator, and doubly robust estimates do not necessarily satisfy criteria like being non-negative in noisy settings. Therefore, this negative overlap value (which is small in magnitude) may not necessarily be due to an overlap violation. **[Representations for balancing fidelity-overlap tradeoff]** We agree that the fidelity-overlap tradeoff naturally suggests learning representations that optimize the tradeoff. Experiments that achieve this require further theoretical analysis and empirical work and are outside the scope of this paper—this is in fact the basis of ongoing research—but we are happy to expand on this topic when discussing future work in our conclusion. **[Exact outcome model]** For Amazon, the outcome model is a linear regression model of the semi-synthetic $Y$ fit over the LIWC categories that form $a(X)$, $a^c(X)$. For SvT, the outcome model is a gradient boosting classifier fit over $a(X)$ and each named non-focal language representation. 
**[Non-synthetic Amazon experiments]** We did not use the original Amazon data because the true causal effects of the text interventions are unknown, so there is unfortunately no way to validate effect estimates on this data. The SvT dataset allowed us to validate our methods in a natural setting in which the true effect *was* known through an accompanying external clinical trial. Since such trials are highly resource-intensive, for Amazon we rely on the more common practice of obtaining true effects by generating a semi-synthetic dataset (Veitch et al. 2020; Pryzant et al. 2021). If you are interested in experimental results from a more complex semi-synthetic Amazon dataset, please see our response to reviewer gbt6 titled [Nonlinear $Y$, $a(X)$, $a^c(X)$ relationship]. We hope that our response addresses any concerns you may have and that you will consider revising your score. Thank you again! --- D’Amour et al. "Overlap in observational studies with high-dimensional covariates." J. Econom. 221.2 (2021): 644-654 Pryzant et al. "Causal Effects of Linguistic Properties." NAACL 2021 Veitch et al. "Adapting text embeddings for causal inference." UAI 2020 --- Rebuttal Comment 1.1: Comment: Copied from my official comment: Thanks for the authors' responses, which address many of my concerns. I'd like to raise my score. --- Reply to Comment 1.1.1: Comment: Thank you—we appreciate your time and feedback!
Summary: Based on the principle of omitted variable bias, this paper proposes a framework for assessing the sensitivity of isolated effect estimates to bias from the non-focal language outside of the intervention, and for evaluating the quality of isolated effect estimation along the two key dimensions of fidelity and overlap. Claims And Evidence: The writing of this work is unclear and contains a large number of technical terms, such as language attributes and focal language, as well as specialized terminology in causal estimation, making it challenging for readers outside the field to clearly understand the paper. Additionally, the motivation of the study is not well-articulated—why is it necessary to study the isolated causal effect? Could the authors further elaborate on this? Methods And Evaluation Criteria: 1. Could the authors further clarify their methodological contributions? From the paper, both the doubly robust construction and effect estimation seem to be a combination of existing techniques (please correct me if I’m wrong). It appears that the authors have merely applied existing methods to a new setting. 2. Could the authors clarify the differences and connections between the method proposed in this paper and approaches that use mediation analysis to analyze specific effects? For example, in [1,2], mediation analysis is used to study the effect of LLMs. Similarly, one could consider treating focal language as the variable of interest and non-focal language as a mediator, as they together constitute the input text, while the goal is to isolate the effect of focal language on the output. This would require a strict disentanglement of the mediating effect introduced by non-focal language. Could the authors further elaborate on how their method differs from the mediation-based approaches mentioned above? [1] Alessandro Stolfo, Zhijing Jin, Kumar Shridhar, Bernhard Schölkopf, and Mrinmaya Sachan. A causal framework to quantify the robustness of mathematical reasoning with language models. 
arXiv preprint arXiv:2210.12023, 2022. [2] Han Y, Xu L, Chen S, et al. Beyond Surface Structure: A Causal Assessment of LLMs' Comprehension Ability. arXiv preprint arXiv:2411.19456, 2024. Theoretical Claims: This paper does not include additional theoretical explanations for the proposed method. Could the authors further clarify certain formulas, such as the importance weight in Line 144 and the estimand $\tau^*$ in Line 15? How are these formulas derived—are they based on existing methods or newly developed? Since the reviewer is not familiar with this field, the appearance of these formulas feels quite abrupt. If these formulas are based on existing methods, it would be helpful to indicate that. Experimental Designs Or Analyses: How accurate are the estimated isolated effects—and how do the proposed metrics such as confidence intervals, fidelity, and overlap perform—in more complex situations, such as when $Y$ and $a(X)$, $a^c(X)$ exhibit a nonlinear relationship or when $a(X)$ is not a simple binary variable? Additionally, I noticed that in Figure 3, the confidence intervals of the proposed method on the SvT dataset are quite large, with only the SenteCon-Empath case coming close to the true isolated effect. Does this suggest that the method may not be robust enough across different tasks? Supplementary Material: I reviewed and checked the necessary appendices related to the main text, and found no additional issues. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: The reviewer is not familiar with the relevant work in this field and therefore cannot provide further comments. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful questions and feedback! We address your comments and clarify points below. **[Technical language]** We will revise the paper to more clearly introduce and contextualize technical terms. **[Motivation for isolated effects]** Due to character limits, please see our response to reviewer ZnfT titled [Why isolated effects vs. natural effects?]. **[Methodological contributions]** Likewise, please see our response to reviewer ZnfT titled [Novelty of proposed framework]. **[Derivation of formulas]** The importance weight in line 144 and estimand $\tau^*$ in line 158 draw from the transportability literature and doubly robust estimation literature, respectively. In Appendix A.1.1, we provide the full derivation of the importance weight and show the equivalence of $\tau^*$ in line 158 to $\tau^*$ in Definition 2.1. We initially left these derivations in the appendix due to space constraints. We agree some technical details may have felt abrupt as a result and will move key derivations to the main paper. **[Mediation analysis]** We appreciate this insightful question. Though there are parallels in intuition between isolated effects and direct effects in mediation analysis, they are different technical problems. Our setting is *not* a mediation setting: - In mediation analysis, such as in the papers you mention, interventions (e.g., math operations) cause mediators (e.g., the text “surface”), and mediators cause outcomes. Only the intervention can be controlled directly in the study, while the mediator cannot. - In our setting, the entire text is a high-dimensional treatment where both the focal language $a(X)$ and the non-focal language $a^c(X)$ can be directly intervened on by randomizing the text. $a^c(X)$ is not caused by $a(X)$ and *can* be directly controlled, so it is not a mediator. **[Nonlinear $Y$, $a(X)$, $a^c(X)$ relationship]** $Y$ and $a(X)$, $a^c(X)$ have a linear relationship only in the Amazon data setting. 
Our SvT dataset results demonstrate that the true effect can still be recovered even when the relationship between the text and outcome is nonlinear and complex. To further explore this, we conduct additional experiments ([anonymous link](https://naturl.link/nonlinear) to results) on a new version of the Amazon dataset where $Y$ and $a(X)$, $a^c(X)$ have a nonlinear relationship. We fit a nonlinear gradient boosting model over $a(X)$, $a^c(X)$ on the true helpful vote count, then use the predicted vote count (+noise) as our new semi-synthetic $Y$. Using MLPs with <4 layers to fit importance weights and outcome models, we follow the protocol in Sections 4.2.2 and 5.1 to estimate effects for the attributes featured in Fig. 2. The results of these additional experiments are consistent with the previous Amazon results: - For both *home* and *netspeak*, as # dimensions increases (i.e., as # omitted variables decreases), the effect point estimate grows closer to the ground truth, and overlap becomes worse while fidelity improves. There is slight variability in these trends, as we would expect from the extra noise from more complex models. - For *home*, robustness value increases with dimensionality, suggesting that gains in fidelity outweigh losses in overlap. For *netspeak*, robustness value decreases sharply after 6 features, suggesting that worse overlap outweighs improved fidelity. This is especially true on the 9th feature, where the overlap sharply worsens, indicating that $a(X)$ can be almost fully predicted from $a^c(X)$. We thank you for this question and look forward to including these results in the paper. **[Complex $a(X)$]** If $a(X)$ is continuous rather than binary, this introduces a new causal problem where it is common to estimate a dose-response curve or incremental effect instead of an average treatment effect. As the study of continuous treatment effects is its own active area of research in causal inference (e.g., Kennedy et al. 2017; Brown et al. 
2021), we believe it is beyond the scope of this paper. **[SvT confidence intervals (CIs)]** Wide CIs suggest that the estimates from the data are noisy (i.e., imprecise) but not necessarily that they are not robust. Robustness in causal inference refers to the stability of the effect estimate under potential errors or omissions in modeling, assumptions, and/or data. Our OVB analysis in Fig. 4 suggests that the SenteCon-Empath estimate (which does have wide CIs) *is* robust in this sense since the point estimate remains correctly positive even when we compromise the estimator by removing important features. We hope our response addresses any potential concerns and that you will consider revising your score. Thank you again! --- Brown et al. "Propensity score stratification methods for continuous treatments." Stat. Med. 40.5 (2021): 1189-1203 Kennedy et al. "Non-parametric methods for doubly robust estimation of continuous treatment effects." J. R. Stat. Soc. Ser. B Methodol. 79.4 (2017): 1229-1245
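The feature-removal robustness check described in this rebuttal — confirming that the sign of the effect estimate survives omission of informative covariates — can be sketched with a simple plug-in regression estimator. Data and names are synthetic and hypothetical; this is not the paper's OVB-based robustness value:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 3000
x = rng.normal(size=(n, 5))                              # candidate covariates
t = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))          # treatment driven by x[:, 0]
y = 1.5 * t + x @ np.array([2.0, 1.0, 0.5, 0.1, 0.0]) + rng.normal(size=n)

signs = []
for drop in range(5):                                    # omit one covariate at a time
    keep = [j for j in range(5) if j != drop]
    feats = np.column_stack([t, x[:, keep]])
    tau_hat = LinearRegression().fit(feats, y).coef_[0]  # coefficient on treatment
    signs.append(np.sign(tau_hat))
```

Here the true effect is positive; dropping the confounding covariate biases the point estimate, but its sign stays positive — the stability property the rebuttal appeals to.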
Variational Control for Guidance in Diffusion Models
Accept (poster)
Summary: This paper introduces DTM - Diffusion Trajectory Matching - a novel and general guidance approach for generic diffusion models. The idea is to add a guidance vector at each time point, u_t, such that it serves the given measurements while also making sure that the original diffusion trajectory does not deviate much. The vector u_t is found by minimizing a cost function that takes the above two forces into account, and then it is added as the guidance for the current stage. When simplified, this approach amounts to a guidance that is similar to DPS (but avoiding the derivative through the denoiser), but with a regularization that forces proximity between the conditional means (\mu(x_t) versus \mu(x_t+\gamma*u_t)). This last result leads to a non-linear computation of u_t, termed NDTM (Non-Linear Diffusion Trajectory Matching). A further simplified version of this is derived for DDIM, penalizing the length of u_t and the difference between the two denoiser outputs. Various experiments demonstrate the superiority of this approach over the alternative methods in handling inverse problems, linear and non-linear ones. Claims And Evidence: Excellent paper Methods And Evaluation Criteria: Perfect. Theoretical Claims: All are correct. Experimental Designs Or Analyses: Very well designed experiments Supplementary Material: Read it - no comments. Relation To Broader Scientific Literature: Explains well the surrounding papers, including stochastic control methods that inspired this work. Essential References Not Discussed: None. Other Strengths And Weaknesses: Nothing to add. This paper is pleasant to read. Other Comments Or Suggestions: Looking at Eq. (17), there are 3 forces that are involved in the creation of u_t: 1. Forcing the norm of this vector to be small (||u_t||_2) 2. Forcing the trajectory not to deviate much by forcing the two denoising results to be close, and 3. 
Forcing the measurements via some sort of projection step on the denoised image. Of these three, #2 complicates things and makes the method non-linear and more complicated for optimization. Therefore, an ablation that shows the effect of excluding #2 is necessary. Perhaps #1 would be sufficient to regularize the guidance. Furthermore, for linear inverse problems, u_t will have a closed-form solution if #2 is omitted. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
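The three forces the review identifies can be illustrated with a per-step control optimization on a toy *linear* denoiser, so that the gradient of the cost is available in closed form. All shapes, weights, and coefficients here are hypothetical; the actual NDTM uses a neural denoiser and automatic differentiation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 4
W = rng.normal(size=(d, d)) / np.sqrt(d)   # toy linear "denoiser" D(x) = W x
A = rng.normal(size=(m, d))                # linear measurement operator
x_t = rng.normal(size=d)                   # current diffusion state
y = rng.normal(size=m)                     # observed measurements

gamma, lam1, lam2, lr = 0.1, 0.01, 0.1, 0.01
u = np.zeros(d)
for _ in range(500):
    r = A @ W @ (x_t + gamma * u) - y                   # residual for force #3
    grad = (2 * gamma * W.T @ A.T @ r                   # #3: fit the measurements
            + 2 * lam1 * u                              # #1: keep ||u_t||_2 small
            + 2 * lam2 * gamma**2 * W.T @ W @ u)        # #2: keep D(x+γu) near D(x)
    u -= lr * grad

x_guided = x_t + gamma * u                 # state handed to the usual DDIM update
```

Because the sketch starts from u = 0 and descends a convex quadratic cost, the guided state fits the measurements better than the unguided one; dropping #2 (and #1) here would leave a cost whose minimizer has a closed form, as the review suggests for linear problems.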
Rebuttal 1: Rebuttal: We thank the reviewer for their encouraging feedback about experiment design and readability. We address specific questions below: > Of these three, #2 complicates things and makes the method non-linear and more complicated for optimization. Therefore, an ablation that shows the effect of excluding #2 is necessary. Perhaps #1 would be sufficient to regularize the guidance. Furthyeremore, for linear inverse problems, ut will have a closed form solution of #2 is omitted. We would like to thank the reviewer for their insight. We agree that such an ablation could be useful to study the impact of different terms in the loss function in Eq. 17. Empirically, we find the choice of weighting to be largely task-specific (see Tables 7 and 8). --- Rebuttal Comment 1.1: Comment: The grade remains as is after the response.
Summary: The paper introduces a framework called Diffusion Trajectory Matching (DTM) for guiding diffusion models without requiring retraining. This approach is rooted in variational inference and optimal control, allowing for guidance by optimizing control signals based on terminal costs. The proposed method, Non-linear Diffusion Trajectory Matching, integrates with existing samplers like DDIM and targets performance on linear and non-linear inverse problems by optimizing diffusion trajectories toward a desired outcome, showing improvements in metrics like FID over some baselines. Claims And Evidence: The claims made in the submission are generally correct. Methods And Evaluation Criteria: The evaluation tasks make sense, but the comparisons are limited. Theoretical Claims: The paper doesn't include theorems. The Proofs section in the submission consists essentially of formulations rather than rigorous mathematical proofs. Experimental Designs Or Analyses: Experimental comparisons with more recent methods are lacking. Please refer to Essential References Not Discussed. Supplementary Material: Yes. The appendices. Relation To Broader Scientific Literature: The paper discusses the relation of its idea with Classifier Guidance in Diffusion Models and Optimal Control. The key contributions of the paper relate well to the broader scientific literature by extending the capabilities of diffusion models in inverse problem-solving. Essential References Not Discussed: The paper lacks a comprehensive literature review. There are many guidance methods in diffusion which the authors did not discuss or compare, such as [1-8]. [1] Unlocking guidance for discrete state-space diffusion and flow models. [2] Monte carlo guided diffusion for bayesian linear inverse problems. 
[3] Amortizing intractable inference in diffusion models for vision, language, and control [4] Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding [5] Steering Masked Discrete Diffusion Models via Discrete Denoising Posterior Prediction [6] Alignment without Over-optimization: Training-Free Solution for Diffusion Models [7] Diffusion Model Alignment Using Direct Preference Optimization [8] DyMO: Training-Free Diffusion Model Alignment with Dynamic Multi-Objective Scheduling Other Strengths And Weaknesses: Strengths: 1. The integration of variational control into the guidance of diffusion models is well-founded. 2. Demonstrated improvements on tasks such as non-linear and blind inverse problems show broader practical applications. Weaknesses: 1. The paper does not provide code for verification. 2. For the comparison with DPS, the paper seems to fix the hyperparameters. Given that the method's own performance heavily depends on the correct setting of hyperparameters such as guidance weight and terminal cost weight, the hyperparameters of DPS should also be selected from a search space. 3. The paper lacks a comprehensive literature review. There are many training-free guidance methods in diffusion which the authors did not discuss or compare. Please refer to Essential References Not Discussed. 4. While the paper discusses non-linear control benefits, comparisons primarily focus on linear control setups, possibly overlooking the full spectrum of non-linear dynamics. 5. The method assumes differentiable objectives, which may not be applicable or ideal for all types of generative tasks. 6. The paper lacks an Impact Statement, which is required. Other Comments Or Suggestions: Please refer to the above. Questions For Authors: Please refer to the above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. Please find our responses below. > The evaluation tasks make sense but the comparisons [...] lack [...] more recent methods. Please refer to Essential References Not Discussed. [...] authors did not discuss or compare, such as [1-8]. This was also a common point from Reviewers DxJn, 113Q so we address it jointly here. **Re. Generalization to other tasks**. Our framework indeed generalizes to non-linear tasks such as those pointed out by the reviewers. We demonstrate this by simply adapting the terminal cost in our method for Style and text-conditioned guidance. In more detail: **Style Guidance**: Following the experimental setup in MPGD [He et al.], we evaluate our method on Style Guidance using Stable Diffusion (see [Figure 1](https://imgur.com/a/wEDqoaR)). We further quantitatively compare our method on 1k (reference image, prompt) pairs with FreeDOM [Yu et al.] and MPGD in [Table 1](https://imgur.com/a/jTn0hQ2). Our quantitative and qualitative results suggest that NDTM outperforms competing baselines on prompt (CLIP score) and style adherence (Style Score) while exhibiting better perceptual quality. **Text Guidance with CLIP**: Following the experimental setup in MPGD, we present qualitative results on text-conditional generation on the CelebA dataset (see [Figure 2](https://imgur.com/a/5jlnDLo)). NDTM can be used to generate samples that adhere well to simple and complex prompts. Therefore, we emphasize that our method can be applied to more general guidance scenarios. We will include additional qualitative results for these tasks with their experimental settings in a revised version of our paper. Note that we selected the best hyperparameters for the selected baselines, just like for all other experiments. **Re. Limited Empirical Comparisons**. 
Based on the suggestion from Reviewer 113Q, we compare with two additional baselines, MPGD and RB-Modulation [Rout et al.], on the non-linear deblur and 4x superresolution tasks for the FFHQ and ImageNet datasets. NDTM outperforms these additional baselines on these tasks. See [Table 2](https://imgur.com/a/Y8Mlvw0). We also thank the reviewer for the interesting pointers, which we will include in the related work section for broader context. However, they primarily focus on discrete diffusion models or finetuning with human preference data—directions that differ from the scope and goals of our paper. In detail: **Re. Discrete Diffusion Models.** Our work focuses on continuous state space diffusion models (see Section 2) and assumes a differentiable terminal cost (see line 120, Right Column in the main text). We think that optimal control provides an interesting approach to guiding discrete diffusion models and leave this open to future work. **Re. Finetuning with human preference data.** In this work, we only work with training-free methods for conditional generation tasks and not with methods that finetune diffusion models with additional human preference data. Therefore, comparisons with baselines like Diffusion-DPO are outside the scope of this work. > The paper does not provide code for verification We will make our code publicly available upon acceptance. Please refer to the pseudocode in Algorithm 1 in the main text and the corresponding hyperparameters in Tables 4,5 and 6 in the Appendix. > For the comparison with DPS [...] should also be selected from a space. We tuned all baselines (including DPS) for the best performance for all tasks and datasets (see Appendix B.2). > While the paper discusses non-linear control [...] of non-linear dynamics. We respectfully disagree, as there seems to be a misunderstanding: We both consider non-linear inverse problems and allow for non-linear control. 
In detail, while we do update the state $x_t$ additively with the control $u_t$, the updated state ($x_t + \gamma u_t$) is directly injected in the diffusion denoiser (modeled as a non-linear neural network). Therefore, the guidance process indeed depends non-linearly on the control. We acknowledge that to update the state $x_t$ with the control $u_t$, there could be multiple ways to achieve this. However, we don’t explore the full space of such possible combinations due to space and time constraints which we also highlight in line 175, Right column. > The method assumes differentiable objectives [...] generative tasks. We clearly state our assumptions of a differentiable terminal cost in Line 120, Right Column in the main text. Therefore, while the argument that our method may not be ideal for all types of generative tasks is correct, we never claim this in our work. We agree that extending our method to discrete diffusion models and discrete terminal costs can be an interesting direction for further work and will clarify this point in more detail in Section 6 of our revised paper. > The paper lacks Impact Statements, which is required. We have provided the impact statement at line 435 (Right Side) in the main text.
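The rebuttal's point above, that an additively applied control still yields a non-linear dependence once the perturbed state passes through the denoiser, can be checked with a toy example (the stand-in denoiser below is hypothetical, not the authors' network; any non-linearity illustrates the point):

```python
import math

# Toy sketch: the control enters additively as x_t + gamma * u_t, but that sum
# is fed through a non-linear denoiser, so the guided output depends
# non-linearly on the control u_t.

def toy_denoiser(x):
    return math.tanh(x)  # stand-in for the pretrained (non-linear) network

def guided_output(x_t, u_t, gamma=0.2):
    return toy_denoiser(x_t + gamma * u_t)

x_t = 0.5
# If the dependence on the control were linear, the effects of two controls
# would superpose exactly; with a non-linear denoiser they do not.
effect = lambda u: guided_output(x_t, u) - guided_output(x_t, 0.0)
lhs = effect(1.0 + 1.0)
rhs = effect(1.0) + effect(1.0)
print(abs(lhs - rhs) > 1e-3)  # True: superposition fails
```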
Summary: In this paper, the authors formulate the diffusion posterior guidance problem as a variational control problem and propose a novel training-free framework for this problem. They introduce a new algorithm for diffusion guidance, and their framework also unifies many existing training-free diffusion guidance algorithms. They have conducted experiments on the FFHQ-256 and ImageNet-256 datasets to show superior performance in comparison to several popular baselines.
Claims And Evidence: I do find the statement that the authors make about the "broad generalizability" of their method to be overclaiming. In particular, the authors claim in the introduction that prior works which are based on optimal control only "focus on a restricted class of control problems", and their method "adapts well to diverse downstream tasks". However, in their experiments, the authors only show evidence on simple inverse problems like deblurring and inpainting. The only non-linear task that the authors explore is non-linear deblurring, and none of the more complicated non-linear tasks such as style guidance generation, face identity guidance generation, or text-conditioned generation (all of which are widely studied [1,2,3,4]). Moreover, the authors claim to be able to solve "(blind) non-linear inverse problems". However, the only blind inverse problem that they attempt to solve is linear deblurring. (They assume that there exists a blurring kernel and apply linear convolution with this kernel.) As a result, I find the claim about the diversity and the generalizability of the method to be inaccurate and not well justified by the evidence provided.
[1] Bansal et al. Universal Guidance for Diffusion Models. ICLR 2024.
[2] Yu et al. FreeDoM: Training-Free Energy-Guided Conditional Diffusion Model. ICCV 2023.
[3] He et al. Manifold Preserving Guided Diffusion. ICLR 2024.
[4] Rout et al. RB-Modulation: Training-Free Personalization of Diffusion Models using Stochastic Optimal Control.
ICLR 2025.
[5] Ye et al. TFG: Unified Training-Free Guidance for Diffusion Models. NeurIPS 2024.
Methods And Evaluation Criteria: The authors have provided sufficient justification and proposed mostly valid evaluation criteria for the problem at hand. However, since the evaluations are conducted with only 1000 samples, FID is not an appropriate metric for these experiments (FID is highly biased when using a small number of samples [1]). Since the authors have also included the KID results, which is the more appropriate metric, I am only mentioning this because the authors have emphasized their performance gain based on FID results in their abstract.
[1] Binkowski et al. Demystifying MMD GANs. ICLR 2018.
Theoretical Claims: I have two major concerns regarding the theoretical aspect of this paper.
1. The authors choose addition as the aggregation function to incorporate control for calculating the posterior mean. However, this design choice is not justified at all. As a matter of fact, without any constraint on $u_t$, the diffusion model may not even be well defined on $x_t + \gamma u_t$, because all noisy samples are concentrated on certain intermediate manifolds during the diffusion process [1]. It would be great if the authors could provide further justification and clarification on the assumptions made when choosing the aggregation function of the control.
2. The authors mentioned that the Tweedie's estimate in DPS causes high computational requirements and high sensitivity to the hyperparameters. However, in the proposed approach, the authors also use Tweedie's estimate for $\hat{x}_0$. I wonder why the authors consider their method to be exempt from the same limitation.
[1] Chung et al. Improving diffusion models for inverse problems using manifold constraints. NeurIPS 2022.
Experimental Designs Or Analyses: My concerns about the experimental design are elaborated in the sections "Methods And Evaluation Criteria" and "Essential References Not Discussed".
Supplementary Material: I have reviewed the appendix of this paper, and I would suggest that the authors provide a qualitative comparison for all tasks, all datasets, and all baselines in their appendix. So far the qualitative comparison is not comprehensive.
Relation To Broader Scientific Literature: One of the biggest strengths of this paper is that the authors do provide a unified framework that generalizes diffusion posterior sampling methods to an optimal-control-inspired formulation. Solving inverse problems with training-free diffusion guidance is also a vibrant and important research field, as it is closely related to image processing, broader signal processing, and many medical applications.
Essential References Not Discussed: The most prominent concern I have about this paper is the lack of comparison with relevant literature. As I have mentioned earlier in this review, the authors fail to mention and/or compare a large portion of related works in training-free diffusion guidance, many of which achieve better performance than the baselines selected by the authors. In particular, here is a list of papers that the authors should consider comparing to:
[1] Bansal et al. Universal Guidance for Diffusion Models. ICLR 2024.
[2] Yu et al. FreeDoM: Training-Free Energy-Guided Conditional Diffusion Model. ICCV 2023.
[3] He et al. Manifold Preserving Guided Diffusion. ICLR 2024.
[4] Rout et al. RB-Modulation: Training-Free Personalization of Diffusion Models using Stochastic Optimal Control. ICLR 2025. (The authors have mentioned this work in their paper but failed to compare with this method.)
[5] Ye et al. TFG: Unified Training-Free Guidance for Diffusion Models. NeurIPS 2024.
[6] Chung et al. Improving diffusion models for inverse problems using manifold constraints. NeurIPS 2022.
Other Strengths And Weaknesses: I have elaborated the strengths and weaknesses of this paper in the previous sections.
Other Comments Or Suggestions: N/A Questions For Authors: The authors choose $T=200,N=15$ as the hyperparameters in their experiments. So in total there will be $200\times(15+1)=3200$ diffusion NFE required for each sample. However, their method is significantly faster than DPS, which only requires $1000$ NFE. Can the author explain why they can achieve this speedup? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and helpful feedback. Please see our response below. > I do find the statement [...] the authors claim in the introduction [...] that prior works which are based on optimal control [...] (all of which are widely studied [1,2,3,4]) Our framework indeed generalizes to non-linear tasks like Style and text guidance. Due to space constraints, please see our response to Reviewer UY5V (first point). > Moreover, the authors claim to be able to [...] However, the only blind inverse [...] to be inaccurate and not well justified by the evidence provided. For the blind deblurring task considered in this work, the forward model y = k * x + n is non-linear since both the kernel k and the underlying signal x are unknown here, making the inverse problem non-linear and challenging. Therefore, we disagree that it is equivalent to solving linear deblurring. Moreover, as highlighted in our common response, we already show that our method can generalize to tasks like Style Guidance, providing further evidence in favor of its generalizability. > The authors have [...] proposed mostly valid evaluation criteria for the problem at hand [...] performance gain based on FID results in their abstract We agree that 1000 samples may produce a biased FID. However, we evaluate all baselines similarly, so we expect similar improvements in FID for more samples. In addition, the reported FID strongly correlates with other more stable metrics, such as LPIPS and KID. Lastly, our choice of metrics and evaluation protocol is primarily based on prior work in the literature like DPS [Chung et al.] and RedDiff [Mardani et al.] and is therefore justified, as also noted by the reviewer. > The authors choose addition as the aggregation function [...] further justification and clarification on the assumptions when choosing the aggregation function of the control. We do regularize u_t via the guidance scale $\gamma$, compare Eq. 17 in the paper. 
More specifically, the first term in Eq. 17 regularizes the magnitude of u_t, while the second term encourages a u_t which does not change the magnitude of noise in the guided signal x_t. This brings the guided trajectory close to the unguided trajectory and close to the usual input to the learnt score. Regarding the image manifold, since u_t is only added inside a score function, the terminal cost cannot make the trajectory leave the span of the score function. More importantly, superior empirical results across a range of linear and non-linear inverse problems justify our choice of aggregation function. Lastly, we did not explicitly explore specifying terminal costs that are incompatible with the diffusion prior. We think that this is an interesting future direction and will add this to the Limitations in Section 6 of the revised paper. We hope this clarifies any concerns about our choice of aggregation function. > The authors mentioned that the Tweedie’s estimate in DPS causes high computational requirement [...] exempt from the same limitation. We think that DPS converges worse because the overall framework assumes that the gradient of the terminal cost is accurate at every step, but it is actually approximated by Tweedie’s estimate. We think that our framework is able to deal with this approximation to the terminal cost better through the additional flexibility in the guidance. We propose rephrasing the corresponding section for the final manuscript to reflect this intuition. > I have reviewed the appendix [...] So far the qualitative comparison is not comprehensive. We include some more qualitative results on FFHQ [here](https://imgur.com/a/2cVdEff). In light of the new results on style guidance we will update the Appendix with more experimental details for these tasks, and additional qualitative results. > The most prominent concern I have about this paper is the lack of comparison with relevant literature [...] 
performance than the baselines selected by the authors.

Thanks for your feedback and for suggesting other competitive baselines. Based on the reviewers' suggestion, we compare with two additional baselines, MPGD and RB-Modulation, on the non-linear deblur and 4x superresolution tasks for the FFHQ and ImageNet datasets. NDTM mostly outperforms these additional baselines on these tasks. Due to space constraints, please see our response to Reviewer UY5V (Limited Empirical Comparisons).

> The authors choose T=200,N=15 as the hyperparameters [...] Can the author explain why they can achieve this speedup?

The configuration T=200, N=15 is used only for the blind inverse problem, which is inherently more challenging and benefits from additional optimization. However, for all other tasks, we use a more efficient configuration (see Table 8), carefully balancing performance and runtime. The runtimes reported in Table 9 correspond to the superresolution task, where T=50, N=5. This setup allows our method to be over 14x faster than DPS on that task.

---

Rebuttal Comment 1.1: Comment: Thank you for your rebuttal! It has answered most of my concerns, and I can see that the authors have put a lot of effort into comparing with more prior works. Given that the authors have addressed my main concern, which is the lack of comprehensive comparison to baselines, I would increase my score to 3. However, I would still like to comment on several points from the authors' response:
1. **Linearity of the blind inverse problem task:** In DMPlug, whose setup is adopted in this paper according to line 325, the blind inverse problem experiment is about "recovering a sharp image x (and kernel k) from y = k*x+n where * denotes the linear convolution and the spatially invariant blur kernel k is unknown" (page 8 in DMPlug). This makes this experiment linear. Please still consider modifying your claim regarding this matter.
2. **FID:** Thank you for your acknowledgment.
My suggestion was simply to remove the claim about FID in your abstract, since it is not an appropriate metric for your experimental setup, even though many prior works also make the same mistake.
3. **$\gamma$:** Thank you for your clarification. Another question: Is $\gamma$ a fixed value or a time-dependent value?

Please answer my follow-up questions and I will raise my score!

---

Reply to Comment 1.1.1: Comment: We again thank the reviewer for their feedback, which helped make our work better. Please find our response to the additional questions below:
1. **Re. linearity of the blind inverse problem**: Thanks for pointing this out. We will update our experimental setup to reflect these caveats.
2. **Re. FID**: Thanks for the suggestion. We will update our abstract in light of these arguments. Note that the claim needs to be updated anyway since we added new baselines for the non-linear deblur experiment.
3. **$\gamma$**: For simplicity, in this work, we assume $\gamma$ to be a fixed scalar (see line 172, right side in the main text). However, we can also impose a schedule on $\gamma$ (making it time-dependent). We will make this more explicit in Section 6 by highlighting it as an interesting direction for further exploration.

Thanks again for considering increasing your score. We hope our response clarifies your concerns, and we would be happy to answer further questions.
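As a quick sanity check on the runtime discussion in this thread, the NFE counts implied by the reviewer's cost model (NFE = T × (N + 1): one denoiser call plus N optimization calls per sampling step, an assumption from the review rather than the paper) work out as follows; the reported 14x speedup presumably refers to wall-clock time rather than NFE count:

```python
def total_nfe(T, N):
    # assumed cost model from the review: T sampling steps, each with
    # 1 denoiser call plus N inner optimization calls
    return T * (N + 1)

print(total_nfe(200, 15))  # blind inverse problem config: 3200
print(total_nfe(50, 5))    # superresolution config: 300 (vs. 1000 NFE for DPS)
```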
Summary: The paper proposes to optimize the guidance signal with three losses, $C_{\text{score}}$, $C_{\text{control}}$, and $C_{\text{terminal}}$. After that, the guidance signal is utilized in the sampling process as normal guidance. The guidance signal is updated through a greedy scheme.
Claims And Evidence: No problem.
Methods And Evaluation Criteria:
1. The method might be too expensive. The running time cost should be provided.
2. What is the value of N in Algorithm 1?
3. The evaluation does not include generation tasks, for example label-conditioned generation or text-to-image generation. This hinders the evaluation of the work.
4. There are recent advances in classifier-free guidance, but there is no discussion or comparison in the main paper.
[1] https://arxiv.org/abs/2404.07724
[2] https://arxiv.org/html/2404.13040v1
Theoretical Claims: I checked the theoretical claims.
Experimental Designs Or Analyses: Good.
Supplementary Material: I checked the supplementary for proofs and implementation.
Relation To Broader Scientific Literature: Provides a new guidance method that optimizes the guidance term before applying guidance.
Essential References Not Discussed: There are some recent advances in classifier-free guidance that are not discussed in the main paper.
[1] https://arxiv.org/abs/2404.07724
[2] https://arxiv.org/html/2404.13040v1
Other Strengths And Weaknesses: n/a
Other Comments Or Suggestions: n/a
Questions For Authors:
1. Please compare the running time of the algorithm with other CFG methods.
2. What is the value of N in Algorithm 1?
3. The evaluation does not include generation tasks, for example label-conditioned generation or text-to-image generation. This hinders the assessment of the work. Please include these tasks in the paper.
4. The authors should include an analysis of the method when combined with other methods such as Interval Guidance and CFG++.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
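The greedy scheme summarized in this review can be illustrated with a runnable 1-D toy (all names, the quadratic terminal cost, and the finite-difference gradient are illustrative assumptions, not the paper's actual objective): at each of T sampling steps, the control is optimized for N inner gradient steps against a combined cost, then applied to the state.

```python
def combined_cost(x_t, u, lam=0.1, target=1.0):
    # lam * u**2 stands in for the regularization terms (C_score / C_control);
    # the quadratic in (x_t + u) stands in for the terminal cost C_terminal.
    return lam * u ** 2 + (x_t + u - target) ** 2

def greedy_guidance(x_T=0.0, T=10, N=5, lr=0.2, gamma=0.5):
    x_t = x_T
    for _ in range(T):
        u = 0.0
        for _ in range(N):  # inner (greedy) optimization of the control u_t
            eps = 1e-4      # central finite-difference gradient, exact here
            g = (combined_cost(x_t, u + eps) - combined_cost(x_t, u - eps)) / (2 * eps)
            u -= lr * g
        x_t = x_t + gamma * u  # apply the optimized control, move to next step
    return x_t

print(abs(greedy_guidance() - 1.0) < 0.1)  # True: guided state reaches target
```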
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We address specific concerns as follows in the order of sections: > The method might be too expensive. The running time cost should be provided > Please compare the running time of the algorithm with other CFG methods. Runtime is often faster than the closest competitor. See the runtimes in Table 2 in the main text and Table 9 in Appendix C.2, where we list runtimes once per hyperparameter configuration. In detail, our method is about one order of magnitude faster than DPS, the closest competitor for superresolution and random inpainting. While our method is indeed slower than for example Red-diff or DDRM, it always improves the sample quality. Note that we did optimize all hyperparameters for sample quality for all methods. We agree that reducing the sampling time is a crucial improvement for guiding diffusion sampling, but this limitation applies to all high-quality competitors. > What is the value of N in Algorithm 1? We provide complete details of hyperparameters for our method and baselines in Tables 4,5 and 6 in the Appendix. To summarize, we set N=50 for linear and non-linear inverse problems and N=200 for blind inverse problems. > The evaluation does not have generation task for example label-condition generation, text2image generation. This hinders the evaluation of the work. [...] Please include these tasks in the paper. Our framework indeed generalizes to non-linear tasks such as those pointed out by the reviewer. We demonstrate this by simply adapting the form of the terminal cost for Style Guidance generation and text-conditioned generation. **Style Guidance Generation**: Following the experimental setup in MPGD [He et al.], we evaluate our method on Style Guidance using Stable Diffusion 1.4. Please refer to our qualitative results at this URL: **[https://imgur.com/a/wEDqoaR](https://imgur.com/a/wEDqoaR)**. 
We further quantitatively evaluate our method on this task on 1000 (reference image, prompt) pairs and compare with FreeDOM [Yu et al.] and MPGD in the accompanying [**Table 1**](https://imgur.com/a/jTn0hQ2). Our quantitative and qualitative results suggest that our NDTM can be applied to this task and outperforms competing baselines on prompt (CLIP score) and style adherence (Style Score) while exhibiting better visual perceptual quality. **Text Guidance with CLIP**: Following the experimental setup in MPGD [1], we further present qualitative results on text-conditional generation using a pretrained diffusion model on the CelebA-HQ dataset, which can be accessed using this link: **[https://imgur.com/a/5jlnDLo](https://imgur.com/a/5jlnDLo)**. Similar to StyleGuidance, NDTM can be used to generate samples that adhere well to not only simple prompts (like Blond hair) but also more complex prompts such as “a person with blonde hair and a goatee”. Therefore, we emphasize that our method can be applied to more general guidance scenarios. For brevity, we will include additional qualitative results for these tasks with their experimental settings (like the form of the terminal cost) in a revised version of our paper. Note that we selected the best hyperparameters for the selected baselines, just like for all other experiments. We also urge the reviewer to check out our response Reviewer UY5V (see point Re. Limited Empirical Comparisons) which presents additional empirical comparisons with additional baselines on 4x super-resolution and non-linear deblur tasks > There are recent advances in classifier-free guidance but there is no discussion or comparison in the main paper. [see also] Essential References Not Discussed We thank the reviewer for the interesting pointers, which we will include in a dedicated section on classifier-free guidance in the related work section. 
Note that the two approaches are complementary: classifier-free guidance is based on retraining a diffusion model using example training data, while training-free guidance is based on a pretrained model and a cost function. We think an extensive evaluation of the different paradigms is beyond the scope of this work.
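The complementarity described above can be made concrete with toy stand-ins (illustrative only; the scalar "networks" and weights below are made up): classifier-free guidance extrapolates between conditional and unconditional predictions from a specially trained model, while training-free guidance steers a pretrained model with the gradient of a cost function.

```python
def eps_uncond(x):
    return 0.1 * x          # toy unconditional noise prediction

def eps_cond(x):
    return 0.1 * x - 0.3    # toy conditional prediction (requires retraining)

def cfg_eps(x, w=2.0):
    # classifier-free guidance: extrapolate conditional vs. unconditional
    return eps_uncond(x) + w * (eps_cond(x) - eps_uncond(x))

def training_free_eps(x, cost_grad, gamma=0.5):
    # training-free guidance: a pretrained model plus a cost gradient only
    return eps_uncond(x) + gamma * cost_grad(x)

x = 1.0
print(cfg_eps(x))                              # 0.1 + 2 * (-0.3) = -0.5
print(training_free_eps(x, lambda z: 2 * z))   # 0.1 + 0.5 * 2 = 1.1
```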
Parrot: Multilingual Visual Instruction Tuning
Accept (poster)
Summary: This paper proposes an MoE architecture to handle multilingual multimodal tasks in vision-language models, and creates a new multimodal understanding benchmark covering 6 languages, translated by GPT-4 with human post-editing.
Note: this paper uses an incorrect template, which risks rejection. According to ICML's author instructions: "All submissions must closely follow the formatting guidelines in the provided templates; otherwise, they will automatically be rejected." Please correct it as soon as possible.
Claims And Evidence: Yes, their experimental results show the effectiveness of their method compared with other baselines.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No theoretical claims in this paper.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes, they provided their training data and code, but not their benchmark data.
Relation To Broader Scientific Literature: There are previous works on multilingual instruction tuning and multilingual multimodal evaluation benchmarks, and the authors discuss their relationships in this paper.
Essential References Not Discussed: No.
Other Strengths And Weaknesses:
Strengths:
1. This paper proposes a novel MoE architecture integrated into current VLMs, which can help enhance the model's multilingual ability during visual instruction tuning in a natural and easy-to-understand way.
2. This paper points out the limitations of previous literature regarding multilingual multimodal benchmark curation and creates a new benchmark.
Weaknesses:
1. I have some doubts about the motivation of this paper, where the authors claim in the introduction: "it is necessary to use as little multilingual data as possible to enhance the model's multilingual capabilities." In fact, we can relatively easily obtain large-scale multilingual data using translators.
Although the quality is not guaranteed, the quantity is sufficient, which might make the performance gains from the new architecture trivial compared to using more translated training data. While this requires additional cost, in my view, the cost is not too significant compared to pre-training. 2. This leads to another question about the experimental setting in this paper: a strong baseline that needs to be considered for comparison is using machine translation to obtain translated large-scale datasets, e.g., 200K samples per language, then training VLMs on this merged translated dataset (commonly known as the translate-train setting). Another strong baseline would be to first translate test data to English, then test performance on the translated data (commonly known as the translate-test setting). Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your kind comments and constructive feedback on our paper. > **Q1: Motivation of data efficiency.** A1: While large-scale translated multilingual data may seem abundant, its quality (especially for low-resource languages) is often critically compromised due to translation errors, cultural mismatches, and noisy artifacts. Even with massive data volumes, low-resource languages remain underrepresented in practice **because high-resource languages inevitably dominate training distributions** (e.g., 90%+ of tokens in typical datasets). Forcing higher proportions of low-resource data risks triggering **the curse of multilingualism**, where over-parameterized models sacrifice high-resource language performance to accommodate low-resource languages, as observed in prior work [1-3]. Our approach strategically prioritizes high-quality alignment signals over raw data quantity, avoiding both the pitfalls of noisy translation and the imbalance inherent to brute-force multilingual scaling. This ensures stable performance across all languages without compromising high-resource capabilities. We will further discuss and refine the motivation, taking this issue into careful consideration in the final version. [1] Unsupervised Cross-lingual Representation Learning at Scale. ACL 2020. [2] When Is Multilinguality a Curse? Language Modeling for 252 High- and Low-Resource Languages. ICLR 2024. [3] Breaking the Curse of Multilinguality with Cross-lingual Expert Language Models. EMNLP 2024. > **Q2: Translate-train/test baselines.** A2: We appreciate the insightful suggestion to include translate-train and translate-test baselines. While these are valid approaches, they face critical limitations: 1. Translate-Train: - **Data Quality:** Machine-translated data often contains semantic distortions, syntactic errors, and cultural mismatches (e.g., idioms, region-specific references), especially for low-resource languages. 
For example, as shown in the table below, our experiments showed that training with 70K translated multilingual samples for each language achieved limited improvement.
- **Imbalanced Optimization:** Merging large-scale translated data amplifies the dominance of high-resource languages (e.g., English/Chinese), as models tend to overfit to their syntactic patterns, further marginalizing low-resource languages. This phenomenon is called the curse of multilingualism.

|Methods|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru|
|-|-|-|-|-|-|-|
|LLaVA w/ 0K|67.1|58.8|59.8|43.5|46.4|59.1|
|LLaVA w/ 10K|67.0|59.0|60.3|44.1|47.2|59.4|
|LLaVA w/ 30K|66.8|59.4|60.7|44.6|47.9|59.7|
|LLaVA w/ 50K|67.1|59.3|61.2|44.4|47.6|60.1|
|LLaVA w/ 70K|66.7|59.7|61.3|44.8|48.1|60.4|
|Parrot|70.0|68.1|67.3|62.7|58.0|66.3|

2. Translate-Test:
- **Latency Overhead:** Translating inputs introduces **2× latency (translation + inference), making real-time applications impractical.** For instance, translating a 100-token Arabic query to English adds ~500ms latency (Google Translate API), which is prohibitive for interactive systems.
- **Limited Gains:** **As shown in Table 14 in the Appendix**, our preliminary tests with LLaVA on MMMB showed that the translate-test improved Arabic accuracy by only 3.8% (vs. Parrot's +19.2% gain), while degrading Russian performance by 0.2% due to translation errors. Further details are provided in **Response A8** of the reply to Reviewer Gwu1.

These results align with prior findings [4-5], where translate-train/test paradigms underperform dedicated multilingual architectures in both efficiency and robustness. Parrot circumvents these issues by directly aligning cross-lingual semantics without relying on noisy intermediate translations.
[4] Lost in Translation: Analysis of Information Loss During Machine Translation Between Polysynthetic and Fusional Languages. ACL 2018.
[5] Why do LLaVA Vision-Language Models Reply to Images in English? EMNLP 2024.
> **Q3: Template formatting issue.** A3: We sincerely appreciate the reviewer’s attention to detail. Upon re-examination, we confirm that the manuscript was prepared using ICML’s official template. However, subtle discrepancies in visual formatting emerged during PDF compilation due to technical nuances in rendering tools. Regardless of the compilation method, the page limits and other requirements are fully in line with ICML’s guidelines. While we cannot directly update the PDF during the rebuttal phase, these unintended inconsistencies have now been fully resolved, with corrections to be incorporated into the final version.
Summary: This paper introduces PARROT, a novel approach to enhance the multilingual capabilities of MLLMs, using language-specific embeddings fused with the visual embeddings via a multilingual MoE. It addresses the issue of multilingual erosion, where an MLLM loses proficiency in non-English languages after multimodal alignment training (e.g., answering in English when asked in non-English). To better evaluate multilinguality, this paper also introduces a new Massive Multilingual Multimodal Benchmark (MMMB). Experiments show SOTA results on MMMB, MMBench, and other multimodal tasks.
## update after rebuttal
The rebuttal has clarified my concerns. I am happy to maintain my original recommendation.
Claims And Evidence: The evidence is clear to support the claim.
Methods And Evaluation Criteria:
Methods
- The method seems to make sense overall, but I am not sure whether it's necessary to introduce an expensive MoE module to fuse language-specific embeddings – would a simpler LoRRA approach work as well?
- Also, it's unclear whether each language expert indeed handles its language-specific embeddings (Figure 7c is just an example for a Chinese prompt).
Evaluation
- The new benchmark is well designed and addresses the limitations (section 2.1) of existing multilingual benchmarks for MLLM.
- Evaluations are comprehensive, conducted on a wide range of tasks.
- However, it might be unfair to compare PARROT with others on MMMB/MMBench, as PARROT has been trained on these specific 6 languages and on data constructed with the same approach as MMMB.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Some ablations might be worth adding:
- MoE vs. other approaches to distinguish prompt language, such as LoRRA and language as task prefix.
- Number of MoE experts.
- Size (parameters) of MoE experts.
- Frozen vs. unfrozen ablations.
Also, qualitative analysis on baselines and PARROT would be insightful.
Supplementary Material: I reviewed most parts of the supplementary material.
Relation To Broader Scientific Literature: Multilinguality is a key capability for MLLMs. This paper addresses this domain with a new approach to improve multilinguality and a new benchmark to better verify it - all good for the community. Essential References Not Discussed: No. Other Strengths And Weaknesses: The new MMMB benchmark should be beneficial to the community. This paper is well written and easy to read. Other Comments Or Suggestions: Suggestion: Figure 5 shows the core approach, so it could be moved forward (instead of appearing on page 6). Suggestion: add a legend for Figure 6; otherwise it's unclear without reading the corresponding paragraphs. Questions For Authors: Given that there are many more non-English image-text (e.g., alt-text) pairs on the web than English ones (≈3:1), why does this paper claim "Due to the scarcity of non-English multimodal data (e.g., the lack of large-scale, high-quality image-text datasets), it is necessary to use as little multilingual data as possible to enhance the model's multilingual capabilities."? Can the multilingual erosion issue be simply mitigated if we mix more multilingual data into the pre-training instead of the SFT stage? Table 14 shows the comparison of the translation-based baseline and Parrot. It's unclear to me why Parrot can be much better than translation baselines. Could you please explain the experiment's settings (such as how the baseline eval was conducted), and give some qualitative examples to show why translation doesn't work in some cases? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thorough and constructive feedback, as well as their endorsement of our work. > **Q1: MoE vs. Simpler Methods (LoRRA).** A1: We explored the LoRRA-based (abbreviated as L-based) adaptation shown in the table below but found it insufficient for two reasons.[1] 1) L-based method introduces fixed language-specific parameters, which struggle to handle multilingual prompts dynamically (e.g., mixed-language queries). 2) Its low-rank updates are less effective for aligning diverse languages, especially when training data is sparse. In contrast, MoE dynamically routes visual tokens to language-specific experts based on textual guidance, enabling flexible adaptation with minimal parameters. Additionally, due to the small number of parameters in MoE (<0.5% of the parameters in the LLM), it is not an expensive method. |Methods|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru| |-|-|-|-|-|-|-| |LLaVA-1.5|67.1|58.8|59.8|43.5|46.4|59.1| |LLaVA-1.5 w/ LoRA+multilingual data|66.9|61.1|60.9|47.2|50.4|61.3| |Parrot|70.0|68.1|67.3|62.7|58.0|66.3| [1] Unsupervised Cross-lingual Representation Learning at Scale. ACL 2020. > **Q2: Whether each language expert indeed handles its language-specific embeddings.** A2: In Figure 7c, the router activates the Chinese expert most strongly, but experts are not strictly language-exclusive. Instead, they learn cross-lingual synergies (e.g., English/German share morphological features). To further present the activation using other language prompts, we will add t-SNE visualizations of expert activations across languages and include expert distributions of MoE for each language in the final version. > **Q3: Evaluation fairness.** A3: To validate Parrot's effectiveness, we conduct an ablation study by expanding LLaVA with **the same multilingual data** used in Parrot. 
Both models are evaluated on MMMB, with results in **Table 13 in Appendix.** While LLaVA shows slight improvement, the gains are limited. In contrast, Parrot achieves substantial improvements, highlighting that simply adding multilingual data is insufficient to bridge the gap. Moreover, the findings from the ablation study in Figure 6a further support this conclusion, reinforcing the validity of our design. > **Q4: Ablation studies.** A4: In the previous response (A1), we provided comparisons between Parrot and L-based alternatives. Due to time and word constraints during the rebuttal phase, we will include expanded ablation studies on expert counts, sizes, and frozen vs. trainable configurations in the final version. > **Q5: Suggestions regarding the figure revisions.** A5: We thank you for your valuable feedback. We are committed to implementing these revisions and will incorporate them into the final version. > **Q6: Non-English data scarcity.** A6: While non-English web data is abundant, high-quality multimodal data remains scarce. 1) Most non-English alt-text is noisy or misaligned (e.g., social media images with irrelevant captions). 2) Parrot’s semi-automatic curation (Figure 3) ensures linguistic and cultural precision, which raw web data lacks. Further details are provided in **Response A1** of the reply to Reviewer QoYJ. > **Q7: Mitigating multilingual erosion via pre-training.** A7: Incorporating more multilingual data during pre-training could help mitigate multilingual erosion, but it may not fully resolve the issue. The main goal of multimodal pre-training is to learn a projector for cross-modal alignment, which often remains biased toward dominant languages (e.g., English) due to data imbalances. Without explicit mechanisms like Parrot's MoE-based language-specific alignment, subsequent SFT stages would still be affected by English-centric bias. Due to word constraints, we will include experiments with multilingual pre-training data in the final version. 
> **Q8: Translation baseline explanation.** A8: The translation baseline uses the Google Translation API to translate non-English queries to English, feeds them to LLaVA, then translates the responses back. 1. Translated prompts often lack culture-specific context. **Image**: A traditional Chinese red envelope (红包) with handwritten characters "福到" (upside-down "福", symbolizing **"fortune arrives"**) and "岁岁平安" ("peace every year"). The translation-based method cannot perform glyph-aware visual grounding (recognizing 福's inverted form as intentional). 2. When a Portuguese user asks about a Russian meme with the text "Почему программисты любят кофе? Потому что Java!" (where Java is a pun on both coffee culture and the programming language), machine translation to English collapses the dual meaning into "coffee brands," stripping the programming-language humor. We have presented examples (Figure 10) where translation easily fails due to ambiguity or cultural nuances.
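For concreteness, the round-trip pipeline described in A8 can be sketched as below. This is a minimal illustration, not the authors' code; `translate` and `english_vlm` are hypothetical stand-ins for the translation API and the English-centric model. The failure modes above arise because the pun or glyph cue is already lost at the first `translate` call.

```python
def translation_baseline(query, image, lang, translate, english_vlm):
    """Round-trip translation baseline: translate the non-English query to
    English, answer with an English-only VLM, translate the answer back.
    Language-specific nuance (puns, glyph semantics) is lost at step 1."""
    en_query = translate(query, src=lang, dst="en")   # step 1: query -> English
    en_answer = english_vlm(image, en_query)          # step 2: English-only VLM
    return translate(en_answer, src="en", dst=lang)   # step 3: answer -> source language
```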
Summary: The paper proposes Parrot, an MLLM designed to handle multilingual tasks. Parrot is based on the LLaVA architecture and employs an MoE module to enhance multilingual VQA ability. The paper employs a new alignment method that aligns an English-biased CLIP encoder to various language modalities. Moreover, it proposes a benchmark, MMMB, to test the multilingual ability of different MLLMs. Claims And Evidence: Motivations are clear and well-supported with empirical analysis. One problematic claim: while there may be potential problems when prompting an MLLM with a non-English query, what were the previous solutions for a multilingual MLLM? And what are the drawbacks of those methods that motivate this one? The authors need to highlight this part to differentiate the work from others. Methods And Evaluation Criteria: 1. It is a little nonsensical to propose a benchmark and a novel model at the same time. 2. Have the authors considered using a multilingual CLIP and a multilingual LLM as a starting point and training them on a multilingual dataset? What would its performance be compared to the MoE alignment paradigm? 3. Lack of comparison to the latest MLLMs (open-source and potentially closed models). Theoretical Claims: There are no theoretical claims in the paper. Experimental Designs Or Analyses: The overall design is good. My comments are on the baselines: The multilingual ability assessment seems reasonable to me - it contains some of the latest baselines, and Parrot demonstrates superior performance among them. However, in the radar plot, where the paper assesses the general ability of the MLLMs, the baselines are outdated (all from 2023). I would like a comparison against more recent MLLMs. Supplementary Material: I checked the specification of the training datasets (part A) to ensure that Parrot is trained using a similar amount of data as LLaVA (a fair comparison). 
Relation To Broader Scientific Literature: The idea is effective when dealing with limited available data for training multilingual MLLMs. If one is able to scale the data via AI-generated data, distillation, or human labeling, the contribution of the paper is less useful. Essential References Not Discussed: Some SoTA baselines are missing, which may have solved the multilingual problem to some extent. Other Strengths And Weaknesses: See Claims And Evidence and Methods And Evaluation. Other Comments Or Suggestions: N/A Questions For Authors: See Claims And Evidence and Methods And Evaluation. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s thoughtful and candid feedback. > **Q1: Prior work and drawback.** A1: **In section B in Appendix**, we have discussed prior multilingual MLLM methods like mCLIP, VisCPM, and M3IT. Most prior work relies on large-scale multilingual multimodal data (e.g., M3IT uses 2.5M+ samples), which is impractical for low-resource languages (more details are referred to in Response A1 of the reply to Reviewer QoYJ). Other multilingual CLIP methods improve performance for specific languages but sacrifice generalizability across diverse languages due to fixed encoders. For more clarity, we will expand this section in the revision to explicitly outline their key drawbacks. > **Q2: Benchmark and method co-design.** A2: MMMB is not specifically designed for Parrot. In contrast, MMMB and Parrot are co-designed to address two limitations in multilingual multimodal research: (1) existing benchmarks (e.g., M3Exam, LLaVA-Bench) not only lack standardized linguistic coverage, typically limited to 2-3 high-resource languages with inconsistent annotation protocols, but also exhibit systemic limitations that we have discussed in Section 2.1; (2) conventional models trained on imbalanced SFT data exhibit performance erosion across languages, yet lack evaluation frameworks to diagnose such failures. Additionally, we conducted experiments not only on our designed benchmark but also validated results on other multilingual benchmarks, such as MMBench (Table 1) and LLaVA-Bench (**Table 4 in the Appendix**). > **Q3: Using multilingual CLIP and LLM.** A3: In our preliminary experiments, we explored the use of multilingual CLIP as a baseline. However, this approach exhibited critical limitations: (1) It failed to balance performance across all target languages, as reliance on the inherent multilingual capacity of CLIP led to inconsistent generalization. 
In other words, it is hard to generalize beyond typologically similar languages covered in multilingual CLIP's pretraining. (2) Multilingual CLIP showed inferior visual perception capabilities compared to the original OpenAI-CLIP, degrading performance on general visual tasks. As shown in the table below, while LLaVA equipped with multilingual CLIP shows improvement in Chinese (+1.5), Arabic (+2.7), and Turkish (+1.1), performance in other languages declines (e.g., English -1.7). In contrast, Parrot leverages a single OpenAI-CLIP alongside a multilingual LLM backbone (Qwen-LM) and achieves balanced, state-of-the-art performance across all 6 languages in two benchmarks. This demonstrates that our MoE-driven alignment paradigm effectively resolves language bias without requiring language-specific or multilingual visual encoders, while preserving strong general multimodal perception capabilities. |Method|ViT|LLM|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru| |-|-|-|-|-|-|-|-|-| |LLaVA-1.5|OpenAI-CLIP|Qwen1.5-7B|67.1|58.8|59.8|43.5|46.4|59.1| |LLaVA-1.5|M-CLIP|Qwen1.5-7B|65.4|60.3|58.1|47.2|47.5|58.8| |Parrot|OpenAI-CLIP|Qwen1.5-7B|70.0|68.1|67.3|62.7|58.0|66.3| > **Q4: Lack of comparison to the latest MLLM.** A4.1: Current MLLMs are primarily data-driven or architecture-driven. Our work aims to achieve data-efficient multilingual adaptation by enhancing multilingual capabilities with minimal training data while preserving general ability, rather than developing a model with exceptionally strong general capabilities. Many leading methods do not release their data. **Therefore, direct comparisons of general capabilities with strong models risk unfairness due to vast data disparities.** Despite this, we also compare Parrot to general-purpose MLLMs (e.g., Qwen2-VL (2024.09) and LLaVA-OV (2024.08)) on the MMMB and multilingual MMBench dataset in the table below and **Table 12 in Appendix**. 
Although Qwen2-VL and LLaVA-OV are trained with **over 10x the amount of data used by our model**, Parrot still outperforms them on the multilingual benchmark. This comparison spans two datasets and six languages, demonstrating Parrot's superior performance over these recent approaches. |Method|LLM|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru| |-|-|-|-|-|-|-|-| |Qwen2-VL|Qwen2-7B|80.5|80.2|78.1|74.0|71.7|79.3| |LLaVA-OV|Qwen2-7B|79.0|78.2|75.9|73.3|67.8|76.4| |Parrot|Qwen2-7B|80.1|80.0|79.6|76.5|75.0|79.9| |Method|LLM|MMB_en|MMB_zh|MMB_pt|MMB_ar|MMB_tr|MMB_ru| |-|-|-|-|-|-|-|-| |Qwen2-VL|Qwen2-7B|79.6|79.6|75.9|71.7|70.9|76.0| |LLaVA-OV|Qwen2-7B|77.1|76.6|73.2|66.9|65.5|71.3| |Parrot|Qwen2-7B|78.7|78.4|76.3|75.2|74.1|77.8| A4.2: For the radar chart in Figure 7, we have compared Parrot with the SOTA in 2024 (e.g., Mini-Gemini and LLaVA-Next), which shows Parrot’s competitiveness even with limited data. **Crucially, Parrot’s goal is not to surpass general-purpose MLLMs but to enhance multilingual ability with minimal data.**
Summary: The paper addresses what the authors call "multilingual erosion" in multimodal large language models (MLLMs) - a phenomenon where, after multimodal alignment, the model loses the ability to respond in or process non-English inputs. The authors identify that existing vision-language alignment methods (LLaVA) use English-centric data, resulting in visual features that are biased towards English and generally poor for other languages. To overcome the issue of multilingual erosion, the paper introduces Parrot, a novel multilingual visual instruction tuning method that uses textual guidance with a Mixture of Experts to align visual features with multiple languages. Parrot extracts image features via a frozen encoder and then projects them into the LLM's embedding space. Then, it performs cross-modal cross-attention between the visual features and the token embeddings of the input text. The cross-attention output is fed to an MoE router that activates language-specific experts, which then produce a language-specific visual token representation by transforming the English-biased visual embeddings. Subsequently, MoE re-weighting merges the transformed features with the original ones, thereby retaining visual information. Thus image embeddings are first aligned with language information prior to feeding them to the LLM. Parrot is then trained in a multi-stage fashion - first, a modality alignment phase using large English-centric data, keeping the vision encoder and the LLM frozen. The second stage is an instruction tuning phase for multilingual alignment, where the projector, MoE, and LLM are tuned on a multilingual instruction image-text dataset (~10k samples for each of the 5 languages used in stage 2). These datasets were obtained using ShareGPT4V and GPT-4 translations with human calibration. The base LLM used is Qwen-1.5 Chat, which has strong multilingual capabilities. 
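The routing-and-re-weighting step in this summary can be sketched in a few lines of numpy. This is a schematic under stated assumptions, not the paper's exact equations: the cross-attention output is collapsed into a precomputed `text_summary` vector, each expert is a plain linear map, and the `(1 - alpha)/alpha` blend stands in for the MoE re-weighting that merges transformed features with the originals.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_language_align(visual_tokens, text_summary, experts, router_w, alpha=0.5):
    """Soft-route English-biased visual tokens through language experts,
    then blend the transformed tokens back with the originals.

    visual_tokens: (T, d) projected image features
    text_summary:  (d,) text-derived guidance (assumed cross-attention output)
    experts:       list of (d, d) linear expert weights (illustrative)
    router_w:      (num_experts, d) router weights
    """
    gate = softmax(router_w @ text_summary)  # one weight per language expert
    transformed = sum(g * (visual_tokens @ w.T) for g, w in zip(gate, experts))
    # Re-weighting keeps the original visual semantics in the output.
    return (1 - alpha) * visual_tokens + alpha * transformed, gate
```

With identity experts the blend returns the original tokens unchanged, which is the sense in which re-weighting constrains the output to the original visual semantics.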
Authors introduced the Massive Multilingual Multimodal Benchmark (MMMB) to evaluate multilingual performance, comprising 12,000 visual QA instances spanning 6 languages (English, Chinese, Portuguese, Arabic, Turkish, Russian) across 15 categories. They further extend the existing MMBench dev set to all six languages by translating questions via GPT-4 and manually verifying them. Authors used a circular evaluation strategy where each multiple-choice question is converted into two yes/no questions to mitigate random guessing biases. Beyond these, multimodal ability is tested on a range of standard tasks. The paper shows that this approach achieves state-of-the-art (SOTA) results on multilingual benchmarks; notably, the 14B model obtains the highest accuracy on all languages in MMBench and on all non-English languages on MMMB. The 7B model has strong performance as well and exceeds LLaVA-NeXT-13B. The gains come without regressions in English performance. Similarly, the authors also show strong performance on general multimodal tasks, performing competitively across tasks. The ablation studies demonstrate that the MoE and the multilingual instruction tuning dataset are critical to the performance of Parrot. Claims And Evidence: Overall the claims are generally well supported with convincing evidence. 1. Multilingual erosion: This is empirically well demonstrated - using a Chinese pre-trained CLIP yields better outputs (66.4 vs. 68.3) on MMBench-cn, validating that the English-biased encoder was an issue. 2. Parrot improving multilingual alignment with little multilingual data: This is one of the strongest claims of this work and is well supported. Parrot uses ~1% of the data that competing models use (ex: Parrot-7B on 2M examples with ~10k per non-English language) and outperforms Qwen-VL-Chat (1.1B English + 300M Chinese examples). The ablation studies in Table 3 clearly establish that, using the Parrot architecture, adding small multilingual datasets incrementally can generate accuracy improvements. 3. 
SOTA performance and improvements in underserved languages: Parrot's performance is SOTA across both benchmarks except English MMMB. This is well supported through comparisons in Table 2. The authors show significant improvement on Turkish and Arabic, exceeding prior SOTA by at least 10 points. These gains are substantial and adequately represented. Lastly, the authors show that performance is maintained across multimodal tasks. This claim is supported by measuring on standard MM tasks - MME, MMStar, etc. The data is represented in Figure 5b and could be presented in a table for better clarity. Methods And Evaluation Criteria: 1. Methods are well motivated, and the approach looks sound. The idea of using language-specific experts to condition visual tokens on the input language makes sense - a text-conditioned transformation of visual tokens for another language. Using MoEs and letting the experts learn specialized transformations seems appropriate. The modular approach and two-stage training, while not new, are a reasonable strategy. Overall this methodology isn't significantly novel but extends current approaches to solve the multilingual problem. There do not appear to be any fundamental flaws with this approach. 2. The MoE re-weighting is a good design choice that ensures visual features are constrained by the original semantics. This is likely a contributor to Parrot's strong performance on generic tasks. However, detail on how the re-weighting parameter was set and its impact would be useful to have. 3. Evals and benchmarks - This is a strong suit of the paper. The evals are thorough and appropriate. Specifically, MMMB is a well-designed benchmark and fills prior gaps - a wider array of languages and consistency. The authors also cover typologically different languages, ensuring results are meaningful for a range of linguistic scenarios. 
Beyond these, the authors evaluate on multimodal benchmarks to show that there are no notable regressions, comparing against an array of both open-source and closed models. Using VLMEvalKit to evaluate all models ensures consistency and fairness. This is a strong, rigorous evaluation setup. Theoretical Claims: There are no theoretical claims put forth here. This is primarily empirical work. Experimental Designs Or Analyses: The experimental design is comprehensive and sound. 1. Baselines - Parrot is compared against a wide range of existing models: open-source MLLMs (LLaVA-1.5, LLaVA-NeXT, Qwen-VL, mPLUG-Owl 2, InstructBLIP, MiniGPT-4v2) and closed-source ones (GPT-4V, Gemini, Qwen-VL-MAX). The authors used a unified eval framework, VLMEvalKit. They onboarded all models onto this eval framework, which is a fair experimental evaluation approach. 2. MMMB and MMBench: The benchmarks seem comprehensive and robust. MMMB, which was described in detail, consists of moderately difficult multimodal questions across languages. They are not biased towards a specific language or task. For MMBench, GPT-4 was used for translation followed by human verification. Human verification followed by circular evaluation is a strong design choice for evals, which is fair and consistent. 3. Ablations: The ablations are well designed. a) Multilingual data vs. MoE - (i) baseline (English-only), (ii) + multilingual data (but no MoE), (iii) + MoE (but maybe no multilingual data), and (iv) full Parrot. The authors show clear evidence that adding MoE improves performance. b) Incremental data ablation: Table 3 provides an analysis of how adding each language's fine-tuning data affects performance. There appear to be no flawed experimental design choices. The experiments are comprehensive. Supplementary Material: Yes, reviewed the supplementary material from Appendices A-E. 
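The circular evaluation mentioned above (each multiple-choice question recast as two yes/no questions, per the summary) can be sketched as a scoring rule: credit an item only when both derived probes are answered correctly, so a yes-biased guesser gains nothing. The probe wording and the `answer_fn` interface here are assumptions for illustration, not the benchmark's exact protocol.

```python
def circular_yes_no_score(question, options, correct, answer_fn):
    """Recast one MCQ as two yes/no probes (correct option + one distractor);
    the model is credited only if it answers both probes correctly.
    answer_fn(prompt) -> "yes" | "no" is the model under evaluation."""
    distractor = next(o for o in options if o != correct)
    probes = [
        (f"{question} Is the answer '{correct}'?", "yes"),
        (f"{question} Is the answer '{distractor}'?", "no"),
    ]
    return all(answer_fn(p) == expected for p, expected in probes)
```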
Relation To Broader Scientific Literature: This work is well-positioned within the existing literature - it effectively builds on prior multilingual vision-language modeling, MoE applications, and multilingual instruction-tuned models. The key is the integration of MoEs with modular approaches to improve multilingual MLLMs with minimal computational and data resources. - Multilingual Text Encoders for Vision: There have been attempts to retrofit models like CLIP to multilingual text. For example, mCLIP learned a multilingual text encoder aligned with CLIP's vision space via knowledge distillation. Parrot's approach is different in that it keeps the text input to the LLM multilingual and instead adjusts the visual input. - Large Multilingual Multimodal Models: The paper references PaLI (Chen et al., 2022), a 17B-parameter model jointly trained on image-language data for 100+ languages. Parrot's contribution can be seen as more data-efficient and parameter-efficient: Parrot adapts a 7B or 14B LLM with a small MoE module and achieves strong multilingual performance with a tiny fraction of the data. - Instruction Tuning in a Multilingual Context: The paper cites Ying-VLM - instruction tuning an MLLM in English can indirectly generalize to other languages. Parrot aligns with the philosophy that leveraging a multilingual LLM is key and goes further - explicitly tuning the visual features for each language, which yields far better results than hoping for zero-shot transfer. Parrot's MoE module can be seen as bridging the gap between a multilingual instruction-tuned LLM and the monolingual vision features. - Mixture-of-Experts in Multilingual Systems: There is relevant work in NLP that uses MoE for multilingual or multi-domain adaptation - Zhao et al. on language-guided MoE routing for machine translation. Parrot directly draws from this concept, as mentioned in related work, and implements a similar idea in the vision-language domain. 
Parrot's novelty is applying it to align visual embeddings with multilingual text. Essential References Not Discussed: The paper is comprehensive and addresses all major directions. No omission of essential work. Some recent notable references could be included: - Pangea (Yue et al., 2024) is very recent work. It introduces a fully open multilingual MLLM (39 languages) with a 6M-sample instruction tuning dataset; its authors show it significantly outperforms existing models in multilingual settings. It is likely Pangea was published after the submission. It would be interesting for the authors to compare Parrot with Pangea, though Pangea takes an alternate, complementary approach of huge data and end-to-end training. Similarly, Maya - instruction tuned using a multilingual CLIP - could be another reference to cite. PaLI has been extended to PaLI-X with more capabilities, which is another reference to cite. Other Strengths And Weaknesses: Strengths - Strong Empirical Results and Data Efficiency: As stated previously, Parrot significantly outperforms open models and most closed models with a fraction of the data. - Originality of Approach: While built on existing components, the combination of cross-modal attention + MoE for language-specific visual token transformation is a novel idea in this space. To my understanding, no prior multimodal LLM work has used a mixture-of-experts to handle multiple languages dynamically. This is a fresh perspective compared to simply increasing training data or training separate models. This is a non-trivial innovation that extends ideas from multilingual NLP into multimodal alignment. Weaknesses - Scalability to Many Languages: The paper doesn't address how the method scales beyond the chosen languages. In principle, MoE could scale, but there is the challenge of requiring data in each language. The authors do not test or discuss what happens if an unseen language is input to the model. 
There is a strong dependency on the base LLM's language ability. Other Comments Or Suggestions: Minor comments. Clarify Figure 2 Caption: The caption for Figure 2 is a bit unclear: "bad cases for multilingual benchmarkperceive". MoE Re-weighting Parameter: It's mentioned that a trade-off parameter is used to blend original and expert outputs. How is this chosen? Add more details to discuss this further. Questions For Authors: 1. How is the MoE re-weighting parameter determined? 2. MoE Gating Mechanism: Do you use hard gating or a soft combination of experts during inference? 3. Base LLM Choice and Multilingual Strength: You chose Qwen-1.5 as the LLM due to its strong multilingual capability. Have you tried Parrot with a base LLM that is less multilingual? How does the base LLM impact performance? 4. Inference Efficiency: During inference, what is the computational overhead of Parrot compared to a standard pipeline (e.g., LLaVA)? Since you use a frozen CLIP encoder and an MoE, do you run the cross-attention and expert forward pass for every token, or just once per image? 5. High-Resolution Image Handling: In the limitations, you mention CLIP's limitation with high-res or detail-rich images. Have you considered straightforward mitigations like using CLIP-ViT-L/14 with a larger input size? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We are deeply grateful for the reviewer’s thorough and thoughtful assessment of our work, as well as their recognition of Parrot’s contributions. > **Q1: Scalability to many languages.** A1: Parrot’s MoE framework is inherently designed to support seamless integration of new languages. The addition of language-specific experts requires only minimal data (e.g., ~5k samples per language) and incurs a linear parameter overhead (one expert per language). Empirically, we observe robust knowledge transfer **within language families.** We will extend Parrot to encompass broader linguistic diversity in future work and present more case studies given an unseen language to the model. > **Q2: MoE re-weighting parameter.** A2: The trade-off parameter $\alpha$ in Eq. 5 balances the preservation of original visual semantics ($\alpha=0$) against language-specific transformation strength ($\alpha=1$). Through grid search on MMMB validation data with $\alpha \in \{0.1, 0.3, 0.5, 0.7, 1.0, 1.5\}$, we found $\alpha=0.5$ optimally maintains visual fidelity while enabling robust multilingual alignment (>5% gain over $\alpha=0.1$), as excessive reliance on original English-biased features hindered language adaptation. Higher $\alpha$ values (>1.0) caused degradation in general visual tasks (e.g., -2.2% on MMBench object counting). We will include full parameter sensitivity curves in the final version to clarify this design choice. > **Q3: MoE gating mechanism.** A3: As described in Eq. 4, we use soft gating (weighted combination of experts) rather than hard routing during inference. This ensures smooth transitions between languages, avoids overfitting to dominant experts, and dynamically handles multilingual prompts (e.g., mixed-language queries). We will add a discussion in §3.3 highlighting this design choice. > **Q4: Base LLM impact.** A4: We ablated this by replacing Qwen1.5-7B with Vicuna1.5-7B (weaker multilingual support). 
As shown in the table below, Qwen1.5 outperforms Vicuna1.5 by +4.4% on Turkish and +8.5% on Arabic. This highlights that Parrot’s effectiveness is somewhat reliant on the base LLM’s multilingual ability. **Notably, while using the weaker multilingual LLM, the performance gain compared to the baseline LLaVA-1.5 is remarkably large.** |Method|LLM|MMMB_en|MMMB_zh|MMMB_pt|MMMB_ar|MMMB_tr|MMMB_ru| |-|-|-|-|-|-|-|-| |LLaVA-1.5|Vicuna1.5-7B|67.1|58.8|59.8|43.5|46.4|59.1| |Parrot|Vicuna1.5-7B|68.2|65.4|64.3|54.2|53.6|63.0| |Parrot|Qwen1.5-7B|70.0|68.1|67.3|62.7|58.0|66.3| > **Q5: Inference efficiency.** A5: Parrot maintains inference efficiency comparable to LLaVA by executing cross-attention and MoE processing **once per image** during visual token projection, not per generated token. The lightweight MoE module adds <0.5% parameters to the LLM, resulting in almost no extra runtime latency increase versus LLaVA-1.5 under identical hardware. Frozen CLIP encoding and single-pass visual-language alignment ensure the computational overhead remains negligible despite enhanced multilingual capabilities. > **Q6: High-resolution image handling.** A6: Thank you for your valuable suggestion. Parrot currently uses CLIP ViT-L/14-336 that is the maximum input size version. Crucially, our current implementation prioritizes fair comparison with LLaVA-1.5 baselines that share the same CLIP backbone. In future work, we plan to integrate dynamic-resolution approaches like NaViT's flexible patching or SigLIP's high-res pretraining to better handle high-resolution images while maintaining multilingual alignment capabilities. > **Q7: Minor revisions.** A7: Thanks for the detailed review and helpful comments. We will incorporate these updates into the final version. 1) **Figure 2 caption**: We have clarified the caption to "Examples of suboptimal multilingual benchmark design in existing works, as evaluated against MMMB's principles." 
2) **Reference**: we shall include a comprehensive comparative analysis of multilingual MLLM's recent advancements in the final version.
Scaling Value Iteration Networks to 5000 Layers for Extreme Long-Term Planning
Accept (poster)
Summary: In this paper, the authors investigate extending the Value Iteration Network (VIN) to address longer-term and larger-scale planning tasks. This is not feasible just by applying VIN, and they trace the reasons to the invariant transition kernel in the network and an inefficient loss design for long-term planning tasks. To address these, the authors propose a new module, the dynamic transition kernel, which conditions transitions on observation knowledge, and an adaptive highway loss that provides learning signals to each layer of the network when the planning horizon exceeds the oracle length. This not only enables the network to find shorter paths but also improves information flow. They call the result Dynamic Transition VINs (DT-VINs) and show empirically that this planning module can be scaled up to 5000 layers. They evaluate their model on large-scale discrete/continuous-action maze environments and a rover navigation environment. In the evaluations, their model outperformed the baselines. They also study ablations with different kernels and losses applied, and the performance change across different numbers of layers. ## update after rebuttal The authors properly addressed our concerns; we keep our score. Claims And Evidence: This paper claims that by augmenting the standard VIN architecture with the dynamic transition kernel and an adaptive highway loss, one can overcome the limitations in long-term planning present in conventional VINs. DT-VINs achieve much higher success and optimality rates in tasks where traditional methods fail, and the ablation studies clearly show the contribution of each component. While the improvements are well supported, a deeper discussion of the trade-offs between performance gains and increased computational complexity would further strengthen the evidence. Methods And Evaluation Criteria: The methodology is well motivated. 
The use of a dynamic transition kernel addresses the representational bottleneck in traditional VINs, and the adaptive highway loss provides an elegant solution for training extremely deep networks. The evaluation criteria—primarily success rate and optimality rate across varied maze sizes—are appropriate for assessing long-term planning performance. The extensive experiments on multiple benchmarks and controlled ablation studies make a strong case for the proposed modifications. Theoretical Claims: The paper references theoretical work linking network depth with value estimation accuracy (e.g., via Theorem 1.12 from prior studies) but does not provide new proofs. Experimental Designs Or Analyses: The experimental design is reasonable and the experiments are comprehensive. The authors evaluated DT-VINs across a variety of settings, from small-scale discrete-action mazes to complex continuous control and real-world rover navigation tasks. The soundness of the experimental analyses is supported by thorough ablation studies, comparisons with multiple baselines, and assessments under different noise conditions. Supplementary Material: We checked the appendix, especially the additional experimental details and the additional ablation study results. Relation To Broader Scientific Literature: With respect to the target task of long-term planning, this work can be related to recent works applying diffusion models to long-term planning [1,2]. Those methods generate long-term plans through powerful generative modeling, exploiting the diffusion model's strength of generating data holistically, whereas DT-VIN achieves long-term planning by learning a very deep VIN model. Essential References Not Discussed: The essential references are properly addressed in this paper.
Other Strengths And Weaknesses: Strengths: - A good motivation and well-designed methodology: a key weakness of VIN for long-term planning is well addressed with a properly designed methodology. - Extensive empirical results: they empirically evaluated their methodology in several environments and studied diverse ablations. Weaknesses: - Too compressed a discussion of the methodology: the discussion of the proposed methodology is relatively short, and it was hard to understand at first glance. If more detailed figures and explanations were given, it would be better. For instance, why the dynamic transition kernel is required could be shown through a figure that illustrates its strength by comparing how regions with walls and empty areas are encoded invariantly versus observation-conditionally. - Computational overhead discussion: deeper networks require more computational overhead, but this is not discussed in the paper. Other Comments Or Suggestions: We do not have any other comments. Questions For Authors: - For the continuous-action maze environments, you applied the pre-trained VIN model for planning. Then, from the planning perspective, can we think of the task as similar to the discrete-action maze? Maybe the difference is the controller, I guess. Code Of Conduct: Affirmed. Overall Recommendation: 4
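As a reading aid for the dynamic transition kernel discussed in this review, here is a minimal sketch of value iteration on a grid maze where the transition kernel is derived from the observation (the maze layout) by a hand-written rule. This only illustrates the idea of an observation-dependent kernel; it is not the learned mapping $f^{\overline{\mathsf{T}}}(x)$ used in DT-VIN.

```python
# Sketch: value iteration on a grid where the transition kernel is derived
# from the observation (the maze layout) -- an "observation-dynamic" kernel.
# The rule is hand-written for illustration, not learned as in DT-VIN.

def value_iteration(maze, goal, n_iters):
    """maze: 2D list, 1 = free, 0 = wall. Returns the value map after n_iters."""
    H, W = len(maze), len(maze[0])
    V = [[0.0] * W for _ in range(H)]
    V[goal[0]][goal[1]] = 1.0
    gamma = 0.95
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(n_iters):
        newV = [[0.0] * W for _ in range(H)]
        for i in range(H):
            for j in range(W):
                if maze[i][j] == 0:          # walls keep value 0
                    continue
                best = 0.0
                for di, dj in moves:
                    ni, nj = i + di, j + dj
                    # observation-dynamic kernel: a move is valid only if the
                    # target cell is inside the maze and not a wall
                    if 0 <= ni < H and 0 <= nj < W and maze[ni][nj] == 1:
                        best = max(best, gamma * V[ni][nj])
                newV[i][j] = best
        newV[goal[0]][goal[1]] = 1.0         # the goal reward is absorbing
        V = newV
    return V

maze = [
    [1, 1, 1],
    [0, 0, 1],
    [1, 1, 1],
]
V = value_iteration(maze, goal=(0, 0), n_iters=20)
# values decay with the wall-respecting shortest-path distance to the goal
assert V[0][0] == 1.0
assert V[2][0] < V[0][1]   # (2,0) must detour around the wall row
```

Note that an invariant kernel would apply the same transition rule at every cell regardless of the wall layout, which is exactly the representational bottleneck the review describes.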
Rebuttal 1: Rebuttal: We appreciate the insightful and helpful comments from Reviewer vY56. Please find our responses to your concerns and questions below. --- ### Suggestion 1: The tasks are related to diffusion models for long-term planning. ### Answer: Thank you for noting the connection to diffusion-based planning. As references [1,2] were missing, we reviewed related works [r3, r4]. Though differing in approach—trajectory generation vs. step-wise planning—both address long-horizon dependencies in complex tasks. We will revise the manuscript to reflect this connection and consider future integration. Additionally, if the reviewer can provide [1,2], we will be happy to cite and discuss them in our final version. [r3] Janner M. et al. Planning with diffusion for flexible behavior synthesis. ICML 2022. [r4] Mishra U.A. et al. Generative skill chaining: Long-horizon skill planning with diffusion models. CORL 2023. --- ### Weakness 1: More discussion on methodology. ### Answer: We agree that the methodology section could be clearer and will revise it for the final version. In particular, we agree with your proposal to lean more on illustrative figures in the paper. Currently, Figure 6(e) visualizes the learned dynamic latent transition kernel of DT-VIN across several states. We plan to integrate this into Figure 2 and include a side-by-side comparison with an invariant transition kernel to better highlight the advantages of the dynamic approach. --- ### Weakness 2: More discussion on trade-offs between performance gains and the additional computational costs introduced by using deeper networks. ### Answer: We already partially address this in Appendix G, where we found that: - At the same depth, DT-VIN achieves better performance with a similar or lower GPU memory and training time (Appendix G.1, Tables 14–15). - DT-VIN’s depth scales almost linearly with planning steps, ensuring manageable complexity as tasks grow harder (Appendix G.2). 
To more specifically address your concern, we have produced Tables R1 and R2 (shown below), which explicitly analyze the trade-off between computational cost and performance across different network depths $N$. While deeper networks naturally incur higher computational costs, DT-VIN demonstrates substantial performance gains as depth increases. Specifically, it achieves +81.23% and +99.77% success rate improvements at $N=300$ and $N=600$ over $N=100$, respectively. In contrast, VIN, GPPN, and Highway VIN show little to no improvement despite incurring similar or even higher computational costs. We will incorporate these new tables into the revised manuscript to clarify the performance–efficiency trade-offs of DT-VIN.

Table R1: *GPU memory (GB)* and *GPU hours (h)* during training, on $35\times 35$ maze tasks for varying network depths $N$.

GPU memory (GB):

| | N=100 | N=300 | N=600 |
|---|---|---|---|
| VIN | 1.3 | 2.7 | 4.2 |
| GPPN | 42.1 | 135.2 | 182 |
| Highway VIN | 15.2 | 31.5 | 41.3 |
| DT-VIN (ours) | 18.8 | 35.8 | 53.3 |

GPU hours (h):

| | N=100 | N=300 | N=600 |
|---|---|---|---|
| VIN | 2.8 | 5.2 | 8.4 |
| GPPN | 4.2 | 9.1 | 12.6 |
| Highway VIN | 4.9 | 9.8 | 14.3 |
| DT-VIN (ours) | 4.8 | 8.7 | 12.1 |

Table R2: Success rates of different models at varying depths on $35\times 35$ mazes with shortest paths in [200, 300]. See Appendix Fig. 9 for plots.

| | N=100 | N=300 | N=600 |
|---|---|---|---|
| VIN | $0.0_{\pm 0.0}$ | $0.0_{\pm 0.0}$ | $0.0_{\pm 0.0}$ |
| GPPN | $0.0_{\pm 0.0}$ | $0.0_{\pm 0.0}$ | $0.0_{\pm 0.0}$ |
| Highway VIN | $0.0_{\pm 0.0}$ | $34.23_{\pm 11.27}$ | $54.41_{\pm 10.2}$ |
| DT-VIN (ours) | $0.0_{\pm 0.0}$ | $\mathbf{81.23_{\pm 1.34}}$ | $\mathbf{99.77_{\pm 0.23}}$ |

---

### Q1: Whether planning in continuous-action mazes is essentially the same as in discrete ones, with the main difference perhaps being the controller.
### Answer: You're generally right—the high-level planning in continuous control tasks is conceptually similar to that in discrete mazes (all planning in reinforcement learning is essentially about producing a sequence of actions—discrete or continuous—that will lead to a desired state or sequence of states). However, DT-VIN is not simply a stack of a high-level planner and a low-level controller. Instead, it is trained end-to-end to map observations directly to actions, learning high-level planning implicitly from expert demonstrations without requiring high-level planning supervision. This simplifies training and improves robustness. For example, in D4RL tasks, we supervise using expert control actions (e.g., from a 2-DOF ball or an 8-DOF quadruped robot). While D4RL uses the same $9\times 12$ maze layout for all training samples (varying only the start and goal positions), we evaluate on much larger, unseen mazes ($35\times35$, $100\times100$), making the task significantly more challenging.
Summary: Value Iteration Networks (VIN) struggle to scale with large-scale planning problems, typically problems involving a higher number of steps to reach the goal. The paper provides two observations that explain the poor performance: (1) low representation capacity, and (2) lack of depth in VINs, due to the vanishing gradient issue at higher depths. The authors propose to use dynamic transition kernels to improve representational capacity (solving (1)) and propose an 'adaptive highway loss' to solve the vanishing gradient issue, allowing higher depths. Their method, named DT-VIN, shows state-of-the-art performance with respect to other VIN frameworks, especially for large-scale planning problems. ## Update after Rebuttal Thanks for a detailed response. That said, I still think the paper lacks clarity in the proposed method section (this aligns closely with reviewer vY56). I do not agree with the authors' final comment on "minor" writing issues. Although I understand minor writing issues are fine - however if they obstruct understanding of the paper (and thus critical review of the contributions) then in my opinion those issues are not "minor". Given my concern I will be retaining my original score as of now. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes, to my knowledge. Theoretical Claims: N/A Experimental Designs Or Analyses: The experiments sound solid for the most part. See the questions section for questions. Supplementary Material: Didn't review in detail - the Appendix provides experimental details. Skimmed through the details - which made sense. Relation To Broader Scientific Literature: They advance a widely used framework for large-scale planning problems. I think the contribution is sufficient. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths - The paper provides rigorous experiments and strong ablation studies providing insightful conclusions. - The results are impressive on large-scale planning problems.
- The paper is well structured Weakness - Section 3.1 is not clear to me. Specifically, the justification for 'state'-dynamic and 'observation'-dynamic kernels is not clear to me. What is the difference between them in this context? How does a kernel of |A| \times F \times F make it 'state'/'observation' dependent? In this case, should we use |A| \times F \times F parameters for each state? Other Comments Or Suggestions: Minor - Text in images is too small (e.g., Fig. 1, Fig. 2) - Line 244 right ('Additional' experiments). Due to space constraints we `report' (or evaluate and report). Additional is also not spelled correctly. Questions For Authors: 1. How is suboptimality measured when success rate is less than 100%? 2. Why does highway VIN get a better optimality rate with increasing shortest path lengths? 3. line 260-261 - we examine depths of N = 600, 5000 what do you mean? Do you mean in range [600, 5000]? 4. Why is pretraining required in section 4.2? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank reviewer TBA2 for the valuable comments. --- ### Concern 1: Clarification on state/observation-dynamic in Sec 3.1. ### Answer: To clarify: the *observation* refers to the maze image input, which is mapped by the model to a latent MDP. Each *latent state* corresponds to a specific node within this latent MDP. We distinguish two types of dynamics: - **Latent state-dynamic**: As the reviewer correctly noted, in this setup each latent state $(i,j)$ has its own transition kernel $\overline{\mathsf{T}}_{i,j} \in \mathbb{R}^{|\overline{\mathcal{A}}| \times F \times F}$. - **Observation-dynamic**: The transition kernel is generated from the input observation $x$ via a learned mapping function: $\overline{\mathsf{T}} = f^{\overline{\mathsf{T}}}(x)$. These two notions are orthogonal, defining independent aspects of variation in the transition kernel. They give rise to four configurations: latent state-dynamic/invariant and observation-dynamic/invariant (see Table R1). Our ablation study (Sec. 4.4) shows that both components are essential—removing either significantly degrades DT-VIN performance. We will clarify this distinction in the final version.

Table R1: Four configurations of latent transition kernels based on whether they are dynamic with respect to latent state and/or observation.

| Type of latent transition kernels | latent state-dynamic? | observation-dynamic? | representation of latent transition kernels |
|---|---|---|---|
| **fully invariant** | $\times$ | $\times$ | parameter $\overline{\mathsf{T}} \in \mathbb{R}^{\lvert \overline{\mathcal{A}}\rvert\times F \times F}$ |
| **latent state-dynamic only** | $\checkmark$ | $\times$ | parameter $\overline{\mathsf{T}} \in \mathbb{R}^{M \times M \times \lvert \overline{\mathcal{A}}\rvert \times F \times F}$ |
| **observation-dynamic only** | $\times$ | $\checkmark$ | output of the mapping function, $\overline{\mathsf{T}} = f^{\overline{\mathsf{T}} }(x) \in \mathbb{R}^{ \lvert \overline{ \mathcal{A} } \rvert \times F \times F}$ |
| **fully dynamic** | $\checkmark$ | $\checkmark$ | output of the mapping function, $\overline{\mathsf{T}} = f^{\overline{\mathsf{T}} }(x) \in \mathbb{R}^{ M \times M \times \lvert \overline{ \mathcal{A} } \rvert \times F \times F}$ |

--- ### Q1: How is suboptimality measured when the success rate is less than 100%? ### Answer: As defined in the paper (Line 220), suboptimality is measured relative to the best solution across all models, including the expert. This is applicable in the 2D Maze Navigation task, where the expert solution can be computed using Dijkstra’s algorithm with access to an underlying binary representation of the maze. For tasks without expert access (e.g., continuous control), only the success rate is reported. --- ### Q2: Why does highway VIN get a better optimality rate with increasing shortest path lengths? ### Answer: Thank you for the thoughtful question. One possible explanation is that longer planning tasks offer a more structured search space, allowing Highway VIN to better propagate informative signals over multiple steps. In contrast, shorter mazes may contain more local optima or distractors, leading to suboptimal choices despite a shorter horizon. As Highway VIN is not our focus, we leave deeper analysis to future work. --- ### Q3: line 260-261 - we examine depths of $N = 600, 5000$ what do you mean?
Do you mean in range $[600, 5000]$? ### Answer: Thank you for pointing this out. We meant that we examine depths at two specific values, $N = 600$ and $N = 5000$, rather than a continuous range. We will clarify this in the final version. --- ### Q4: Why is pretraining required in Sec 4.2? ### Answer: Pretraining is necessary because the training dataset for the continuous control task includes only a single $9\times12$ maze layout, differing solely in start and goal positions. However, during evaluation, the model encounters significantly larger ($35\times35$ and $100\times100$) and previously unseen maze layouts, substantially increasing task complexity and requiring strong generalization abilities. To address this challenge, we leverage pretraining on the diverse and easily accessible 2D maze navigation dataset. This approach helps the model acquire generalizable planning skills, resulting in strong performance with success rates of 98% on the Point Maze and 93% on the Ant Maze tasks (refer to Table 1 in Sec. 4.2 for further details). While it is possible in principle to solve the task without pretraining, the sample complexity would be high—a property shared with VIN (and VIN variants) here. --- ### Suggestion 1: Minor comments on figure readability and typos. ### Answer: Thank you for bringing this to our attention. We will correct this for the camera-ready version. --- Rebuttal Comment 1.1: Comment: Thanks for a detailed response. I am not convinced about concern 1. That is, I still think the paper lacks clarity in the proposed method section (this aligns closely with reviewer vY56). Regarding Q1. I think there was some misunderstanding regarding my question. I will try to rephrase. How do you aggregate suboptimality if an algorithm is unable to solve a problem? Do you consider the cost as infinity (or a high number) in that case?
Typically, I see (and agree with) suboptimality is calculated over a set of instances that were solved by ALL algorithms that are being compared. Given my concern I will be retaining my original score as of now. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful and detailed follow-up. Please find our responses below. --- ### Concern 1-1 (based on Concern 1): Ongoing concern regarding the clarity of Sec. 3.1. ### Answer: We greatly appreciate your careful attention to detail and constructive feedback. We understand that your remaining concern primarily relates to the clarity of the dynamic latent transition kernel, and we sincerely hope our previous response has adequately addressed this issue. Nevertheless, our original submission explicitly defined both the latent state-dynamic transitions (Lines 126–136, Right) and the observation-dynamic transitions (Lines 153–157, Right). We acknowledge that explicitly highlighting their differences and clearly specifying parameter shapes could further help resolve any remaining ambiguity. Given your positive evaluation—highlighting the "rigorous experiments and strong ablation studies" and the "impressive results on large-scale planning problems"—we believe you would also agree that addressing this concern would involve only minor rephrasing of a few sentences rather than substantial modifications. The ambiguity pertains solely to clarity of presentation, rather than to any intrinsic methodological flaw or fundamental issue. We hope these clarifications help demonstrate that the core contributions remain solid and that the concern can be resolved through light adjustments to the presentation. --- ### Q1-1 (based on Q1): Ongoing question about the suboptimality. ### Answer: We apologize for the misunderstanding. As many long-horizon planning tasks are highly challenging—for example, over 60% of tasks on the $35 \times 35$ maze involve at least one algorithm that fails (see Fig. 
3a)—*suboptimality* is not well-defined for failed attempts. To address this, our paper adopts the *optimality rate*, which provides a more consistent and fair measure of both solution quality and robustness across all solvable instances. As defined in Lines 223–225, the *optimality rate (OR)* refers to “the rate at which the algorithm provides a relatively optimal path.” OR is computed over instances that are at least solvable by the expert policy. As noted in Appendix C.1, Line 582: “Each maze features a goal position, with all reachable positions selected as potential starting points.” However, to further address your concern, we additionally report *suboptimality* as a complementary metric, defined as: $$ \text{suboptimality} = \frac{\text{cost of solution}}{\text{cost of optimal solution}} - 1 $$ We consider two evaluation settings: - **Suboptimality on instances solved by all algorithms:** Following standard practice, we compute suboptimality only over the subset of instances successfully solved by *all* evaluated algorithms. - **Suboptimality with penalty for failures:** For instances where an algorithm fails to solve the task, we assign a penalty cost (7× the optimal cost) when computing suboptimality. As shown in the table below, DT-VIN consistently achieves the lowest suboptimality under both evaluation protocols, further demonstrating its effectiveness and robustness.

Table R2: Suboptimality comparison under two evaluation settings (lower is better; best in bold).

| | Suboptimality (instances solved by all algorithms) | Suboptimality (with penalty for failures) |
|---|---|---|
| VIN | $0.73_{\pm 0.27}$ | $4.23_{\pm 1.76}$ |
| GPPN | $0.41_{\pm 0.19}$ | $3.73_{\pm 1.22}$ |
| Highway VIN | $0.33_{\pm 0.12}$ | $3.13_{\pm 1.13}$ |
| DT-VIN (ours) | $\mathbf{0.02_{\pm 0.01}}$ | $\mathbf{0.07_{\pm 0.03}}$ |

--- We hope these clarifications address your concerns. Thank you again for your valuable and expert feedback.
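For concreteness, the two evaluation protocols described in this reply can be sketched as follows. The 7× failure penalty matches the reply above; the per-instance costs in the example are made-up illustrative numbers.

```python
# Sketch of the two suboptimality protocols from the reply above.
# costs[m][i] is model m's path cost on instance i, or None on failure;
# opt[i] is the optimal cost. The instance data is illustrative only.

def subopt_solved_by_all(costs, opt):
    """Average suboptimality over instances solved by ALL models."""
    solved = [i for i in range(len(opt))
              if all(c[i] is not None for c in costs.values())]
    return {m: sum(c[i] / opt[i] - 1 for i in solved) / len(solved)
            for m, c in costs.items()}

def subopt_with_penalty(costs, opt, penalty=7.0):
    """Failures are charged penalty * optimal cost before averaging."""
    n = len(opt)
    return {m: sum((c[i] if c[i] is not None else penalty * opt[i]) / opt[i] - 1
                   for i in range(n)) / n
            for m, c in costs.items()}

opt = [10.0, 20.0]
costs = {"A": [10.0, 22.0],        # solves both, slightly suboptimal on #2
         "B": [12.0, None]}        # fails instance #2
print(subopt_solved_by_all(costs, opt))   # only instance #1 counts
print(subopt_with_penalty(costs, opt))    # B's failure costs 7x optimal
```

Under the first protocol a model that fails often is only judged on the easy instances it shares with everyone else, which is why the penalty variant gives a more complete picture.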
Summary: This paper tackles the problem of extending value iteration networks (VINs) to handle very long-horizon planning. To achieve this, the authors propose two main modifications: - A dynamic transition kernel that relaxes the standard weight sharing of convolution layers (i.e., removing strict translation equivariance), and - An adaptive highway loss designed to enable the training of much deeper planning modules. Experimental results are presented primarily on 2D path planning tasks, demonstrating the network’s ability to scale to thousands of planning steps. Overall, the paper is very empirical and focuses on a niche domain related to path planning. Claims And Evidence: - Claims: - The proposed modifications enable VINs to plan over very long horizons (up to thousands of steps) by increasing network capacity and facilitating deep credit assignment. - Relaxing the invariant kernel (i.e., weight sharing) boosts performance on these tasks. - Evidence: - Empirical results support improved planning in extended 2D path planning scenarios. - However, the paper does not convincingly analyze the impact of removing translation equivariance (i.e., the dynamic kernel is simply a relaxation of the weight-sharing property) and lacks detailed insights or ablations on this point. Methods And Evaluation Criteria: - Methods: - The paper proposes modifications to the standard VIN architecture by replacing invariant (shared-weight) convolutions with dynamic kernels and enforcing an adaptive highway loss to help train very deep networks. - While the adaptive loss is an interesting idea, the paper provides limited discussion on its theoretical justification. - Evaluation: - Experiments are conducted on standard 2D path planning benchmarks, which reflect a narrow application domain, although showing extended horizon capability. 
I would suggest following the work mentioned below for more tasks, e.g., visual navigation with perception input using additional network heads for perception. - The evaluation criteria remain focused on path planning success rates; broader or more realistic planning tasks would strengthen the work. Scaling up and Stabilizing Differentiable Planning with Implicit Differentiation. ICLR 2023. Theoretical Claims: The paper’s contributions are mainly empirical, and no detailed theoretical proofs are provided regarding the benefits of the dynamic kernel or the adaptive loss. Experimental Designs Or Analyses: - Design: - The experimental setup largely mirrors that of the original VIN paper (from eight years ago) but with extended planning horizons. - Analysis: - While the experiments demonstrate that the approach scales to very long horizons, they remain confined to a niche 2D planning domain. - The paper would benefit from further analysis or ablation studies that clarify how the removal of weight sharing affects network performance and equivariance properties. Supplementary Material: Yes. Mainly additional results and setup. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: This work also studies scaling up the training of value iteration networks, which is potentially related. It also has some more interesting domains to study. - Scaling up and Stabilizing Differentiable Planning with Implicit Differentiation. ICLR 2023. Other Strengths And Weaknesses: - Strengths: - Tackles an interesting and challenging problem of extending VINs to very long-horizon planning. - Introduces conceptually appealing modifications (dynamic kernel and adaptive highway loss) that are intuitively motivated. - Weaknesses: - The approach is limited to a niche domain (2D path planning) and may not generalize well to more realistic problems.
- The relaxation of the invariant (weight-sharing) property is not thoroughly analyzed; the paper lacks convincing insights on how this change impacts translation equivariance. - Experimental validation remains confined to benchmarks from older literature, rather than exploring potential applications unlocked by long-horizon planning capabilities (e.g., integration with latent perception modules). Other Comments Or Suggestions: - Consider broadening the experimental evaluation to include more realistic planning scenarios. - Provide additional ablation studies or analysis that clarifies the impact of using a dynamic (non-shared) kernel on equivariance properties. - Elaborate further on the adaptive highway loss to help the reader understand its benefits and limitations. - Discuss potential applications of long-horizon planning beyond pure path planning to motivate the broader relevance of the approach. Questions For Authors: 1. Dynamic Kernel Analysis: Could you provide further analysis or ablation studies on how the dynamic transition kernel (i.e., relaxing weight sharing) affects the network’s translation equivariance and overall performance? 2. Generalization Beyond 2D Planning: Have you considered applying your approach to planning problems outside of 2D path planning? What challenges do you foresee, and how might your method generalize to more realistic or complex environments? 3. Broader Implications: Can you elaborate on what new capabilities or applications might be enabled by scaling VINs to such long horizons? For instance, could this be integrated with perception modules to tackle more challenging planning tasks? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate Reviewer HD2P's valuable time and constructive comments. **To improve clarity and conciseness, we have reorganized the reviewers’ comments by grouping similar points.** --- ### Q1: Experiments are limited to 2D path planning. Evaluating more realistic tasks, integrating visual perception, and discussing broader applications. ### Answer: We thank the reviewer for their thoughtful suggestions and for pointing out [r1]; we will cite and discuss it in the final version. However, we believe there may be a misunderstanding—our submission already includes tasks aligned with [r1] and extends them in several ways: - **3D Visual Maze Navigation** (Sec. 4.1, Additional Experiments, Appendix C.3): Our setup is identical to that of [r1], with both following established protocols from prior work [r2]. As the reviewer noted, this involves “visual navigation with perception input using additional network heads.” Agents must infer maze layouts from noisy, ambiguous first-person visual views, making the task especially challenging. - **Continuous Control Tasks** (Sec. 4.2): These involve long-horizon planning with low-level torque control to actuate movement of the agent. While [r1] uses 2-DOF manipulation tasks, we evaluate on a 2-DOF ball and an 8-DOF quadruped robot—sharing the same fundamental task but introducing greater control complexity due to higher dimensionality. - **Rover Navigation** (Sec. 4.3): This more realistic task, inspired by the original VIN paper but absent in [r1], involves planning over noisy, incomplete aerial terrain images—adding complexity beyond synthetic 2D gridworlds. - **Scale**: Our benchmarks feature much larger mazes (up to 100×100 vs. up to 49×49 in [r1]), posing harder long-horizon challenges. These tasks—spanning visual navigation, control, and realistic planning—are already described throughout the original submission (Abstract, Introduction, Experiments). 
Nonetheless, we will revise the final version and cite [r1] to more clearly highlight their generality and realism. Extending the VIN architecture to broader applications is a compelling direction for future work—previously barred by the limited planning capacity of VIN-based architectures. However, this work aims to focus on the first step in this direction: demonstrating the performance gains of DT-VIN over related methods, which we view as a necessary precursor to deployment in more applied domains. Indeed, while we are looking into more applied domains currently, such an investigation is of a scale that would necessitate a separate paper. [r1] Zhao et al., Scaling up and Stabilizing Differentiable Planning with Implicit Differentiation, ICLR 2023 [r2] Lee et al., Gated Path Planning Networks, ICML 2018 --- ### Q2: Concern over the impact of relaxed dynamic kernel weight sharing on translation equivariance and performance. ### Answer: Thank you for the thoughtful and constructive feedback. While our dynamic transition kernel relaxes the weight sharing, it preserves translation equivariance—i.e., shifting the input (e.g., maze image) results in a corresponding shift in the output. This may seem counterintuitive but is straightforward upon closer inspection. This holds because the kernel $\overline{\mathsf{T}}^{\rm dyn} = f^{\overline{\mathsf{T}}}(x)$ is generated by a CNN, which is itself translation equivariant. Consequently, the downstream value iteration process retains this property (see Eq. 2). To verify this, we conducted a small experiment comparing DT-VIN, a standard CNN, and VIN on 5,000 maze images with translations of 1 to 10 pixels. All models use stride and pooling of (1,1). We measured the L2 error between the output on a translated input and the translated output of the original. Results in Table R1 show perfect translation equivariance across all models. Additionally, ablation studies in the original submission (Figs. 6a and 6b) compare dynamic vs.
invariant kernels. Dynamic kernels consistently outperform their invariant versions, particularly in long-horizon and obstacle-rich settings—highlighting both practical advantages and robustness.

Table R1: Translation Equivariance — L2 Error

| Model | Avg. L2 Error |
|---|---|
| CNN | 0.0 |
| VIN | 0.0 |
| DT-VIN (ours) | 0.0 |

--- ### Q3: Benefits and limitations of adaptive highway loss. ### Answer: Thank you for the suggestion. The adaptive highway loss avoids the vanishing gradient problem in very deep networks by adding skip connections from intermediate layers to the final loss, guided by the planning trajectory length (computed from training data). This improves gradient flow and stabilizes training (Sec. 4.4, adaptive highway loss). A potential limitation is computational overhead, which we mitigate by applying the loss every $J$ layers (Eq. 3). This balances efficiency and performance; e.g., $J=10$ significantly accelerates training with minimal performance loss (Appendix F.1). We will clarify this in the revised manuscript.
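A schematic of the adaptive highway loss described in this answer: besides the final-layer loss, an auxiliary loss is added at every $J$-th intermediate layer that is deep enough to cover the training trajectory's length. The gating condition here is our plausible reading of Eq. 3, not a verbatim reproduction of it.

```python
# Schematic of an adaptive highway loss: in addition to the final-layer loss,
# add auxiliary losses at every J-th intermediate layer whose depth is at
# least the training trajectory's length. This gating is an illustrative
# reading of the mechanism, not the paper's exact Eq. 3.

def adaptive_highway_loss(layer_losses, traj_len, J):
    """layer_losses[n] = imitation loss computed from layer n's values."""
    N = len(layer_losses) - 1
    total = layer_losses[N]                # the final layer is always supervised
    for n in range(J, N, J):
        if n >= traj_len:                  # layer n can plan traj_len steps
            total += layer_losses[n]
    return total

losses = [1.0] * 61                        # dummy per-layer losses, N = 60
# with J = 10 and a 25-step trajectory, layers 30, 40, 50 contribute
# alongside the final layer 60, so the total is 4.0
assert adaptive_highway_loss(losses, traj_len=25, J=10) == 4.0
```

Increasing $J$ trades off the number of auxiliary gradient paths against the overhead of computing extra losses, which matches the efficiency discussion above.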
PASS: Private Attributes Protection with Stochastic Data Substitution
Accept (spotlight poster)
Summary: This paper introduces PASS (Private Attributes protection with Stochastic data Substitution), a novel approach to protect private attributes in user data while preserving utility. Unlike existing adversarial training-based methods, PASS employs stochastic data substitution where each original sample is replaced with a sample from a substitute dataset according to learned probabilities. The method is derived from an information-theoretic framework and demonstrates superior robustness against probing attacks across multiple data modalities (images, sensor signals, speech). Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I examined the proofs for Theorem 4.1 (Appendix D.2) connecting the approximate loss Ĺ to the true objective L, and Theorem 4.2 (Appendix A) analyzing entangled attributes. Both proofs are mathematically sound, correctly applying information theory principles to establish bounds and relationships between mutual information terms. No issues were identified in the derivations. Experimental Designs Or Analyses: The experimental design is sound, comparing PASS against six state-of-the-art methods across three datasets. The ablation studies with varying hyperparameters demonstrate robustness. The asymmetric scenario (Table 13) where attackers have more data than protectors is a particularly strong validation. I reviewed all experimental sections, including the appendices showing additional results. Supplementary Material: I reviewed all supplementary materials, including: Appendices A-D containing theoretical proofs and derivations (particularly the proof of Theorem 4.2, connection to local differential privacy, analysis of adversarial training vulnerabilities, and loss function derivations); Appendix E detailing experimental setups; and Appendix F showing additional experimental results including ablation studies and confusion matrices. 
Relation To Broader Scientific Literature: PASS extends privacy protection literature by connecting local differential privacy with utility-preserving attribute protection. It builds upon information bottleneck/privacy funnel concepts (Makhdoumi et al., 2014; Alemi et al., 2016) but applies them in a novel way through data substitution rather than feature transformation. The work also relates to k-anonymity and l-diversity literature by ensuring multiple samples with different private attributes become indistinguishable. Essential References Not Discussed: No Other Strengths And Weaknesses: Weaknesses: 1. The paper overlooks memory requirements for storing substitute datasets and embeddings, particularly problematic for high-dimensional data applications. 2. Critical visualizations comparing original and substituted samples are missing, which would better demonstrate the method's effectiveness. 3. The substitute dataset construction relies on random sampling without justification or exploration of more optimal selection strategies. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1**. The paper overlooks memory requirements for storing substitute datasets and embeddings, particularly problematic for high-dimensional data applications. **A1**. Thanks for your question! Importantly, the substitution dataset itself does not need to be loaded into memory during inference. Instead, we only need to load the corresponding embeddings into memory to calculate the substitution probability $P_\theta(X'|X)$, which greatly reduces memory usage. Furthermore, as demonstrated in the ablation study on substitution dataset size (Appendix F.1, Table 10), PASS maintains consistently strong performance across a wide range of substitution dataset sizes. This flexibility allows users to adapt the size of the substitution dataset based on their available storage and memory. For instance, if a user has limited secondary storage to store the substitution dataset, or limited memory to load its embeddings, then they can opt to use a smaller substitution dataset without substantial loss in performance. In addition, users can further control memory usage by adjusting the dimension of the embeddings. Users with limited memory can reduce the embedding dimension to fit their resource constraints. These characteristics allow PASS to be adapted to different hardware settings without compromising its core functionality. **Q2**. Critical visualizations comparing original and substituted samples are missing, which would better demonstrate the method's effectiveness. **A2**. Thanks for the insightful suggestion! We will visualize the original and the substituted samples in our revised paper to help readers understand PASS's behavior and showcase PASS's effectiveness. **Q3**. The substitute dataset construction relies on random sampling without justification or exploration of more optimal selection strategies. **A3**. Thanks for the thoughtful question! 
To better understand how the choice of substitution dataset affects PASS’s performance, we conducted a new ablation study on the AudioMNIST dataset. In this study, we constructed multiple substitution datasets with varying attribute distributions: 1. For the sensitive attribute "gender", we varied its distribution from "90\% Male / 10\% Female" to "10\% Male / 90\% Female". 2. For the useful attribute "digit", we varied its distribution from uniform distribution "10\% 0-9" to a highly skewed distribution "50\% 0 / 5.6\% 1-9". The results, shown in **Table A** below, demonstrate that PASS consistently maintains high performance across all these different substitution dataset configurations, even under highly imbalanced conditions. This ablation study provides empirical evidence that the specific selection of the substitution dataset has a limited impact on PASS's overall performance. Consequently, it may justify that a simple random sampling strategy is sufficient to construct an effective substitution dataset and achieve strong PASS performance. That said, we agree that there may still be room to further enhance PASS's effectiveness with more advanced substitution dataset selection strategies. We plan to explore this direction in our future work. **Table A**. Ablation study on the distribution of substitution dataset. The other settings are the same as Table 2. Results are averaged over 3 random seeds (STD is omitted to save space). 
|gender distribution|digit distribution|gender NAG(↓)|accent NAG(↑)|age NAG(↑)|ID NAG(↑)|digit NAG(↑)|mNAG(↑)| |-|-|-|-|-|-|-|-| |90% Male / 10% Female|10% 0-9|0.0|47.9|28.4|51.2|96.9|56.1| |80% Male / 20% Female|10% 0-9|0.0|46.4|27.6|49.7|96.5|55.0| |50% Male / 50% Female|10% 0-9|0.1|47.0|28.1|50.5|96.5|55.4| |20% Male / 80% Female|10% 0-9|0.0|47.4|27.8|50.8|96.7|55.6| |10% Male / 90% Female|10% 0-9|0.0|47.8|27.4|51.2|96.3|55.7| |80% Male / 20% Female|30% 0 / 7.8% 1-9|0.0|46.7|27.6|50.0|96.0|55.1| |80% Male / 20% Female|50% 0 / 5.6% 1-9|0.0|44.3|25.3|48.1|93.9|52.9|
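The substitution step discussed in A1 above (keeping only embeddings in memory and drawing a replacement sample according to $P_\theta(X'|X)$) can be sketched as follows. This is a minimal, hypothetical illustration: the dot-product/softmax parameterization and all function names are assumptions for exposition, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def substitution_probs(x_emb, sub_embs, temperature=1.0):
    # Softmax over similarities between the sample embedding and the
    # substitution-dataset embeddings (an assumed parameterization of
    # P_theta(X'|X); PASS's actual form may differ).
    logits = sub_embs @ x_emb / temperature
    logits -= logits.max()  # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def substitute(x_emb, sub_samples, sub_embs):
    # Only the embeddings are needed in memory to compute probabilities;
    # the chosen substitute sample is fetched by index afterwards.
    p = substitution_probs(x_emb, sub_embs)
    idx = rng.choice(len(sub_samples), p=p)
    return sub_samples[idx]

# Toy example: 5 substitute samples with 3-dimensional embeddings.
samples = ["s0", "s1", "s2", "s3", "s4"]
embs = rng.normal(size=(5, 3))
x = rng.normal(size=3)
print(substitute(x, samples, embs))
```

Under this sketch, the memory footprint at inference scales with the embedding matrix rather than the raw substitution dataset, which matches the rebuttal's point about reducing the embedding dimension to fit resource constraints.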
Summary: PASS (Private Attributes Protection with Stochastic Data Substitution) introduces a novel method to protect private attributes in machine learning datasets by replacing original data samples with others from a substitution dataset using a stochastic algorithm trained with an information-theoretic loss. Unlike existing adversarial training-based methods, which are vulnerable to probing attacks, PASS provides a stronger defense by offering a theoretical foundation for balancing privacy and utility. Empirical results on datasets like CelebA, Motion Sense, and AudioMNIST show that PASS effectively protects private attributes while preserving data utility. Claims And Evidence: The claims made in this submission are mostly supported by empirical and theoretical evidence. However, there is still one concerning point: while PASS is compared to k-l-t privacy and LDP, it lacks direct benchmarks against modern differential privacy mechanisms such as DP-SGD. This makes the privacy guarantees of PASS less quantifiable and more difficult to evaluate in comparison to formal differential privacy methods. The submission presents ablation studies (Tables 9, 10) demonstrating robustness to $\mu$ and $|\mathcal{D}_{\text{substitute}}|$. However, it does not provide clear guidelines for selecting $\lambda$ and $\mu$ in practice, which could make it difficult for users to balance privacy-utility trade-offs without relying on trial-and-error tuning. Methods And Evaluation Criteria: The paper primarily uses Normalized Accuracy Gain (NAG) and mean NAG (mNAG) as evaluation metrics. While NAG appropriately accounts for class imbalance, it depends on classifier accuracy, which could conflate the quality of obfuscation with classifier robustness. Incorporating additional metrics such as precision, recall, and F1-score could provide a more comprehensive and balanced evaluation of the method's performance. 
Theoretical Claims: I have verified the theoretical claims and proofs presented in the paper. Experimental Designs Or Analyses: 1. In the Motion Sense experiments, the authors evaluate the performance of an "unfinetuned" classifier (a classifier only pre-trained on the original data without fine-tuning on the substituted data). While this provides some insight into the transferability of features, it's not clear why this metric is only used for the Motion Sense dataset and not the others. 2. The paper states that hyperparameters are set to balance privacy protection, utility preservation, and general feature preservation. However, the specific process of hyperparameter tuning is not described in detail. This raises the possibility that the performance of PASS could be sensitive to the choice of hyperparameters. Supplementary Material: I have reviewed the supplementary material, especially the theoretical proof. Relation To Broader Scientific Literature: The paper introduces a novel stochastic data substitution approach that avoids adversarial training altogether. This method relates to broader research on randomized response techniques in privacy-preserving data analysis. However, PASS extends these concepts to high-dimensional data spaces and incorporates utility-preserving requirements, distinguishing it from traditional randomized response methods. Essential References Not Discussed: As far as I know, there are no related works essential to understanding the (context for) key contributions of the paper that are not currently cited/discussed in the paper. Other Strengths And Weaknesses: Strengths 1. PASS introduces a creative approach to private attribute protection by using stochastic data substitution instead of adversarial training. This represents an original combination of ideas from randomized response techniques and information theory. 2. 
The method is rigorously grounded in information theory, with clear derivations and proofs connecting the practical loss function to the theoretical objectives. This provides a solid mathematical basis for the approach. 3. PASS demonstrates effectiveness across multiple data modalities (audio, sensor data, images), suggesting it can generalize well to different types of private attribute protection tasks. 4. By addressing the vulnerability of existing methods to probing attacks, PASS tackles a real-world concern in deploying privacy-preserving machine learning systems. Weaknesses 1. While PASS is compared to several baselines, the evaluation lacks comparison to differential privacy methods, which are considered the gold standard for privacy guarantees. 2. Although PASS shows empirical privacy protection, it does not provide formal privacy guarantees comparable to differential privacy's $\epsilon$-$\delta$ bounds. This makes it challenging to precisely quantify the level of privacy achieved. 3. While some ablation studies are provided, there is limited guidance on selecting optimal hyperparameters (λ, μ) for balancing privacy and utility in practice. Other Comments Or Suggestions: 1. It may be helpful to apply the "unfinetuned" classifier metric to other datasets for consistency and broader insights into transferability. 2. Providing more details on the hyperparameter tuning process would help clarify how privacy, utility, and feature preservation are balanced. 3. Adding direct benchmarks against modern DP methods (e.g., DP-SGD) could make the privacy guarantees more comparable and easier to quantify. 4. Discussing potential limitations of the stochastic data substitution approach, particularly in high-dimensional or highly imbalanced data, would strengthen the analysis. Questions For Authors: 1. Could you clarify why the "unfinetuned" classifier metric was only applied to the Motion Sense dataset and not to other datasets? 
Applying it more broadly could provide a more comprehensive view of feature transferability. 2. Could you provide more details on how the hyperparameters (e.g., $\lambda$ and $\mu$) were tuned? Understanding this process would clarify how privacy, utility, and feature preservation are balanced and help assess the generalizability of the approach. 3. Do you plan to include direct benchmarks against modern DP methods such as DP-SGD? This would make it easier to compare PASS’s privacy guarantees with those of established differential privacy techniques. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1**: "...apply the "unfinetuned" classifier metric to other datasets..." **A1**: Thanks for the suggestion! We applied the "NAG-unfinetuned" metric to the useful attributes in the AudioMNIST and CelebA datasets for consistency. The results are shown in **Tables A** and **B**, respectively. For unfinetuned classifiers, PASS still offers the best balance between privacy protection and utility preservation. This is due to our loss function, which encourages replacing each sample with others sharing the same useful attributes. That said, we acknowledge that fine-tuning downstream classifiers can further improve performance, and we plan to explicitly explore adapting PASS to better suit unfinetuned classifiers in future work. **Table A**. Comparison with baselines, using "NAG-unfinetuned" metric on useful attributes. The settings are the same as Table 2. NAG on the private attribute "gender" is also presented for reference. Results are averaged over 3 random seeds (STD is omitted to save space). |Method|gender NAG(↓)|digit NAG-unfinetuned(↑)| |-|-|-| |ADV|71.4|99.7| |GAP|13.3|0.3| |MSDA|78.4|0.0| |BDQ|69.0|5.9| |PPDAR|81.7|66.2| |MASS|88.9|31.4| |PASS|0.0|77.0| **Table B**. Comparison with baselines, using "NAG-unfinetuned" metric on useful attributes. The settings are the same as Table 5. NAG on the private attribute "Male" is also presented for reference. Results are averaged over 3 random seeds (STD is omitted to save space). |Method|Male NAG(↓)|Smiling NAG-unfinetuned(↑)|Young NAG-unfinetuned(↑)| |-|-|-|-| |ADV|99.9|99.9|98.3| |GAP|83.0|0.0|0.0| |MSDA|91.6|0.0|0.0| |BDQ|99.7|98.5|90.2| |PPDAR|99.7|98.1|92.7| |MASS|96.9|0.0|0.0| |PASS|4.9|85.2|49.1| **Q2**. "...guidance on selecting optimal hyperparameters ($\lambda$, $\mu$)..." **A2**. To guide the selection of $\lambda$ and $\mu$, we conducted the following ablation studies. 
For $\lambda$, the results in **Table C** below show that PASS consistently achieves near-0 NAG on private attribute and stable overall performance across a wide range of values. Thus, we recommend the users to simply fix $\lambda=N/M$ without extensive tuning, where $N$ and $M$ are numbers of useful and private attributes, respectively. This choice is used throughout this paper. For $\mu$, results in Appendix F.1 Table 9 show that increasing $\mu$ enhances general feature preservation but may slightly reduce useful attribute preservation. Importantly, privacy protection remains strong across all $\mu$. Therefore, we suggest that $\mu$ can be flexibly adjusted based on the relative importance of general features in users' specific tasks, without compromising privacy protection. **Table C**. Ablation study on $\lambda$. The other settings are the same as Table 2. Results are averaged over 3 random seeds (STD is omitted to save space). |$\lambda$|gender NAG(↓)|accent NAG(↑)|age NAG(↑)|ID NAG(↑)|digit NAG(↑)|mNAG(↑)| |-|-|-|-|-|-|-| |0.1N/M|0.0|45.0|25.2|48.5|88.5|51.8| |0.2N/M|0.0|45.3|25.7|49.3|91.5|53.0| |0.5N/M|0.0|47.3|28.4|50.8|95.0|55.4| |1N/M|0.0|46.4|27.6|49.7|96.5|55.0| |2N/M|0.0|47.8|28.5|51.3|97.8|56.4| |5N/M|0.1|46.2|27.0|49.9|99.0|55.4| **Q3**. "...direct benchmarks against modern DP methods (e.g., DP-SGD)..." **A3**. We compared PASS with four additional DP-based baselines: Laplace Mechanism (Additive Noise) [1], DPPix [2], Snow [3], and DP-Image [4]. Due to space limitations, the results are presented in **Table A** in our response to **Reviewer RCzr** above. These methods show limited effectiveness on the Private Attribute Protection task because they are designed to prevent inference of **membership** from obfuscated samples, which is not fully aligned with our objective of preventing inference of **specific private attributes** from obfuscated samples while preserving the utility. 
In contrast, DP-SGD [5] aims to protect against membership inference **from a model's trained parameters**, which is orthogonal to our goal—PASS focuses on preventing private attribute inference **from obfuscated samples**. Therefore, DP-SGD is not directly comparable to PASS, but could potentially be combined with PASS to provide more comprehensive privacy protection. References [1-4] are shown in our response **A3** to **Reviewer RCzr** due to space limitations. [5]: Abadi, Martin, et al. "Deep learning with differential privacy." Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. 2016. **Q4**. "Discussing potential limitations... particularly in high-dimensional or highly imbalanced data..." **A4**. Thanks for the suggestion! While we have demonstrated PASS's effectiveness on moderately high-dimensional data (e.g., 3×160×160 images in CelebA) and imbalanced datasets (e.g., 80\% Male / 20\% Female for "gender" in AudioMNIST), we acknowledge that handling even higher-dimensional or more imbalanced data may demand PASS to learn more representative features per sample, posing nontrivial challenges to PASS's robustness and scalability. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. The authors have addressed the key concerns well. Overall, the rebuttal significantly improves the submission. I'm upgrading my score. --- Reply to Comment 1.1.1: Comment: Thanks for your comments. We genuinely appreciate your careful review and are grateful that our responses helped clarify the key concerns!
Summary: This paper addresses the challenge of protecting private attributes in machine learning (ML) services while preserving the utility of the data for downstream tasks. Existing methods primarily rely on adversarial training to remove private attributes, but the authors identify a fundamental vulnerability in these approaches, both theoretically and empirically. To mitigate this issue, the paper introduces PASS, a novel stochastic substitution mechanism that replaces original samples with alternative ones based on a probabilistic framework. The approach is guided by a newly designed loss function, rigorously derived from an information-theoretic objective. Extensive experiments across multiple modalities—facial images, human activity sensor data, and voice recordings—demonstrate PASS's effectiveness and generalizability. The proposed method offers a fresh perspective on privacy-preserving ML, particularly by moving away from adversarial training. The work is well-motivated, and the empirical results strengthen its claims. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes. Theoretical Claims: I have checked proof of theorem 4.2 and section D in the appendix. It seems correct. Experimental Designs Or Analyses: Yes, I have checked all the experiments and it looks sound to me. Supplementary Material: Proof of theorem 4.2 and section D. Relation To Broader Scientific Literature: This work significantly advances privacy-preserving ML by identifying fundamental weaknesses in adversarial training approaches and proposing PASS, a theoretically grounded and empirically validated alternative. By leveraging probabilistic sample substitution rather than adversarial representation learning, the method contributes to both the privacy literature and broader discussions on fairness and information-theoretic ML. Essential References Not Discussed: Not sure. 
Other Strengths And Weaknesses: Strengths: 1. The paper is very well-written and easy to follow. Even for a non-expert, the key concepts are clearly conveyed. Weaknesses: 1. The experiments consider only a single private attribute per dataset, despite multiple useful attributes being present. Evaluating the method with multiple private attributes would strengthen the analysis. 2. The NAG metric lacks a clear explanation. More details are needed on its computation, particularly how Acc_guessing is derived. Other Comments Or Suggestions: Typo: Line 32: image/(vedio) detection. Questions For Authors: What happens when we use other attributes like age is used as private? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1**: "The experiments consider only a single private attribute per dataset, despite multiple useful attributes being present. Evaluating the method with multiple private attributes would strengthen the analysis." **A1**: Thanks for your question. This paper included experiments with multiple private attributes to demonstrate PASS’s scalability. - In Table 3, we experimented with up to 4 private attributes: "gender", "accent", "age", and "ID", on the AudioMNIST dataset. - In Tables 4 and 13, we experimented with 2 private attributes: "gender" and "ID", on the MotionSense dataset. - In Table 16, we experimented with up to 5 private attributes: "Male", "Smiling", "Young", "Attractive", and "Mouth\_Slightly\_Open", on the CelebA dataset. Across all settings, PASS consistently delivered a strong performance, highlighting its ability to scale to varying numbers and combinations of private attributes. Please see Sections 5.2, 5.3, Appendices F.2 and F.3 for details. **Q2**: "The NAG metric lacks a clear explanation. More details are needed on its computation, particularly how Acc\_guessing is derived." **A2**: Thanks for pointing this out. We will include a more detailed explanation of NAG below as an extension to Section 5.1. NAG, proposed by Chen et al. [1], is a metric for evaluating Private Attribute Protection methods. For private attribute $S_i$, the NAG is defined as $$ NAG(S_i) = \max\left(0,\frac{Acc(S_i) - Acc_\text{guessing}(S_i)}{Acc_\text{no-suppr.}(S_i) - Acc_\text{guessing}(S_i)}\right), $$ where - $Acc(S_i)$ is the accuracy of a classifier trained to classify $S_i$ from the substituted sample $X'$. - $Acc_\text{guessing}(S_i)$ is the accuracy of a majority classifier (a classifier that always predicts the most frequent class regardless of the input, i.e., a poor classifier that can only "guess"), which in most cases represents the lower bound performance of $Acc(S_i)$. 
- $Acc_\text{no-suppr.}(S_i)$ is the classification accuracy of a classifier trained to classify $S_i$ from the original data $X$, which in most cases represents the upper bound performance of $Acc(S_i)$. By normalizing $Acc(S_i)$ between the lower bound ($Acc_\text{guessing}(S_i)$) and the upper bound ($Acc_\text{no-suppr.}(S_i)$), we ensure that NAG is on a consistent scale across all attributes, regardless of whether an attribute is balanced or imbalanced, or whether it is easy or difficult to predict. As a result, NAG provides a fair and reliable measure of how well each private attribute is suppressed or preserved. This makes it a suitable and consistent metric for comparing the effectiveness of different Private Attribute Protection methods. We will include this detailed description of NAG in our revised paper. [1]: Chen, Yizhuo, et al. "MaSS: multi-attribute selective suppression for utility-preserving data transformation from an information-theoretic perspective." Proceedings of the 41st International Conference on Machine Learning. 2024. **Q3**: "What happens when we use other attributes like age is used as private?" **A3**: Thanks for your question. This paper included experiments using "age" as a private attribute on the AudioMNIST dataset, as shown in Section 5.2, Table 3. Similarly, we experimented with "Young" as a private attribute on the CelebA dataset, detailed in Appendix F.2, Table 16. In both cases, PASS demonstrated a robust ability to suppress the private attribute "age" or "Young", while effectively preserving other useful attributes. Throughout the paper, we have evaluated PASS on a wide range of private attributes, including "gender", "accent", "age", "ID", "Male", "Smiling", "Young", "Attractive", and "Mouth\_Slightly\_Open". Across all these scenarios, PASS consistently achieved strong performance, reinforcing its broad applicability and versatility in handling diverse Private Attribute Protection tasks.
Summary: This paper proposes a feature substitution method based on an information-theoretic objective to preserve privacy for certain data attributes. The method does not depend on any specific adversarial strategy, making it more robust than existing adversarial-based approaches. #update after rebuttal: the rebuttal has addressed most of my concerns, except that I still think an end-to-end algorithm is much needed to make the approach easy to understand, and the limitations on scalability should be discussed in the final version. Overall I raised my rating and I hope the final version will incorporate these suggestions. Claims And Evidence: The claims are supported by both theoretical analysis and experimental evidence. Methods And Evaluation Criteria: Key technical details are missing, so it is hard to assess the soundness of the approach. Specifically, it is unclear from the paper how this PASS method is trained end-to-end. A formal algorithm depicting the steps is strongly suggested. Additionally, there are missing details regarding: 1. Embedding function g(x'). The paper states that it is a "learnable" embedding, but it seems that no trainable parameters are assigned. It is unclear how this embedding function is performed or chosen. 2. Sample replacement vs. NN training. It is not clear how the sample replacement step is integrated with the NN training, e.g., are all the samples replaced in the batch training? How often are parameters updated for each time that one or some samples are replaced? Theoretical Claims: No, I didn't check the correctness of the proof. However, one issue is that it is unclear how this method compares with DP, both theoretically and experimentally. The theoretical connection with DP seems to show that these are equivalent approaches under certain assumptions, but it does not demonstrate that the approach is advantageous compared to DP. It is suggested that the paper include more discussion on the comparison with DP. 
Experimental Designs Or Analyses: The experiments include several datasets and baselines. However, the baseline methods are all adversarial-based. The paper should also compare with other general defenses, such as DP, adding noise, and sparsification. It is also suggested that the experiments include comprehensive evaluations of key impacting factors. For example, the distribution and quantity of the substitution dataset seem to be a key factor for the method to work well. However, it is not fully discussed how the composition of the dataset affects the performance. Another issue is that the method may be difficult to scale on features. It seems that adding additional features will involve retraining the entire model, which may be a challenge compared to DP. The computation cost regarding training should also be presented. Supplementary Material: Yes. Relation To Broader Scientific Literature: This paper builds upon the existing literature on privacy protection of sensitive attributes, offering new insights from an information-theoretic approach. Essential References Not Discussed: No. Other Strengths And Weaknesses: I am a bit concerned that the broader applicability of the proposed PASS approach may be limited by its scalability on features or the availability of an abundant substitution dataset. I suggest the paper discuss these aspects in more detail. Other Comments Or Suggestions: No. Questions For Authors: See the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1**: "Embedding function $g(x')$." **A1**: Thanks for pointing it out! Our embedding function $g(x')$ is implemented as an embedding layer with trainable parameters (e.g., like the embedding layer in language models). We will update it as $g_\psi(x')$ to indicate its associated parameters clearly. **Q2**: "Sample replacement vs. NN training." **A2**: We do not need to perform "sample replacement" during training, because our novel loss function (Equation 3) can be calculated solely from the substitution probability $P_\theta(X'|X)$, represented as a matrix for each batch, which reduces training computation costs. **Q3**: "The paper should also compare with other general defenses, such as DP..." **A3**: Theoretically, as shown in Appendix B, PASS can be viewed as a Local DP method—specifically, a generalized form of Randomized Response with desirable DP properties. Experimentally, as suggested by the Reviewer, we further compared PASS with 4 additional DP-based baselines: Laplace Mechanism (Additive Noise) [1], DPPix [2], Snow [3], and DP-Image [4]. As shown in **Table A** below, these DP methods exhibit limited performance in the Private Attribute Protection task, because they focus on preventing the inference of **membership** from obfuscated samples, which is not fully aligned with our objective of preventing inference of **specific private attributes** while preserving utility. **Table A**. Comparison with DP-based baselines. The other settings are the same as Table 5. Results are averaged over 3 random seeds (STD is omitted to save space). |Method|Male NAG(↓)|Smiling NAG(↑)|Young NAG(↑)|Attractive NAG(↑)|Mouth\_Slightly\_Open NAG(↑)|High\_Cheekbones NAG(↑)|mNAG(↑)| |-|-|-|-|-|-|-|-| |SNOW|97.8|93.7|91.9|95.7|80.4|84.7|-8.5| |DPPix|94.5|81.7|86.8|91.4|63.6|78.1|-14.2| |Laplace Mechanism|91.0|79.3|87.0|89.8|60.8|81.2|-11.4| |DP-Image|79.6|68.5|79.8|79.8|55.0|78.7|-7.2| |PASS|4.9|98.3|78.6|58.1|67.0|86.7|**72.9**| [1]: Dwork et al. 
"Calibrating noise to sensitivity in private data analysis." Theory of Cryptography: Third Theory of Cryptography Conference, TCC. 2006. [2]: Fan, Liyue. "Image pixelization with differential privacy." IFIP Annual Conference on Data and Applications Security and Privacy. 2018. [3]: John et al. "Let it snow: Adding pixel noise to protect the user’s identity." ACM Symposium on Eye Tracking Research and Applications. 2020. [4]: Xue et al. "Dp-image: Differential privacy for image data in feature space." arXiv preprint. 2021. **Q4**: "...include comprehensive evaluations on key impacting factors. For example, the distribution and quantity of a substitution dataset..." **A4**: Thanks for the suggestion! Below, we demonstrate PASS's strong stability across varying quantities and distributions of substitution datasets. First, an ablation study on substitution dataset **quantity** (Appendix F.1, Table 10) shows that PASS maintains high performance across a wide range of substitution dataset sizes (1024 to 24,000). Second, we conducted a new ablation study on the **distribution** of the substitution dataset on AudioMNIST, where we varied the distribution of: 1. the sensitive attribute "gender", from "90\% Male / 10\% Female" to "10\% Male / 90\% Female". 2. the useful attribute "digit" from "10\% 0-9" to "50\% 0 / 5.6\% 1-9". As shown in **Table B**, PASS achieved consistently high performance, even on highly skewed distributions. **Table B**. Ablation study on the distribution of substitution dataset. The other settings are the same as Table 2. Results are averaged over 3 random seeds (STD is omitted to save space). 
|gender distribution|digit distribution|gender NAG(↓)|accent NAG(↑)|age NAG(↑)|ID NAG(↑)|digit NAG(↑)|mNAG(↑)| |-|-|-|-|-|-|-|-| |90% Male / 10% Female|10% 0-9|0.0|47.9|28.4|51.2|96.9|56.1| |80% Male / 20% Female|10% 0-9|0.0|46.4|27.6|49.7|96.5|55.0| |50% Male / 50% Female|10% 0-9|0.1|47.0|28.1|50.5|96.5|55.4| |20% Male / 80% Female|10% 0-9|0.0|47.4|27.8|50.8|96.7|55.6| |10% Male / 90% Female|10% 0-9|0.0|47.8|27.4|51.2|96.3|55.7| |80% Male / 20% Female|30% 0 / 7.8% 1-9|0.0|46.7|27.6|50.0|96.0|55.1| |80% Male / 20% Female|50% 0 / 5.6% 1-9|0.0|44.3|25.3|48.1|93.9|52.9| **Q5**: "...the method may be difficult to scale on features..." **A5**: All baseline state-of-the-art Private Attribute Protection methods in this paper require retraining when private attributes change. In comparison, PASS has a lower training cost than them, as its loss can be computed without actual sample replacement (please see **A2** and the last paragraph of Section 4.1 for more details). Furthermore, PASS is designed with scalability in mind, as it supports any number of private and useful attributes without substantially increasing computational cost. In addition, although some DP-based methods do not require retraining, their focus on general **membership protection** unavoidably limits their effectiveness on the Private Attributes Protection task, as shown in **A3** above.
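A2 in the rebuttal above states that the loss can be computed solely from the per-batch substitution-probability matrix, without materializing any replaced samples. The sketch below illustrates how such a row-stochastic matrix might be formed; the dot-product/softmax form, the function name, and the shapes are assumptions for illustration rather than the authors' implementation.

```python
import numpy as np

def batch_substitution_matrix(batch_embs, sub_embs, temperature=1.0):
    # P[i, j] plays the role of P_theta(x'_j | x_i): one softmax per batch
    # sample over the substitution-dataset embeddings. Loss terms can then
    # be evaluated directly on P, so no sample is actually replaced
    # during training.
    logits = batch_embs @ sub_embs.T / temperature   # shape (B, S)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
P = batch_substitution_matrix(rng.normal(size=(4, 8)), rng.normal(size=(16, 8)))
print(P.shape)                               # (4, 16)
print(bool(np.allclose(P.sum(axis=1), 1)))   # True
```

Because each row is a proper probability distribution over the substitution dataset, the matrix alone suffices for the information-theoretic objectives, consistent with the training-cost argument in A2 and A5.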
On the Emergence of Position Bias in Transformers
Accept (poster)
Summary: This paper studies the "position bias" in transformers, that is, the bias of the model to focus on certain regions of the input. The authors investigate how causal mask and positional encoding impact this position bias. To that end, they leverage a graph-theoretic formalization of the attention module to study the position bias. In particular, the authors unify under their study this bias and its empirical observations shown in prior works. The authors show that causal masking biases attention towards earlier positions and conduct this analysis for several types of masking used in practice. The authors also study how positional encoding and masking interact and provide insights into how to design them to trade-off between local and global context. Finally, the authors validate their findings experimentally in a controlled setting. Claims And Evidence: The theoretical claims are well supported by clear and detailed proofs, and the authors also provide experimental validation of their theory in a controlled setting. Methods And Evaluation Criteria: The authors provide theoretical results to better understand position bias in attention blocks. They conduct experiments to validate their findings. I believe the method and evaluation criteria make sense for the problem at hand. Theoretical Claims: The proofs are well-detailed and clear. Experimental Designs Or Analyses: The authors provided the code to reproduce their results. A quick inspection of the code seems to indicate that the experiments were well-conducted, although I did not re-run the experiments. The experimental design is sound, and results are analyzed, although the experiments are only done in a controlled, hence not very realistic, setting. Supplementary Material: I read the proofs and the experiments left in the appendix and reviewed the code provided in an anonymous link. Relation To Broader Scientific Literature: I find that related work and prior works are well introduced and compared. 
Although the graph formalization was introduced in a prior work [1], the submission's contributions seem to be novel regarding the understanding of position bias in attention layers by leveraging the graph formalization of attention layers.

*References*
[1] Wu et al. On the Role of Attention Masks and LayerNorm in Transformers, NeurIPS 2024

Essential References Not Discussed: To the best of my knowledge, there are no essential references not discussed in the current submission.

Other Strengths And Weaknesses:

**Strengths**
- The paper is very well written with detailed prior work, clear notations and technical background.
- I find the graph formalization very elegant and innovative.
- The theory is sound, with theorems contextualized and their implications explained.
- The contributions summarized in Table 1 are impressive, especially given the many connections to empirical findings in prior works.

**Weaknesses**
I list below what I think are weaknesses, but I would be happy to be corrected if I misunderstood some important aspects of the authors' contributions.
- The current analysis focuses on self-attention-only networks, while transformers include feed-forward layers. Although tokens are only interconnected in the attention blocks, the influence of feed-forward layers is not negligible, and I wonder how it would impact the current analysis. Could the authors elaborate on this point?
- In the experiments, the network and data considered are simple, which, I believe, is due to the need to be in a controlled setting to investigate the position bias properly. Could the authors elaborate on larger-scale experiments that could validate their theory or that can benefit from their findings (I do not ask the authors to conduct them, but rather to add a discussion for future work in more practical settings)? For instance, this could be in a limitation or discussion section at the end of the paper.
- Given that the graph-theoretic framework of attention was introduced in [1], I think that the mention of "novel graph-theoretic framework" in the abstract is not adapted. I acknowledge that this does not reduce the novelty of the contribution since it is the first time it is used to study the position bias, but I would appreciate if the author could elaborate on this if I misunderstood something or remove this term in the abstract if the framework is indeed not new. Overall, I find the paper interesting and the analysis well conducted. Although the model is simplified and the experiments are limited to controlled settings, I believe this is valuable work to better understand position bias in transformers. This is the reason for my current score and I would appreciate if authors could clarify the points mentioned above. **Update after rebuttal**: I increased my score from 4 to 5. Other Comments Or Suggestions: None Questions For Authors: 1) Could the authors elaborate on how the analysis conducted in the current submission could be extended or influenced by considering feed-forward layers? In particular, does the position bias only depend on the attention layers since this is the only block where tokens are interconnected, or can the MLPs also influence it, maybe in some indirect manner? 2) In the paragraph after Theorem 1, the authors discuss the role of softmax. If I understood correctly, softmax cannot lead to disconnected graph components since it is not sparse and cannot have exactly zero entries. The authors mention that, empirically, replacing softmax with ReLU can mitigate the emergence of attention sinks. What do the authors think of other families of "sparse" softmax, like sparsemax [1] or fenchel-young derived softmax [2]. From Theorem 1, would it make sense to study them through the lens of avoiding attention sinks? 
3) Could the author add a discussion/limitation section regarding the scope of their study (controlled setting, simplified transformer) and some room for future work in more practical settings? *References* [1] Martins et al. From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label Classification, ICML 2016 [2] Blondel et al. Learning with Fenchel-Young Losses, JMLR 2020 Code Of Conduct: Affirmed. Overall Recommendation: 5
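For context on question 2 above: unlike softmax, which assigns strictly positive weight everywhere (and hence, per the discussion after Theorem 1, cannot disconnect graph components), sparsemax (Martins et al., 2016) projects the scores onto the probability simplex and can return exact zeros. A minimal NumPy sketch for illustration, not from the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sparsemax(z):
    # Euclidean projection of z onto the probability simplex (Martins et al., 2016).
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, len(z) + 1)
    cssv = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cssv   # coordinates kept in the support
    rho = k[support][-1]                # support size
    tau = (cssv[rho - 1] - 1.0) / rho   # threshold
    return np.maximum(z - tau, 0.0)

scores = np.array([3.0, 1.0, 0.1])
p_soft = softmax(scores)      # strictly positive everywhere: no zero edges
p_sparse = sparsemax(scores)  # exact zeros: edges can be pruned from the attention graph
```

Because sparsemax can zero out attention edges exactly, studying it through the lens of Theorem 1 (disconnected components and attention sinks) seems like a natural follow-up.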
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and positive assessment of our work. Below, we provide individual responses to the comments and questions you raised. > I wonder how MLPs would impact the current analysis. Thank you for the question. Our analysis focuses on the self-attention mechanism, as it is the primary component responsible for information exchange across tokens. In contrast, MLP layers are typically applied independently to each token and do not directly contribute to inter-token communication. As such, they are unlikely to affect position bias in the same way that attention mechanisms do. That said, we acknowledge that MLPs, being universal function approximators, can theoretically induce position-dependent effects under specific conditions. For instance, one could construct a scenario where an MLP maps $x_i$ to $x_{n-i+1}$ for a sequence $x_1, x_2, x_3, x_4, …, x_{n-1}, x_n$​, thereby indirectly altering token interactions. However, such behaviors would require highly specific weight configurations that are unlikely to arise under standard training, and they would generally lack robustness to input variation. Our goal in this work is to isolate and understand the role of attention masks and positional encodings in shaping position bias—factors that more directly govern how positional information is integrated across tokens. Nonetheless, extending our framework to investigate whether and how MLP layers might interact with or amplify these biases is a valuable direction for future work, and we will add a discussion of this point in the revised manuscript. > What do the authors think of other families of "sparse" softmax, like sparsemax or fenchel-young derived softmax. Would it make sense to study them through the lens of avoiding attention sinks? Thank you for the question. Our current findings suggest that sparser attention mechanisms may help mitigate attention sinks by slowing the convergence of attention flow. 
In our framework, this is captured by Theorem 4.2, which shows that increased sparsity (e.g., shorter sliding windows) causes center nodes to accumulate influence more gradually, making them less likely to dominate early on. Consistently, we observe empirically that attention sinks are less likely to emerge under sparser masking conditions (Figure 10). This connection suggests that sparse alternatives to softmax could act as implicit regularizers, dampening the formation of attention sinks and promoting more balanced information flow across tokens. Extending our analysis to formally validate and characterize these effects would be a valuable direction for future work, and we will note this in the revised manuscript. > Could the authors elaborate on larger-scale experiments that could validate their theory or that can benefit from their findings? Thank you for the question. A key contribution of our theoretical framework is that it helps identify and quantify the sources of attention bias introduced by masking and positional encodings. These insights can inform the design of alternative transformer architectures, by motivating positional encoding schemes or masking strategies that encourage more uniform or task-aligned attention distributions. Larger-scale experiments could build on our findings to evaluate how different architectural choices impact position bias and downstream performance on different tasks. Conversely, one could also explore whether alignment between a model’s learned position bias and natural language structure (e.g., recency effects in Futrell et al.) correlates with improved language modeling performance. > Given that the graph-theoretic framework of attention was introduced in Wu et al. (2024), I think that the mention of "novel graph-theoretic framework" in the abstract is not adapted. Thank you for the thoughtful comment. 
We agree that the use of graph-theoretic tools to analyze attention mechanisms has been explored in prior work, notably by Abnar et al. and Wu et al. We will revise the wording in the abstract to more precisely reflect our contribution. The novelty of our work lies not in the use of graphs itself, but in how we apply this perspective to analyze the flow of information and the emergence of positional bias across multiple layers of attention. Our framework leverages the structure induced by causal and other attention masks to provide theoretical insight into how positional preferences arise independent of semantic content. We will clarify this distinction in the revised abstract to better situate our contributions. We sincerely appreciate your feedback and welcome any further suggestions.

---

References
Abnar et al. (2020) Quantifying Attention Flow in Transformers.
Futrell et al. (2015) Large-scale evidence of dependency length minimization in 37 languages.

---

Rebuttal Comment 1.1: Comment: I thank the author for the rebuttal that addresses my concerns. I maintain my evaluation: this is a clear and very well-presented work with valuable contributions to better understanding attention-based models. As discussed in the rebuttal above, it also opens interesting questions for analysis on larger models (theoretically or experimentally). Although there are areas of improvement, in my humble opinion, such papers are valuable to the community and should be published. To highlight that, I increased my score to 5.
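As a concrete illustration of the multi-layer attention flow discussed in this thread: the rollout is simply a product of row-stochastic masked attention matrices, and under a causal mask its mass concentrates on the first token as depth grows. A minimal sketch using random attention weights (an illustrative assumption, not the paper's trained model):

```python
import numpy as np

def random_causal_attention(n, rng):
    # Random positive scores under a causal mask, row-normalized (softmax-like).
    scores = rng.random((n, n)) * np.tril(np.ones((n, n)))
    return scores / scores.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n, depth = 8, 50
rollout = np.eye(n)
for _ in range(depth):
    # X^{(t)} = A^{(t)} X^{(t-1)}, so the rollout is the left-product of the A's.
    rollout = random_causal_attention(n, rng) @ rollout

# Each row of the rollout is a distribution over input tokens; with a causal
# mask, almost all mass ends up on token 0 (the "center node").
print(np.round(rollout[:, 0], 3))
```

Here token 0 acts as an absorbing state of the induced Markov chain, which is why depth amplifies the bias toward earlier positions.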
Summary: This paper presents a graph-theoretic framework to analyze how position bias emerges in transformer architectures. The authors mathematically model attention masks as directed graphs to understand how tokens interact based on their sequential positions across multiple layers of attention. The authors support their findings with experiments that reproduce position biases observed in real-world LLMs, including phenomena like "lost-in-the-middle" and attention sinks. Their framework helps explain why increasing model depth amplifies positional biases, why some positions gain disproportionate attention, and how different masking approaches affect information flow. The main claims of the paper are: - Causal masking inherently biases attention toward earlier positions in deep networks. This happens because tokens in deeper layers attend to increasingly contextualized representations of earlier tokens, amplifying their influence. - There's a nuanced interaction between causal masking and relative positional encodings (like decay masks and RoPE). While these encodings introduce distance-based decay within individual attention maps, their aggregate effect across multiple layers creates a trade-off between local decay effects and the cumulative importance of early sequence positions. Claims And Evidence: The connection between center nodes in their graph-theoretic framework and attention sinks in real models is theoretically elegant but would benefit from more direct empirical validation with actual LLM attention patterns rather than just the controlled experiments. Methods And Evaluation Criteria: The authors' methodological choices seem good: - Using a graph-theoretic framework is appropriate for analyzing information flow in attention mechanisms. - The controlled experimental setting from Reddy (2024) allows for isolating the effects of different architectural components on position bias. 
- The evaluation metric (accuracy gap between different positions) directly measures position-dependent performance differences.

Even so, focusing only on synthetic data leaves open the question of whether the same dynamics would emerge in natural language data with more complex semantic structures.

Theoretical Claims: The theoretical claims seem sound.

Experimental Designs Or Analyses: The experimental design is generally sound but has some minor limitations:
- The experiment uses very small model sizes (n=8, d=64) compared to real LLMs, raising questions about whether the observed effects scale appropriately to larger models.
- While the authors test three different position encoding schemes, they use only a single implementation of each. Testing variants (like different decay rates for decay masks or base frequencies for RoPE) would strengthen the generalizability of the findings.
- The statistical analysis is minimal, with results presented as averages over five runs but without confidence intervals or significance testing, making it difficult to assess the reliability of the observed effects.

Supplementary Material: I quickly read the appendix.

Relation To Broader Scientific Literature: I think this paper is valuable: the paper builds a bridge between empirical observations of positional phenomena in transformers and theoretical understanding of their architectural causes, while in the literature these phenomena were observed but not fully explained.
- The paper connects to the literature on the "lost-in-the-middle" phenomenon described by Liu et al. (2024), Zhang et al. (2024), and Guo & Vosoughi (2024) - the authors provide a theoretical explanation based on the interplay between positional biases from the architecture and biases in the training data, showing how specific patterns of position-dependent performance emerge under different conditions.
- The paper connects to recent work on attention sinks by Gu et al. (2025) and Xiao et al.
(2024) - the authors provide a theoretical explanation by showing that attention sinks naturally emerge at center nodes in the directed graph defined by the attention mask. This explains why sinks form at the beginning of sequences under causal masking or at all prefix tokens under prefix masking.

Essential References Not Discussed: I'd suggest the authors check these papers:
- Stolfo, Alessandro, et al. "Confidence regulation neurons in language models." Advances in Neural Information Processing Systems 37 (2024): 125019-125049.
- Cancedda, Nicola. "Spectral filters, dark signals, and attention sinks." arXiv preprint arXiv:2402.09221 (2024).

Other Strengths And Weaknesses:

Other weaknesses:
- The paper primarily analyzes the attention mechanism without deeply exploring how feed-forward networks, residual connections, and other transformer components might interact with position bias.

Other strengths:
- The work successfully connects multiple empirical observations about position bias (attention sinks, lost-in-the-middle effect) within a coherent theoretical framework, offering deeper understanding of these phenomena.
- The authors provide formal proofs for their theorems, making their claims verifiable and establishing a solid foundation for future theoretical work in this area.
- The controlled experiments are well-designed to isolate the effects of different components (mask type, positional encoding, model depth) on position bias, validating the theoretical findings.

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your positive assessment and constructive comments, which have helped strengthen our work. Below, we provide responses to your comments.

> The paper primarily analyzes the attention mechanism without deeply exploring how other transformer components might interact with position bias.

Thank you for your thoughtful comment. We agree that these other transformer components could contribute to position bias in a nontrivial way. We view this as an important direction for future work. That said, our focus on the attention mechanism in this work is aligned with a common practice in the theoretical analysis of transformers: isolating the attention mechanism to better understand its intrinsic inductive biases. Many prior studies have adopted similar simplifications by omitting or abstracting away certain components in order to enable precise analysis [1-5]. Nevertheless, our analysis can be naturally extended to account for residual connections. Following the renormalization technique in [6], we can redefine the attention matrix at layer t as $A^{(t)}\_{res} = 0.5 A^{(t)} + 0.5I$. This ensures $A^{(t)}_{res}$ remains a valid stochastic matrix and thus retains interpretability. Under this adjustment, our theoretical results still hold, but the convergence rate would slow down, aligning with findings in [1] that residual connections slow down the rank collapse rate under attention. We will add a remark to the paper detailing how to handle residual connections in our analysis and their effect.

> ...would benefit from more direct empirical validation with actual LLM attention patterns rather than just the controlled experiments.

Thank you for the comment. Natural language lacks ground-truth annotations for position bias, making it difficult to disentangle architectural effects from semantic content. To enable precise control, we adopt the synthetic setup from Reddy, which allows us to study positional bias in isolation.
Despite this abstraction, our setup reproduces key behaviors observed in real LLMs, such as the "lost-in-the-middle" effect (Sec. 5.2) and attention sinks (App. K.2). Tab 1 further shows alignment between our results and empirical observations on position bias reported in the literature. We agree that validating these results directly in real models with natural language data is an important next step. As noted in App K, one direction is to quantify position bias in LLMs and relate it to known linguistic structures [7]. We will highlight these directions in the updated version.

> The experimental design has some minor limitations.

Thank you for your comment. The choices n = 8 and d = 64 follow from Reddy. In line with your suggestion, we have added more variants for the decay masks and RoPE for the experiment in Sec 5.1 under the same setup:

Decay

| m | depth | first vs. middle | first vs. last | middle vs. last |
|---|---|---|---|---|
| | 2 | -.025 | -.092 | -.070 |
| 0.511 | 6 | -.057 | -.059 | -.006 |
| | 10 | -.044 | -.049 | -.002 |
| | 2 | -.043 | -.064 | -.022 |
| 0.223 | 6 | .000 | .039 | .042 |
| | 10 | .030 | .075 | .039 |
| | 2 | .010 | .009 | -.011 |
| 0.105 | 6 | .079 | .121 | .044 |
| | 10 | .110 | .148 | .073 |

RoPE

| $\theta$ | depth | first vs. middle | first vs. last | middle vs. last |
|---|---:|---:|---|---|
| | 2 | .005 | .001 | -.002 |
| 1/100 | 6 | .051 | .070 | .008 |
| | 10 | .075 | .088 | .012 |
| | 2 | 0.006 | .002 | -0.009 |
| 1/1000 | 6 | .064 | 0.84 | .015 |
| | 10 | .079 | .086 | .018 |
| | 2 | .005 | .013 | -0.013 |
| 1/10000 | 6 | .078 | .088 | .015 |
| | 10 | .092 | .104 | .013 |

The results align with our findings in Sec 4, with greater $m$ or $\theta$ inducing greater decay, while deeper attention amplifies the bias toward earlier tokens.

We also present here the standard deviations for the results in Fig 2:

| | | first vs. middle | first vs. last | middle vs. last |
|---|---|---|---|---|
| | 2 | 0.016 | 0.014 | 0.009 |
| no PE | 6 | 0.026 | 0.013 | 0.021 |
| | 10 | 0.011 | 0.013 | 0.005 |
| | 2 | 0.016 | 0.015 | 0.007 |
| decay mask | 6 | 0.017 | 0.018 | 0.028 |
| | 10 | 0.010 | 0.019 | 0.011 |
| | 2 | 0.016 | 0.017 | 0.010 |
| RoPE | 6 | 0.025 | 0.027 | 0.010 |
| | 10 | 0.020 | 0.025 | 0.005 |

The standard deviations for Fig 3 show a similar trend: they are small compared to the averages. This supports the robustness of our reported trends.

We sincerely appreciate your feedback and welcome any further suggestions.

----

References
1. Attention is not all you need: pure attention loses rank doubly exponentially with depth
2. Signal propagation in transformers: Theoretical perspectives and the role of rank collapse
3. A mathematical perspective on transformers
4. The emergence of clusters in self-attention dynamics
5. On the Role of Attention Masks and LayerNorm in Transformers
6. Quantifying Attention Flow in Transformers
7. Large-scale evidence of dependency length minimization in 37 languages

---

Rebuttal Comment 1.1: Comment: I thank the authors for providing these clarifications and I'd like to keep my overall score at 4.
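For concreteness, a decay mask of the kind varied in the ablation above can be sketched as an ALiBi-style linear bias, where a slope m penalizes pre-softmax scores by token distance; the paper's exact construction may differ, so this is an illustrative sketch with uniform (zero) content scores:

```python
import numpy as np

def decay_masked_attention(scores, m):
    # Causal attention with an ALiBi-style decay: subtract m * (i - j) from the
    # score of query i attending to key j <= i, then apply a row-wise softmax.
    n = scores.shape[0]
    i, j = np.indices((n, n))
    biased = scores - m * (i - j)
    biased[j > i] = -np.inf  # causal mask
    e = np.exp(biased - biased.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Slopes taken from the ablation values above: a larger m yields stronger
# distance decay within each layer, shifting attention toward recent tokens.
A_weak = decay_masked_attention(np.zeros((5, 5)), m=0.105)
A_strong = decay_masked_attention(np.zeros((5, 5)), m=0.511)
```

Comparing the last rows of `A_weak` and `A_strong` shows the per-layer trade-off discussed in the thread: a steeper slope concentrates mass on nearby tokens, while the multi-layer rollout still favors early positions.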
Summary: This paper analyses position bias in transformers, both theoretically and experimentally. The paper first proposes to analyse a transformer as a graph, with attention weights representing weighted edges between two tokens' representations in adjacent transformer layers; an *attention flow* can then be computed between tokens $t$ and $t'$ at layers $\ell$ and $\ell'$, respectively, by summing the attention over all possible paths between them. The paper then uses this graph-based framework to show how the attention flow changes as the depth of a transformer grows to infinity. They consider models with autoregressive, sliding, and prefix attention masks. Importantly, they show that for these three models all attention flow converges to the first token position as the depth of a model grows. They also consider models with no position embeddings, with ALiBi, and with RoPE. They then experiment on a synthetic task, with results supporting their theoretical analysis.

# Strengths

This paper studies an interesting phenomenon, position bias in autoregressive language models, providing new theoretical results justifying its existence. This paper then supports this theoretical analysis with well designed experiments.

# Weaknesses

The paper misses some critical literature in interpretability and analysis of language models, which could make analyses stronger if considered. In particular, the paper fails to cite Abnar et al. (2020), who first proposed the graph-theoretical analysis of transformers used in this paper (termed attention flow there). This framework has later been criticised/expanded by, e.g., Kobayashi et al. (2022), who point out that considering value-vector norms is important for more meaningful analyses involving attention flow. Further, discussing papers such as Jain et al. (2019) and Wiegreffe et al. (2019), and their relation to the analyses proposed here could significantly strengthen the paper.
The theoretical analysis ignores residual connections, which are integral to modern transformers. Residual connections can be interpreted as increasing the attention weight by one on the diagonal entries, i.e., $A = \widehat{A} + I$. Given this extra term, the condition necessary to prove Theorem 4.1 (i.e., $P_{ij} < 1 - \epsilon$) does not hold. So, do the theoretical analyses here only hold for transformers without residual connections? Furthermore, were residual connections used in the experiments?

* Jain et al. (2019). Attention is not Explanation. https://aclanthology.org/N19-1357.pdf
* Wiegreffe et al. (2019). Attention is not not Explanation. https://aclanthology.org/D19-1002.pdf
* Abnar et al. (2020). Quantifying Attention Flow in Transformers. https://aclanthology.org/2020.acl-main.385.pdf
* Kobayashi et al. (2022). Attention is Not Only a Weight: Analyzing Transformers with Vector Norms. https://aclanthology.org/2020.emnlp-main.574.pdf

Claims And Evidence: Yes. The paper supports its claims with both theoretical analyses and practical experiments.

Methods And Evaluation Criteria: Yes. The only issue in this regard is the lack of residual connections in the theoretical analyses.

Theoretical Claims: I checked the two first proofs and did not immediately find any issues.

Experimental Designs Or Analyses: All experiments seemed sound and valid to support the paper's claims.

Supplementary Material: Only the two first theorems' proofs.

Relation To Broader Scientific Literature: I believe the lack of connection with prior work is the largest issue in this submission. The paper misses some critical literature. In particular, the paper fails to cite Abnar et al. (2020), who first proposed the graph-theoretical analysis of transformers used in this paper (termed attention flow there). This framework has later been criticised/expanded by, e.g., Kobayashi et al.
(2022), who point out that considering value-vector norms is important for more meaningful analyses involving attention flow. Further, discussing papers such as Jain et al. (2019) and Wiegreffe et al. (2019), and their relation to the analyses proposed here could significantly strengthen the paper. * Jain et al. (2019). Attention is not Explanation. https://aclanthology.org/N19-1357.pdf * Wiegreffe et al. (2019). Attention is not not Explanation. https://aclanthology.org/D19-1002.pdf * Abnar et al. (2020). Quantifying Attention Flow in Transformers. https://aclanthology.org/2020.acl-main.385.pdf * Kobayashi et al. (2022). Attention is Not Only a Weight: Analyzing Transformers with Vector Norms. https://aclanthology.org/2020.emnlp-main.574.pdf Essential References Not Discussed: Yes. The paper claims one of its contributions is proposing a new graph-based framework for analysing transformers, but this framework was already proposed by Abnar et al. (2020). In particular, the main value analysed in this paper $P_{ij}^{(t)}$ is what Abnar et al. (2020) term attention flow. Abnar et al. (2020). Quantifying Attention Flow in Transformers. https://aclanthology.org/2020.acl-main.385.pdf Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: * Line 125: Should matrices W_q and W_k have size d_{QK} instead of d’? The same question for W_v in line 146. * Attention sink and center node: As I understand it, attention sinks are not about the limiting behavior of attention flow as $t \to \infty$. Attention sinks are typically defined as a specific (single) attention head which puts all attention mass on a single previous position. If that’s the case, then your analysis does not actually justify the existence of attention sinks, right? Or does it? Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
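As a numerical aside on the residual-connection question above: $A = \widehat{A} + I$ is not row-stochastic, but the renormalization $0.5A + 0.5I$ is, so a rollout-style analysis still applies, with the renormalized chain concentrating on the first token more slowly. A sketch with random causal attention (an illustrative assumption, not the paper's trained model):

```python
import numpy as np

rng = np.random.default_rng(1)
n, depth = 8, 50

def causal_attention():
    # Random positive scores under a causal mask, row-normalized.
    s = rng.random((n, n)) * np.tril(np.ones((n, n)))
    return s / s.sum(axis=1, keepdims=True)

plain = np.eye(n)
resid = np.eye(n)
for _ in range(depth):
    A = causal_attention()
    plain = A @ plain                            # attention-only rollout
    resid = (0.5 * A + 0.5 * np.eye(n)) @ resid  # renormalized residual rollout

# Both rollouts stay row-stochastic; the residual variant still piles mass on
# token 0, just more slowly (compare the last row's mass on the first token).
print(plain[-1, 0], resid[-1, 0])
```

The renormalized chain is a "lazy" version of the attention-only chain, which is why convergence toward the first position slows down but does not disappear.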
Rebuttal 1: Rebuttal: We appreciate your positive assessment and constructive feedback, which have helped strengthen our work. Below, we provide individual responses to the comments you raised. > The paper misses some critical literature in interpretability and analysis of language models. Thank you for the pointers to these important references. In particular, we will clarify how our approach aligns with, yet differs from, the “attention rollout” and “attention flow” methods proposed by Abnar et al. (2020). While those methods conduct post-hoc graph-based analyses on specific inputs to understand information flow, our work takes a more general and theoretical perspective: we formalize attention masks themselves as directed graphs (Sec. 3) and analyze how their structure shapes the flow of information across layers—independent of semantic content. This allows us to rigorously trace how positional information is integrated across general sequences, highlighting the inductive bias introduced by masking alone. We hope this theoretical framework—which connects architectural design to emergent behavior—offers a useful new tool for understanding and probing attention mechanisms in future research. We appreciate your feedback and will work diligently to ensure that our final version properly suits our contribution within the broader literature on understanding attention. > The theoretical analysis ignores residual connections, which are integral to modern transformers. Thank you for your comment. One way to incorporate residual connections into our analysis is to follow the renormalization approach used in Abnar et al. (2020), where each attention matrix $A^{(t)}$ is adjusted to $A^{(t)}\_{res} = 0.5 A^{(t)} + 0.5I$. This ensures $A^{(t)}_{res}$ remains a valid stochastic matrix and thus retains interpretability. Under this adjustment, our theoretical results still hold; but the convergence rate would slow down, aligning with results in Dong et al. 
(2021) showing that residual connections slow down the rank collapse rate in token representations. We will add a remark to the final manuscript detailing how to handle residual connections in our analysis and their effect.

> Should matrices $W_q$ and $W_k$ have size $d_{QK}$ instead of $d'$?

Good catch! Indeed, in many practical implementations, $d'$ matches $d_{QK}$. However, we keep them distinct in our formulation to accommodate more general scenarios where these dimensions may differ (e.g., Wang et al. (2024) propose adjusting $d_{QK}$ at inference time to handle longer contexts more effectively).

> Attention sink and center node: your analysis does not actually justify the existence of attention sinks, right? Or does it?

Thank you for the thoughtful question. Our analysis does not directly justify the existence of attention sinks, defined as single-layer heads that concentrate all attention on a single token. Rather, our goal is to provide structural insight into where attention sinks tend to emerge and why they become more prominent in deeper layers.
- In particular, we observe that the positions where attention sinks are most likely to appear (e.g., the first token under causal and sliding window masks, or the first K tokens under prefix masks; see Xiao et al. (2024); Gu et al. (2025)) coincide with the center nodes in the attention-mask graph defined by our framework. These nodes act as structural attractors in the multi-layer attention dynamics, helping to explain why they accumulate disproportionate influence over time.
- Thus, while attention sinks remain a per-layer phenomenon, our graph-theoretic perspective offers a complementary explanation for their emergence in specific positions and their amplification in deeper layers.

We appreciate your questions and comments very much. Please let us know if you have any further questions.

----

References
Wang et al. (2024) Length Generalization of Causal Transformers without Position Encoding.
Xiao et al. (2024) Efficient Streaming Language Models with Attention Sinks. Gu et al. (2025) When Attention Sink Emerges in Language Models: An Empirical View. Dong et al. (2021) Attention is Not All You Need: Pure Attention Loses Rank Doubly Exponentially with Depth. Abnar et al. (2020). Quantifying Attention Flow in Transformers. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. However, their response regarding attention rollouts of Abnar et al. (2020) does not clarify any difference between both frameworks to me. In particular, you claim: > While those methods [Abnar et al.'s] conduct post-hoc graph-based analyses on specific inputs to understand information flow, our work takes a more general and theoretical perspective: we formalize attention masks themselves as directed graphs (Sec. 3) and analyze how their structure shapes the flow of information across layers—independent of semantic content. I still do not see any differences between your graph-formalism, and the one from Abnar et al. (2020). In fact, I am convinced that the values $\mathbb{P}^{(t)}(z_i = j \mid X^{(0)})$ you analyse are mathematically equivalent to attention rollouts. (In my review above I said attention flow, but it's actually the rollouts which are equivalent. I apologise for the confusion between the two names.) If they are not, could you please show mathematically how they differ? In regards to "formalising attention masks as directed graphs and normalised attention logits as edge weights", vs. "directly formalising attention weights (computed using the said mask) as edge weights" (is this the point you're making in how frameworks differ?): these two things are, in my opinion, equivalent. As the attention mask forces some attention weights to 0, those edges will not exist in the formalism of Abnar et al., and the analysed graphs will be identical. 
> This allows us to rigorously trace how positional information is integrated across general sequences, highlighting the inductive bias introduced by masking alone. We hope this theoretical framework—which connects architectural design to emergent behavior—offers a useful new tool for understanding and probing attention mechanisms in future research.

Given the two points above, I still think your graph-based framework is identical to Abnar et al.'s and that this "rigorous tracing" could be derived from either. As highlighted in my review above, I don't think this takes away from your paper; it still has plenty of value to warrant a "4" even without the graph-based framework being one of its contributions -- however, I believe it's important to clearly acknowledge the source of this framework as Abnar et al.'s. As such, I am lowering my score. I will raise it again if the authors either: make a more convincing argument for why their framework differs from Abnar et al.'s, or acknowledge that this contribution comes from prior work.

## Regarding the Other Related Work

I would appreciate it if the authors clearly discussed the other related work I mentioned in my review in their paper. Either highlighting the limitations of this paper in not adopting, e.g., the advances to the framework presented by Kobayashi et al. (2022), or discussing why this is not an issue with its analysis.

## Regarding Residual Connections

Thanks for the response. I agree that adding that to the paper would make it stronger. A short discussion on why you formalise residuals as `.5A + .5I` instead of `A + I` would also be pertinent in my opinion. I believe the latter would be more compatible with how residuals are actually implemented in transformers. Furthermore, formalising it as `A + I` would invalidate your theorems, right? Or do the results hold then? I think this should be highlighted as a limitation in the manuscript.
## Summary

Again, I liked this paper and would like to see it accepted (that's why I originally gave it a 4). On the other hand, I do think it is important to clearly acknowledge prior work's contributions (and thus I am lowering my score). I will increase my score again if either I'm convinced of the difference between this framework and Abnar et al.'s, or the authors acknowledge they are equivalent. If the AC disagrees that the frameworks are equivalent, they should disregard this new lower score and use my old score instead (i.e., a score of 4).

---

Reply to Comment 1.1.1:

Comment: We appreciate your thoughtful evaluation and would like to clarify several points from our previous rebuttal, as well as the novelty and scope of our contributions.

> I still do not see any differences between your graph-formalism, and the one from Abnar et al. (2020). In fact, I am convinced that the values you analyse are mathematically equivalent to attention rollouts.

Thank you for pointing this out. We agree that $P(z_i = j \mid X^{(0)})$ in our work is mathematically equivalent to the attention rollout values $\tilde{A}$ from Abnar et al. (Eq. 1). We were not aware of this prior work when writing the initial draft and appreciate you bringing it to our attention. As described in Sec. 3, we derived this quantity independently from the probabilistic interpretation of attention in Kim et al. (2017). We will acknowledge this equivalence and properly attribute Abnar et al. and other works in the revised manuscript.

> In regards ... these two things are, in my opinion, equivalent. Given the two points above, I still think your graph-based framework is identical to Abnar et al.'s and that this "rigorous tracing" could be derived from either.

Thank you for the comment. As discussed above, we agree that the two formulations are equivalent and we will acknowledge that in our revision. That said, we want to clarify and highlight two main distinctions between ours and Abnar et al.'s work:

1. Goal: Rather than claiming novelty in the graph abstraction itself (we apologize for any confusion caused by our earlier wording, which we will change), we leverage this powerful graph view, pioneered by Abnar et al., to study how architectural choices (masks, PEs, model depth) shape the way attention integrates positional information, independent of specific input semantics.
   - This is an important distinction from Abnar et al. A central challenge in analyzing attention mechanisms lies in **disentangling the effects of semantics and position**, as attention outputs are influenced by both. Empirical approaches like attention rollout (as used in Abnar et al.) are highly valuable for tracing information flow on specific inputs. However, they do not directly yield general analytical insights that hold across all inputs.
2. Methodology and theoretical significance: when we refer to our "graph-theoretic framework," we mean the suite of **graph-theory-based proof techniques** we develop (such as walk counting, dynamic programming, and graph compression) which enable exact enumeration and analysis of attention paths for general inputs. These techniques enable us to prove **non-asymptotic, input-agnostic theoretical results with explicit convergence rates** that characterize how attention mechanisms propagate information over **arbitrary** sequences.
   - While the graphs themselves are structurally the same as those in Abnar et al., our analysis focuses on deriving **quantitative and provable statements about model inductive bias**—insights that empirical attention rollout computation alone cannot provide.

We will clarify these distinctions and explicitly acknowledge the contributions of Abnar et al. and other related works, including the attention rollout formulation and the use of graphs for visualizing information flow.
We hope this helps clarify how our contributions differ in motivation, methodology, and theoretical scope, and how our framework builds on and complements existing literature.

## Other related work

Thank you for pointing us to these important references. We will add a dedicated section in the related work to discuss them.

- Our analysis focuses on attention weights, as they are the primary component for inter-token communication. In contrast, value projections and MLPs are typically applied independently to each token and are unlikely to affect position bias in the same way that attention weights do.
- That said, we agree that under specific conditions, value projections or MLPs can induce position-dependent effects. For example, a value matrix $V$ could map $x_i$ to $x_{n-i+1}$, effectively reversing the positional structure. However, such behaviors would require highly specific weight configurations that are unlikely to arise under standard training, and they would generally lack robustness to input variation.
- Our goal is to isolate the effects of attention masks and positional encodings, which more directly govern how positional information is integrated. Nevertheless, extending our framework to include potential interactions with value projections and MLPs, as explored in Kobayashi et al., is a promising direction for future work, and we will add this discussion in the revision.

## Residual connections

Formalizing residual connections as $A+I$ would cause $P(z_i = j \mid X^{(0)})$ to lose its probabilistic interpretation and diverge as $t\to \infty$. Using $0.5A + 0.5I$ ensures the quantity retains a probabilistic interpretation and well-behaved limiting behavior. We will note this modeling choice and its motivation in the revised manuscript.
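The modeling choice above can be checked numerically. A small sketch (my own illustration, not code from the paper): with $M = 0.5A + 0.5I$ the matrix stays row-stochastic, so its powers keep a probabilistic interpretation, whereas with $M = A + I$ the row sums double each layer and the powers diverge.

```python
# Compare 0.5*A + 0.5*I (row-stochastic) with A + I (diverging) under powers.
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0],
              [1/3, 1/3, 1/3]])          # a row-stochastic causal attention matrix

M_avg = 0.5 * A + 0.5 * np.eye(3)       # residual modeled as an average
M_sum = A + np.eye(3)                   # residual modeled as a plain sum

P = np.linalg.matrix_power(M_avg, 10)
D = np.linalg.matrix_power(M_sum, 10)

assert np.allclose(P.sum(axis=1), 1.0)  # rows remain probability distributions
assert D.max() > 1e2                    # entries blow up as depth grows
```

The diagonal entry $2$ in $A + I$ alone forces the $(0,0)$ entry of the $t$-th power to grow as $2^t$, which is why the averaged form is needed for a well-behaved limit.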
Summary: This paper aims to analyze the effect of attention masks, such as the causal mask, on the observed attention patterns. In particular, the authors suggest modeling the possible paths along which information can flow using attention as edges of a graph. By looking at this graph, they obtain certain bounds which they map to practical observations such as bias towards early tokens in the sequence. Furthermore, the paper looks into the effect of relative and rotary positional encodings on these bounds and shows how these encodings can lead to different biases.

Claims And Evidence: In addition to the major issue I mention in the next section, some of the claims are not fully justified. For example, Theorem 4.1 only yields an upper bound on the weight given to a token. While the upper bound is smaller for later tokens, this doesn't mean that there is necessarily a bias towards earlier tokens, as the upper bound can be arbitrarily large and not be tight. Therefore, I find these claims to be misleading.

Methods And Evaluation Criteria: Based on my understanding, the graph model that is being used looks at the weight given to each token overall, not per layer. What this means is that if token 1 pays attention to token 0 in layer 1, and token 2 pays attention to token 1 in layer 2, this is counted as attending to token 0. However, these weights are not what is looked at when talking about attention patterns. Attention patterns look at how attention is distributed in a single layer. As such, I do not believe that the mapping of the obtained results (which are based on the graph model) to per-layer attention patterns is valid.

Theoretical Claims: Look at the above.

Experimental Designs Or Analyses: In Kazemnejad et al. the authors explicitly show a set of weights for a transformer model that lead to it learning positional indices, whereas the paper is claiming this cannot be achieved, based on experiments. Since the given weights in Kazemnejad et al. actually allow learning positional indices, I believe these results require further investigation to understand what is different.

Supplementary Material: No.

Relation To Broader Scientific Literature: No comments.

Essential References Not Discussed: No comments.

Other Strengths And Weaknesses: No comments other than above.

## update after rebuttal

My concerns remain largely unaddressed. For example:

1) It is possible that understanding the distribution of attention scores over original tokens would be beneficial. However, this paper makes many claims about the relevance of the results to per-layer attention patterns. For example, Table 1 tries to suggest relevance between the theorems and the attention sink phenomenon. These claims are made regularly in the paper and are the essence of how the results are shown to be important. However, these are in no way valid since the theorems do not apply to per-layer attention patterns at all. As such, I find them extremely misleading to the community.

2) I am still unsure about experimental results showing that the models cannot learn positional biases, since previous work also includes empirical work showing it is possible to decode position information when no explicit positional encoding is provided. I believe a mismatch in settings might be the culprit and I would like to ask the authors to carefully investigate this.

3) Additionally, conclusions drawn from the upper bound remain questionable. For example, in the comment "later tokens become progressively less influential, while earlier tokens retain more aggregate attention", the upper bound only shows the first part. However, it does not mean the second part is necessarily true unless a lower bound is drawn. If all the scores go equally to 0, the upper bound would still hold. It might be possible to show such a lower bound as well, given that the scores sum to 1, but this needs to be done before making the second part of the claim mentioned above.
My score remains unchanged. Other Comments Or Suggestions: No comments other than above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1:

Rebuttal: We thank the reviewer for providing thoughtful feedback. Below, we provide detailed responses to the comments.

> Theorem 4.1 only yields an upper bound on the weight given to a token.

Thank you for the comment. We agree that Thm 4.1 provides an upper bound rather than a tight bound, and we do not claim it guarantees a strict bias toward earlier tokens in every setting. Rather, it characterizes an emergent tendency, rooted in the causal mask, for earlier tokens to accumulate more attention as the number of attention layers increases.

- Importantly, while the upper bound is not always tight, it is not vacuous: it decays exponentially to $0$ for tokens $i\geq 2$ as attention deepens, meaning it cannot be arbitrarily large. This suggests that—regardless of specific token embeddings—later tokens become progressively less influential, while earlier tokens retain more aggregate attention. One can derive more precise and stronger results by imposing additional assumptions regarding token embeddings or model weights. For example, for sequences of identical tokens, the upper bound becomes exact.
- Moreover, our empirical results support these core insights from Thm 4.1. As shown in Fig. 2, even in the absence of explicit positional bias in the data, models with causal masking exhibit clear early-token bias—an effect that strengthens with depth. This empirical observation closely aligns with our theoretical findings, suggesting that the upper bound meaningfully captures the inductive bias induced by causal masking.

> Attention patterns look at how attention is distributed in a single layer.

Thank you for the comment.

- While attention patterns are often analyzed at the single-layer level, such analyses may miss how information propagates and accumulates across layers—a key aspect of how transformers build contextual representations. A central contribution of our work is to model this global effect of attention across layers.
In your example, while token 2 does not directly attend to token 0, it inherits information from token 0 through token 1. Such multi-hop influences are not captured by per-layer attention patterns but are critical for understanding deep model behavior.

- Our graph-theoretic framework is designed precisely to capture these *multi-step dependencies by modeling the global effect of attention across layers*. This complements prior layerwise analyses and aligns with recent work (Abnar et al., 2020; Barbero et al., 2024; Wu et al., 2024) that emphasizes the importance of analyzing multi-layer attention compositionally. By tracing these cumulative token interactions, our framework offers a complementary and more holistic view of attention dynamics. We hope that our methodology can serve as a useful tool for the community and help guide future investigations into the multi-layer attention mechanism.

> In Kazemnejad et al. the authors explicitly show a set of weights for a transformer model that lead to it learning positional indices whereas the paper is claiming this can not be achieved based on experiments.

Thank you for the comment. We would like to clarify the relationship between our work and the findings of Kazemnejad et al.:

- Kazemnejad et al. provide a theoretical construction showing that positional information can, in principle, be encoded using only causal masking via hand-crafted weights. However, their result does not reflect what typically emerges through standard training.
- Our work builds on this insight by asking a complementary question: Do transformer models actually learn positional encodings in practice when trained without explicit positional signals? We address this through a controlled task where position is critical and observe whether models develop positional awareness under causal masking alone.
- As shown in Sec. 5.2 and Fig. 3, models trained with explicit positional encodings (sin PE or RoPE) capture both start- and end-of-sequence biases, while models using only causal masking consistently fail to capture end-of-sequence patterns. This suggests that while position can theoretically be represented via causal masking, *it does not seem to emerge naturally in practice*, even when position is highly informative. Our theory supports this by showing that causal masking induces an inductive bias toward earlier positions.

Thus, rather than contradicting Kazemnejad et al., our work provides a more practical and deeper understanding of how position bias actually manifests in Transformers under different PEs. We will clarify this distinction more explicitly in the final version of the manuscript.

We appreciate your questions and comments very much. Please let us know if you have any further questions.

---

References

Abnar et al. Quantifying Attention Flow in Transformers.

Wu et al. On the Role of Attention Masks and LayerNorm in Transformers.

Barbero et al. Transformers need glasses! Information Over-squashing in Language Tasks.
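The cumulative early-token bias discussed in this exchange can be illustrated with a minimal sketch (my own illustration under a simplifying assumption, not the paper's trained models): with uniform attention under a causal mask, the aggregate attention that the last token places on each position concentrates on early tokens as depth grows.

```python
# Assumption: uniform attention per layer under a causal mask; cumulative
# attention across layers is then the matrix power of the per-layer matrix.
import numpy as np

n, depth = 6, 8
A = np.tril(np.ones((n, n)))
A = A / A.sum(axis=1, keepdims=True)   # uniform causal attention in each layer

R = np.linalg.matrix_power(A, depth)   # aggregate attention after `depth` layers
last = R[-1]                           # where the last token's information comes from

assert last[0] == last.max()           # token 0 accumulates the most attention
assert last[-1] == last.min()          # the last token's own weight decays
assert last[-1] < 1 / n                # well below the single-layer uniform weight
```

Token 0 acts as an absorbing state of this chain (it can only attend to itself), which is the mechanism behind the depth-amplified early-token bias; the theorem's upper bound for later tokens decays for the same structural reason.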
Not all solutions are created equal: An analytical dissociation of functional and representational similarity in deep linear neural networks
Accept (spotlight poster)
Summary: This paper studies the relation between functional similarity and representational similarity, finding that there is a dissociation: i.e., functionally equivalent networks may have very different representations. To be concrete, they mainly study a two-layer linear network, concerning its weights $W_1, W_2$ and hidden representations $h$. They study the manifold of least-squares solutions, and minimum-weight-norm and minimum-repr-norm solutions. These solutions yield different hidden representations and representational similarity matrices, even though they all compute the same overall function. Least-squares solutions are the most flexible -- in fact, representations can be arbitrarily manipulated (e.g., into an elephant). For the minimum-weight-norm or minimum-repr-norm solutions, the representations still have some degrees of freedom, but the RSM is uniquely determined from task data, which is desirable. They finally study why there is representational alignment between biological and artificial systems -- due to their robustness to noise.

## Update after rebuttal

The authors have addressed my questions. Overall I think this is a great paper, but 5 is a bit of a stretch, so I keep my score as 4.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: I skimmed through the theorems (although I didn't derive them by hand myself) and did not spot any error.

Experimental Designs Or Analyses: This is mainly a theory paper. The experiments are minimal but are very informative and nicely designed.

Supplementary Material: I briefly skimmed through the whole SM.

Relation To Broader Scientific Literature: Although training of deep linear networks can be analytically characterized and the solution space seems trivial to characterize, this paper takes a unique point of view -- by constraining the norm of representations/weights, hidden representations can be more interpretable or more truthful to the underlying structure of data.
This paper uses both mathematical derivations and neat illustrative examples to support this idea.

Essential References Not Discussed: The paper mentioned implicit regularization in deep learning. It would be nice to mention the implicit regularization of SGD:

* ON THE ORIGIN OF IMPLICIT REGULARIZATION IN STOCHASTIC GRADIENT DESCENT, ICLR 2021

Other Strengths And Weaknesses:

**Strengths**

* The paper is nicely written and neatly presented. I like the elephant plot showing that representations can be arbitrarily engineered given the same functionality.
* The idea is rigorously supported by mathematical derivations.
* The minimum-norm (weight or representation) solutions are interesting objects to identify and study. In particular, their representations reveal the truthful hidden structure of the data.

**Weaknesses**

* This is probably too much of an ask for a theory paper, but adding more experiments would make the paper more appealing to practitioners.
* The representational alignment between artificial vs. biological systems due to robustness to noise is interesting but might be over-stated.

Other Comments Or Suggestions:

* Line 152, a comma out of place.
* Figure 3c: It would be nice to point out (in the caption) that the elephant is deliberately engineered. I was a bit confused at first because there is no elephant among the 16 items.
* The Figure 5 caption did not explain what "duplicate" means.

Questions For Authors:

* What's the dynamical (optimization) reason that an (artificial) network would seek robust solutions? Is it due to the implicit regularization of optimizers?
* I'm a bit confused by the illustration in Figure 1. When a smaller circle is contained inside a larger circle, what does this mean? For example, in subfigure D, the grey oval H is contained in the larger orange oval which is connected to W2.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
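The dissociation the review summarizes can be shown in a few lines (my own sketch, not the paper's code): inserting any invertible matrix $G$ between the two layers changes the hidden representation while leaving the input-output map untouched.

```python
# Two two-layer linear networks with identical input-output behavior but
# different hidden representations. Shapes and names are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))            # inputs: N_i = 4 features, 8 samples
W1 = rng.normal(size=(6, 4))           # input -> hidden
W2 = rng.normal(size=(3, 6))           # hidden -> output

G = rng.normal(size=(6, 6))            # a generic (hence invertible) transform
W1b, W2b = G @ W1, W2 @ np.linalg.inv(G)

H, Hb = W1 @ X, W1b @ X                # hidden representations differ...
Y, Yb = W2 @ H, W2b @ Hb               # ...yet the overall functions agree

assert np.allclose(Y, Yb)
assert not np.allclose(H, Hb)
```

Since $W_2 G^{-1} \cdot G W_1 = W_2 W_1$, the two networks are functionally identical for every input, which is exactly why functional similarity alone cannot pin down representational similarity.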
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive and detailed feedback on the manuscript. First, we address the reviewer's question regarding the optimization reasons behind an artificial network seeking robust solutions. Our analysis is general—we study the entire solution manifold and derive broad statements about neural computations and their representations without relying on any specific learning or optimization algorithm. An investigation into why and when an algorithm converges to a particular region of the solution manifold is beyond the scope of our study. However, as noted by the reviewer, extensive work in the machine learning literature has explored this question, albeit without focusing on their relation to neural representations. We have now expanded our Discussion to address this important aspect in more detail, including relevant references concerning implicit regularisation (e.g., Smith et al., 2021, as suggested) and neural collapse (e.g., [Zhu et al., 2021](https://papers.nips.cc/paper/2021/file/f92586a25bb3145facd64ab20fd554ff-Paper.pdf)). Next, we address the reviewer's second question regarding the visualizations in Figure 2. Our aim was to follow well-established standard representations found in the literature (e.g., the [Wikipedia article on “Kernel (linear algebra)”](https://en.wikipedia.org/wiki/Kernel_(linear_algebra))). Nevertheless, we recognize the need for additional clarity. We have revised the figure caption and expanded the accompanying explanations to explicitly clarify that: (1) the orange areas represent the general vector space in which representations reside, (2) open black circles denote the image of linear transforms, with the surrounding area corresponding to their kernel, and (3) filled black circles indicate the subspaces occupied by the input, hidden, and output representations of the training data, respectively. We welcome any further suggestions for improving the clarity of our visualizations. 
We appreciate the reviewer's suggestion for additional experiments. Could the reviewer specify which experiments they believe would most effectively strengthen our paper’s message? We are open to incorporating further empirical evaluations, even if complex, to reinforce the practical implications of our theoretical findings. Furthermore, as suggested, we have tempered our claims regarding noise as the cause of representational alignment between artificial and biological systems, instead framing them as hypotheses that open up an interesting field of investigation. In particular, it remains unclear if, why, and when brains may operate in these regimes. However, we emphasise that our results suggest that assuming they do *without* empirical validation could lead to misleading comparisons and conclusions about decodability and representational similarity. We have also made the following revisions based on the reviewer’s comments: - Corrected the typo in line 152. - Clarified in the caption of Figure 3c that the elephant is deliberately engineered. - Expanded the caption of Figure 5 to explain that "duplicate" refers to networks extended by duplicating hidden neurons, a transformation that preserves the input-output mapping. We once again thank the reviewer for their time and valuable insights. We remain open to further suggestions, particularly regarding simulation studies that could further illustrate the scope and significance of our contributions.
Summary: The paper presents a mathematical analysis of the nature of solutions (for a given problem) in an overparameterised two-layer linear neural network. This is done through a theoretical study of the manifold of generic solutions, i.e. different choices of weight values which give the same fit of the training data, and identifying several types of solutions grouped by certain mathematical properties. This leads to a conclusion that analysis of overparameterised networks is challenging, because an individual transformation/internal representation is arbitrary given that the next transformation can correct for it, and thus the entirety of the mapping needs to be taken into account.

Claims And Evidence: Yes. The paper is mathematically dense, but the conclusions seem justified. It's just that I think most would take it as a given, without the presented evidence, that overparameterised networks are non-trivial due to the fact that what individual components do is not informative, and instead one has to take into account the entirety of the mapping.

Methods And Evaluation Criteria: Linear network analysis is one of the tools for analysing neural networks, and while it has its limits, for the obvious reason of missing the non-linearity aspects, it still tells us something. I do find the connection made in this paper to non-linear networks by stating that they also have multiple solutions (through symmetries) somewhat weak -- not in accuracy -- sure, non-linear networks offer multiple solutions -- but in the relevance of this work to that aspect: do the categories of solutions analysed here tell us anything about how non-linear solutions work?

Theoretical Claims: I did not check the proofs carefully. I think they might be sound, since I don't disagree with the ultimate conclusion of the paper.

Experimental Designs Or Analyses: Experimental design and analysis seem fine. I find the arbitrary 'elephant' internal representation a "cute" illustrative example.
However, though I applaud the authors' attempt to give a high-level/intuitive explanation of what different solutions entail (in the paragraph after Theorem 3.7), I am struggling to understand those explanations, and I don't find Figure 2, while again appreciated as an attempt at intuitive explanation, all that helpful.

Supplementary Material: I did not go carefully through the supplementary materials -- it's 8 pages of dense math, which makes it feel like perhaps this type of publication is more suited for a journal rather than a conference?

Relation To Broader Scientific Literature: This work continues the thread of linear network analysis -- an analysis of a simplified problem (linear networks) in the hope of understanding a hard problem (their non-linear counterparts). I think this work might be a great starting point for an interesting analysis; it just seems preliminary at this moment, since the analysis only leads to a trivial conclusion: that understanding internal representations is a hard problem. We already know that.

Essential References Not Discussed: Not to my knowledge.

Other Strengths And Weaknesses: Seems like early work; the analysis might be promising, but the current conclusion seems obvious and not new.

Other Comments Or Suggestions: The equation in Assumption 2.2 seems inverted. If the network is not bottle-necked then it seems that $N_h \ge \min(N_i, N_o)$, not $N_h \le \min(N_i, N_o)$. In Laurent and Brecht this condition is that the "thinnest layer is either input or output", which means $N_h$ is at least the same as or greater than the input/output, not smaller or the same. Not sure if this is a misunderstanding on the part of the authors, or just a typo.

Questions For Authors: Does the analysis you provide tell us anything other than that it is hard to analyse parts of internal representations in isolation from other parts of the network?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to sincerely thank the reviewer for their time and detailed feedback. Below, we address the key points raised in the review. The reviewer asks whether our analysis provides insights beyond the conclusion that internal representations cannot be interpreted in isolation. First, we emphasize that this is _not_ widely understood, as many subfields of neuroscience, machine learning and their intersections _do center on analysis of representations in isolation_. For example, as we outline in great detail in Section 4, a significant body of research in computational neuroscience assumes that internal representations between systems (models and/or brains) can be meaningfully decoded and compared. Our work analytically demonstrates that this assumption does not necessarily hold—in particular, we identify in which regimes representations are arbitrary with respect to the task, and instead dependent on other parameters like parameter initialization. However, this is only one part of our findings. We further identify specific regimes on the solution manifold where representational similarities are _not_ arbitrary. We derive precise analytical conditions under which internal representations in fact _are_ informative and comparable _without_ requiring knowledge of other parts of the network; in particular, when they are _task-specific_. Additionally, we show that these task-specific stable representations coincide with regions of the solution manifold that are robust to noise, linking identifiable representational structure to a desirable computational property. This goes beyond the claim that “understanding internal representation is a hard problem” because we establish precisely _when_ it is hard. It is not clear to us why the reviewer dismisses these results as “intuitive”, “preliminary”, and “not novel”. 
To ensure a constructive discussion, we kindly ask the reviewer to provide references to prior work that establishes the same conclusions as we do in the manuscript. Additionally, to clarify our contributions, we have revised the manuscript by adding an explicit contributions section that is detailed in the response to Reviewer rKc5. Minor: We would like to thank the reviewer for catching the typo in Assumption 2.2. Indeed, the greater-than-or-equal sign should have been a less-than-or-equal sign! We appreciate the reviewer’s explicit feedback on Figure 2. Our visualizations follow well-established standards found in mathematical visualizations (e.g., the [Wikipedia article on “Kernel (linear algebra)”](https://en.wikipedia.org/wiki/Kernel_(linear_algebra))). However, we are happy to improve and expand on the explanations of the image and kernel of a matrix in the main text. Further, we have revised the figure caption and expanded the accompanying explanations to explicitly clarify that: (1) the orange areas represent the general vector space in which representations reside, (2) open black circles denote the image of matrices, with the surrounding area corresponding to their kernel, and (3) filled black circles indicate the subspaces occupied by the input, hidden, and output representations of the training data, respectively. We welcome any further suggestions for improving the clarity of our visualizations. Further, the reviewer notes that they did not carefully review the supplementary material due to its mathematical density, but still conclude that the results “might be sound” as they do not disagree with the paper’s results and conclusion. This aligns precisely with [ICML guidelines](https://icml.cc/Conferences/2025/ReviewerInstructions), which state that reviewers are encouraged (but not required) to consult supplementary material and that key claims should be understandable from the main text. 
Furthermore, rigorous mathematical derivations are essential for theoretical contributions in machine learning, just as extensive simulations are standard for empirical work. Appendices that extend over many pages are common in ICML theory papers, and theory of machine learning and applications to (neuro)science are subject areas in [ICML’s yearly call for papers](https://icml.cc/Conferences/2025/CallForPapers). Throughout our 8-page supplement, we have made substantial efforts to fully provide all our assumptions, proofs and derivations to ensure our results are fully reproducible, testable, and clearly presented. Finally, we sincerely thank the reviewer again for their time and effort. We hope that, in light of our clarifications, the significance of our contributions and the value of our thorough theoretical analysis (including the detailed appendix) become evident. We remain open to any further suggestions or points for clarification and are hopeful that the reviewer may reconsider their assessment accordingly. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I am not willing to concede my point that it is not surprising or novel to find out that internal representation of individual layer is arbitrary in isolation. I can't point to specific literature, because, as I said it's on "intuitive" (somewhat obvious) level that we know this - we know it is hard to discern the internal representation of neural networks, because it's distributed across neurons and layers. I suppose I could point to several works that attempted layer-wise supervised learning that come up short when compared to end-to-end supervised training. And I don't think analysis of internal representation in isolation is driven by the belief that this is the best way to go about it, but rather is necessitated by the need to make things tractable. 
Just as is the case with the analysis of deep linear networks – they are not a replacement for non-linear models, and they miss important aspects of the deep neural networks we use in practice, but we study them because it’s tractable and easier. However, in light of the rebuttal, and other reviewers' comments, I am willing to grant that I might have undervalued the mathematical aspects of this work, and that the presented rigorous mathematical treatment might be a decent step towards a better understanding of the types of solutions on the solution manifold. I will therefore raise my score. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's revised assessment. One clarification that we would like to make: Our claimed contribution is not to be first in noting challenges in analysing representations—our related work section covers several previous negative examples (see "Comparing the solutions of artificial and biological networks")—but rather to provide a rigorous mathematical treatment of these challenges using an appropriate surrogate model (deep linear networks) that may unify and explain these negative observations.
Summary: The authors study a two-layer linear network. They characterize the space of solutions for such networks, with emphasis on several normalization schemes. The result is that there are many zero-loss solutions that differ in how minimal they are. Specifically, whether the transformation from input to hidden or from hidden to output is minimal with respect to the training data. The authors discuss possible implications of these results to representational drift and to the comparison of biological and artificial networks. ## After rebuttal After reading all the rebuttals and discussion with all reviewers, I am keeping my score. Claims And Evidence: Yes. There is degeneracy in solutions, and different regularization schemes choose different solutions. There is also a relation between noise robustness and regularization. It is not clear which of these claims is novel. Methods And Evaluation Criteria: Not relevant Theoretical Claims: I read the proofs, but did not verify their correctness in detail. Experimental Designs Or Analyses: Figure 5: the definitions of scaled, nuisance, etc only appear in the appendices. This seems like something that should be in the main text. Supplementary Material: I read all the supplement. Relation To Broader Scientific Literature: The authors mention the main relevant papers (Baldi & Hornik 1989, Laurent & Brecht 2018). It is not clear which aspects of the current paper are missing from prior work. Essential References Not Discussed: Not aware Other Strengths And Weaknesses: Strengths: This is a simple setting, in which the characterization of the solution space is possible. Qualitative insights from this setting can be useful in broader scenarios. The explanation of the different degeneracies (Figure 2) was very clear and intuitive. Weaknesses: It is hard to understand what exactly is new relative to existing work. For instance, Theorem 5.1 seems like a textbook result on linear regression. 
The definition of minimum representation-norm is not very intuitive. The first term is the norm of the hidden representation. But the definition also has a sum with the readout weights. Why call this sum “representation-norm”? Figure 5E: It would be useful to discuss the scaling of the effect. Can the theory provide any insights on the actual values of noise in which the different models degrade? Corollary 5.4: What is the intuition behind the suggested scaling? Line 92 – The way assumption 2.2 is written is quite confusing. It seems as though not having a bottleneck implies a narrow hidden layer. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough evaluation of our work, and for providing explicit feedback! First, we address the reviewer’s concern about the novelty of our analysis. To clarify this, we have added a dedicated "Contributions" section to the revised manuscript and reproduce it here in detail. As stated in the manuscript, deep linear networks perform “…multistage computations that give rise to hidden-layer representations.” Their overparameterization through depth, rather than width as in linear regression, introduces nontrivial representational degeneracies not present in linear regression. Our work provides a complete analytical characterization of these degeneracies and their implications for decodability and comparability of neural representations. Further, we study the consequences of our analysis on use cases in neuroscience, where neural networks are a commonly employed model of learning, in Section 4. In particular: - Definition 3.1 directly follows from Laurent and Brecht (2018). - Theorem 3.3 resembles least-squares linear regression, but its derivation in the context of two-layer linear networks requires extra care (e.g., does not hold if the network is bottlenecked). - The constraint satisfaction problem in Definition 3.4 has been studied in Saxe et al. (2019), but only under a set of strong assumptions ($\Sigma_{xx} = I$, $N_i = N_o$, and that $\Sigma_{yx}$ has full rank) as stated in the manuscript. - In contrast, Theorem 3.5 imposes no additional assumptions about the task's statistics or network structure (beyond the network not being bottlenecked). - Definition 3.6 stems from our analysis of parameter-noise robustness (Section 5), but can be related to notions from implicit regularisation (e.g., [Smith et al., 2021](https://arxiv.org/abs/2101.12176)) and neural collapse (e.g., [Zhu et al., 2021](https://papers.nips.cc/paper/2021/file/f92586a25bb3145facd64ab20fd554ff-Paper.pdf)). 
- Accordingly, Theorem 3.7, which shows the parameterization of minimum representation-norm solutions, is novel to the best of our knowledge. - Theorems 3.8 and Corollaries 3.9–3.11 are novel and central to our analysis, analytically revealing the degrees of freedom within neural representations and defining when representational similarities are fixed and can be used for functional comparison. - For theorems and corollaries regarding robustness to noise in Section 5, similar results exist for linear regression (e.g., minimal norm confers noise-robustness) but again require careful treatment in the multi-layer setting (e.g., cross-correlation terms in the case of parameter noise). In summary, to our knowledge, this is the first work to make an analytical and exact connection between different regions of the solution manifold of deep linear networks and the identifiability and comparability of neural representations, as well as their correspondence to optimality in noise robustness. If the reviewer is aware of any further prior work that has similar results, we would be delighted to include it and provide an appropriate reference. Minor: We would like to thank the reviewer for catching the typo in Assumption 2.2. Indeed, the greater-than-or-equal sign should have been a less-than-or-equal sign! Regarding terminology, we made an effort to be precise while avoiding overly lengthy terms like "minimum-readout-and-representation-norm". We recognize that "minimum-representation-norm" is only partially descriptive, as it omits the fact that we are also minimizing the norm of the readout weights. While minimising the norm of the readout weights has no influence on the hidden-layer representation, we chose the constraint satisfaction problems in Definitions 3.2, 3.4 and 3.6 such that they align with the analytical results on noise robustness in Section 5. 
We believe our terminology is sufficiently descriptive but would be grateful for any suggestions the reviewer may have for a more concise naming convention. Regarding Corollary 5.4, the parameter noise is scaled by the norm of the inputs and the size of the output layer (see the first and second summands in Equation 22). By inversely scaling the variance of the noise, we ensure the equation is factored consistently, independent of these measures. Without this adjustment, the solution would be identical up to a fixed scaling factor. Thus, this scaling is simply a matter of convenience. We have added a corresponding comment to the manuscript. Regarding the scaling in Figure 5E, the initial phase of the sigmoidal curve corresponds to test error (near-zero noise), while the ceiling aligns with random guessing (high noise). We lack analytical insights into the shape of this curve due to the non-linear setting. However, we would be happy to include additional simulations if the reviewer has a specific question in mind. Again, we would like to thank the reviewer for their time and effort, and we remain open to any further suggestions or points for clarification.
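The solution-manifold degeneracy at the heart of this exchange admits a compact numerical illustration. The following is a toy numpy sketch (our own, not code from the paper): in a two-layer linear network $y = W_2 W_1 x$, re-factoring through any invertible matrix $G$ changes the hidden representation while leaving the input-output map, and hence the training loss, untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 10))           # 4 input features, 10 training samples
W1 = rng.standard_normal((6, 4))           # input-to-hidden weights
W2 = rng.standard_normal((3, 6))           # hidden-to-output weights
G = rng.standard_normal((6, 6))            # a random Gaussian G is invertible almost surely

# Re-factor the same end-to-end map: (W2 G^-1)(G W1) = W2 W1.
W1_alt, W2_alt = G @ W1, W2 @ np.linalg.inv(G)

Y, Y_alt = W2 @ (W1 @ X), W2_alt @ (W1_alt @ X)
H, H_alt = W1 @ X, W1_alt @ X

print(np.allclose(Y, Y_alt))   # True: identical outputs, hence identical loss
print(np.allclose(H, H_alt))   # False: entirely different hidden representations
```

Regularizers such as the minimum-representation-norm criterion discussed above can be read as selecting particular points from this continuum of loss-equivalent factorizations.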
Summary: This paper analytically studies the hidden representations of two-layer feedforward networks trained to minimize differentiable, convex loss functions. The only sets of weights in the networks studied are read-in and read-out weights, leading to simple expressions for both in terms of the input data. The paper shows that, even in this simple case, function can be dissociated from representation: two networks with different representations can achieve the same loss on the task. The paper then investigates the set of solutions obtained when optimizing the network under various regularization schemes, such as minimizing the sum of squares of the weight norms. These regularized networks, in fact, preserve similarity between one another when taking random walks through function space. Since these regularized networks are also more robust and generalize better, this suggests a mechanism for the empirically observed brain-model similarity "in the wild." Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, all of them. Experimental Designs Or Analyses: N/A. Supplementary Material: Yes, all of it. Relation To Broader Scientific Literature: This paper will be of interest to many researchers working in both neuroscience and machine learning. Essential References Not Discussed: No. Other Strengths And Weaknesses: This is a very, very good paper. It is hard to find any weaknesses. Other Comments Or Suggestions: - In the supplementary, I think it would be useful to define the "exclamation over equal sign" in the notation and preliminaries section. Questions For Authors: N/A. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough evaluation of the manuscript and supplementary material, and sincerely appreciate their effort and positive feedback! We would like to kindly ask the reviewer, if time permits, to elaborate on the specific strengths and significance of the work to help the Area Chair better understand the basis of their positive assessment. Regarding the $\stackrel{!}{=}$ notation in the supplementary material, it was used to indicate that an equality is not derived but rather an assumed condition from which a separate conclusion is derived. However, since this notation is not widely used, we have replaced it with explicit language such as “such that”, “with the requirement that”, or “we would like to show” for clarity. Please do not hesitate to let us know if there are any further points or suggestions to address.
Incorporating Arbitrary Matrix Group Equivariance into KANs
Accept (poster)
Summary: This paper introduces Equivariant Kolmogorov-Arnold Networks (EKANs), an extension of Kolmogorov-Arnold Networks (KANs) that incorporates matrix group equivariance. The authors follow a similar approach as Equivariant MLPs (EMLP) by Marc Finzi et al. (2021) to enforce equivariance constraints on KANs. Basically, the weight is enforced to be in the intertwiner of the given group representations through a set of linear constraints. In addition, a lift layer is introduced to preprocess the input features, aligning them with the equivariant structure, while a gating mechanism ensures equivariance in the non-linear activation stage. The authors evaluate EKANs on tasks involving known symmetries (e.g., particle scattering, three-body problem, and top quark tagging), showing that EKANs achieve lower error and require fewer parameters compared to baseline models such as KANs, EMLPs, and other equivariant architectures. Claims And Evidence: The main claim of the paper is that EKANs improve upon both KANs and other equivariant architectures by leveraging equivariance constraints in a more efficient manner. The empirical results support the claim that EKANs outperform standard KANs and EMLPs in terms of test error reduction and parameter efficiency. However, the key methodological innovation—incorporating equivariance via equivariant linear layers—is a direct adaptation of the approach used in EMLP (Finzi et al., 2021). The primary difference is that this method is now applied to KANs rather than MLPs. The authors claim novelty by applying equivariance to spline-based architectures, but they do not introduce a fundamentally new method for enforcing equivariance. This aspect is not sufficiently acknowledged in the paper. Methods And Evaluation Criteria: The proposed modifications to KANs follow a well-established framework for imposing equivariance. 
The evaluation primarily focuses on: * Comparing EKANs against standard KANs, MLPs, and EMLPs on symmetry-sensitive tasks. * Measuring test error across different dataset sizes and numbers of parameters. The benchmarks are chosen from EMLP and other related works. The chosen benchmarks are reasonable, and the paper provides sufficient details on experimental settings. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental results appear sound, and the comparisons are generally fair. EKANs are compared to multiple baselines, including MLPs, EMLPs, standard KANs, and other relevant equivariant architectures. Also, the study varies the dataset size and the model capacity to analyze their influence on performance. However, the paper does not explore settings where equivariance may not be useful or could degrade performance. It is noted in the EMLP paper that equivariant linear layers and gated nonlinearities are not universal, and there exist simple equivariant functions for some groups that cannot be approximated by EMLP. Since EKAN directly adopts the EMLP technique, it is likely that these limitations still exist and should be discussed in the paper. Also, for the top tagging experiment, the authors have not included the comparison with GNN-based methods, e.g. LorentzNet (Gong et al., 2022), which generally have higher accuracies. ### References * Gong, Shiqi, et al. "An efficient Lorentz equivariant graph neural network for jet tagging." Journal of High Energy Physics 2022.7 (2022): 1-22. Supplementary Material: The supplementary material includes the codebase for experiments with EKAN and baseline methods. There is no instruction on how to run the experiments, and I did not review the code in detail. Relation To Broader Scientific Literature: This work heavily builds upon prior studies in equivariant networks and KANs. 
The foundational ideas stem from: * EMLPs (Finzi et al., 2021), which already established a general approach for incorporating equivariance into MLPs. EKANs directly adopt this approach for KANs without substantial modification. * KANs (Liu et al., 2024), which introduced spline-based learnable activation functions. The paper correctly identifies the limitations of KANs in handling symmetry constraints but does not present a fundamentally new way of addressing them. Overall, the paper positions itself as an application of existing equivariant principles to a new network architecture rather than a significant methodological advance. While the application to KANs may be useful, the extent of novelty is limited. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A ## Post-rebuttal One of my previous concerns is the novelty of this paper, compared to existing works such as EMLP. After careful reconsideration, I think incorporating equivariance into KAN is a valid and substantial contribution by itself, though the paper uses similar approaches to EMLP. I have updated my score to reflect this. It should still be noted that the additional experiment results show that the advantage of EKAN diminishes with more data. EKAN may only be a better choice when data is relatively scarce. Code Of Conduct: Affirmed. Overall Recommendation: 3
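The review notes that, as in EMLP, "the weight is enforced to be in the intertwiner of the given group representations through a set of linear constraints". A minimal numpy sketch of that general idea, using a toy cyclic group rather than anything from the paper (the function name and setup are our own illustration): an equivariant linear map satisfies $\rho_o(g)W = W\rho_i(g)$ for every generator $g$, which vectorizes into a nullspace problem.

```python
import numpy as np

def intertwiner_basis(gens_in, gens_out, tol=1e-10):
    """Basis of equivariant maps W with rho_out(g) W = W rho_in(g) for all
    generators g; each returned row is vec(W) in column-major order."""
    n_in, n_out = gens_in[0].shape[0], gens_out[0].shape[0]
    I_in, I_out = np.eye(n_in), np.eye(n_out)
    # vec(A W B) = kron(B.T, A) vec(W) turns each constraint into linear rows.
    C = np.vstack([np.kron(I_in, go) - np.kron(gi.T, I_out)
                   for gi, go in zip(gens_in, gens_out)])
    _, s, Vt = np.linalg.svd(C)
    null_mask = np.concatenate([s, np.zeros(Vt.shape[0] - len(s))]) < tol
    return Vt[null_mask]       # rows of Vt with (numerically) zero singular value

# Rotation by 90 degrees generates the cyclic group C4 acting on R^2.
R90 = np.array([[0.0, -1.0], [1.0, 0.0]])
basis = intertwiner_basis([R90], [R90])
# 2x2 maps commuting with 90-degree rotations form a 2-dimensional space
# (spanned by the identity and the rotation generator).
print(basis.shape[0])   # prints 2
```

The actual EMLP/EKAN machinery handles infinitesimal generators of continuous groups and much larger representations, but the constraint-then-nullspace structure is the same.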
Rebuttal 1: Rebuttal: Thank you for your careful reading and valuable feedback! Below we will address each of your concerns point by point. **Claims And Evidence** > However, the key methodological innovation—incorporating equivariance via equivariant linear layers—is a direct adaptation of the approach used in EMLP (Finzi et al., 2021). The primary difference is that this method is now applied to KANs rather than MLPs. The authors claim novelty by applying equivariance to spline-based architectures, but they do not introduce a fundamentally new method for enforcing equivariance. This aspect is not sufficiently acknowledged in the paper. Due to the complexity of B-spline function formulations, embedding strict equivariance into KANs is challenging. Hierarchically employing gating mechanisms is a relatively straightforward approach, although we acknowledge this does have expressive limitations. Our claim is that we aim to introduce equivariance into KANs to address their poor performance on symmetric tasks and non-symbolic representation tasks, while preserving their advantages over MLPs. Thus, as an improvement to KANs, we strive to retain their original hierarchical structure and B-spline basis function form. From another perspective, if major structural modifications were made to propose an entirely new architecture, how would it remain related to KANs? Additionally, EKAN is not merely a simple fusion of KANs and traditional equivariant mechanisms—the design of the EKAN layer's post-activation space structure is also challenging (see Sections 4.1 and 4.2 for details). **Experimental Designs Or Analyses** > However, the paper does not explore settings where equivariance may not be useful or could degrade performance. It is noted in the EMLP paper that equivariant linear layers and gated nonlinearities are not universal, and there exist simple equivariant functions for some groups that cannot be approximated by EMLP. 
Since EKAN directly adopts the EMLP technique, it is likely that these limitations still exist and should be discussed in the paper. Indeed, the use of gating mechanisms can reduce the expressive power of the network. However, due to the inherently complex structure of KANs (which involve intricate B-spline formulations), introducing symmetry into KANs is challenging. Therefore, for the sake of simplicity and clarity, we opted to incorporate gated non-linearities in a hierarchical manner. We anticipate that future improvements could enhance the expressive power and flexibility of EKAN. We will include a discussion of this limitation and potential future work in the paper. Thank you for your suggestion! > Also, for the top tagging experiment, the authors have not included the comparison with GNN-based methods, e.g. LorentzNet (Gong et al., 2022), which generally have higher accuracies. Thank you for your addition! LorentzNet [1] achieves good results in top quark tagging, but it specifically focuses on this particular task. EKAN aims to provide a general framework for all symmetric scientific problems, so methods targeted at downstream tasks are not our primary benchmark for comparison. For example, CGENN [2] can incorporate more symmetries beyond the Lorentz group and outperforms LorentzNet when observations are complete (with 200 components). Therefore, in our paper, we chose to compare with CGENN. As we claimed, our EKAN outperforms CGENN in both accuracy and parameter efficiency when observations are incomplete (with only 3 components and a smaller dataset). Combining EKAN with task-specific models remains a direction for future research. **Supplementary Material** > There is no instruction on how to run the experiments, and I did not review the code in detail. The instructions for running the experiments are provided in the README.md document within the supplementary materials zip file. We will also make the code publicly available upon acceptance. 
**Relation To Broader Scientific Literature** > Overall, the paper positions itself as an application of existing equivariant principles to a new network architecture rather than a significant methodological advance. While the application to KANs may be useful, the extent of novelty is limited. Refer to the "Claims And Evidence" section in the Rebuttal. **Reference** [1] Gong, Shiqi, et al. "An efficient Lorentz equivariant graph neural network for jet tagging." Journal of High Energy Physics 2022.7 (2022): 1-22. [2] Ruhe, David, Johannes Brandstetter, and Patrick Forré. "Clifford group equivariant neural networks." Advances in Neural Information Processing Systems 36 (2023): 62922-62990. --- Rebuttal Comment 1.1: Comment: Thank you for the response. Regarding the novelty, from the perspective of equivariant networks, this paper just follows an existing method to enforce equivariance. But I agree that incorporating symmetry into KAN is a valid contribution. I will reconsider this and possibly update my score. I still have questions about the top tagging experiment. You only used the three leading jet constituents, while other works (e.g. LorentzNet and CGENN) showed that using more constituents led to much higher accuracy. So what happens if you use more constituents? Does EKAN still have superior performance? I also suggest the authors highlight the differences in experimental setups and datasets from previous works. For example, I was unaware of the different number of jet constituents used in the top tagging experiment and was confused as to why the figures read differently from other works. Other reviewers have also mentioned something related to the n-body dataset, where you should mention the different dataset generation procedures. --- Reply to Comment 1.1.1: Comment: Thank you for your further reply and recognition of the contribution! Now we will try our best to address your remaining concerns. > I still have questions about the top tagging experiment. 
You only used the three leading jet constituents, while other works (e.g. LorentzNet and CGENN) showed that using more constituents led to much higher accuracy. So what happens if you use more constituents? Does EKAN still have superior performance? For top quark tagging, we increase the number of observed jet components for further comparison. We present the results using the $SO^+(1,3)$-equivariant network as a representative, while the results of $SO(1,3)$ and $O(1,3)$-equivariant networks are very similar. We set the number of jet components to $n_{comp}=10$ and $n_{comp}=20$ respectively, and the experimental results are shown below. Together with Table 3 in the paper, we observe that all models exhibit significant improvements in accuracy as the observed information increases. When $n_{comp}$ is larger, our EKAN achieves comparable results to the baselines. Indeed, EKAN does not show superior performance when the observation information is sufficient, but its advantage in scenarios with insufficient observation information (as shown in Table 3) highlights its stronger generalization capability. 
$n_{comp}=10$: |Models/Training set size|$10^2$|$10^{2.5}$|$10^3$|$10^{3.5}$|$10^4$| |-|-|-|-|-|-| |EMLP-$SO^+(1,3)$|$\mathbf{78.95\pm0.02}$|$\mathbf{81.52\pm0.48}$|$81.18\pm0.42$|$82.50\pm0.30$|$85.03\pm0.04$| |CGENN|$71.82\pm3.28$|$80.35\pm0.57$|$79.85\pm1.01$|$81.56\pm0.23$|$84.17\pm0.76$| |EKAN-$SO^+(1,3)$|$78.57\pm0.63$|$79.77\pm0.78$|$\mathbf{82.90\pm0.61}$|$\mathbf{84.83\pm0.20}$|$\mathbf{87.14\pm0.03}$| $n_{comp}=20$: |Models/Training set size|$10^2$|$10^{2.5}$|$10^3$|$10^{3.5}$|$10^4$| |-|-|-|-|-|-| |EMLP-$SO^+(1,3)$|$\mathbf{83.71\pm0.49}$|$\mathbf{83.57\pm0.69}$|$\mathbf{83.14\pm0.35}$|$85.00\pm0.35$|$86.81\pm0.14$| |CGENN|$76.24\pm1.28$|$82.34\pm0.80$|$81.85\pm0.40$|$84.67\pm0.87$|$86.76\pm0.50$| |EKAN-$SO^+(1,3)$|$80.36\pm1.99$|$81.25\pm0.70$|$82.89\pm1.18$|$\mathbf{86.21\pm0.21}$|$\mathbf{89.30\pm0.12}$| > I also suggest the authors highlight the differences in experimental setups and datasets from previous works. For example, I was unaware of the different number of jet constituents used in the top tagging experiment and was confused as to why the figures read differently from other works. Other reviewers have also mentioned something related to the n-body dataset, where you should mention the different dataset generation procedures. We will provide detailed explanations of the data sources, experimental setup, and the differences from the baseline's original paper in Section 6 (Experiments). Specifically, the data generation process for particle scattering is entirely consistent with that in EMLP [1]. The dataset for the three-body problem comes from HNN [2], and we note that they predict the motion trajectories of three particles, whereas CGENN [3] addresses a $N$-body problem involving five particles. 
The top quark tagging dataset is sourced from [4], and we assume that only the three jet constituents with the highest transverse momentum $p_T$ are observed (this has already been mentioned in Lines 409-412, column 1), which differs from [3, 5] where all 200 jet constituents are observed. Additionally, we will include the discussion related to the previous question (concerning $n_{comp}$) in the Appendix. Thank you for your suggestion! **Reference** [1] Finzi, Marc, Max Welling, and Andrew Gordon Wilson. "A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups." [2] Greydanus, Samuel, Misko Dzamba, and Jason Yosinski. "Hamiltonian neural networks." [3] Ruhe, David, Johannes Brandstetter, and Patrick Forré. "Clifford group equivariant neural networks." [4] Kasieczka, Gregor, et al. "Top quark tagging reference dataset." [5] Gong, Shiqi, et al. "An efficient Lorentz equivariant graph neural network for jet tagging."
Summary: This paper introduces Equivariant Kolmogorov-Arnold Networks, an extension of KANs that incorporates equivariance to arbitrary matrix groups, addressing a key limitation of KANs: their inability to respect symmetries in data. The authors achieve this by constructing gated spline basis functions and equivariant linear weights, ensuring the model remains equivariant throughout. A lift layer is introduced to preprocess inputs, aligning them with the dataset’s symmetry properties. Claims And Evidence: The motivation for the work is clear, and the theoretical claims are supported by the work. The extensive experiments also show an improvement over the nonequivaraint version. However, there is no validation as to whether the network is actually equivariant (either through an equivariant loss, or through the study of the representations), and the equivariance is only indirectly evaluated through performance. Methods And Evaluation Criteria: The methods and evaluation criteria are sensible for the task. As mentioned above, the paper would be strengthened by showing EKANs are actually equivariant. Theoretical Claims: I read the main theorem and it seemed reasonable, but did not thoroughly examine the proof. Similarly, I followed the derivations but did not thoroughly examine. Experimental Designs Or Analyses: The analyses are sensible and most observations seem insightful and accurate. Reiterating again my skepticism about the degree to which EKANs are structurally equivariant. Supplementary Material: I went over the supplementary material, but did not examine the proof closely. Relation To Broader Scientific Literature: KANs have been very popular in the past year and equivariance has been steadily impactful over the last decade, so the work is of very broad interest. Essential References Not Discussed: There are no references (from the equivariance side) that I think are missing. 
Other Strengths And Weaknesses: The motivation is very clear and the work is interesting. Evaluation of the degree to which the model is actually equivariant is the main weakness. Also, the lifting layer could be discussed in greater depth: why is it necessary, why does it maintain equivariance, etc. Other Comments Or Suggestions: There are 3 figures that showcase the EKAN architecture, all are different, and actually none of them is explained in the text. Questions For Authors: No further questions, everything was addressed in the previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your careful reading and valuable feedback! Below we will address each of your concerns point by point. **Claims And Evidence** For all trained models $f_\theta$ in the paper, we use $L_{equi}=E_{x,g}\\|\rho_o(g)f_\theta(x)-f_\theta(\rho_i(g)x)\\|^2$ to evaluate their equivariant loss. The experimental results are presented as follows, which indicate that our EKAN and EMLP can structurally guarantee strict equivariance, whereas non-equivariant models cannot.

Particle scattering (for this experiment, the results of EMLP and MLP are sourced from the original paper, so we do not present their equivariance errors here):

|Models/Training set size|$10^2$|$10^{2.5}$|$10^3$|$10^{3.5}$|$10^4$|
|-|-|-|-|-|-|
|KAN|$(6.01\pm2.45)\times10^{-1}$|$1.11\pm0.63$|$(4.08\pm0.27)\times10^{-1}$|$(3.21\pm0.27)\times10^{-1}$|$(1.36\pm0.14)\times10^{-1}$|
|KAN+augmentation|$(2.49\pm0.12)\times10^{-1}$|$(9.77\pm7.74)\times10^{-1}$|$(1.03\pm0.14)\times10^{-1}$|$(8.78\pm1.13)\times10^{-2}$|$(2.16\pm0.10)\times10^{-1}$|
|EKAN|$(9.85\pm3.12)\times10^{-14}$|$(7.87\pm1.47)\times10^{-14}$|$(7.90\pm0.74)\times10^{-14}$|$(7.69\pm0.48)\times10^{-14}$|$(6.95\pm0.70)\times10^{-14}$|

3-body problem:

|Models/Number of parameters|$10^{4.5}$|$10^{4.75}$|$10^5$|$10^{5.25}$|$10^{5.5}$|
|-|-|-|-|-|-|
|MLP|$(1.44\pm0.13)\times10^{-3}$|$(1.51\pm0.14)\times10^{-3}$|$(1.54\pm0.03)\times10^{-3}$|$(1.33\pm0.17)\times10^{-3}$|$(1.33\pm0.10)\times10^{-3}$|
|MLP+aug|$(3.16\pm0.32)\times10^{-3}$|$(3.47\pm0.24)\times10^{-3}$|$(3.43\pm0.24)\times10^{-3}$|$(3.35\pm0.16)\times10^{-3}$|$(3.32\pm0.26)\times10^{-3}$|
|EMLP|$(2.80\pm2.18)\times10^{-13}$|$(1.57\pm1.39)\times10^{-13}$|$(2.17\pm0.37)\times10^{-14}$|$(2.99\pm2.51)\times10^{-14}$|$(1.87\pm0.61)\times10^{-14}$|
|KAN|$(3.00\pm2.28)\times10^{-1}$|$(1.21\pm0.26)\times10^{-2}$|$(5.90\pm0.78)\times10^{-3}$|$(5.93\pm1.45)\times10^{-3}$|$(4.03\pm0.81)\times10^{-3}$|
|KAN+aug|$(2.78\pm0.10)\times10^{-3}$|$(2.49\pm0.11)\times10^{-3}$|$(2.37\pm0.11)\times10^{-3}$|$(2.35\pm0.11)\times10^{-3}$|$(2.29\pm0.08)\times10^{-3}$|
|EKAN|$(3.80\pm3.84)\times10^{-13}$|$(3.02\pm3.05)\times10^{-13}$|$(9.66\pm5.85)\times10^{-14}$|$(3.33\pm1.47)\times10^{-14}$|$(3.31\pm1.31)\times10^{-14}$|

Top tagging:

|Models/Training set size|$10^2$|$10^{2.5}$|$10^3$|$10^{3.5}$|$10^4$|
|-|-|-|-|-|-|
|MLP|$(4.94\pm0.40)\times10^{-1}$|$(4.00\pm0.17)\times10^{-1}$|$(3.73\pm0.05)\times10^{-1}$|$(3.24\pm0.11)\times10^{-1}$|$(1.87\pm0.03)\times10^{-1}$|
|MLP+aug|$(4.73\pm0.28)\times10^{-1}$|$(2.15\pm0.12)\times10^{-1}$|$(1.90\pm0.55)\times10^{-1}$|$(3.73\pm0.39)\times10^{-1}$|$(3.40\pm0.63)\times10^{-1}$|
|EMLP|$(2.92\pm2.30)\times10^{-6}$|$(1.09\pm0.84)\times10^{-7}$|$(3.88\pm1.44)\times10^{-9}$|$(3.54\pm2.47)\times10^{-9}$|$(1.64\pm1.53)\times10^{-9}$|
|KAN|$(1.36\pm1.78)\times10^{-1}$|$(1.35\pm1.77)\times10^{-1}$|$(1.34\pm1.77)\times10^{-1}$|$(0.99\pm1.40)\times10^{-4}$|$(1.27\pm1.79)\times10^{-1}$|
|KAN+aug|$(1.20\pm1.61)\times10^{-1}$|$(2.10\pm2.97)\times10^{-3}$|$(1.10\pm1.51)\times10^{-1}$|$(1.52\pm2.15)\times10^{-4}$|$(1.44\pm1.78)\times10^{-1}$|
|EKAN|$(3.41\pm2.99)\times10^{-7}$|$(1.66\pm0.85)\times10^{-8}$|$(1.62\pm0.51)\times10^{-9}$|$(1.65\pm1.01)\times10^{-9}$|$(1.64\pm1.53)\times10^{-9}$|

**Other Strengths And Weaknesses** As mentioned in the first paragraph of Section 5 (Lines 269-272, Column 2), EKAN is composed of stacked EKAN layers. The EKAN layer constructed in Section 4 has input space $U_{gi}$ and output space $U_{go}$ as shown in Equation (9). However, the feature space $U_i$ and label space $U_o$ of real-world datasets may not conform to this structure (specifically, they lack gate scalars). Therefore, as discussed in the second paragraph of Section 5 (Lines 295-299, Column 1), alignment is required. To align $U_{go}$ with $U_o$, we simply discard the gate scalars in $U_{go}$. 
For aligning $U_{gi}$ with $U_i$, we prepend a lift layer before the first EKAN layer to introduce gate scalars into $U_i$. The lift layer is essentially an equivariant linear layer between $U_i$ and $U_{gi}$ (as described in Lines 300-302, Column 1). Its specific construction is detailed in Section 4.3, ensuring that it inherently preserves equivariance. Additionally, in the Claims And Evidence section of the Rebuttal, we also experimentally validate that the entire EKAN architecture is strictly equivariant.

**Other Comments Or Suggestions**

Figure 1 provides a general comparison between the EKAN and KAN architectures, which we elaborate on in the third paragraph of Section 1 (Lines 45-54, Column 2, and Lines 55-66, Column 1). Figure 2 illustrates the structure of an individual EKAN layer, which we explain in detail in the last paragraph of Section 4.1 (Lines 214-219, Column 1, and Lines 180-182, Column 2). Figure 3 presents the overall architecture of EKAN, which we describe in the final paragraph of Section 5 (Lines 304-318, Column 1). We will make it clearer to avoid confusion. Thank you for pointing this out!
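The equivariance loss $L_{equi}$ used in this rebuttal can be estimated by Monte Carlo over sampled inputs and group elements. Below is a minimal sketch, assuming the standard 2D rotation representation for both $\rho_i$ and $\rho_o$; the paper's groups, representations, and models are more general, and `equivariant_f` / `non_equivariant_f` are toy stand-ins rather than EKAN or the baselines:

```python
import numpy as np

def rotation(theta):
    """A sample element of SO(2): a 2x2 rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def equivariance_error(f, n_samples=1000, seed=0):
    """Monte Carlo estimate of E_{x,g} ||rho_o(g) f(x) - f(rho_i(g) x)||^2,
    here with rho_i = rho_o = the standard SO(2) representation on R^2."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        x = rng.standard_normal(2)
        g = rotation(rng.uniform(0.0, 2.0 * np.pi))
        total += float(np.sum((g @ f(x) - f(g @ x)) ** 2))
    return total / n_samples

equivariant_f = lambda x: 2.0 * x               # linear scaling commutes with any g
non_equivariant_f = lambda x: np.maximum(x, 0)  # pointwise ReLU breaks rotation symmetry

err_eq = equivariance_error(equivariant_f)
err_ne = equivariance_error(non_equivariant_f)
```

An architecture that is equivariant by construction drives this estimate to numerical zero for every sampled $g$, while a non-equivariant model leaves a finite residual; that is the gap the equivariance-error tables in this rebuttal quantify.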
Summary: This paper introduces Equivariant Kolmogorov-Arnold Networks (EKANs), a framework to construct group equivariant architectures, with respect to arbitrary matrix groups, as an extension of the previously proposed Kolmogorov-Arnold networks, akin to the way Equivariant MLPs (EMLPs) extend conventional MLPs. Contrary to the linear blocks that are the heart of MLPs, KANs contain several components (spline basis functions, as well as SiLU activations) that are non-trivial to convert to equivariant analogues. To address this, in a nutshell, the authors follow a gating mechanism strategy (similarly to EMLP): input scalars are gate-transformed and then given as inputs to the basis functions and SiLU, which are subsequently scalar-multiplied with the input tensors. In that way, input features are transformed to post-activations in a manner that is proven to be equivariant. Finally, post-activations are linearly transformed to the output features, where the linear equivariant parameters are calculated using a method obtained from EMLP. Empirically, the method is shown to achieve improved results on various symmetry-related tasks, even with fewer parameters and/or training samples, against previously proposed equivariant models, as well as standard KANs and MLPs.

Claims And Evidence: The authors' central claim is that their architecture is an equivariant extension of KANs that improves upon baselines (e.g., MLP) with fewer parameters and fewer data. Equivariance is proven theoretically, while the rest of the claims are indeed convincingly supported by experiments.

Methods And Evaluation Criteria: Since KANs have shown promise in other tasks, it is natural to extend them to symmetric problems. The benchmarks and evaluation criteria are obtained from EMLP, a known baseline in the literature. Therefore they are reasonable for the studied problem.
Theoretical Claims: I studied the proof of Theorem 4.1, namely the equivariance of the proposed method, and I found it correct.

Experimental Designs Or Analyses: As discussed above, the experimental designs follow those of EMLP, and their implementations and analyses seem sound.

Supplementary Material: I reviewed the supplementary material, besides Appendix D.

Relation To Broader Scientific Literature: Constructing group equivariant neural architectures has enjoyed a fruitful line of research in recent years, and a wide range of applications. Notably, the first paragraph of the related work section of the paper includes some of the most prominent results in this direction. On the other hand, Kolmogorov-Arnold networks are a relatively new architecture, serving as a promising alternative to Multi-Layer Perceptrons. The method proposed in this paper attempts to address the poor performance of KANs on certain tasks, owing to their difficulty in respecting the data type and symmetry, as per the authors, by incorporating group equivariance in the KAN framework for the first time.

Essential References Not Discussed: Nothing to note.

Other Strengths And Weaknesses:

**Strengths**
- The paper is well-structured and contains sufficient material on the background and related methods.
- The equivariant conversion of KANs is easy to implement and widely applicable.
- The empirical results demonstrate that the proposed method holds promise in real-world scenarios.

**Weaknesses**
- Apart from the improved experimental results provided in this paper, it is not evident how EKANs address the (i) scaling issues and (ii) limited expressivity (see Appendix D in Finzi et al., 2021) of Equivariant Multi-Layer Perceptrons (EMLPs) with gated non-linearities. Regarding (ii), it is unclear why the authors decided to resort to gating mechanisms for their construction and what the implications of this choice are.
- Additionally, in certain parts the method is hard to follow for the reader not versed in symmetries, in particular because the notation is quite specialised and not very intuitive. I believe that some concepts need to be simplified with some indicative examples. For example, Eq. 3 could be explained with a concrete example. Similarly, I am confused by the notation T(p, q), since in the experimental section only T(p, 0) spaces are encountered. Can the authors give concrete examples here as well?

Other Comments Or Suggestions:
- To my knowledge, it is not correct that any (continuous, real, finite-dimensional) representation $U$ of a matrix group can be written as in Equation (3). It can be shown (in the case of a compact Lie group, for example) that $U$ is a subrepresentation of the direct sum on the right-hand side. However, it is worth noting that, for the vast majority of practical applications, considering input/output representations of this form should suffice.
- In lines 189-191, why are $p_{i,a}, p_{o,a}, q_{i,a}, q_{o,a}$ squared?

Questions For Authors:
- Besides the provided experimental results, are there any advantages of EKANs against EMLPs with gated non-linearities? Could their scaling issues or limited expressivity somehow be addressed?
- Does the linearity of the lift layer, between $U_i$ and $U_{gi}$, not prove problematic in some cases? For example, the only equivariant linear map from $T_1$ to $T_0$ is the constant zero, say, in the case of the orthogonal group (see Appendix D in Finzi et al., 2021). This would mean that the gate scalars would always be zero in the gated input space.
- In the experiments, would an EMLP with fewer parameters achieve potentially improved results? Having more parameters than the EKAN would probably mean that it is a more expressive model, prone to overfitting phenomena.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your careful reading and valuable feedback! Below we will address each of your concerns point by point.

**Other Strengths And Weaknesses**

Weaknesses (1) Indeed, the use of gating mechanisms can reduce the expressive power of the network. However, due to the inherently complex structure of KANs (which involve intricate B-spline formulations), introducing symmetry into KANs is challenging. Therefore, for the sake of simplicity and clarity, we opted to incorporate gated non-linearities in a hierarchical manner. We anticipate that future improvements could enhance the expressive power and flexibility of EKAN. We will include a discussion of this limitation and potential future work in the paper.

(2) First, let's intuitively understand the dual ($*$), direct sum ($\oplus$), and tensor product ($\otimes$) operations. Consider two vector spaces $X=R^2,Y=R^3$, and vectors $x=(x_1,x_2)\in X,y=(y_1,y_2,y_3)\in Y$. Then, all $x\oplus y=(x_1,x_2,y_1,y_2,y_3)$ form the space $X\oplus Y=R^5$, all $x\otimes y=(x_1y,x_2y)=(x_1y_1,x_1y_2,x_1y_3,x_2y_1,x_2y_2,x_2y_3)$ form the space $X\otimes Y=R^6$, and all the coefficients $vec(W)$ of the linear maps $Wx=y$ form the space $Y\otimes X^*=R^6$. Then, if we define how the group transformation acts on $X,Y$, the form of its action on these composite spaces can naturally be derived. Eqn (4) provides the derivation rule.

The paper assumes the group to be a matrix group. If a group element $g\in G$ acts on a vector $x\in X$ in the form of its corresponding linear transformation $gx$, then we call $X$ the base vector space of $G$ (as described in Lines 102-105, Column 2). We have defined the "addition" and "multiplication" between spaces. Thus, for the base vector space $V$ of a group $G$, any complex spatial structure can be organized into the form of a "polynomial" with respect to $V$, which is the origin of Eqn (3). Note that Eqn (3) simultaneously defines $T(p,q)=V^p\otimes (V^*)^q$.
We abbreviate $T(p,0)$ as $T_p$. In the vast majority of scenarios, the feature space of a dataset takes simple forms such as vector stacking $cT_1=cV$ or matrices $T_2=V\otimes V$, while complex spaces like $T(p,q)$ rarely appear. However, the latent spaces between equivariant layers can be highly intricate (e.g., they may have 384 dimensions), and their decomposition forms with respect to the base vector space $V$ often involve $T(p,q)$ (the decomposition is automatically handled by the software based on dimensionality, as described in Lines 317-320, Column 2). The practical implication is that, according to the rules of Eqn (4), we define how group transformations operate on the latent space. We will add these intuitive explanations to the appendix to avoid confusion. Thank you for your suggestion!

**Other Comments Or Suggestions**

(1) Thank you for pointing that out! We will note this in the paper to be more rigorous.

(2) We aim to express that $p_{i,a}$ and $q_{i,a}$ are not simultaneously zero. Indeed, in the case where both are natural numbers, $p_{i,a}^2+q_{i,a}^2>0$ and $p_{i,a}+q_{i,a}>0$ are equivalent, with the latter being more concise. We will make the corresponding revision. Thank you!

**Questions For Authors**

(1) Refer to point (1) in the Other Strengths And Weaknesses section of the Rebuttal.

(2) When the dimension of the latent space between equivariant layers is too small, its parameters may degenerate to all zeros. However, this issue rarely occurs when the latent space is more complex.

(3) Overfitting often occurs with small datasets, so we reduced the number of parameters in EMLP and trained it on particle scattering with a relatively small training set size. The experimental results are shown below, which indicate that it does not perform as well as models with larger parameter sizes.
In fact, such equivariant architectures are less prone to overfitting compared to non-equivariant models because they strictly respect the symmetries in the data.

|Models/Training set size|$10^2$|$10^{2.5}$|
|-|-|-|
|EMLP-$SO^+(1,3)$ (210k parameters)|$(5.98\pm5.40)\times10^{-2}$|$(3.65\pm1.60)\times10^{-3}$|
|EMLP-$SO(1,3)$ (210k parameters)|$(6.13\pm5.62)\times10^{-2}$|$(3.76\pm1.73)\times10^{-3}$|
|EMLP-$O(1,3)$ (210k parameters)|$(6.14\pm5.71)\times10^{-2}$|$(3.64\pm1.64)\times10^{-3}$|
|EMLP-$SO^+(1,3)$ (450k parameters)|$(1.27\pm0.35)\times10^{-2}$|$(2.21\pm0.56)\times10^{-3}$|
|EMLP-$SO(1,3)$ (450k parameters)|$(1.47\pm0.91)\times10^{-2}$|$(2.58\pm0.25)\times10^{-3}$|
|EMLP-$O(1,3)$ (450k parameters)|$(8.88\pm2.51)\times10^{-3}$|$(1.95\pm0.18)\times10^{-3}$|
|EKAN-$SO^+(1,3)$ (435k parameters)|$\mathbf{(6.86\pm6.28)\times10^{-3}}$|$(1.85\pm1.75)\times10^{-3}$|
|EKAN-$SO(1,3)$ (435k parameters)|$\mathbf{(6.86\pm6.27)\times10^{-3}}$|$(1.85\pm1.75)\times10^{-3}$|
|EKAN-$O(1,3)$ (435k parameters)|$(7.77\pm5.85)\times10^{-3}$|$\mathbf{(1.64\pm1.87)\times10^{-3}}$|
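The direct-sum and tensor-product conventions explained in point (2) of this rebuttal can be checked numerically. A small sketch using NumPy's `kron`; the transforms `gX` and `gY` are illustrative examples, not taken from the paper:

```python
import numpy as np

x = np.array([1.0, 2.0])       # x in X = R^2
y = np.array([3.0, 4.0, 5.0])  # y in Y = R^3

direct_sum = np.concatenate([x, y])  # x (+) y lives in X (+) Y = R^5
tensor_prod = np.kron(x, y)          # x (x) y = (x1*y, x2*y) lives in X (x) Y = R^6

# A linear map W : X -> Y has 6 coefficients, matching vec(W) in Y (x) X* = R^6.
W = np.arange(6.0).reshape(3, 2)

# The induced action on composites: g acts on x (x) y as (gX x) (x) (gY y),
# which equals the Kronecker-product matrix kron(gX, gY) applied to x (x) y.
gX = np.array([[0.0, 1.0], [1.0, 0.0]])  # sample transform on X (a coordinate swap)
gY = np.eye(3)                           # sample transform on Y (identity)
lhs = np.kron(gX @ x, gY @ y)
rhs = np.kron(gX, gY) @ tensor_prod
```

The equality of `lhs` and `rhs` is exactly the rule by which a group action on the base spaces induces an action on tensor-product (latent) spaces.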
Summary: The work introduces an equivariant version of the KAN by incorporating two principal components: 1) introducing an additional scalar that controls the gating mechanism, and 2) using an equivariant MLP for different non-scalar features. The proposed model, EKAN, is evaluated on particle scattering, top quark tagging, and three-body problem datasets, and it outperformed EMLP in almost all scenarios across different training dataset sizes.

Claims And Evidence: The work is theoretically sound. The proposed architecture is equivariant with respect to the desired matrix group. However, the claim regarding the superiority of the equivariant KAN compared to other models is ill-demonstrated. This, in the end, depends on the following two probable interpretations of the claims made.

Claim v1. **The work proposed an alternative to EMLP using KAN**: In this case, the experiments (with some additional details) support the claim. However, the scope becomes narrow.

Claim v2. **The work proposed a new equivariant architecture**: In this case, more experiments are required. For example, E(3) GNN [1] or [2] should be considered.

The work should state the claim more precisely.

[1] Geometric and Physical Quantities Improve E(3) Equivariant Message Passing
[2] Scalars are universal: Equivariant machine learning, structured like classical physics

Methods And Evaluation Criteria: The method and evaluation criteria are valid. Depending on the interpretation of the claim (if Claim v1), the choice of dataset is also reasonable.

Theoretical Claims: The theoretical claim is correct, to my understanding.

Experimental Designs Or Analyses: I have found the following issues and uninvestigated questions:
1. I do not find any details on the implementation of baselines.
2. Why is the number of parameters of EMLP so high? Does it follow the architecture proposed in the original paper? Is the performance gap due to overfitting? What would be the performance if both EKAN and EMLP had a similar number of parameters?
3. Why do we not consider an equivariant GNN for the three-body problem? And why does it not follow the exact setup of [a, b], i.e., the five-body problem?

[a] Clifford Group Equivariant Neural Networks
[b] Geometric and Physical Quantities Improve E(3) Equivariant Message Passing

Supplementary Material: Read the supplementary material. However, I did not run the code.

Relation To Broader Scientific Literature: KAN is an emerging area in machine learning. This work introduces equivariance to the KAN framework, which broadens its applicability. However, from the perspective of equivariant neural networks, most of the techniques used in this work are known results.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: The work shows strong results compared to the EMLP baseline (some questions still need to be addressed). Apart from limited novelty and evaluations, the writing of the paper can be significantly improved. For example, most of the topics discussed and notations introduced in the "Background" section are not used in Section 4 or in the main paper. For example, I do not believe the list in Eqn 4 needs to appear in the main text. These can be moved to the supplementary material.

Other Comments Or Suggestions: N/A

Questions For Authors: No additional questions.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your careful reading and valuable feedback! Below we will address each of your concerns point by point.

**Claims And Evidence**

Our claim is: we propose a method to incorporate symmetry into KANs. As mentioned in Section 1 (Lines 25-29, Column 2), KANs struggle to respect symmetry, which is one of the reasons for their underperformance in non-symbolic representation tasks. By introducing equivariant layers, we aim to address this limitation. Experimental results show that EKAN outperforms KANs on symmetry-related tasks while preserving KANs' advantages over MLPs, which supports our claim. Furthermore, even in scenarios where KANs are weaker than MLPs, EKAN still surpasses EMLP, an insight we expect to provide valuable contributions to this emerging topic of KANs.

**Experimental Designs Or Analyses**

(1) The model architecture of EMLP is exactly the same as in the original paper [1]. To control the number of parameters, we adjust its depth and width (i.e., shape), with relevant details provided in Section 6 of the main text (specifically the second paragraphs of Sections 6.1, 6.2, and 6.3). For fairness, EKAN and all baseline models follow identical training settings, which we elaborate on in Appendix E (Implementation Details). We will emphasize this in the paper to avoid confusion. Thank you for pointing it out!

(2) Overfitting often occurs with small datasets, so we reduced the number of parameters in EMLP and trained it on particle scattering with a relatively small training set size. The experimental results are shown below, which indicate that it does not perform as well as models with larger parameter sizes. In fact, such equivariant architectures are less prone to overfitting compared to non-equivariant models because they strictly respect the symmetries in the data.
|Models/Training set size|$10^2$|$10^{2.5}$|
|-|-|-|
|EMLP-$SO^+(1,3)$ (210k parameters)|$(5.98\pm5.40)\times10^{-2}$|$(3.65\pm1.60)\times10^{-3}$|
|EMLP-$SO(1,3)$ (210k parameters)|$(6.13\pm5.62)\times10^{-2}$|$(3.76\pm1.73)\times10^{-3}$|
|EMLP-$O(1,3)$ (210k parameters)|$(6.14\pm5.71)\times10^{-2}$|$(3.64\pm1.64)\times10^{-3}$|
|EMLP-$SO^+(1,3)$ (450k parameters)|$(1.27\pm0.35)\times10^{-2}$|$(2.21\pm0.56)\times10^{-3}$|
|EMLP-$SO(1,3)$ (450k parameters)|$(1.47\pm0.91)\times10^{-2}$|$(2.58\pm0.25)\times10^{-3}$|
|EMLP-$O(1,3)$ (450k parameters)|$(8.88\pm2.51)\times10^{-3}$|$(1.95\pm0.18)\times10^{-3}$|
|EKAN-$SO^+(1,3)$ (435k parameters)|$\mathbf{(6.86\pm6.28)\times10^{-3}}$|$(1.85\pm1.75)\times10^{-3}$|
|EKAN-$SO(1,3)$ (435k parameters)|$\mathbf{(6.86\pm6.27)\times10^{-3}}$|$(1.85\pm1.75)\times10^{-3}$|
|EKAN-$O(1,3)$ (435k parameters)|$(7.77\pm5.85)\times10^{-3}$|$\mathbf{(1.64\pm1.87)\times10^{-3}}$|

(3) Our dataset for the three-body problem originates from LieGAN [2] rather than CGENN [3] (note that LieGAN is a method for symmetry discovery rather than the design of equivariant networks, so it is not our comparison target), so there are differences in dataset generation details and experimental setups. We supplement a comparison between CGENN and EKAN on the three-body problem, with the results shown below, which demonstrate EKAN's higher accuracy compared to CGENN.
|Models/Number of parameters|$10^{4.5}$|$10^{4.75}$|$10^5$|$10^{5.25}$|$10^{5.5}$|
|-|-|-|-|-|-|
|CGENN|$(2.11\pm0.11)\times10^{-3}$|$(1.93\pm0.40)\times10^{-3}$|$(1.64\pm0.21)\times10^{-3}$|$(1.56\pm0.18)\times10^{-3}$|$(1.38\pm0.26)\times10^{-3}$|
|EKAN-SO(2)|$\mathbf{(1.12\pm0.13)\times10^{-3}}$|$\mathbf{(7.06\pm0.65)\times10^{-4}}$|$\mathbf{(6.09\pm0.27)\times10^{-4}}$|$\mathbf{(4.26\pm0.19)\times10^{-4}}$|$\mathbf{(4.84\pm0.68)\times10^{-4}}$|
|EKAN-O(2)|$(1.48\pm0.37)\times10^{-3}$|$(1.12\pm0.24)\times10^{-3}$|$(7.91\pm0.52)\times10^{-4}$|$(6.06\pm0.36)\times10^{-4}$|$(6.02\pm0.88)\times10^{-4}$|

**Other Strengths And Weaknesses**

The Background section primarily helps readers unfamiliar with symmetry theory to understand the relevant knowledge and avoid confusion. An important purpose of Eqn (4) is to inform readers that we decompose the space $U$ in the form of Eqn (3), whose practical significance is to define how group transformations act on $U$. This is crucial when constructing the latent space of the EKAN Layer (especially at the code implementation level), because the form of the group representation on the latent space depends on the definition provided by Eqn (4). More details can be found in point (2) of the Other Strengths And Weaknesses section in the Rebuttal to Reviewer nHH7. In the revised version, we will move relatively less important concepts to the supplementary material. Thank you for your suggestion!

**References**

[1] Finzi et al. "A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups."
[2] Yang et al. "Generative adversarial symmetry discovery."
[3] Ruhe et al. "Clifford group equivariant neural networks."
BaWA: Automatic Optimizing Pruning Metric for Large Language Models with Balanced Weight and Activation
Accept (poster)
Summary: This paper focuses on unstructured pruning of LLMs and introduces a new pruning metric. Unlike previous methods that estimate parameter importance based solely on magnitude, activations, or gradients, the proposed approach also considers the impact of outliers in model parameters. The authors first demonstrate that a small number of outlier parameters with large magnitudes can significantly affect existing pruning metrics. To address this issue, the proposed method normalizes each model parameter using the $\ell_2$ norms of the corresponding input and output channels. Additionally, to handle outliers in the input, the authors introduce a power factor after computing the input norm. To optimize the newly introduced hyperparameters, the authors propose using a zeroth-order gradient approach, which allows optimization without backpropagation. Experimental results show that the proposed method outperforms baseline approaches.

Claims And Evidence: The claims made in the submission are well-supported. The authors provide clear evidence for their core arguments regarding the influence of outliers in model parameters and input activations. Additionally, the proposed method is well-reasoned and justified.

Methods And Evaluation Criteria: The proposed method and empirical evaluation follow common practices in this domain. The reviewer generally agrees with the evaluation settings and does not find any significant issues with them.

Theoretical Claims: This paper does not contain theoretical claims.

Experimental Designs Or Analyses: The experimental design is sound and valid. It follows previous methods (e.g., Wanda, SparseGPT) and uses widely accepted benchmarks.

Supplementary Material: All.

Relation To Broader Scientific Literature: This paper is closely related to previous literature and methods in this domain. It builds upon existing pruning metrics and further enhances the effectiveness of unstructured model pruning.
Essential References Not Discussed: I have only one main comment here. The paper does not fully discuss past research on model pruning for LLMs, focusing mainly on the baselines used in the experiments. While this may be due to space constraints caused by the extensive evaluation and analysis, I still recommend that the authors provide a broader discussion of related work. In particular, discussing structured pruning methods [1-4] for LLMs would be valuable, as structured pruning is generally more hardware-friendly compared to unstructured pruning, which is the focus of this paper.

[1] Xia, Mengzhou, et al. "Sheared LLaMA: Accelerating language model pre-training via structured pruning." arXiv preprint arXiv:2310.06694 (2023).
[2] Sreenivas, Sharath Turuvekere, et al. "LLM pruning and distillation in practice: The Minitron approach." arXiv preprint arXiv:2408.11796 (2024).
[3] Ling, Gui, Ziyang Wang, and Qingwen Liu. "SlimGPT: Layer-wise structured pruning for large language models." Advances in Neural Information Processing Systems 37 (2024): 107112-107137.
[4] Hou, Bairu, et al. "Instruction-following pruning for large language models." arXiv preprint arXiv:2501.02086 (2025).

Other Strengths And Weaknesses: This paper is well-motivated, with clear writing and a coherent logical flow. The reviewer enjoyed reading it. Additionally, the proposed method is reasonable and well-structured, and the evaluation is rigorous and comprehensive.

The only concern is the performance under semi-structured sparsity. I assume that the unstructured sparsity results in Table 2 and Table 4 do not lead to efficiency improvements. As shown in Table 5, Table 8, and Table 10, the performance degradation remains significant compared to the original dense model. Furthermore, SparseGPT sometimes outperforms the proposed method. However, the results improve in Table 11 and Table 12. Overall, the reviewer considers this a strong paper.
Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
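The metric summarized in this review (weight magnitude normalized by input- and output-channel norms, multiplied by an input-activation norm raised to a power factor) can be sketched as follows. This is an illustrative reconstruction with hypothetical function names and default exponents, not the paper's exact BaWA formula:

```python
import numpy as np

def pruning_scores(W, X, theta_in=0.5, theta_out=0.5, theta_act=0.5):
    """Channel-normalized magnitude-times-activation importance score.

    W: (out_features, in_features) weight matrix.
    X: (n_tokens, in_features) calibration activations.
    The theta exponents damp the influence of outlier channels;
    setting an exponent to 0 disables that term entirely.
    """
    eps = 1e-8
    act_norm = np.linalg.norm(X, axis=0)                 # per-input-channel ||x||_2
    in_norm = np.linalg.norm(W, axis=0)                  # input-channel (column) norms of W
    out_norm = np.linalg.norm(W, axis=1, keepdims=True)  # output-channel (row) norms of W
    normalized = np.abs(W) / ((in_norm + eps) ** theta_in * (out_norm + eps) ** theta_out)
    return normalized * (act_norm + eps) ** theta_act

def prune_mask(scores, sparsity=0.5):
    """Boolean keep-mask retaining the top (1 - sparsity) fraction of weights."""
    k = int(scores.size * sparsity)
    threshold = np.partition(scores.ravel(), k)[k]
    return scores >= threshold

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
X = rng.standard_normal((32, 16))
mask = prune_mask(pruning_scores(W, X), sparsity=0.5)  # W * mask is the pruned layer
```

With `theta_in = theta_out = 0` and `theta_act = 1`, this reduces to a Wanda-style $|W|\cdot\|x\|_2$ score; the channel normalizations are what counteract the magnitude imbalance and outlier sensitivity the review describes.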
Rebuttal 1: Rebuttal: Dear zn1Z: We sincerely appreciate the valuable suggestions provided by the reviewer. We note the two main concerns you raised, which we address below.

Firstly, we thank the reviewer for emphasizing the importance of comparing with structured pruning methods. We would like to compare structured and unstructured pruning in terms of pruning granularity, sparsity, accuracy, efficiency, and training after pruning.

Structured pruning [1, 2, 3] removes complete substructures or weight groups from LLMs (layers [4], FFN neurons [1], MHA heads [1], embedding dimensions [5]; all coarse-grained), enabling hardware-independent efficiency gains. However, under the constraints of this coarse-grained pruning, the accuracy of the pruned LLM tends to drop drastically, so the generally applicable sparsity ratio lies between 15% and 30%. Post-pruning fine-tuning, training, or knowledge distillation may be used to restore the performance of the pruned model at high sparsity ratios [2, 5].

Unstructured pruning removes individual elements from the weights (fine-grained) and stores or loads the pruned weights in a compressed format. Combined with decompression (memory-bottleneck optimization) or hardware support (2:4 sparse tensor cores), unstructured sparse LLMs can also obtain considerable efficiency improvements. Because of its fine granularity, unstructured pruning is less likely to harm the accuracy of the model, so the sparsity can generally exceed 50%, and the adoption of the stricter 2:4 sparsity is also acceptable. It is possible to perform unstructured pruning without loss of accuracy for large LLMs such as Llama-2-70B, and BaWA achieves this. Unstructured pruned LLMs can also be further trained, with techniques such as PEFT [6] and STE [7] providing support.
In the revised manuscript, we will include a dedicated discussion (Section 2) comparing structured and unstructured pruning for LLMs to demonstrate the necessity of high-performance unstructured/semi-structured LLM pruning. The following table compares a portion of the experimental results of structured pruning and unstructured pruning as a reference.

| Method | Type | Sparsity | Reference speedup | BoolQ | RTE | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | AVG |
|--|--|--|--|--|--|--|--|--|--|--|--|
| Llama-2-13B | Dense | 0% | 1.0x | 83.43 | 67.51 | 61.25 | 73.8 | 80.83 | 50.43 | 32.8 | 64.3 |
| LLM-Pruner | Structured Pruning | 25% | 1.25x | 68.35 | 50.54 | 46.81 | 61.56 | 70.2 | 37.63 | 28.8 | 51.98 |
| ShortGPT | Layer Pruning | 25% | 1.25x | 62.54 | 59.57 | 47.7 | 70.96 | 61.24 | 37.88 | 27 | 52.41 |
| Wanda | 2:4 Semi | 50% | 1.4x | 75.26 | 56.68 | 46.43 | 66.77 | 68.35 | 34.47 | 24.4 | 53.19 |
| BaWA | 2:4 Semi | 50% | 1.4x | 78.26 | 56.32 | 48.5 | 66.93 | 70.79 | 35.49 | 26 | 54.61 |

Secondly, although unstructured pruning has traditionally lacked hardware support, recent advancements such as Flash-LLM [8] demonstrate its practical speedup through specialized kernels. Moreover, our method's compatibility with N:M sparsity (natively supported by Ampere GPUs) further ensures deployability while maintaining higher accuracy than structured alternatives (Table 9).

Additionally, we would like to clarify that the slight performance gap between BaWA and SparseGPT in semi-structured settings (Table 8) stems from SparseGPT's weight reconstruction, a complementary technique orthogonal to pruning metrics, which is explained in our rebuttal to reviewer xLm6. As shown in "BaWA+ADMM" (Table 3), combining our metric with weight reconstruction outperforms all baselines universally. Furthermore, larger models (e.g., LLaMA2-70B) exhibit greater robustness to N:M constraints due to inherent redundancy, reducing accuracy drops to <1% in 4:8 sparsity (Table 12).
**References**

[1] LLM-Pruner: On the structural pruning of large language models, NeurIPS'23.
[2] Sheared LLaMA: Accelerating language model pre-training via structured pruning, ICLR'24.
[3] Instruction-Following Pruning for Large Language Models, arXiv'25.
[4] SlimGPT: Layer-wise Structured Pruning for Large Language Models, NeurIPS'24.
[5] LLM pruning and distillation in practice: The Minitron approach, arXiv'24.
[6] SPP: Sparsity-preserved parameter-efficient fine-tuning for large language models, ICML'24.
[7] Sparsity-accelerated training for large language models, ACL'24.
[8] Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity, VLDB'23.

---

Rebuttal Comment 1.1: Comment: I thank the authors for the detailed response. After careful assessment, I still think this is a good paper with clear motivations and solid techniques. During the rebuttal, the authors further addressed my concerns on the application of semi-structured pruning. Based on this, I will maintain my rating (4, accept) and recommend the acceptance of this paper.

---

Reply to Comment 1.1.1: Comment: Dear zn1Z: We sincerely appreciate your constructive feedback and continued support. Thank you for recognizing our efforts in addressing the semi-structured pruning concerns. We will carefully incorporate your suggestions in the final manuscript to further strengthen the technical presentation.
Summary: This work proposes a weight pruning method based on Wanda, performing normalization through input and output channels and scaling the normalization factors. Wanda is a simple weight pruning method that uses scores measured by the L1 norm of the weight and the L2 norm of the input, but it suffers from imbalance in weight magnitude and influence from outliers. This work alleviates these issues by normalizing weights according to the L2 norms of the input and output channels and by scaling the normalization terms. The scaling terms are optimized by forward-only methods for faster computation. Experiments are carried out on several zero-shot benchmarks and show that the proposed method achieves better results when compared with other SOTAs, including ADMM-Iter.

Claims And Evidence: The major claim regarding the weight magnitude normalization for input and output channels is supported by the discussion and analysis in Sections 2 and 3, and by the ablation studies in Section 4.4. The use of scaling is clearly described in Sections 2 and 3, and the proposed optimization algorithm is a nice contribution.

However, further analysis with more controlled scaling might be helpful to clearly understand its impact. For instance, we could intentionally introduce scales in the range of 0.1, 0.25, 0.5, 1.0, etc., and measure the impact on perplexity. The combination of the optimization algorithm together with normalization is not thoroughly measured in the ablation studies, if my understanding is correct. For instance, it is possible to optimize a single scale parameter $\theta$ in Equation 4 when showing ablations in Table 5 to measure the impact of optimization. Similarly, outlier regularization in Table 5 uses a fixed scaling of 0.5, but it could be optimized to show the effectiveness of the proposed algorithm.

Methods And Evaluation Criteria: Experiments cover diverse benchmarks for zero-shot settings as well as WikiText-2 for perplexity to compare against prior baselines.
Theoretical Claims: No theoretical proofs for the proposed method, since this work focuses on empirical studies by carefully analyzing the behavior of weight magnitudes for pruning.

Experimental Designs Or Analyses: The experimental design sounds good to me, but I'd suggest additional ablation studies as noted in the "Claims and Evidence" section, i.e., running experiments by optimizing scales when ablating normalization factors.

* Ablations in the rebuttal look promising.

Supplementary Material: I checked the code in Appendix F and the combination of pruning masks in Appendix G.

Relation To Broader Scientific Literature: This work is an extension of prior work in unstructured weight pruning such as Wanda, as noted in this manuscript.

Essential References Not Discussed: This work is missing a discussion of ADMM-Iter and DSnoT, given that it shows experiments combining them with the proposed method. It is not clear whether the proposed method is orthogonal to those methods, and it is not even clear what kind of findings or conclusions could be drawn by showing the results.

* Additional details in the rebuttal.

Other Strengths And Weaknesses:

Strengths
- It is an extension of prior work on unstructured pruning by adding normalizations and scaling. The design is well motivated, and the optimization of the scaling factor is yet another contribution to this field.

Weaknesses
- Further ablations are necessary to justify the claim regarding the scaling, since it is not well supported by the experiments.

Other Comments Or Suggestions: None.

Questions For Authors: It is not clear whether scales are optimized in Table 5 when showing the proposed method with, e.g., input channel normalization.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear xLm6: We greatly appreciate your insightful comments. Below we provide a point-by-point response to your concerns.

### **Scaling Factor Analysis**:

We agree that analyzing scaling factors is critical. In the revised manuscript, we will add:

+ A new comparative table (below) demonstrating the superiority of optimized scales over fixed values on LLaMA2-7B
+ A new discussion in the revised Section 4.4 highlighting how adaptive scaling addresses LLM-specific distribution challenges. Specifically, different models and task settings will be explored to illustrate the effectiveness of BaWA's scaling strategy.

The evaluation results are as follows.

| Scaling Strategy | $\theta_1$ (Input) | $\theta_2$ (Output) | $\theta_3$ (Activation) | PPL | Δ vs. Best Fixed (Fixed at 0.5) |
|------------------------|------------|-------------|------------------|------|------------------|
| **Fixed Scales** | | | | | |
| - $\theta$=0.1 | 0.1 | 0.1 | 0.1 | 8.92 | +24.1% |
| - $\theta$=0.5 | 0.5 | 0.5 | 0.5 | 7.18 | +0% (baseline) |
| - $\theta$=1.0 | 1.0 | 1.0 | 1.0 | 7.53 | +4.9% |
| **BaWA Optimized** | 0.42 | 0.51 | 0.38 | 6.30 | **-12.3%** |

The key findings include:

+ Optimized scales reduce perplexity by 12.3% compared to the best fixed scale ($\theta$=0.5)
+ Fixed scales exhibit significant sensitivity (±24.1% PPL variance)

### **Relationship with ADMM-Iter/DSnoT**:

We apologize for the lack of explanation of the relationship between BaWA and ADMM-Iter/DSnoT. In fact, the LLM pruning procedure can be divided into two stages: pruning mask selection and weight reconstruction. Different from BaWA, which optimizes the pruning metric (Stage 1), these methods (both ADMM and DSnoT) focus on reconstructing post-pruning weights (Stage 2).
To illustrate this orthogonality, we evaluate BaWA combined with weight reconstruction methods (both DSnoT and ADMM), and compare against SparseGPT, ADMM-Iter and DSnoT without the BaWA pruning metric on various LLaMA models with 2:4 sparsity. The results (table below) clearly show that using the BaWA pruning metric with weight reconstruction methods achieves the best pruning performance. Furthermore, our evaluation in Table 17 (Appendix G) also demonstrates the effectiveness of adding weight reconstruction to the BaWA pruning metric. We will add this pipeline diagram to Appendix G.

| PPL | 1-7B | 1-13B | 1-30B | 1-65B | 2-7B | 2-13B | 2-70B |
|------------|-------|-------|-------|-------|-------|-------|-------|
| SparseGPT | 11.00 | 9.11 | 7.16 | 6.28 | 10.17 | 8.32 | 5.40 |
| Admm-Iter | 9.90 | 8.60 | 6.89 | 6.02 | 9.74 | 7.78 | 5.19 |
| DSnoT | 10.89 | 9.05 | 6.76 | 6.15 | 10.46 | 8.09 | 5.11 |
| BaWA | 10.32 | 7.94 | 6.37 | 5.61 | 9.93 | 7.13 | 4.84 |
| BaWA+DSnoT | 10.21 | 7.91 | 6.42 | 5.69 | 9.84 | 7.08 | 4.86 |
| BaWA+Admm | 9.71 | 7.86 | 6.39 | 5.60 | 9.75 | 7.04 | 4.71 |
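For concreteness, the Stage-1 mask selection for the 2:4 pattern used in these tables can be sketched as below; `mask_2_4` is our own illustrative helper, and the Stage-2 weight reconstruction step (ADMM/DSnoT) that would consume the mask is deliberately left out.

```python
import numpy as np

# Stage 1 of the pipeline described in the rebuttal: select a 2:4 pruning
# mask from a saliency score matrix by keeping the top-2 scores in every
# group of 4 along the input dimension.
def mask_2_4(scores):
    out, inp = scores.shape
    groups = scores.reshape(out, inp // 4, 4)
    order = np.argsort(groups, axis=-1)                      # ascending
    mask = np.ones_like(groups, dtype=bool)
    np.put_along_axis(mask, order[..., :2], False, axis=-1)  # drop bottom 2
    return mask.reshape(out, inp)

scores = np.arange(8, dtype=float).reshape(1, 8)
m = mask_2_4(scores)
# each group of 4 keeps exactly its 2 highest-scoring weights
```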
Summary: Existing pruning metrics are limited by their reliance on simple symbolic combinations of weights and activations, failing to account for imbalanced weight magnitudes and the disproportionate impact of activation outliers. To address these shortcomings, this paper introduces BaWA, a pruning metric that balances Weight and Activation distributions for more effective pruning. BaWA incorporates two key innovations:

1. Magnitude Normalization, which mitigates weight imbalances across channels, enabling fairer pruning decisions.
2. Outlier Regularization, which reduces the influence of activation outliers, ensuring more appropriate channel prioritization.

To further improve its effectiveness, BaWA includes an efficient, automated framework for optimizing normalization and regularization hyperparameters. Extensive experiments demonstrate that BaWA outperforms existing pruning metrics. For instance, applying BaWA to induce 2:4 sparsity in Mistral-7B reduces perplexity by 2.49 and increases average downstream task accuracy by 3.08%, surpassing the previous method Wanda.

## update after rebuttal

I revisited the paper and would like to keep my original rating for two reasons:

1. The experimental comparisons are somewhat outdated. Except for Table 3, most of the comparisons are against Wanda (proposed in mid-2023) and SparseGPT, which is even older. In Table 4, the method is only slightly better than Wanda. Even in Table 3, the proposed method only shows a good improvement when combined with the weight reconstruction method ADMM; without it, the improvement seems marginal. So I feel the proposed method may not offer a significant advancement.
2. Techniques like *Magnitude Normalization* and *Outlier Regularization* have already been extensively studied in previous work. This paper doesn't introduce anything particularly new or exciting for me. I think this work is a slight extension of Wanda, with an additional normalization step applied to the weights.
The method is reasonable, but the contribution feels moderate to me; I'd be fine with the paper being either accepted or rejected. Claims And Evidence: The claims in the paper are well supported by clear empirical evidence. However, the concepts of magnitude normalization and outlier regularization are not entirely novel, and the overall contribution may not appear significantly innovative. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem. Theoretical Claims: This is an empirical paper; no theoretical claims are presented. Experimental Designs Or Analyses: Yes, all parts. Supplementary Material: Yes, all parts. Relation To Broader Scientific Literature: The paper builds on prior work in LLM pruning that uses weight and activation magnitudes to guide pruning decisions (e.g., Wanda). While magnitude normalization and outlier regularization have been widely explored in the context of model-adaptive sparsity and robust pruning, BaWA refines these ideas by introducing a balancing mechanism for weight and activation distributions. The contribution is somewhat incremental and limited for the community. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: The motivation of the paper is clear, and the proposed method is simple and easy to follow. Weaknesses:

1. The two proposed techniques, *magnitude normalization* and *outlier regularization*, are somewhat trivial in the LLM pruning context and not significant enough.
2. The novelty of the paper is somewhat limited. The contributions appear incremental, making the overall presentation less engaging.
3. The proposed method involves additional complexity and computation compared to the Wanda baseline, especially with the search strategy.

Other Comments Or Suggestions: Please see the comments above. Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear mDNk: We sincerely appreciate your thoughtful feedback regarding BaWA's novelty and computational overhead. Below, we address your concerns in detail.

### **Novelty of BaWA**:

We respectfully disagree with the novelty concern for three key reasons:

(a) Problem Characterization in LLMs

+ Structural Outliers: LLMs exhibit extremely sparse activation outliers (>50% outliers in <5% channels, Fig 1c), unlike CNNs with uniform noise patterns [1].
+ Cross-Layer Imbalance: Weight magnitudes vary by 100× within layers (Fig 1a), violating CNN pruning assumptions.

(b) Methodological Advancements

+ Dual-Channel Normalization: Joint input/output scaling (Eq. 5) handles asymmetric LLM distributions, unlike standard normalization.
+ Dynamic Outlier Suppression: The learnable threshold $\theta_3$ adapts to layer-wise outlier density, improving upon fixed-threshold methods [2].

(c) Empirical Superiority

+ BaWA reduces perplexity by 2.49 over Wanda (Table 3), which is a substantial improvement.
+ Direct application of robust CNN pruning degrades accuracy by 4.1% (Table 8 in Appendix).

### **Computational Overhead**:

The evaluation results in our paper show that the additional overhead of BaWA is negligible in two respects.

(a) One-Time Search Cost: Searching a 70B model takes only 16 minutes (Table 6), <0.01% of typical training costs (thousands of GPU hours).

(b) Search-Free Mode: Even without search (BaWA w/o search), BaWA outperforms Wanda while maintaining the same pruning efficiency (Table 5).

**References**

[1] Li et al., Pruning Filters for Efficient ConvNets, ICLR 2017.
[2] Wei et al., Outlier Suppression+, EMNLP 2023.
Catching Two Birds with One Stone: Reward Shaping with Dual Random Networks for Balancing Exploration and Exploitation
Accept (poster)
Summary: This work develops a new reward shaping approach "DuRND" specialized for sparse reward environments, which uses two random networks (RNs): one RN guides agents to goal states while the other prevents the agent from getting stuck in distracting or harmful states. The method is tested against other reward shaping methods in a variety of environments, with the learned reward signals and high-level exploration analyzed in additional experiments. Claims And Evidence: From "main contributions": **Claim:** DuRND achieves exploration-efficient and convergence-stable learning in challenging sparse-reward tasks. **Evidence:** Figures 3, 5, and 6 support this claim. **Claim:** DuRND is lightweight and highly scalable (in high-dimensional environments). **Evidence:** Table 2 supports this claim. **Claim:** The effectiveness and efficiency of DuRND are validated across a variety of sparse-reward tasks with high-dimensional states, demonstrating its superior performance compared to several benchmarks. **Evidence:** Figure 3 supports this claim. Other claims stated in the paper: **Claim:** "However, both branches require an environmental transition model, which makes them challenging in adapting to large-scale scenarios with complex dynamics" **(Lacking Evidence):** This claim does not seem accurate and needs to be backed up. In potential-based reward shaping, the next state is indeed required to calculate $\Phi(s')$, but this is always given in the (SARS') mini-batch and does not require a dynamics model. Can you please explain further, or validate or adjust the phrasing of the claim? Methods And Evaluation Criteria: Do the proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense for the problem or application at hand? Yes. The environments are all suitable choices for sparse reward settings. Theoretical Claims: N/A. There do not seem to be any theoretical claims needing substantiation.
Providing additional theoretical insights into the method could strengthen the work, though. Experimental Designs Or Analyses: The experiments are sound, testing on well-established sparse reward environments. However, the chain environment could benefit from a figure and/or a reference to its previous use. I am also somewhat concerned about the performance of the RND baseline. Upon referring to the paper https://arxiv.org/pdf/1810.12894, I see much higher returns for Montezuma and Pitfall. Could you please explain the discrepancy? I also see that e.g. ReLara was tested only on MuJoCo/robotics environments, not any of your chosen benchmarks. Could you explain whether there is anything stopping DuRND from performing well in such environments? Supplementary Material: Upon reviewing the supplementary material, I found section B.1 "random networks error norm" to be quite helpful. Perhaps some of this discussion could be placed in the main text, as it seems to be a clever and crucial idea for proper normalization. Relation To Broader Scientific Literature: This paper offers a new method for navigating exploration-exploitation in sparse settings. Typically, RS methods in sparse environments focus solely on exploration, which can explain the performance improvement. The use of two RNDs and the corresponding construction of two reward signals seems novel and interesting. The use of this method, however, may not be plug-and-play with any RL approach, specifically in dense reward environments, as this was not tested. The idea connects to a similar line of thought in recent RS research, including SORS, ROSA, ReLara and RND. Essential References Not Discussed: Baselines to consider (low memory, work on dense reward also). If unable to provide further experiments, could you please discuss the relationship of your method to these?
- https://arxiv.org/abs/2107.08888
- https://arxiv.org/pdf/2412.01114
- https://arxiv.org/abs/2501.00989

Papers for a broader audience:

- https://arxiv.org/pdf/2408.10215
- https://arxiv.org/pdf/2409.05358

Other Strengths And Weaknesses: Strengths: Although I am not entirely familiar with the relevant literature, this appears to be a novel and creative use of RND that has a significant impact on performance. The illustrative example in Fig 4 clearly provides an intuition for the algorithm's effect. Importantly, the method does not require considerable memory nor additional data. Weaknesses: There is room for further improvement and understanding on the theoretical and algorithmic sides. Algorithmically, further ablation studies (e.g. on $\lambda, \omega, T_{pos}$) and tests in dense reward environments would have helped gain a broader perspective on performance. "Other": (Both strength and weakness) Though it is not a theoretical paper, some further grounding of the choices for reward functions could help: Section 5.3.1 is quite helpful here, but can we further understand the asymptotic reward values? What about the relationship to the extrinsic env reward scale? How about the effect of $T_{pos}$ and its relationship to important MDP parameters like mixing time? On the flip side, I think these questions can open the door for interesting future work. Other Comments Or Suggestions:

- Can you discuss or define "hidden values", since the term is mentioned early in the paper?
- Is the "N" network (Eq. 2) still trained before a reward is observed? Or do we have to wait to observe a reward before training the networks? Can this lead to inefficiencies?
- You could condense the discussion after Eq 3 and move Table 1 to the appendix in exchange for enlarging Fig 3 (split into two bigger figures)
- typo: "libarary" L312
- "Explotation" subsection p6: can you connect this discussion to Equation 4?
- Fig 3 caption: what are the shaded regions?
- Fig 4 caption: which env?
- typo: section 6: "DuRND." should be "DuRND,"
- typo: should be $\gamma \in [0,1)$

Questions For Authors:

- What if, in the sparse reward setting, some rewards are negative? How does this impact the discussion after L170?
- Why is a bigger $T_{pos}$ better for exploitation?
- You mention combining DuRND with SAC; do you have any intuition on how the MaxEnt "exploration/entropy bonus" affects/interacts with your method?
- L312: how did you tune such hyperparameters?
- What's the difference between (1) and (3) in section 5.2?
- Since the shaping method is not PBRS, is this method guaranteed to find the optimal policy? Can you please comment on this caveat in the main text? Perhaps in the limit, if one can show the auxiliary rewards tend to zero, such a guarantee can be made.

Code Of Conduct: Affirmed. Overall Recommendation: 3
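The reviewer's PBRS remark above, that computing $\Phi(s')$ needs only the (s, a, r, s') minibatch and no dynamics model, can be illustrated with a short sketch; `phi` here is a toy stand-in for a learned potential function.

```python
import numpy as np

# Potential-based shaping computed purely from minibatch entries:
# shaped r = r + gamma * Phi(s') - Phi(s); no transition model is queried,
# since s' is already stored in the batch.
def shaped_rewards(batch, phi, gamma=0.99):
    s, r, s_next, done = batch
    return r + gamma * (1.0 - done) * phi(s_next) - phi(s)

phi = lambda state: state.sum(axis=-1)  # toy potential function
s = np.zeros((2, 3)); s_next = np.ones((2, 3))
r = np.array([0.0, 1.0]); done = np.array([0.0, 1.0])
out = shaped_rewards((s, r, s_next, done), phi)
# non-terminal row: 0 + 0.99*3 - 0 = 2.97; terminal row: 1 + 0 - 0 = 1.0
```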
Rebuttal 1: Rebuttal: Dear reviewer, Thanks for the valuable comments. We respond as follows.

---

Regarding the claim on PBRS, thanks for pointing this out; our statement may cause confusion. Many PBRS methods don't require a dynamics model and instead compute potentials directly from collected data. What we intended to highlight is that many information gain-based methods rely on explicit models (e.g., training forward models to predict the next state and comparing with the observed state). We'll clarify this.

---

Regarding comments on experiments, i.e., baselines, continuous control, dense rewards, and ablation studies:

1. **RND baseline discrepancy**: The gap is mainly due to training budget: (a) the best results reported in the RND paper used 1B steps and 1024 parallel envs (from Appendix A.4 and the official code). Due to limited resources, we ran ~1/4 of their full training steps with 32 parallel envs. Our reproduced results align with the RND paper's Figure 6 (left), achieving ~2k return in MZ's Revenge. However, with the same reduced training steps, DuRND achieves ~6k, which still shows the improvement.
2. **ReLara's continuous-control benchmarks, and dense-reward tasks**: We focused on Atari, VizDoom and 3DMaze to highlight the advantage of DuRND for image-based high-dimensional states. We agree that the MuJoCo and robotics tasks in ReLara are important benchmarks, and in response to the reviewer's comment on dense-reward tasks, we added [experiments on 6 new tasks, covering both sparse and dense reward settings (Fig A.2, Table A.2 A.3) [link]](https://anonymous.4open.science/api/repo/ano/file/2.pdf). DuRND can be naturally extended to dense rewards by a thresholding strategy: states where $R^{env}$ exceeds a threshold (0.5 in our implementation) are recorded by the positive RN; the rest by the negative RN. Results show DuRND is robust in continuous-control and dense-reward tasks, validating its generality.
3.
**More ablation studies**: We extended ablations on: (a) [the weights $\lambda$, $\omega$ (Fig A.3) [link]](https://anonymous.4open.science/api/repo/ano/file/3.pdf) and (b) [the threshold $T_{\text{pos}}$ (Fig A.4) [link]](https://anonymous.4open.science/api/repo/ano/file/4.pdf). Results show DuRND remains stable under varied settings. All new experiments will be added to the paper.

---

We'll move appendix B.1 to the main text.

---

Regarding the suggested related works:

1. Yuan et al. (MMRS): count-based exploration that treats the number of visits as a limited resource and uses Jain's fairness index to allocate visits to states, prioritizing under-visited states.
2. Koprulu et al.: a PBRS method constructing potentials from both task-agnostic prior experience and task-specific expert demonstrations, addressing distribution mismatch.
3. Adamczyk et al. (BSRS): a PBRS method using the agent's current estimate of the value function as the potential, with convergence guarantees.
4. Ibrahim et al.: a survey on reward engineering and shaping.
5. Lidayan et al. (BAMDP): unifies intrinsic motivation and PBRS, ensuring convergence to the original optimal policy under the BAMDP framework.

They are relevant to RS and will be properly discussed and cited in the paper.

---

Other comments response:

- OC1. "Hidden values" refer to a state's latent importance that is not reflected in env rewards; e.g., in maze tasks, keys or doors carry high value compared to other states
- OC2. RN modules update after each rollout. Once an episode ends or $R^{env}$ is observed, states in the rollout buffer are divided into $R_P$ or $R_N$. The delay is bounded by the rollout length and doesn't affect efficiency
- OC7. Shaded areas: standard error over 10 seeds
- OC8. Figure 4: the toy task in Section 5.2

We'll fix the remaining comments (typos, captions, layout).
---

Question responses:

- Q1: Negative rewards usually indicate undesirable states, which DuRND naturally assigns to the negative RN
- Q2: A larger $T_{pos}$ classifies more states as positive, potentially increasing $R^{con}$ coverage. But an overly large $T_{pos}$ may mislabel irrelevant states. The effect is non-monotonic and requires balance
- Q3: DuRND already uses the entropy bonus in its PPO backbone (via CleanRL), which encourages action diversity. Shaped rewards guide directional exploration, so the two are not conflicting. Further analysis is a promising direction
- Q4: We mostly follow author-recommended hyperparameters from the original papers or official code, with a small amount of tuning. Results are averaged over 10 seeds
- Q5: DuRND with only $R^{nov}$ retains the dual-RN structure, whereas standard RND uses a single RN. It preserves the three-level novelty and enables broader exploration
- Q6: DuRND is not PBRS and thus does not theoretically guarantee recovery of the original optimal policy, but it empirically converges to high-performing policies. In Fig 5, $R^{nov}$ often diminishes while $R^{con}$ remains active, but since $R^{con}$ aligns with task rewards, DuRND has a high likelihood of converging to the original optimal policy. We'll add this discussion.

Thanks again for the comments; we hope our response has addressed your concerns.
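A minimal sketch of the dual-random-network idea discussed in this thread, under our own simplifications: linear predictors fit by least squares instead of gradient descent, and illustrative error combinations for $R^{nov}$ and $R^{con}$ that are not DuRND's exact formulas (Eq. 2-4 of the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Two frozen random target nets, each with a predictor fit only on its own
# data stream: the positive RN sees states on rewarded trajectories, the
# negative RN sees the rest/all states. Prediction error acts as novelty.
class RandomNet:
    def __init__(self, dim, feat=16):
        self.M = rng.normal(size=(dim, feat))  # frozen random target
        self.pred = np.zeros((dim, feat))      # linear predictor (trainable)

    def error(self, s):
        return float(np.mean((np.tanh(s @ self.M) - s @ self.pred) ** 2))

    def fit(self, states):  # least-squares stand-in for gradient training
        self.pred, *_ = np.linalg.lstsq(states, np.tanh(states @ self.M),
                                        rcond=None)

pos, neg = RandomNet(4), RandomNet(4)
all_states = rng.normal(size=(32, 4))
neg.fit(all_states)        # negative RN trained on everything
pos.fit(all_states[:8])    # positive RN trained on rewarded states only

s = rng.normal(size=(1, 4))
r_nov = pos.error(s) + neg.error(s)  # high when unfamiliar to both RNs
r_con = neg.error(s) - pos.error(s)  # high when s resembles rewarded states
```

The key property this sketch preserves is that each predictor's error shrinks only on its own data stream, so familiarity under the positive RN carries goal-related information that a single-RN setup (plain RND) does not.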
Summary: This paper proposes DuRND, a simple variation on top of RND that uses two random networks to compute two reward bonus terms for sparse reward tasks: 1) a modified novelty bonus and 2) an exploitative reward shaping bonus. The two random networks are trained on different data: the positive network is trained only on states that lead up to the sparse reward, while the negative network is trained on all states. The sum of the two bonus terms adapts better to sparse reward tasks and handles the exploration-exploitation tradeoff more automatically. Experimental results show that DuRND is consistently better than RND and other related baselines.

## Update after Rebuttal

I maintain my positive score on this paper.

Claims And Evidence:

- The claims made are mostly supported by empirical results and ablations.
- A minor but important clarification is that the novelty bonus computed from the two RNs depends on the data being more or less on-policy. If we imagine using an off-policy algorithm like DQN, where we use a replay buffer but also run a family of ε-greedy policies, each with a different epsilon (noise level), and don't decay the epsilons (as was the case for Agent57), then in theory the novelty bonus won't go away completely. This is because there will always be some high-epsilon data in the replay buffer being sampled, which may never see the "goal" and thus never be labeled as positive states. Then the positive RN will never reduce its uncertainty on those states, and you will get an irreducible novelty signal. The original RND, as well as other pseudo-count or ensemble based methods, doesn't suffer this issue because it sees all the data (like the negative RN). Since the paper uses an on-policy algorithm, this doesn't come up, but it is important to clarify.

Methods And Evaluation Criteria: Yes, the domains and ablations make sense.
Theoretical Claims: n/a Experimental Designs Or Analyses: The overall description of the experiments makes sense. Supplementary Material: No Relation To Broader Scientific Literature: The algorithm builds upon RND and the reward shaping literature in a simple but nice way. Essential References Not Discussed: It seems like the most essential ones are covered. Other Strengths And Weaknesses: As mentioned in the paper, the approach is designed for sparse reward tasks and not for dense reward tasks, which limits its applicability to many continuous control and robotics tasks. Other Comments Or Suggestions: In Figure 6, the ablation with each separate novelty bonus: I would also like to see (either in this plot, or a new plot in the appendix) two more baselines, RND and ReLara, as it would be easier to compare all on one plot rather than scrolling back and forth. Questions For Authors: In Pitfall, RND flatlines, but DuRND with just novelty actually takes off quite well. In Montezuma's Revenge, DuRND with just novelty is also doing better than RND. Outside of these two domains, DuRND with just novelty and RND seem similar. I wonder if the authors have any detailed insight into why DuRND with just novelty is much better than RND in these two domains? The current explanation of the 3 levels of novelty is just a hypothesis, and it would be great to see where the difference actually is. In Figure 4, from the state visitation plots, RND and DuRND with just novelty look basically identical. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer, Thank you for your valuable feedback. We respond to your comments as follows:

---

Regarding the reviewer's insightful comment on the interaction between novelty estimation and data distribution in off-policy settings: as noted, since DuRND is built on an on-policy algorithm (PPO), the dual RN modules receive data from a consistent policy distribution, and the labeling of positive/negative states remains stable and accurate throughout training. This prevents the kind of irreducible novelty bias described in off-policy settings. We appreciate this important observation and will clarify the current on-policy assumption and its implications for RN-based novelty estimation in the revised paper.

---

Regarding the concern about applicability to continuous-control and dense-reward tasks:

1. First, DuRND is fully compatible with continuous-control environments, as it is a relatively independent module that can be seamlessly integrated into continuous-action backbone algorithms.
2. Second, although DuRND is designed for the sparse-reward challenge, it can be naturally extended to dense-reward settings. We adopt a simple yet effective threshold-based strategy: if a state's environmental reward exceeds a predefined threshold (0.5 in our implementation), it is recorded in the positive RN; otherwise, in the negative RN. We evaluated this extension on [six continuous-control tasks across MuJoCo and Robotics domains, covering both sparse and dense reward scenarios (Fig A.2, Table A.2 A.3) [link]](https://anonymous.4open.science/api/repo/ano/file/2.pdf). We compared DuRND with baselines suited to dense-reward tasks. The results show that DuRND performs robustly in continuous-control and dense-reward tasks, confirming its generality beyond the discrete-control and sparse-reward settings.

The new experiments will be included in the revised paper.
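The threshold-based routing rule described above is simple enough to state in a few lines; buffer and function names here are our own, not the paper's.

```python
# Dense-reward extension sketched in the rebuttal: a state whose environment
# reward exceeds a threshold feeds the positive RN's data stream, the rest
# feed the negative RN's.
def route_state(state, r_env, pos_buffer, neg_buffer, threshold=0.5):
    (pos_buffer if r_env > threshold else neg_buffer).append(state)

pos_buf, neg_buf = [], []
for s, r in [("s0", 0.1), ("s1", 0.9), ("s2", 0.5)]:
    route_state(s, r, pos_buf, neg_buf)
# -> pos_buf == ["s1"], neg_buf == ["s0", "s2"]
```

Note the strict inequality: a reward exactly at the threshold routes to the negative buffer, which matches the sparse-reward case where most states carry zero reward.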
---

Regarding the suggestion on Figure 6, we have added [RND and ReLara to the ablation plot (Fig A.8) for easier comparison [link]](https://anonymous.4open.science/api/repo/ano/file/8.pdf), and it will be included in the revised paper.

---

Regarding the question of why DuRND with only the novelty reward (DuRND-only-nov) outperforms standard RND in challenging tasks like Montezuma's Revenge and Pitfall, we offer two potential explanations:

1. **Longer training horizon**: Both Montezuma's Revenge and Pitfall are more challenging; thus, as shown in our plots (x-axes), they were trained for 10× the duration of the other tasks. The increased training time, combined with DuRND's three-level novelty estimation and its capacity for sustained exploration, allows the benefits of DuRND to become more pronounced relative to RND.
2. **Higher policy entropy and exploration diversity**: To further investigate this performance gap and provide direct empirical evidence, we analyzed the [policy entropy over the training process (Fig A.7) [link]](https://anonymous.4open.science/api/repo/ano/file/7.pdf). Our results show that DuRND-only-nov consistently maintains higher entropy than RND, with this effect being especially pronounced in Montezuma's Revenge and Pitfall, where the gap is much larger than in other tasks. This shows that DuRND encourages more diverse action selection and thus broader exploration, providing direct empirical support for the observed gap.

---

Once again, we appreciate your valuable comments and hope our responses have addressed your concerns.
Summary: The authors propose Dual Random Networks Distillation (DuRND), a reward shaping framework for sparse-reward reinforcement learning. DuRND consists of two random networks as its primary components, which simultaneously generate complementary rewards: one encouraging novelty-driven exploration, and the other measuring contributions toward task completion. Empirical results demonstrate that DuRND achieves superior performance compared to baseline algorithms in environments with sparse rewards. Claims And Evidence: Terms such as *convergence* or *optimal* appear frequently throughout the manuscript. However, there are insufficient theoretical or empirical results supporting these claims. Methods And Evaluation Criteria: Yes. Theoretical Claims: There are no theoretical claims in the manuscript. Experimental Designs Or Analyses: Clear. Supplementary Material: I reviewed sections A through C. Relation To Broader Scientific Literature: The proposed method showed notable performance improvements in sparse reward settings. Improved performance in sparse-reward environments enables more efficient learning and faster convergence toward desirable behaviors. This can be particularly valuable in real-world scenarios where immediate or dense external rewards are difficult to obtain. Essential References Not Discussed: No. Other Strengths And Weaknesses: **strengths**

- The proposed method is simple and easy to understand.
- The authors provide various experimental results, along with clear visual representations (e.g., Figure 4), making it straightforward to understand the advantages of their approach.
- The empirical results demonstrate notable performance improvements compared to the baseline algorithms.

**weaknesses**

- The theoretical or empirical results provided to support the authors' claims (e.g., optimal, convergence, efficient) are somewhat insufficient.
- The proposed approach does not significantly differ from existing intrinsic reward-based exploration methods, suggesting limited novelty or contribution.

Other Comments Or Suggestions:

- It would be more accurate to note in the right column of line 44 that reward feedback is provided when the task's goal or sub-goal is achieved, rather than at the end of each episode.
- The authors state that DuRND is designed for efficient exploration and stable convergence. However, it is unclear whether the performance improvements result from more efficient exploration or simply from maintaining exploration over a longer period. It appears more likely to be the latter. Furthermore, based on the experimental results, the agent's performance does not seem to converge.
- In the left column, line 256, the term *improving convergence* sounds somewhat awkward. It would be better to replace it with a term like *improving convergence rate*.
- The authors state that the use of novelty and contribution rewards effectively broadens the exploration horizon during the early stages of training and reinforces meaningful hidden values in later stages. An approach called Never Give Up [2], which significantly expands the exploration horizon, could serve as a good baseline for comparison with the proposed method.
- The authors claim that the proposed method operates with minimal computational overhead. However, since it employs two random networks, it naturally requires more computation than the original method, which uses only one random network. Therefore, given that RND is included as a baseline in Table 2, the term *minimal* does not seem appropriate.

Questions For Authors:

- In the left column, lines 101 and 104, the authors define the reward function as a function of the state $s$. Typically, in Markov decision processes, the reward function $R$ is defined as a function of both state $s$ and action $a$.
Is there a particular reason for defining it in this manner? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear reviewer, Thanks for the comments. Our responses include new experiments and detailed elaboration below.

Regarding the claims of optimal, convergence and efficient, we highlight that they are grounded in comprehensive empirical evidence:

1. **Optimal** refers to final evaluation performance. Our experiments across 12 tasks (Fig 1, Table 1 in the paper) and [6 additional continuous-control tasks with sparse and dense rewards (Fig A.2, Table A.2 A.3) [link]](https://anonymous.4open.science/api/repo/ano/file/2.pdf) cover 24 tasks over 5 domains. DuRND achieves the highest returns.
2. **Convergence** is evidenced by training curves (Fig 1, A.2, A.3), where DuRND stabilizes in most tasks; we clarify that we refer to empirical convergence.
3. **Efficiency**: (a) sample efficiency: DuRND reaches higher returns with fewer steps (Fig 1, A.2); (b) computational efficiency: Table 2 shows DuRND has lower overhead than other RS methods, except its backbone RND/PPO. We agree "minimal" may be misleading and will revise it to "lower overhead".

---

Regarding the novelty and contribution over existing methods, we highlight that DuRND introduces a novel $R^{con}$ to complement pure novelty-based exploration. Importantly, $R^{con}$ is computed by lightweight RN modules, and the dual reward shaping achieves an effective exploration-exploitation balance. To our knowledge, this synergy between the two reward types has not been explored in prior work.

---

Regarding the reason for DuRND's performance gain, it stems from **task-aligned exploitation via $R^{con}$ and more effective exploration, rather than merely extending the exploration phase**:

1.
**More effective (not naïvely longer) exploration.** First, from the entropy perspective, we show [additional results on DuRND's and RND's entropy (Fig A.5) [link]](https://anonymous.4open.science/api/repo/ano/file/5.pdf), which reveal: (a) Entropy in both methods drops in a similar period (~30% of training) at similar rates, indicating DuRND doesn't extend the exploration phase. (b) In the early stage, DuRND maintains higher entropy, especially in complex tasks (MZ’s Revenge), indicating higher action diversity and broader exploration. This arises from the dual-RN design: some states remain novel in one RN, enabling continued exploration when necessary. Second, from the state visitation perspective in Fig 4, exploration ends even earlier for DuRND, skewing to the right side in 25-50k steps, while RND continuously visits the left side until 50-75k steps. Last, $R^{nov}$ decay periods vary across tasks in Fig 5 (MZ’s Revenge vs Freeway), indicating an adaptive rather than a naïve prolonging of exploration. 2. **$R^{con}$ is crucial** as it bridges exploration and exploitation. As novelty doesn't imply usefulness, $R^{con}$ considers a state's goal-reaching potential, prioritizing states that are both exploitable and novel over novel-only states. This is shown in [new results in the toy task (Fig A.6) [link]](https://anonymous.4open.science/api/repo/ano/file/6.pdf). We tracked $R^{nov}$ and $R^{con}$ every 20k steps. In later stages, DuRND and RND show high novelty for the left and right ends due to fewer visits, but only the right end is the goal, so RND's $R^{nov}$, which drives exploration to the left, is unreasonable. In contrast, DuRND’s $R^{con}$ assigns higher rewards to the right side, hence $R^{nov}+R^{con}$ is more reasonable. This shows **novelty alone is insufficient; $R^{con}$ is essential**. Also, Fig 5 further shows $R^{nov}$ drops early (~first 20% of training), and $R^{con}$ dominates the later portions. 
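To make the dual-reward mechanics discussed in this thread concrete, below is a minimal numerical sketch of how two random-network (RN) modules could produce a novelty and a contribution signal. All names and formulas here are illustrative assumptions (e.g., treating $R^{con}$ as a prediction-error ratio and using a fixed burn-in normalizer), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rn(in_dim, out_dim):
    # A fixed random target network: state -> random embedding (never trained).
    W = rng.normal(size=(in_dim, out_dim))
    return lambda s: np.tanh(s @ W)

class DualRND:
    """Toy dual-RND reward module (illustrative sketch, not the paper's code).

    e_P / e_N are predictor errors against the positive / negative target RNs;
    states routed to each RN would train the matching predictor (omitted here).
    """
    def __init__(self, state_dim, emb_dim=16):
        self.target_p = make_rn(state_dim, emb_dim)
        self.target_n = make_rn(state_dim, emb_dim)
        self.W_p = np.zeros((state_dim, emb_dim))  # linear predictor (positive)
        self.W_n = np.zeros((state_dim, emb_dim))  # linear predictor (negative)
        self.norm = 1.0  # burn-in estimate of the typical error (assumption)

    def rewards(self, s):
        e_p = np.mean((s @ self.W_p - self.target_p(s)) ** 2)
        e_n = np.mean((s @ self.W_n - self.target_n(s)) ** 2)
        r_nov = min((e_p + e_n) / (2 * self.norm), 1.0)  # novelty, kept in [0, 1]
        r_con = e_n / (e_p + e_n + 1e-8)                 # ratio form, in [0, 1]
        return r_nov, r_con
```

The shaped training signal would then combine these with the environmental reward, e.g. $R^{env} + R^{nov} + R^{con}$ under the coefficient scheme discussed in this thread.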
--- Regarding Never Give Up (NGU), we [add it as a baseline (also DEIR suggested by reviewer XS8c) (Fig A.1, Table A.1) [link]](https://anonymous.4open.science/api/repo/ano/file/1.pdf). DuRND outperforms NGU for two main reasons: 1. Though NGU considers short- and long-term novelty, it considers ONLY novelty, while $R^{con}$ in DuRND assesses goal-reaching value, prioritizing novel **and useful** states, rather than blindly exploring. 2. DuRND's exploration is also effective. Early training mainly updates the negative RN; later the positive RN kicks in as positive states are collected, and the pre-updated negative RN and the newly updated positive RN are jointly considered, encouraging further exploration of necessary states. --- Regarding R(s) vs R(s,a), consider two common scenarios: 1. Image observations and discrete actions (Atari, VizDoom, 3DMaze in our paper): actions are integer IDs, and it's uncommon to input IDs together with image states into networks, as they provide limited information; thus most works use R(s), e.g., DQN, RND. 2. Continuous control (MuJoCo, robotics): both states and actions are vectors, so it's natural to concatenate s-a as a joint input to networks, well-suited for R(s,a). Our new experiments in Fig A.2 indeed used R(s,a); ReLara is another example of this practice. --- For the terminology "rewards for both final and sub-goals" and "convergence rate", we have clarified these in the paper. Thanks again and hope our response has addressed your concerns.
Summary: The paper proposes Dual Random Networks Distillation (DuRND), a novel reward shaping framework designed for efficient exploration and stable (extrinsic reward) convergence in sparse-reward reinforcement learning tasks. DuRND utilizes two lightweight random network modules, namely positive and negative Random Networks (RN), to simultaneously compute a novelty reward for exploration and a contribution reward inclined towards exploitation. The novelty reward encourages exploration of less-visited states, while the contribution reward assesses states based on their likelihood of yielding higher environmental rewards. They provide both performance and qualitative evaluations across 12 tasks, including high-dimensional tasks with challenging sparse rewards (e.g., Atari, VizDoom, and MiniWorld). DuRND demonstrates superior performance and efficiency relative to several existing benchmarks (compared to 7 other algorithms, 3 with exploration bonuses and 2 with hidden-value reward shaping). Claims And Evidence: The paper's claims regarding improved exploration efficiency, stable convergence, and minimal computational overhead seem to be well supported by experimental evidence across various tasks. However, I consider the lack of analysis concerning the sensitivity of the reward coefficients an important caveat (coefficients for all reward types: intrinsic, extrinsic, R^con, and R^nov, in both their setup and in baselines like RND). There is a small possibility that Rcon may merely amplify extrinsic rewards, a behavior that could be simply replicated by scaling rewards. Additionally, the comparison to existing intrinsic reward (IR) literature, such as "Never Give Up" (Badia et al., 2020), could be strengthened to clarify how DuRND uniquely addresses the balance between novelty exploration and extrinsic rewards. Methods And Evaluation Criteria: The proposed evaluation criteria and methods are appropriate for the problem addressed, covering 12 diverse sparse-reward tasks effectively. 
The evaluation could benefit from including even denser reward environments where existing intrinsic reward mechanisms are known to struggle, providing a stronger validation of DuRND's robustness. Theoretical Claims: No theoretical proofs seem to be presented. Experimental Designs Or Analyses: The experimental designs and analyses are overall sound. However, it would be very important to have **sensitivity analyses concerning reward coefficients**. This omission is significant because the contribution reward (Rcon) might inadvertently act as a simple scaling factor for extrinsic rewards. This effect might be easier to analyse in the toy example from Section 5.2. Additionally, the authors could discuss explicitly the difference in value ranges between Rnov (unbounded) and Rcon (bounded between 0 and 1) and how these can be tackled, if necessary. It would be good to include comparisons with newer IR mechanisms like DEIR (Wan et al., 2023) or "Never Give Up", strengthening the significance of the evaluation. Supplementary Material: I reviewed the appendix. Relation To Broader Scientific Literature: The paper seems to relate its contributions adequately to the broader literature on reward shaping and intrinsic rewards. However, I consider quite problematic a terminology issue I identified starting from the introduction section: the authors conflate reward shaping (RS) and intrinsic rewards, which are most often treated differently in the literature and largely represent different concepts. They refer only to reward shaping, while cited papers from the introduction are clearly part of the intrinsic reward RL literature (e.g., the exploration-bonus ones). Additionally, terms like "hidden state value approaches" are unconventional. Clarifying or explicitly stating the reason for introducing new terminologies would significantly enhance alignment with the RL literature. 
Essential References Not Discussed: The paper insufficiently positions itself relative to the intrinsic reward literature addressing the balancing problem between novelty and extrinsic rewards. Specifically, while it references "Never Give Up" (Badia et al., 2020), it fails to discuss how its approach differs in tackling this previously identified challenge, making its positioning unclear; NGU is just one of the articles in the literature addressing this problem. Other Strengths And Weaknesses: The paper is clear and well-structured, demonstrating originality in combining dual random networks to balance exploration and exploitation. The paper is interesting, identifying the need for a separate, delayed reward for the novelty of states that have higher chances of leading to extrinsic reward. However, its significance could be considerably strengthened through explicit reward coefficient analyses. Despite these minor weaknesses, the work is detailed, thorough, and generally well-executed. Other Comments Or Suggestions: On line 017, the reference to Sorg et al., 2010a does not appear to be related to sparse reward environments. Questions For Authors: Not at the moment. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer, Thanks for the comments and below we provide detailed responses. Regarding the coefficient sensitivity and the nature of the contribution reward: 1. **Coefficient sensitivity**: we conducted [additional experiments to evaluate the reward coefficients in Atari games (Fig A.3) [link]](https://anonymous.4open.science/api/repo/ano/file/3.pdf), varying the weights of the two rewards. Results show DuRND is robust to the coefficients, and its performance remains stable under different settings. These experiments will be included in the revised version. 2. **Normalization and balance with extrinsic rewards**: First, between extrinsic and intrinsic rewards, we follow RND and ReLara, which state that setting a shaped-to-environmental reward ratio of 2:1 leads to robust and stable learning; this is also a common practice in RS research. Second, for $R^{nov}$ and $R^{con}$, to control their magnitudes and ensure scale consistency, we applied a normalization mechanism described in **Appendix B.1** (will move to main text, also per suggestions from Reviewer GFgX). Specifically, based on the fact that the RN prediction errors are minimized and thus gradually decrease in training, we record the average error during a burn-in phase as an estimate of the upper bound, and normalize both $e_P$ and $e_N$ by twice this value. This empirically bounds $R^{nov}$ within $[0, 1]$. As $R^{con}$ is a ratio, it is also bounded in $[0, 1]$. This design guarantees stable and comparable reward magnitudes across different tasks. 3. **$R^{con}$ is not a simple scaling of env rewards**; rather, it densifies the sparse rewards into task-relevant dense rewards by assessing the goal-reaching likelihood. As suggested, we analyzed this effect via [new results in the toy task (Fig A.6) [link]](https://anonymous.4open.science/api/repo/ano/file/6.pdf), tracking $R^{nov}$ and $R^{con}$ every 20k steps. 
We observe that both the left and right ends got high $R^{nov}$ due to less frequent visits, but only the right side received $R^{con}$, which increased as the agent moved nearer to the goal, effectively densifying the sparse signal into task-relevant values. This also evidences that **novelty alone is insufficient, and $R^{con}$ is essential for goal-directed exploitation**. --- Regarding comparison with NGU (Badia et al., 2020) and DEIR (Wan et al., 2023), we added [new experiments including both as baselines [link]](https://anonymous.4open.science/api/repo/ano/file/1.pdf). NGU uses short-term and long-term novelty to guide intra-episode and entire-training exploration. DEIR considers novelty from environmental transitions and stochasticity, as well as the agent's own behavior. Both are representative intrinsic motivation methods. In the new experiments, DuRND outperforms both NGU and DEIR, and the key reason is that DuRND explicitly uses the contribution reward to evaluate each state's latent value, allowing it to prioritize states that are more likely to lead to task success and avoid excessive attention to novel but less meaningful ones. This effect is also evidenced in the aforementioned results in the toy task (Fig A.6), where both left- and right-end states receive high $R^{nov}$ but only the right side gets $R^{con}$, indicating $R^{nov} + R^{con}$ is more reasonable and effective than $R^{nov}$ alone. The experiments will be added to the paper, and both the NGU and DEIR papers will be cited properly. --- Regarding dense-reward tasks, though DuRND targets sparse rewards, it can be naturally extended to dense-reward settings. We adopt a simple yet effective threshold-based strategy: states with $R^{env}$ exceeding a predefined threshold (0.5 in implementation) are recorded by the positive RN; otherwise, the negative RN. 
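As a toy illustration of the threshold rule just described (hypothetical names; in the actual method the bucketed states would drive the corresponding predictor-network updates, omitted here):

```python
THRESHOLD = 0.5  # value stated in the reply; assumes env rewards scaled to [0, 1]

def route_states(trajectory):
    """Assign each (state, env_reward) pair to the RN whose predictor it updates."""
    buckets = {"positive": [], "negative": []}
    for state, r_env in trajectory:
        key = "positive" if r_env > THRESHOLD else "negative"
        buckets[key].append(state)
    return buckets
```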
We tested this extension on [6 continuous-control tasks in MuJoCo and Robotics (sparse-reward setting is also included) Fig A.2, Tables A.2, A.3 [link]](https://anonymous.4open.science/api/repo/ano/file/2.pdf), comparing with baselines suited for dense rewards. Results show DuRND remains robust, confirming its generality. --- Regarding the terminology on RS and intrinsic rewards, we acknowledge that different classification schemes exist; in our paper, we adopt a broad definition of RS, which includes any approach that integrates auxiliary rewards into the environmental reward, including intrinsic rewards such as exploration bonuses. This follows several recent works that also categorize intrinsic exploration as part of the general RS research (e.g., Gupta et al., Unpacking Reward Shaping, NeurIPS 2022; Devidze et al., ExploRS, NeurIPS 2022, etc.). We understand that some works treat intrinsic rewards as a distinct line of research and will clarify our definition to avoid confusion. We've also revised the term "hidden state value approaches" to better align with conventional literature. --- Regarding the reference Sorg et al., 2010, we've revised it to ensure accurate citation. Once again, thanks for your feedback and we hope our responses have addressed your concerns. --- Rebuttal Comment 1.1: Comment: Thank you for all the extra experiments and for taking the time to address my notes. --- Reply to Comment 1.1.1: Comment: Dear reviewer, Thanks a lot for your feedback and for raising the score from 3 to 4. We are delighted that our response addressed your concerns and sincerely appreciate your support.
Perception in Reflection
Accept (poster)
Summary: This paper introduces Reflective Perception (RePer), a system for improving VLMs through iterative self-reflection. It adopts a policy–critic framework, where a policy model generates outputs and a critic model provides feedback to refine responses over multiple turns. The paper also proposes Reflective Perceptual Learning (RPL), a training strategy leveraging automatically generated reflection data and unlikelihood loss to improve factual accuracy and reduce hallucinations. Experimental results show RePer's quantifiable improvements in image understanding, captioning precision, and hallucination reduction. Claims And Evidence: Most of the claims make sense to me; however, there are two worth discussing. 1. "RePer achieves strong alignment between model attention patterns and human visual focus". From the discussion in Section 3.1, I can see the model shifts its attention after learning, but it is unclear to me how the model aligns with human perception. 2. Advantages in complex reasoning tasks. The paper emphasizes RePer's advantages in complex reasoning tasks, but the experimental dataset is not very convincing. The covered datasets include image understanding, hallucination detection, and image captioning; none of these focuses on complex reasoning. Methods And Evaluation Criteria: The overall method makes sense to me. I agree that the proposed method is a commendable way to address the targeted problem. The only concern about the method is the novelty (see Strengths And Weaknesses section). The evaluation part is clear and reasonable, but there is potentially an overclaim concern, as mentioned in the Claims And Evidence section. 
Theoretical Claims: No serious flaws found Experimental Designs Or Analyses: See Claims And Evidence Supplementary Material: Appendix reviewed Relation To Broader Scientific Literature: See Other Strengths And Weaknesses Essential References Not Discussed: Related works are clear Other Strengths And Weaknesses: The proposed method is both reasonable and meaningful, with experimental results across multiple datasets demonstrating its effectiveness. However, the novelty of this work is questionable, as the core concept of Reflective Perceptual Learning (RPL) does not introduce fundamentally new learning principles or new findings. Specifically, the proposed learning framework closely resembles RLHF and self-reflection mechanisms, where a critic model provides iterative feedback to refine predictions. The underlying learning paradigm aligns with process supervision, and the process signal is derived from an LLM-as-a-Judge (plus rule-based scoring) approach. Additionally, the data construction strategy is similar to self-learning or self-distillation techniques. As a result, the overall framework does not present a fundamentally novel methodological contribution beyond its application to vision-language models. That being said, expanding the success of language model training to VLMs represents a meaningful step toward improving real-world applications. Given its potential impact, I am inclined to assign a positive score at this stage. Other Comments Or Suggestions: None Questions For Authors: (Refer to the Claims And Evidence Section for more context on the following two questions.) 1. While I appreciate and generally agree with the findings presented in Section 3.1, the evidence provided does not sufficiently demonstrate alignment with human perception. I would like to know if the authors have additional evidence to support this claim or if they would consider narrowing the scope of this claim for greater accuracy. 
To support this discussion, I am curious about the definition of "ground-truth human attention" in line 243. Is it from human annotation? 2. I believe including datasets focusing on complex reasoning can greatly improve the soundness of the paper. Depending on the reasoning type of interest, the corresponding dataset may be tested, such as common sense reasoning (ScienceQA), complex multimodal reasoning (MultiModalQA), etc. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback and for recognizing the value of our method and experiments. We carefully address the concerns and clarify potential misunderstandings as follows: ## Q1: Alignment with Human Visual Focus We address this concern by clearly defining “ground-truth human attention” and conducting human evaluations: **“Ground-truth human attention”** refers to the qualitative visual focus humans naturally adopt when answering questions—typically prioritizing the main subject over background elements. An attention map that better reflects human perception tends to activate more semantically relevant image tokens and highlight regions aligned with human focus patterns [1]. To further support this claim, we conducted a **human evaluation** to compare the image attention of RePer and LLaVA-1.5. - **Setup**: Six annotators (PhD/Master’s students in CV/NLP) were shown anonymized and shuffled attention maps from both models on 100 randomly sampled test images. They were asked to choose: “Which attention map better reflects your own visual focus if you were to answer this question?” The interface is shown in Re-Fig. 2 at https://reper-vl.github.io/ICML-Rebuttal/. - **Metric**: We report the **win rate**—the percentage of cases where RePer’s map was preferred over LLaVA-1.5’s. - **Result**: Image attention maps from RePer were preferred in **70.27%** of cases over LLaVA-1.5, indicating **stronger alignment with human visual focus**. We will revise the paper to clarify the definition and more accurately scope this claim based on the supporting evidence. [1] Huang et al. OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation, CVPR 2024. 
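For precision, the win-rate metric used in this evaluation reduces to a one-liner (illustrative sketch, hypothetical function name):

```python
def win_rate(preferences, model="RePer"):
    """Fraction of pairwise comparisons in which annotators preferred `model`."""
    return sum(p == model for p in preferences) / len(preferences)
```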
## Q2: Advantages in Complex Reasoning We appreciate the reviewer’s suggestion and would like to clarify **a potential misunderstanding: our work focuses on perception-oriented tasks, while complex reasoning is positioned as a future direction**. As noted at the end of the Introduction, we see strong perception as a foundation for advanced reasoning capabilities. To further explore this potential, we evaluated 13B models on **three reasoning benchmarks**—ScienceQA, MMMU, and AI2D—using VLMEvalKit. As shown below, RePer consistently achieves the highest performance, demonstrating stronger reasoning capabilities and generalization.

| Method | ScienceQA | MMMU | AI2D |
|-|-|-|-|
| LLaVA-SFT | 65.76 | 34.67 | 47.67 |
| LLaVA-RLHF | 63.85 | 34.89 | 44.62 |
| VOLCANO | 67.29 | 34.33 | 55.89 |
| LLaVA-1.5 | 68.43 | 35.40 | 58.84 |
| RePer | **69.10** | **36.89** | **59.32** |

## Q3: Novelty of RePer **1. Reflection in LVLMs vs. LLMs (Importance & Uniqueness)**: While reflection has been explored in LLMs, it holds distinct importance in the context of LVLMs. Unlike language models, where input tokens are discrete and semantically stable, visual perception involves high uncertainty. LVLMs are more prone to hallucinating nonexistent objects, misinterpreting visual cues, or overlooking salient regions—challenges that extend beyond typical language ambiguity. Our reflection-based method directly addresses these issues by enabling iterative refinement. Beyond improving output quality, it also enhances human-aligned visual attention and reduces hallucination, underscoring its unique value for multimodal perception. **2. Dual Methodological Contributions**: - First, we propose reflective perceptual learning for LVLMs, where image-grounded, step-wise corrections provide fine-grained rewards for aligning vision and language. This is fundamentally distinct from language-only feedback and proves highly effective. 
- Second, we introduce reward-weighted unlikelihood training to reinforce high-quality responses while explicitly penalizing suboptimal ones. This mitigates the response collapse observed in prior works (e.g., RISE, SCoRe), where early-stage answers dominate learning regardless of quality, and enables stronger first-turn performance. **3. Data Construction Innovations**: While not extensively emphasized in the main paper, our data construction pipeline introduces key innovations by combining VLM-based and rule-based reward signals to construct high-quality reflective conversations. This dual-reward structure improves the precision and interpretability of supervision, making a clear improvement over prior self-reflection datasets (e.g., RISE, SCoRe). Notably, it also resonates with both RM-based and RM-free RL paradigms. Finally, regarding the notion of novelty itself, we borrow a perspective from Novelty in Science [1]: > *If you hear a good idea, there is a moment of surprise and then, the better it is, the more obvious it may seem. If it is easy to explain and obvious in hindsight, this in no way diminishes the creativity (and novelty) of the idea.* [1] Black. Novelty in Science. Medium. https://medium.com/@black_51980/novelty-in-science-8f1fd1a0a143 --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. The author's responses aligned with my initial understanding of the work; thus, I maintained my original overall recommendation.
Summary: This paper proposes RePer, which teaches the VLM to iteratively revise and provide gradually better responses given a strong pre-built critic model. The algorithm works by first collecting responses of varying quality, using these to construct an iteratively revised dataset, and then employing a “Reflective Unlikelihood Training” method that effectively teaches the model to follow this data. Results show some improvement across benchmarks. Claims And Evidence: 1. The authors claim “RePer’s quantifiable improvements in image understanding, captioning precision, and hallucination reduction” compared to vanilla Llava 1.5. However, given that RePer requires an additional, pre-trained critic model, it is unclear whether the comparison with standalone Llava is fair. I think the paper would benefit from a Best-of-N baseline, which compares RePer’s N-step revision with sampling the solution from Llava-1.5 N times and using the critic to pick the best solution. 2. The authors also claim, “The model achieves strong alignment between model attention patterns and human visual focus,” but the only evidence provided is a single example in Figure 4, which is not convincing. Although Figure 6a shows that average image token activations increase more with RePer, this does not necessarily imply a stronger alignment with human visual focus. Methods And Evaluation Criteria: Yes, the method appears to be well-motivated, and the benchmarks are solid. However, one issue is that this approach assumes the existence of a robust critic model, which might not always be available. Additionally see Claims And Evidence 1 Theoretical Claims: N/A Experimental Designs Or Analyses: See Claims And Evidence* Supplementary Material: Yes, the Appendix, it looks good. Relation To Broader Scientific Literature: It extends prior success of improving model's performance through revision, as shown in SCoRe and RISE, from math reasoning, text-based setting to vision-language models. 
Essential References Not Discussed: No. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: The paper could benefit from a deeper discussion of prior works, such as RISE. Questions For Authors: 1. How does RePer compare to Best-of-N against the same critic model? 2. The model receives textual feedback from the critic model, while in the training data the next-round response isn't really correlated with the textual feedback; is this correct? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback, particularly for recognizing our work as a “well-motivated method” with solid experiments. Below, we carefully address each of your concerns and clarify potential misunderstandings. ## Q1: LLaVA-1.5 BoN vs RePer We appreciate the suggestion and clarify **a possible misunderstanding: the critic model is not required but optional** for RePer during inference. As discussed in Sec. 4.6 (Lines 370–381), RePer already outperforms LLaVA-1.5 in the first turn without any external feedback, demonstrating a **fair comparison** that reflects the effectiveness of our reflective perception training. To further address the reviewer’s concern, we conducted another fair comparison on DetailCaps-4870 by implementing a **Best-of-N** (N=1,3,5,6) baseline for LLaVA-1.5, where we sample N responses and use the same critic (LLaVA-Critic-7B) to select the best one for computing the final score. As shown in the table, even with a larger sampling budget, **LLaVA-1.5-13B Best-of-6 still falls short of RePer-13B’s performance using just a single-turn response**. This underscores RePer’s efficiency and effectiveness in answer generation: rather than relying on brute-force sampling, RePer produces strong initial responses through its reflective perceptual learning mechanism, and further enhances them via feedback-driven refinement.

| Model | #BoN | #Reflection | Capture | Precision | Recall |
|---------------|:------:|:-------------------:|---------|-----------|--------|
| LLaVA1.5 | 1 | - | 51.23 | 65.54 | 43.92 |
| LLaVA1.5 | 3 | - | 51.28 | 65.73 | 43.81 |
| LLaVA1.5 | 5 | - | 51.06 | 65.55 | 43.75 |
| LLaVA1.5 | 6 | - | 51.36 | 66.11 | 43.95 |
| RePer | - | 1 | 54.29 | **66.15** | 47.66 |
| RePer | - | 2 | 54.68 | 65.24 | 48.68 |
| RePer | - | 3 | **54.73** | 64.74 | **49.1** |

We will clarify the fairness of the comparison more explicitly in the revision. 
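For clarity, the Best-of-N baseline used in this comparison can be sketched as follows, where `generate` and `critic_score` are hypothetical stand-ins for the LLaVA-1.5 sampler and the LLaVA-Critic-7B scorer:

```python
def best_of_n(prompt, generate, critic_score, n=6):
    """Sample n candidate responses and keep the one the critic scores highest."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=critic_score)
```

By contrast, RePer replaces independent resampling with critic-guided revision of a single response across turns.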
## Q2: Alignment with Human Visual Focus To address this concern, we conducted a human evaluation to further support the claim that RePer’s image attention better aligns with human visual focus. - **Setup**: We recruited 6 annotators (PhD/Master’s students in computer vision and NLP) and randomly sampled 100 images from the test set. For each image, we collected image attention maps (as shown in Fig. 4) generated by RePer and LLaVA-1.5 during inference. The attention maps were anonymized and randomly shuffled before being shown to annotators, who were asked: “Which attention map better reflects your own visual focus if you were to answer this question?” The annotation interface is shown in Re-Fig. 2 on our supplementary website: https://reper-vl.github.io/ICML-Rebuttal/. - **Metric**: We report the **win rate**, defined as the percentage of cases where RePer’s image attention map was preferred over LLaVA-1.5’s by annotators. - **Result**: Image attention maps from RePer were preferred in **70.27%** of cases over LLaVA-1.5, indicating **stronger alignment with human visual focus**. ## Q3: Discussion about RISE Please refer to **Lines 196–206** for our discussion on the connection to and differences from RISE. While both methods leverage iterative supervision, RePer introduces a distinct imitation learning formulation based on **reward-weighted reflective unlikelihood training**. Moreover, it explicitly proposes a reflective perception mechanism aimed at addressing a key limitation of current LVLMs—the unrealistic assumption of perfect initial responses. ## Q4: Correlation Between Feedback and Next-Round Response We clarify that the next-round responses are **indeed correlated with the critic feedback**. As illustrated in Figure 8, during data construction, each successive response in the conversation is selected based on a higher GPT-4o score than the previous one. 
Concretely, the first response hallucinates non-existent objects like “bottle or vase”, which the critic points out. The second response removes these errors and correctly identifies the image as a book. In the third turn, the model further corrects the authorship and improves conciseness, directly following the critic’s feedback. This step-by-step correction process demonstrates a strong alignment between the feedback and the revised responses, providing effective supervision for learning from reflection.
Summary: The paper proposes a reflective perception framework, named Reflective Perception (RePer), aimed at enhancing the capabilities of large vision-language models (LVLMs). By introducing dual-model interaction between policy and critic models, RePer seeks to enable LVLMs to iteratively refine their visual perceptions, akin to human observation processes. The core idea is to replace single-pass perception with an iterative feedback loop, allowing models to refine their understanding over multiple rounds. The authors also introduce Reflective Perceptual Learning (RPL), a training approach that fortifies the model's intrinsic reflective capabilities via a constructed visual reflection dataset (LLM-assisted) and reflective unlikelihood training. Experimentation demonstrates improvements in image understanding, captioning precision, and hallucination reduction, with RePer achieving alignment between model attention patterns and human visual focus. The methodology promises robustness in tasks requiring complex reasoning and multi-step manipulation. Claims And Evidence: The paper's claims are reasonably well-supported by experimental evidence. * (+) The authors assert that RePer improves image understanding and reduces hallucinations, which is backed by quantitative results across several benchmarks (i.e. MMHal-Bench and HallucinBench). * (+) The experiments demonstrate superior performance in capturing details and aligning model attention with human focus, which is supported by the results on DetailCaps. * (-) I am not fully convinced of the eval with GAVIE and GAPE: the evals seem to use the same LLMs involved in finetuning data generation? (sec 2.2, step-2, scoring). One may make the "leveraging discrimination -> generation gradient" argument here but that would need to be further supported empirically which the paper doesn't currently do. 
Methods And Evaluation Criteria: The methods and evaluation metrics employed are appropriate (but not the strongest) for addressing the challenges in multimodal perception. * (+) RePer's dual-model architecture—using both policy and critic models in a feedback loop—adequately mimics iterative human perception, which seems fitting for tackling hallucinations and enhancing refined understanding. * (+) The use of the MMHal-Bench + HallucinBench + DetailCaps combo makes a good case for the improved capabilities with RePer. * (-) Again, not fully convinced of the results with GAVIE and GAPE unless empirical evidence is provided to show a) absence or minimal bias introduced with gen + eval with the same models; b) effective discrimination -> generation gradient at work (i.e. model judges better than it generates wrt. task at hand). * (-) It would be informative to have an A/B comparison on latency & cost, since RePer likely generates a lot more tokens and many overlap across rounds of responses (e.g. see the example in Figure 9), so an efficiency eval helps understand the practicality aspect. One additional comment on the negative point above on human evals -- generally the best replacement for GPT-eval-GPT here would be a medium-sized human evaluation. (Reference: see how this text2image evaluation from Google runs a convincing human evaluation.) Not a practical fix now, but should the work miss acceptance this time, this would be the most ideal action before the next submission, in my humble opinion. Theoretical Claims: The paper does not present formal theoretical proofs but does establish a conceptual framework grounded in reinforcement learning and imitation learning principles. The theoretical basis for Reflective Perceptual Learning aligns with existing reinforcement learning methodologies, yet does not delve deeply into formalizing the convergence properties or theoretical guarantees of the iterative reflective perception process. 
While the conceptual claims are plausible, they remain largely empirical rather than rigorous theoretical assertions. Experimental Designs Or Analyses: The experimental design appears robust, with comprehensive evaluation across multiple well-chosen benchmarks. The authors thoughtfully designed ablation studies to analyze the influence of reflection turns, scoring disparities, and unlikelihood loss weights. These provide strong evidence of RePer’s advantages and the importance of iterative refinement in perception. However, the experiments could be expanded to include more varied real-world scenarios to ensure generalizability. Additionally, again, because of the lack of strong-enough automated evals, including qualitative **human** feedback would strengthen the assessment of how well model improvements align with human expectations and perception nuances. Overall, the work looks very promising, but slightly incomplete. To echo some of the key points mentioned on the matter of "what would have made a more complete experimentation": * Efficiency analysis (with the added reflection budget); * Either empirically ground effectiveness of the Discrimination -> Generation gradient with this design (GPT-eval-GPT), or run a human eval. Supplementary Material: Yes. Primarily A and C. Appendix A gives good details that help clarify the data collection, and Appendix C shows informative examples to complement the main prose. Relation To Broader Scientific Literature: The paper fits within the broader context of enhancing multimodal models by refining perception processes and reducing hallucinations. It draws on established concepts like Chain-of-Thought reasoning and imitates human-like iterative perception, enriching the current strategies used in vision-language models. RePer's approach contrasts with traditional single-pass perception methods by offering a cyclical refinement process.
The empirical focus distinguishes it from theoretical studies and aligns with practical applications in multimodal research areas. However, a deeper engagement with similar iterative learning strategies used in adjacent fields, such as curriculum learning or active learning in machine learning, could enrich the discussion about RePer’s novel contributions and possible future extensions. Essential References Not Discussed: On the method front, not that I know of. For evaluation, perhaps mentioning the works below to better contextualize the evaluation of detailed text<->image alignment. * Image->Text: - CLAIR: https://arxiv.org/abs/2310.12971 - DOCCI: https://arxiv.org/abs/2404.19753 * Text->Image: - Gecko: https://arxiv.org/abs/2404.16820v1 - DSG: https://arxiv.org/abs/2310.18235 Since the core proposal with RePer is better image understanding on (fact-grounded) granularity, detailed semantic alignment is a natural topic to touch on. Other Strengths And Weaknesses: Not much beyond what I'd discussed above. The paper has a clean presentation with well-versed prose; provides good empirical investigation (despite slightly flawed evaluation). Other Comments Or Suggestions: Reference typo: GAIVE -> GAVIE (https://arxiv.org/pdf/2306.14565) Questions For Authors: RePer is a great attempt to move beyond the somewhat "zeitgeist" of "let's see if we could prompt LLMs this novel way" with a well-executed (what I'd consider as) knowledge distillation investigation: LLM-assisted data curation -> model finetuning. How much do you think the reliance on massive LLMs (>100B) forms a semi-hard/hard barrier for smaller research groups to make breakthroughs on pushing the boundary of better reasoning? If not, what types of large step functions can we expect that are freer from this reliance? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for the constructive feedback and for recognizing our presentation quality and experimental design. We carefully address each of your concerns below. ## Q1: Evaluation Reliability To address concerns about potential bias from using the same LLM for both data construction and evaluation, we followed the suggested direction and conducted: **(1) a human evaluation** comparing LLM ranking with human preferences, and **(2) GAPE evaluation using alternative LLMs** not involved in data construction. These results collectively show: - **Our proposed GAPE evaluation aligns well with human judgment**, indicating minimal bias. - **RePer consistently achieves the highest preference**, both in human evaluations and in GAPE scores across different LLM evaluators. - The strong alignment confirms **GPT-4o as an effective discriminator**, which provides a “discrimination to generation” gradient that guides the model to learn accurate preferences. **1. Human Evaluation** - **Setup**: This human study uses captions generated by three 13B models on 119 GAPE samples. For each image, three anonymized captions were randomly ordered and shown to six annotators (PhD/Master’s students in CV), who ranked them based on the GAPE criteria (Fig. 7). The interface is shown in Re-Fig.1 at https://reper-vl.github.io/ICML-Rebuttal/. - **Metric**: - Mean Rank: The average ranking position across all cases. - Top-1 Rate: The percentage of times a model’s caption was ranked best by humans. - **Results**: As shown below, both human evaluation metrics exhibit a consistent model ranking, aligning with the ranking from the GPT-4o-based GAPE benchmark. Notably, RePer is ranked as the top-1 in 63.87% of cases, validating its effectiveness. |Model|Mean Rank↓| Top-1↑|GAPE↑| |-|-|-|-| |LLaVA-1.5| 2.46|15.13%|77.37| |Volcano|2.01|35.29%|78.17| |RePer|**1.53**|**63.87%**|**82.54**| **2.
Evaluation with Alternative LLMs** - To further rule out evaluator bias, we replaced GPT-4o with Claude and Gemini for GAPE scoring on 13B models. As shown below, RePer consistently outperforms baselines, reinforcing the reliability of observed improvements. |Model|Gemini↑|Claude↑|GPT-4o↑| |-|-|-|-| |LLaVA-SFT|78.55|74.64|74.88| |LLaVA-RLHF|78.89|74.18|75.36| |LLaVA-1.5|80.65|75.87|77.37| |Volcano|81.59|76.70|78.17| |RePer|**83.39** |**78.41**|**82.54**| ## Q2: Latency & Cost We justify the practicality of RePer from two angles: 1. RePer already **outperforms baselines with its initial response** (Tab. 3), achieving better performance under the same inference cost. 2. As suggested, we **evaluate RePer’s efficiency** on 100 samples from the DetailCaps, measuring latency (ms/token) and cost (total generation time). As shown below, the added cost from extra turns is modest (e.g., +11 ms/token from 1 to 3 turns), and multi-turn refinement remains **optional** based on budget constraints or quality demands. |Model|#Turn|Gen. Token|Cost (ms)|Latency (ms/token)|CAPTURE| |-|-|-|-|-|-| |LLaVA-1.5|1|85.83|4750.0|55.34|51.23| |RePer|3|275.42|18377.5|66.73|55.55| ## Q3: Reliance on Large-scale LLMs We believe breakthroughs are possible without relying on >100B LLMs. In RePer, core improvements stem from the reflective perception mechanism and RPL paradigm—not from reliance on large-scale LLMs. Both the reward scoring and the critic model can be implemented without large models: Fig. 2 (Step-3) illustrates rule-based scoring aligning vision and text, and Tab. 3 shows a smaller LLaVA-Critic-7B can be an effective critic. A promising direction beyond large LLMs is to build more reliable and efficient supervision systems. Large models are convenient for data generation but often introduce hallucinations that cap smaller models’ performance. 
Combining expert models for pre-/post-processing, or incorporating lightweight human-in-the-loop feedback [1] for correction/validation, offers a cost-effective way to improve supervision quality. Another direction is to pursue efficient modeling paradigms. Recent work such as OpenAI-o1/DeepSeek-R1 employs RL methods like PPO/GRPO on verified data, enabling models to better exploit internal capabilities and self-generate high-quality reasoning trajectories. [1] Garg R et al. (2024). Imageinwords: Unlocking hyper-detailed image descriptions. Google. ## Q4: Relations to Iterative Learning Strategies RePer shares conceptual ties with active and curriculum learning: it uses feedback to refine predictions and progressively improves responses from coarse to accurate. Unlike traditional approaches, this progression is self-structured within RePer’s own decision loop. As a future direction, integrating uncertainty-based reflection or organizing learning trajectories from simple to complex corrections could further enhance adaptability. ## Q5: Reference & Typo Thank you for the helpful references. We carefully considered them for designing our human evaluation, and will include the citations and fix the typo in the revision.
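The Mean Rank and Top-1 Rate metrics used in the human study of Q1 are straightforward to compute from per-case rankings; a small self-contained sketch (toy rankings, not the actual study data):

```python
def mean_rank_and_top1(rankings, model):
    """rankings: list of per-case orderings, best model first."""
    positions = [r.index(model) + 1 for r in rankings]
    mean_rank = sum(positions) / len(positions)       # average ranking position
    top1_rate = positions.count(1) / len(positions)   # fraction ranked best
    return mean_rank, top1_rate

# Toy example with 4 annotated cases and 3 models.
cases = [
    ["RePer", "Volcano", "LLaVA-1.5"],
    ["RePer", "LLaVA-1.5", "Volcano"],
    ["Volcano", "RePer", "LLaVA-1.5"],
    ["RePer", "Volcano", "LLaVA-1.5"],
]
mr, t1 = mean_rank_and_top1(cases, "RePer")
print(mr, t1)  # prints 1.25 0.75
```

A lower Mean Rank and a higher Top-1 Rate both indicate stronger human preference, matching the ↓/↑ arrows in the table above.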
Principled Algorithms for Optimizing Generalized Metrics in Binary Classification
Accept (poster)
Summary: This paper studies the problem of optimizing a broad class of metrics used in class-imbalanced or class-asymmetric scenarios. Previous approaches rely on threshold-based methods that approximate Bayes-optimal classifiers, with consistency guarantees that are only asymptotic. This paper first shows that optimizing such kinds of metrics can be recast as the problem of optimizing an alternative cost-sensitive loss, provided oracle access to the loss value of the best hypothesis in the hypothesis set. The authors then show that cost-sensitive losses of the above form can all be optimized with a non-asymptotic theoretical guarantee via the technique of surrogate losses. Meanwhile, the oracle access required to compute the loss can also be relaxed by leveraging binary search using Rademacher bounds as the separating condition. Experimental results show the effectiveness of the proposed approach. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. I've checked the proofs, and they're correct. Relation To Broader Scientific Literature: The authors propose a novel approach to deal with class-imbalanced and cost-sensitive binary classification problems, which is fundamental in many other areas of the literature. Essential References Not Discussed: No. Other Strengths And Weaknesses: - Strength: The problem investigated in the paper is fundamental. The solution is intuitive and makes sense; specifically, using Rademacher bounds to enable binary search for metric optimization is a nice idea. The structure of the paper is clear; each section covers a relatively self-contained step. - Weakness: I do not have concerns regarding weaknesses. Other Comments Or Suggestions: No. Questions For Authors: (1) The authors propose to leverage Rademacher bounds, which can be approximated via a classical learning process such as ERM.
I am curious about the inherent difficulty of the metric optimization step (i.e., finding $\lambda^*$) in this paper. For example, is metric optimization at least as difficult as supervised learning multiple times with alternative losses that are independent of the metric learning process? Anyway, the answer to this question may not let me increase my evaluation of this paper, since it cannot be increased anymore. Code Of Conduct: Affirmed. Overall Recommendation: 5
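The binary-search-over-$\lambda$ idea described in the summary can be illustrated on a toy finite hypothesis set. This is not the paper's actual algorithm: the fixed slack `eps` stands in for the Rademacher-bound separating condition, the inner minimum over an explicit list stands in for the ERM/surrogate-loss step, and the metric is $A(h)/B(h)$ with $A, B > 0$.

```python
def best_ratio_via_bisection(hyps, eps=0.0, iters=60):
    """Bisect on lambda to find max_h A(h)/B(h) over a finite set (toy).

    Separating condition: min_h [lambda * B(h) - A(h)] <= eps, i.e.
    some hypothesis still attains ratio >= lambda (up to the slack eps).
    """
    lo = 0.0
    hi = max(a for a, b in hyps) / min(b for a, b in hyps)  # loose upper bound
    for _ in range(iters):
        mid = (lo + hi) / 2
        if min(mid * b - a for a, b in hyps) <= eps:
            lo = mid   # lambda attainable: search higher
        else:
            hi = mid   # lambda too ambitious: search lower
    return lo

# Toy hypotheses given as (A(h), B(h)) pairs, with ratios 1.5, 1.25, 1.0.
hyps = [(3.0, 2.0), (5.0, 4.0), (1.0, 1.0)]
print(round(best_ratio_via_bisection(hyps), 6))  # prints 1.5
```

The point of the construction is that each feasibility check is a cost-sensitive minimization (linear in the hypothesis), which is exactly the shape of problem the surrogate-loss machinery handles.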
Rebuttal 1: Rebuttal: We thank the reviewer for their strong support of our work. Below please find responses to specific questions. **1. Questions: The authors propose to leverage Rademacher bounds, which can be approximated via a classical learning process such as ERM. I am curious about the inherent difficulty of the metric optimization step (i.e., finding $\lambda^\*$) in this paper. For example, is metric optimization at least as difficult as supervised learning multiple times with alternative losses that are independent of the metric learning process? Anyway, the answer to this question may not let me increase my evaluation of this paper, since it cannot be increased anymore.** **Response:** This is a natural question. The reviewer is asking about the difficulty of directly optimizing a general metric defined as the ratio of the expectations of two loss functions, both linear in ${\mathsf h}$, where ${\mathsf h}(x) = \text{sign}(h(x))$. While the metric is quasi-concave in ${\mathsf h}$, its optimization with respect to $h$ is NP-hard (even with a constant denominator), even for a linear hypothesis set. In contrast, each surrogate loss optimization problem we consider (framed as supervised learning) can be solved in polynomial time over a convex hypothesis set, as the surrogate loss functions we adopt are convex. Moreover, directly optimizing the empirical ratio of the numerator and denominator may not yield a provably good approximation of the metric, since the expectation of the empirical ratio does not align with the ratio of expectations. We will include this discussion in the final version. If our interpretation of the reviewer’s question is incorrect, please clarify, and we will be happy to address it.
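The final point of the response above, that the expectation of an empirical ratio need not match the ratio of expectations, is easy to illustrate numerically (a toy example, not from the paper):

```python
# Paired samples of a numerator A and a denominator B.
pairs = [(1.0, 2.0), (3.0, 1.0)]

# E[A]/E[B]: ratio of the means.
ratio_of_means = sum(a for a, b in pairs) / sum(b for a, b in pairs)
# E[A/B]: mean of the per-sample ratios.
mean_of_ratios = sum(a / b for a, b in pairs) / len(pairs)

print(ratio_of_means, mean_of_ratios)  # 4/3 vs 1.75 -- they disagree
```

This gap is why minimizing the empirical ratio directly carries no straightforward guarantee for the population metric, motivating the cost-sensitive reformulation instead.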
Summary: This paper proposes METRO for generalized metric optimization in binary classification. The authors reformulate metric optimization as a generalized cost-sensitive learning problem, and introduce a new family of surrogate loss functions. They theoretically prove the $\mathcal{H}$-consistency guarantees for these losses and develop finite-sample learning bounds for the proposed algorithm. Experiments on image classification demonstrate that their algorithm outperforms other baselines. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Proofs seem correct to me. Experimental Designs Or Analyses: Yes Supplementary Material: N.A. Relation To Broader Scientific Literature: The proposed framework can be used to design algorithms with finite-sample learning bounds, unlike Bayes-consistency. Essential References Not Discussed: I am not very familiar with Bayes learning and related works are well discussed as far as I am concerned. Other Strengths And Weaknesses: Strengths 1. The motivation and the introduction to the generalized metrics optimization framework are clear. 2. The reformulation of generalized cost-sensitive learning and the derivation of finite-sample learning bounds seem novel to me. Weaknesses 1. The experiments were limited to the classic image classification task. Algorithms were not applied to scenarios such as significant class imbalance, which is a fundamental motivation for this paper as stated in Section 1. Other Comments Or Suggestions: See Questions. Questions For Authors: I am curious about how the runtime of the proposed method compares to other baselines. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your encouraging review. We will take your suggestions into account when preparing the final version. Below please find responses to specific questions. **1. Weaknesses: The experiments were limited to the classic image classification task. Algorithms were not applied to scenarios such as significant class imbalance, which is a fundamental motivation for this paper as stated in Section 1.** **Response:** Thank you for the valuable suggestion. Following previous work, we used standard image classification datasets for our empirical evaluation to demonstrate the effectiveness of our methods compared to prior baselines. However, we agree that evaluating our algorithms on more imbalanced datasets would be beneficial, and we plan to include such experiments and comparisons in the final version. **2. Questions: I am curious about how the runtime of the proposed method compares to other baselines.** **Response:** The per-epoch computational cost of our method is comparable to that of Algorithm 2 in Koyejo et al. (2014). Both methods involve a single hyperparameter, and for a fixed value of this parameter, the computational cost is similar to training a standard binary classifier using a standard surrogate loss. We will include a more in-depth discussion of this topic in the relevant section of the paper.
Summary: This article introduces a novel optimization approach for generalized metrics in binary classification. The primary method involves converting the fractional form into a summation form. However, the method introduces a parameter, $\lambda$, whose optimal value is unknown and requires estimation, indicating that cross-validation may be necessary to identify the appropriate parameter. ### update after rebuttal The authors have provided explanations for the theoretical technical details I raised and discussed the limitations of their approach. They have also clarified aspects of their experimental design and incorporated relevant discussions of the literature I suggested. Overall, I appreciate the theoretical results and motivation behind the authors' proposed method. However, from a practical perspective, the approach does have certain limitations, such as the selection of $\lambda$ and/or the sacrifice of data for cross-validation. Additionally, more comparisons with existing consistency methods in the literature would be welcome. Nevertheless, I believe this paper is worthy of publication. Therefore, I maintain my current rating. Claims And Evidence: Yes, the method has been validated through both theoretical analysis and experimental results. I have included some comments in the following comment boxes. Methods And Evaluation Criteria: ### **Method** Overall, the methods presented in the article are motivated by several theorems, primarily Theorem 3.1 and Theorem 5.1, which I find to be a particularly interesting aspect of the work. I would like to double-check the estimation of $\lambda$ using the proposed method. In both Algorithms 2 and 3, $\hat{\mathcal{E}}_{\ell^{\lambda}}$ and $\hat{h}$ are estimated based on the same dataset; will this approach be effective? Moreover, is it valid to derive the result of Theorems 5.4 and 5.5? I could understand this if the data were different.
Specifically, in the proof of Theorem 5.4, $$ \hat{\mathcal{E}}_{\ell^{\lambda}}(\hat{h}) \leq \epsilon, $$ implies $$ \mathcal{E}_{\ell^{\lambda}}(\hat{h}) \leq 2\epsilon. $$ Consider the case of an overparameterized model $h$ such that $\hat{\mathcal{E}}_{\ell^{\lambda}}(\hat{h})$ is very small; but in this scenario, it appears that $\mathcal{E}_{\ell^{\lambda}}(\hat{h})$ could still be large. ### **Experiments** The experimental section could be improved. Some statements lack clarity; for instance, it is unclear whether the METRO Algorithm ultimately employs Algorithm 2 or Algorithm 3. Additionally, since the binary datasets are derived from multiclass datasets, providing more details would be beneficial. The model only utilizes a three-hidden-layer CNN with ReLU activations; incorporating additional models would make the results more convincing. Theoretical Claims: I reviewed the proofs of Theorems 3.1 and 5.1, as well as part of Theorem 5.4. Theorems 3.1 and 5.1 are correct; I have posed some questions regarding Theorem 5.4 in the previous comment box. Experimental Designs Or Analyses: The experimental design is reasonable, but it could be further improved or clarified. Please refer to the detailed comments in the previous comment box. Supplementary Material: N.A. Relation To Broader Scientific Literature: I do not fully understand why the proposed method outperforms Koyejo et al. (2014). Both approaches theoretically present correct methods with an additional parameter; it seems to me that the difference lies primarily in how this parameter is tuned. I currently cannot grasp why the proposed algorithm has an advantage in tuning this parameter. Essential References Not Discussed: The paper systematically discusses the literature, and I would like to add a few references. ### Bayes-rule for F-score Jansche, M. (2007, June). A maximum expected utility framework for binary sequence labeling.
In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (pp. 736-743). Dai, B., & Li, C. (2023). RankSEG: a consistent ranking-based framework for segmentation. Journal of Machine Learning Research, 24(224), 1-50. ### Threshold study for F-score Lipton, Z. C., Elkan, C., & Narayanaswamy, B. (2014). Thresholding classifiers to maximize F1 score. arXiv preprint arXiv:1402.1892. Other Strengths And Weaknesses: N.A. Other Comments Or Suggestions: N.A. Questions For Authors: N.A. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your appreciation of our work. We will take your suggestions into account when preparing the final version. Below please find responses to specific questions. **1. Method:** Thank you for the insightful comments. The quantities $\hat{\mathcal{E}} _{\ell^{\lambda}}$ and $\hat{h}$ are approximated using data sampled from the same distribution, but they can be obtained from different samples. In practice, as done in Koyejo et al. (2014), we can split the training data into two parts: $\hat{\lambda}$ is obtained from one part, and then used to train the hypothesis $\hat{h} _{\hat{\lambda}}$ on the other. Theorems 5.4 and 5.5 remain valid as long as the data are sampled independently from the same distribution. Regarding your observation: in the case of an overparameterized model, it is true that $\mathcal{E} _{\ell^{\lambda}}(\hat{h} _{\lambda}) \leq 2\epsilon_m$ still holds by the standard generalization bound (Mohri et al., 2018), but the value of $\epsilon_m$ may be larger due to the high complexity of the model. As you correctly noted, $\epsilon_m$ becomes small only when the sample size is sufficiently large relative to the complexity of the hypothesis set. This limitation applies broadly to most generalization bounds for complex neural networks. The current analysis of overparameterized settings typically requires alternative tools, particularly those that account for the optimization algorithm (e.g., SGD) and its dynamics. Such analyses often apply only to more restricted model families. We will elaborate on these points and revise the presentation of Algorithms 2 and 3 for improved clarity in the final version. **2. Experiments:** Thank you for the valuable suggestions. As indicated on line 435 (left), we used the METRO Algorithm 3 in our experiments. We will follow the reviewer’s recommendation to add more details about the datasets and include additional experimental results using alternative models in the final version. 
We also plan to incorporate experiments on more imbalanced datasets, as suggested by Reviewer wfsZ. **3. Relation To Broader Scientific Literature:** First, we should emphasize that the guarantees for these methods differ fundamentally. Our METRO algorithms for optimizing general metrics are backed by strong theoretical guarantees that apply to arbitrary hypothesis sets and include finite-sample bounds. In contrast, prior methods (Koyejo et al., 2014) provide only Bayes-consistency guarantees, which hold solely for the class of all measurable functions and offer no convergence bound. Moreover, their approach lacks convergence rate guarantees for parameter tuning, unlike our finite-sample bounds. Beyond these theoretical advantages, a key limitation of prior methods is their dependence on the structure of the Bayes-optimal solution. Since the Bayes-optimal predictor for a given metric often differs from binary classification only by an offset, their approach first trains a binary classifier and then selects an optimal threshold or offset. However, this strategy fails when the best predictor within a restricted hypothesis set does not align with the Bayes-optimal form (see Figure 1). Consequently, irrespective of parameter estimation or tuning, their approach does not succeed in general, as our example illustrates. In contrast, our approach and algorithms provide convergence rate guarantees for arbitrary hypothesis sets and do not rely on the specific form of the Bayes-optimal solution, ensuring robust and theoretically grounded optimization. **4. Essential References Not Discussed:** We thank the reviewer for pointing out these relevant references on optimizing the F-score. They are indeed closely related to our work, and we will be sure to include and discuss them in the final version.
ProSec: Fortifying Code LLMs with Proactive Security Alignment
Accept (poster)
Summary: The paper introduces PROSEC, a method for proactively identifying weaknesses in code-generating AI models by creating specific coding scenarios that are likely to introduce vulnerabilities. PROSEC creates a significantly larger dataset of vulnerability-inducing situations compared to previous methods. Experiments compare different code models with PROSEC and test their ability to perform regular coding tasks. ## update after rebuttal Thank you again for the detailed explanations in the rebuttal regarding the difference from related work. Claims And Evidence: I could not identify specific unclear claims, but see the comments section for more details. Methods And Evaluation Criteria: The proposed method seems reasonable at a high level. However, the concept is not new, so the contribution is not really novel. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design is described and contains details necessary to understand the results. However, aspects of the verification, such as the utility, and other details remain unclear. Supplementary Material: I did not check all the details in the supplementary material, but I did check the parts required to understand the dataset construction. Overall, it looks good to me. Relation To Broader Scientific Literature: The paper has a good overview of related work. However, critical related work is missing. Essential References Not Discussed: Many papers have been published in this domain, but no obvious reference is missing, as far as I can tell. Other Strengths And Weaknesses: Strengths: - Timely topic Weaknesses: - The presentation of the paper needs improvement - The novelty remains unclear Other Comments Or Suggestions: Thank you for submitting the paper. It elaborates on a very timely topic. Therefore, research in this domain remains required to build more secure code-generative LLMs. **The idea**: Unfortunately, the novelty of the paper is unclear.
This paper [new1] uses a similar approach for the same purpose. Therefore, I would expect a comparison with this method and a detailed elaboration on the differences from this method for benchmarking code LLMs. In Section 2, the paper claims “…to reduce the likelihood of generated code being detected by the static analyzer…”. This sounds like the paper’s target is to prevent detection. However, a better ultimate goal would be to prevent the generation of vulnerable code or at least to fix the code. Does the paper really try to prevent detection? **The presentation:** What is a preferred/win and a less-preferred/lose response? This is mentioned in Section 2 after Equation 2. Figure 1 lacks a more detailed explanation. It may become clearer when reading the paper. However, I am missing a designated section for explaining the details of the figure. The paper measures the functionality of the generated code. However, there are no details on how the functionality is assessed. This is generally not an easy task and several different strategies are possible. What is used in this paper? It remains unclear why data needs to be selected for the evaluation in Section 3.3. It also remains unclear why an optimizer is required. Further, at the beginning of Section 5, it is not explained how the results are reported, specifically how and why at least 20% remain in the dataset. Effects on model utility: Where is this shown? The paper does not point to a result table or plot. Also, it is not mentioned which and how many CWEs are considered and how the selection is justified. Further, I am missing an explanation of how the diversity of the code is measured and assessed. [new1] Hajipour et al. “CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models,” SatML 2024 Questions For Authors: - Does the paper really try to prevent detection? - What is a preferred/win and a less-preferred/lose response?
- What strategy is used to measure the functionality of code in this paper? - Why is there a selection of data in Section 3.3? - Effects on model utility: Where is this shown? Ethical Review Concerns: Not necessarily critical, but since the paper is about security, I would expect some comment on that in the paper. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your feedback and respectfully clarify key differences between CodeLMSec and ProSec to illustrate our unique contributions: ## Different goals CodeLMSec and ProSec serve fundamentally different purposes. CodeLMSec is a codeLM security **benchmark** that evaluates codeLMs with vulnerability-inducing prompts. In contrast, ProSec is an **alignment training technique** that employs scalable data synthesis to produce a high-quality dataset, effectively securing codeLMs without harming utility. Each entry in the ProSec dataset includes a vulnerability-inducing prompt, an insecure implementation, and a secure implementation. The alignment training loss requires a model to generate the secure implementation with a higher probability than generating the insecure one. ## Scalability Effective alignment requires more samples than a typical benchmark. While CodeLMSec includes 280 prompts, the previous SOTA alignment dataset contained 1.5k prompts. ProSec scales up to more than 10k entries. ProSec automates the data synthesis process by composing diverse vulnerability-inducing scenarios using CWE definitions and existing instruction-tuning datasets, avoiding the labor-intensive manual curation required by CodeLMSec. ## Reference code Benchmarks need only prompts; alignment requires paired insecure and secure code. ProSec uses rejection sampling with static analyzers to curate insecure code snippets, and it instructs LLMs to produce secure fixes, thereby constructing pairwise alignment data entries. ## Balancing security and utility An alignment dataset needs to balance enhancing security and preserving utility. ProSec proposes a training dynamic-based data selection algorithm to construct a utility-preserving dataset, ensuring models’ utility is not compromised during security alignment. Technically, we follow standard practices by evaluating model utility on coding benchmarks (i.e., HumanEval and MXEval) and assessing correctness via test cases.
E.g., Table 1 shows that for Phi3mini-Inst, ProSec achieves 44% on MXEval versus SafeCoder's 42%, showing better utility preservation. ## Empirical comparison Although the two works are different, we did experiments during the rebuttal to show unique challenges in synthesizing alignment data. We used the prompts from CodeLMSec to construct an alignment training dataset with the same pipeline as ProSec. (The coding instructions were obtained from CodeLMSec rather than synthesized by ProSec.) The ratios of vulnerable code generated are (lower is better): the original model: 45.6, ProSec: 19.7, CodeLMSec: 38.9. We can see that the model aligned with ProSec is more secure, underscoring the challenge of effective alignment data synthesis. ## Clarifications > Q1: What is preferred/win and a less preferred/lose? Each entry in an alignment dataset consists of three parts: a prompt, a preferred (or “win”) response, and a less-preferred (or “lose”) response [1]. In ProSec, the prompt is a vulnerability-inducing prompt, while the “win” and “lose” responses correspond to secure and insecure implementations, respectively. > Q2: The goal of ProSec? The goal of ProSec is to prevent the generation of vulnerable code. It trains the model to favor generating the secure code over the insecure ones. We will revise section 2. > Q3: How to measure functionality? We follow established practices[2, 3, 4] to set up the experiments. The functionality correctness is evaluated by test cases. Code generations that pass all test cases are considered correct. > Q4: Why is data selection necessary? The data selection is critical for balancing security and utility of the aligned model. Intuitively, including too many utility-preserving data samples weakens the security alignment, while too few impairs the model’s utility. 
ProSec employs a training-dynamics-based algorithm to identify and selectively include those utility-preserving data samples whose distribution is disrupted by the security alignment.

> Q5: How to measure utility?

As shown in Table 1, utility is measured as performance on two coding benchmarks, HumanEval and MXEval.

> Q6: What and how are CWEs selected?

We select 38 CWEs that overlap between PurpleLlama and SafeCoder to set up a fair evaluation. Please see Section 4 (line 269) for details.

> Q7: How to measure data diversity?

We use the cosine similarity of semantic embeddings to analyze the diversity of the dataset (line 359, Fig. 4). We use string editing distance (line 255, Section 3.3) to deduplicate code snippets and thus increase dataset diversity.

[1] Rafailov, Rafael, et al. Direct preference optimization: Your language model is secretly a reward model. NeurIPS’23
[2] He, Jingxuan, et al. Instruction tuning for secure code generation. ICML’24.
[3] He, Jingxuan, et al. Large language models for code: Security hardening and adversarial testing. CCS’23
[4] Wei, Yuxiang, et al. Magicoder: Empowering code generation with oss-instruct. ICML’24.

---

Rebuttal Comment 1.1:

Comment: Thank you for the detailed rebuttal and for answering my questions. Since I am the negative reviewer here, I want to emphasize (which I missed before) that my main concern is mainly the novelty and the difference from related work. However, I feel that this could be addressed in the rebuttal. Specifically, the goal is to **prevent** the generation of vulnerable code. I would like to ask you to clarify a few things in the paper to make it more accessible to the reader. Having said that, I am happy to increase my score.

---

Reply to Comment 1.1.1:

Comment: Thank you for your insightful feedback. We will update the paper to include a more detailed discussion of the differences between ProSec and related work, emphasizing its unique contributions.
We appreciate your constructive suggestions and your willingness to increase your score.
Summary: This work enhances the security of code LLMs by proposing the PROSEC framework. PROSEC is an automated pipeline designed to synthesize code security-related preference data. It consists of three stages:

1) Construct instructions that induce insecure code based on Common Weakness Enumerations (CWEs) and ensure diversity of the instructions through a clustering method.
2) Analyze the responses for insecure content using a static analyzer. If insecure content is detected, it is regarded as a negative sample. Subsequently, the LLM is prompted to modify the response to remove the insecure content, and the modified response is regarded as a positive sample.
3) To ensure the LLMs' utility, this work introduces normal preference data and retains the most impactful data.

Experimental results show that the preference pairs constructed using the PROSEC framework, when applied with the SimPO alignment algorithm, can enhance the security of code LLMs while maintaining their utility.

Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence.

Methods And Evaluation Criteria: Yes, the proposed methods make sense for the problem at hand.

Theoretical Claims: Yes, I checked the correctness of the proofs for theoretical claims.

Experimental Designs Or Analyses: Yes, I checked the soundness of the experimental designs.

Supplementary Material: Yes, I reviewed the supplementary material.

Relation To Broader Scientific Literature: The key contributions of the paper are good.

Essential References Not Discussed: No

Other Strengths And Weaknesses:

Strengths:
1) The motivation is clear. The study designs an automatic pipeline for synthesizing code preference data and considers the diversity of instructions and the issue of data quality during the design.
2) The results are good. We note that the method proposed in this study significantly improves the security of code LLMs while ensuring their usability.
Weaknesses:
1) Although the method proposed in this work is effective, it is essentially a simple data synthesis method. This feels more like an engineering-focused work, and I am somewhat concerned whether it can be published at ICML.
2) The focus of this work should be on synthesizing higher-quality code preference data, but I am concerned about whether the introduction of normal preference data has had an impact. You should integrate the normal preference data into SafeCoder or remove it from your method to conduct an ablation study.

Other Comments Or Suggestions: None

Questions For Authors: None

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for your detailed and supportive review.

## ProSec's relevance to ICML

We respectfully contend that our work aligns well with previous contributions recognized at ICML. For example, prior works such as data selection for language model training [Qurating, ICML’24], data synthesis for code language models [MagicCoder, ICML’24], and security-focused SFT for code language models [SafeCoder, ICML’24] introduced innovative practices in LLM training and alignment, thereby establishing a strong precedent for valuable engineering-focused work.

Furthermore, we note that ICML has a tradition of supporting application-driven research. For instance, [Invariant, ICML’23] prompts LLMs to generate loop invariants, addressing a significant challenge in the software engineering domain; [Next, ICML’24] enhances program repair performance by incorporating execution trace information into prompts; and [StackSight, ICML’24] leverages LLMs to translate low-level code into readable C++ code via CoT prompts.

Similarly, ProSec offers a scalable approach to securing code language models during the alignment stage. In particular, our introduction of a utility-preserving dataset and our formulation of alignment side effects via training dynamics provide valuable technical insights into balancing security alignment and model utility. We believe these contributions address important practical challenges and pave the way for further advancements in aligning codeLMs with domain-specific constraints.

## Ablation study for normal preference data

We appreciate the suggestion and will include the following discussion in the paper. During the rebuttal, we conducted experiments to illustrate the effectiveness of introducing the utility-preserving dataset (DNorm). Due to time constraints, we evaluate all models on a subset of PurpleLlama and MXEval for security and utility assessment, respectively.
The results are summarized below:

| Model | Vul(%) | Util(%) |
|--------------------|--------|---------|
| The original model | 40.8 | 42.8 |
| SafeCoder | 33.1 | 43.4 |
| SafeCoder+DNorm | 34.4 | 46.1 |
| ProSec | 25.0 | 45.2 |
| ProSec w/o DNorm | 4.1 | 3.1 |

We can see that DNorm indeed helps preserve the models’ utility. By adding DNorm to the SafeCoder dataset, the utility of the aligned model improves slightly while maintaining a comparable level of security performance compared to the vanilla SafeCoder dataset. In contrast, when DNorm is removed from the ProSec dataset, the model’s utility performance drops dramatically, indicating that aligning the model solely on the security dataset would significantly compromise its utility.

[Qurating] Wettig, Alexander, et al. "Qurating: Selecting high-quality data for training language models." ICML’24
[MagicCoder] Wei, Yuxiang, et al. "Magicoder: Empowering code generation with oss-instruct." ICML’24.
[SafeCoder] He, Jingxuan, et al. "Instruction tuning for secure code generation." ICML’24.
[Invariant] Pei, Kexin, et al. "Can large language models reason about program invariants?" ICML’23
[Next] Ni, Ansong, et al. "Next: Teaching large language models to reason about code execution." ICML’24
[StackSight] Fang, Weike, et al. "StackSight: Unveiling webassembly through large language models and neurosymbolic chain-of-thought decompilation." ICML’24

---

Rebuttal Comment 1.1:

Comment: Thank you for your comprehensive response. I would like to clarify that I am more interested in seeing the improvements in code security achieved **solely by using Dnorm**. In simple terms, I want to understand how much of the performance improvement comes from Dnorm and how much comes from the synthetic data you constructed. However, based on the results you presented, it seems that **Dnorm has played a significant role in enhancing security**. Does this mean that the significance of the data you constructed is diminished?
In other words, could we improve model security by constructing more normal data instead?

---

Reply to Comment 1.1.1:

Comment: Thank you for the detailed question. **We would like to clarify that both security-focused data (DSec) and utility-preserving data (DNorm) are synthesized by the ProSec pipeline. Simply adding more normal (DNorm) data does not improve model security.** Intuitively, an excessive number of utility-preserving data samples (DNorm) can dilute the security alignment, while too few can impair the model’s utility. ProSec employs a training-dynamics-based algorithm to identify and selectively include those utility-preserving data samples whose distribution is disrupted by the security alignment.

The key technical contributions of ProSec are (1) synthesizing DSec that **enhances the model’s security performance** with diverse security-focused coding scenarios, (2) synthesizing DNorm that **preserves the model’s utility** during the security alignment process, and (3) proposing a data selection algorithm that achieves a better balance between security enhancement and utility preservation.

We appreciate the question and will add the following discussion to the paper to clarify this further. Following are the details.

## Clarification on Metrics

Following established practices [1, 2], we evaluate a security alignment dataset from two perspectives: (1) security performance, measured as the ratio of vulnerable code generated (**a lower percentage indicates better security**), and (2) utility performance, measured by the model’s performance on coding tasks, with a higher score indicating better utility preservation.

## Clarification on Results

**Our results indicate that DNorm primarily supports utility preservation rather than enhancing security.** For example, consider the rows “SafeCoder” and “SafeCoder + DNorm” in the table we shared during the initial rebuttal.
The model aligned without DNorm achieved a security performance of 33.1, compared to 34.4 when DNorm was included. Although the security metrics remain relatively similar, the inclusion of DNorm clearly benefits utility performance, with scores rising from 43.4 to 46.1.

A similar trend is observed with the ProSec dataset (rows “ProSec” and “ProSec **w/o** DNorm”). In this instance, the model aligned without DNorm achieved better security performance (4.1) than its counterpart with DNorm (25.0). However, the utility dropped substantially, from 45.2 to 3.1, when DNorm was omitted. These results show that DNorm’s role is not to enhance security but to mitigate the adverse effects on utility that can arise during security alignment.

Further evidence is provided by the trends shown in Table 2 (line 385) of the paper. The key results are as follows:

| Configuration | Vul(%) | Util(%) |
|-------------------|--------|---------|
| DSec+10%DNorm | **5.9** | 15.3 |
| DSec+30%DNorm | 27.5 | 42.1 |
| DSec+70%DNorm | 25.6 | **45.1** |

As the proportion of DNorm increases from 10% to 70%, utility performance improves (from 15.3 to 45.1), while security performance shifts (from 5.9 to 25.6). The results demonstrate that simply increasing the amount of DNorm data does not enhance security; rather, DNorm primarily preserves the model's utility while striking a balance between security and performance. Note that the security metric exhibits a minor (<2%) increase when DNorm is raised from 30% to 70%. This slight variation is likely due to randomness in the sampling of DNorm subsets, given that the change is much smaller than the shift observed between 5.9 and 27.5.

We hope the above discussion clarifies potential misunderstandings. We will add this discussion to our paper.

[1] He, Jingxuan, et al. Instruction tuning for secure code generation. ICML’24.
[2] He, Jingxuan, et al. Large language models for code: Security hardening and adversarial testing. CCS’23
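For context on the “win/lose” preference objective discussed throughout this thread, here is a minimal NumPy sketch of a SimPO-style pairwise loss (the alignment algorithm named in the reviews). The hyperparameter values and the toy log-probabilities are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def simpo_loss(logp_win, logp_lose, len_win, len_lose, beta=2.0, gamma=0.5):
    """Sketch of a SimPO-style pairwise preference loss:
    -log sigmoid(beta * (avg logp of 'win' - avg logp of 'lose') - gamma).
    Hyperparameter values here are illustrative."""
    margin = beta * (logp_win / len_win - logp_lose / len_lose) - gamma
    return np.log1p(np.exp(-margin))  # == -log sigmoid(margin)

# The loss shrinks as the secure ("win") response becomes much more
# likely than the insecure ("lose") one under the model.
close = simpo_loss(logp_win=-25.0, logp_lose=-26.0, len_win=10, len_lose=10)
apart = simpo_loss(logp_win=-10.0, logp_lose=-30.0, len_win=10, len_lose=10)
print(close > apart)  # True
```

The length normalization (dividing by response length) is what keeps the objective from simply favoring longer responses.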
Summary: The paper proposes ProSec (Proactive Security Alignment), an approach to align code LLMs with secure coding practices.

* It exposes vulnerabilities by synthesizing error-inducing scenarios from Common Weakness Enumerations (CWEs) and generates fixes to vulnerable code.
* Models are then trained with preference learning objectives.

The proposed synthesis procedure triggers more vulnerable code, resulting in a much larger dataset than previous work. Models trained with ProSec are significantly more secure without degrading performance.

### update after rebuttal ###

The authors have addressed my concerns and questions. I'm keeping my score as "accept".

Claims And Evidence: The claims are well-supported.

Methods And Evaluation Criteria: Yes, the evaluations for the proposed ProSec framework make sense.

Theoretical Claims: N/A

Experimental Designs Or Analyses: Yes, the experimental designs and analyses are valid and thorough.

Supplementary Material: I skimmed through Appendices A & D.

Relation To Broader Scientific Literature: The main contribution lies in the method for constructing a relatively diverse vulnerable-code dataset across many types of weaknesses, tasks, and coding languages.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

Strengths:
* The proposed method can greatly enrich datasets of vulnerable code.
* It shows that with (offline) preference optimization, models post-trained on the ProSec-generated dataset outperform a baseline dataset in security, without degrading capabilities.

Weaknesses:
* The vulnerabilities are limited to the CWE (Common Weakness Enumerations) set.

Other Comments Or Suggestions: N/A

Questions For Authors: Is the improvement only affected by the fact that the generated dataset is 7x larger than SafeCoder? Can you do a control experiment where both datasets have the same size (and same secure/vulnerable mixture ratio), and test whether the data quality of ProSec is also higher (e.g., more diverse)?
Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for the supportive review.

> Is the improvement only affected by the fact that the generated dataset is 7x larger than SafeCoder? Can you do a control experiment where both datasets have the same size (and same secure / vulnerable mixture ratio), and test if the data quality of ProSec is also higher (e.g. more diverse)?

The quality of the ProSec dataset is higher than SafeCoder's because ProSec scales to more diverse scenarios without requiring manual effort. SafeCoder constructs alignment data entries by collecting real-world vulnerabilities and the corresponding fixes. However, real-world vulnerabilities and their fixes are sparse. For example, SafeCoder only collects 465 entries from 145 million git commits. On the other hand, ProSec leverages LLMs to enumerate potentially vulnerability-inducing scenarios, systematically scaling up to more diverse scenarios. Therefore, the overall quality of the ProSec dataset is better than that of the SafeCoder dataset.

## Empirical Support

During the rebuttal, we ran another set of experiments as suggested by the reviewer.

- **[Size]** We randomly sample a subset of the ProSec dataset to match the size of the SafeCoder dataset and run the alignment algorithm again. The resulting model is noted as “ProSec[Subset]”.
- **[Mixture ratio]** The original SafeCoder dataset does not contain utility-preserving data for preference optimization. To ensure the ratio of security-focused to utility-preserving data samples is the same between ProSec and SafeCoder, we mix the vanilla SafeCoder dataset with the utility-preserving dataset of ProSec at the same ratio. The resulting model is noted as “SafeCoder+DNorm”.

The empirical results show that the ProSec dataset is better than the SafeCoder dataset when both datasets contain the same number of entries and when both datasets contain the same ratio of security/utility data samples.
Note that we evaluate all models on subsets of PurpleLlama and MXEval for security and utility measurements due to time constraints.

| Model | Vul(%) | Util(%) |
|:---------------------|--------|---------|
| The original model | 40.8 | 42.8 |
| SafeCoder | 33.1 | 43.4 |
| SafeCoder+DNorm | 34.4 | 46.1 |
| ProSec[Subset] | 28.9 | 47.0 |
| ProSec[Full] | 25.0 | 45.2 |

We can see that the model trained on the ProSec subset is safer than that trained on the SafeCoder dataset of the same size. It also outperforms the SafeCoder dataset mixed with the same ratio of benign data samples. That indicates the data quality of ProSec is indeed higher. Moreover, the model trained on the full ProSec dataset is safer than that trained on the subset, as expected, because the full ProSec dataset covers more scenarios. The alignment training on all datasets does not significantly affect coding utility. We will include these experiments in the paper.

---

Rebuttal Comment 1.1:

Comment: Thank you for your response and the additional experiment! It has addressed my concerns, and it'll be great to include this experiment in the paper. I'll keep my score, which is already an accept.
Summary: This work proposes ProSec, an LLM-based framework to generate synthetic preference/alignment data containing security vulnerabilities using CWE (Common Weakness Enumeration) data. The authors demonstrate that models (Phi3-mini-Inst and CodeLlama-7B-Inst) trained (with SimPO) on ProSec alignment data produce code that is more secure compared to training on the state-of-the-art SafeCoder dataset. Ablation studies demonstrate preservation of utility.

Broadly, the idea of aligning LLMs for security is novel, and the approach of generating synthetic preference data using CWEs is interesting, although the technique relies on prompting ChatGPT (line 283 mentions claude-3.5-haiku; it is not clear if this is used). The dependence on GPT-like models is heavy. The authors use ChatGPT to generate a vulnerability-inducing instruction from the CWEs, use the instruction to generate code that has a security vulnerability, and also use the model to correct the vulnerability.

Claims And Evidence: Models aligned with ProSec data are more secure than the SafeCoder dataset on vulnerable-code-ratio scores:
- Phi3-mini-Inst models (28.86% vs. 44.72%)
- CodeLlama-7B-Inst models (28.55% vs. 40.33%)

Utility preservation: ProSec achieves security improvements with minimal impact on model performance on regular benchmarks. Differences in performance on coding benchmarks are < 2%. Good performance on multilingual HumanEval and MBPP benchmarks. Coverage across multiple programming languages (C/C++, Java, JavaScript, Python).

Methods And Evaluation Criteria: Yes

Theoretical Claims: NIL

Experimental Designs Or Analyses: Yes

Supplementary Material: Prompts in Appendix B.

Relation To Broader Scientific Literature: I have not seen similar work on alignment for security.
Essential References Not Discussed: NIL

Other Strengths And Weaknesses:

Strengths:
- Novel idea to align LLMs for security by proactively generating synthetic data using CWEs (vs. relying on previous approaches that construct datasets of vulnerable code and corresponding fixes from GitHub commits, which can be quite sparse: 465 samples from 145 million git commits).

Weaknesses:
- Heavy reliance on ChatGPT-like models for synthetic data generation. No open-source models in conjunction with agentic approaches were attempted.

Other Comments Or Suggestions: None

Questions For Authors:
- Could you please explain what static analyzers are used to validate generated (a) code with security vulnerabilities and (b) fixed code?
- Could you please explain how the alignment data is filtered and what types of samples are removed in this process?
- Is the assumption that all CWEs are detectable via static analyzers? If so, why can't one continue to use existing LLMs (without alignment) and simply run the static checks post-generation to check whether code has security vulnerabilities? How does this work compare with another possible approach that runs the static checkers post-generation on a regular LLM and iteratively fixes vulnerabilities using ChatGPT?
- What was the cost of generating synthetic data by querying ChatGPT?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for the supportive and detailed review. We will include the discussions below in the paper.

## Q1: Explain static analyzers

We use the static analyzers in PurpleLlama, which consist of three tools: regular expressions, semgrep, and weggli. All tools work on the generated source code. Regular expressions simply match insecure patterns in code (e.g., matching deprecated APIs). Semgrep and weggli first build the AST from a code snippet and then search for problematic patterns (e.g., a potential NULL pointer that is dereferenced without being checked).

## Q2: How alignment data is filtered

The alignment dataset contains two parts: the secure-practice preference data (DSec) and the utility-preservation preference data (DNorm). For both datasets, a data entry contains a prompt, a preferred code snippet, and a less-preferred code snippet.

We control the quality of DSec with heuristics. We first check the syntactic correctness of the code snippets. Then we use static analyzers to make sure the preferred code is secure. After that, we use heuristics to make sure the preferred (secure) code snippet has corresponding functionality to the vulnerable code. Finally, we use string similarity (based on editing distance) to deduplicate data entries.

For DNorm, we develop a data selection algorithm that takes training dynamics into consideration. Intuitively, the algorithm identifies and prioritizes the data entries whose utility is broken during the security alignment to mitigate degradation in utility. Specifically, we first align the target model with DSec only. For each candidate entry in DNorm, the algorithm keeps track of how its generation probabilities change during the alignment training. The algorithm then selectively includes the entries whose generation probabilities decrease significantly during the security training. Please refer to Section 3.3 (line 240) in the paper for details.
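The DNorm selection step described above can be sketched as follows. This is an illustrative simplification (tracking only a before/after log-probability drop rather than the full training trajectory), and the toy numbers are invented, not taken from the paper:

```python
import numpy as np

def select_utility_preserving(logp_before, logp_after, k):
    """Keep the k candidate DNorm entries whose generation
    log-probability dropped the most during security-only
    alignment (illustrative simplification of the selection)."""
    drop = np.asarray(logp_before) - np.asarray(logp_after)
    return np.argsort(-drop)[:k]  # indices of the largest drops

# Toy example: entry 1 is most disrupted by the security alignment,
# followed by entry 2, so both are selected for DNorm.
before = [-1.0, -1.2, -0.8]
after = [-1.1, -3.0, -1.0]
print(select_utility_preserving(before, after, k=2))  # [1 2]
```

Entries whose probability is untouched by security training contribute little to utility recovery, which is why only the most disrupted ones are kept.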
## Q3: Comparison with post-processing

Yes, we focus on CWEs that can be detected by static analyzers. There are more complex CWEs that rely on global information or knowledge about program functionality. Detecting such CWEs is an open challenge [1, 2]. We leave improving alignment on complex CWEs as future work.

An agentic workflow incurs higher computational costs and increased latency because it runs a static analyzer for every coding request and may require multiple queries to the code language model. This design could degrade the user experience in scenarios like code copilots, where swift completions are expected. In fact, post-processing with an agentic design complements code model alignment techniques: an aligned codeLM may reduce the number of conversational turns needed, while the agent can capture edge cases where the model produces insecure code.

We further use empirical results to show the cost an agentic workflow may introduce. During the rebuttal, we additionally implemented an agentic baseline that uses static analyzers to check generated code and iteratively asks the code generation model to fix it. For each fix request, we provide the feedback from the static analyzers, the problematic code, and the initial coding instruction to the model. We evaluate the agentic workflow on a randomly sampled subset of PurpleLlama due to time constraints. Here are the statistics:

| Max Fix Attempts | Success Rate (%) |
|------------------|------------------|
| 3 | 68.6 |
| 5 | 73.7 |
| 10 | 80.6 |

Moreover, on average, a coding request requires five rounds of fixes to achieve the secure performance of ProSec. This demonstrates that using a security-aligned model is more efficient than simply applying post-processing in a secure code generation agentic workflow. Note that we choose to use the tested codeLM to fix the code, instead of using black-box LLMs.
That is because, in a realistic use scenario of a smaller code LM, querying a larger black-box model might be less preferred (due to latency and cost) or even disallowed (due to privacy and policy concerns).

## Q4: Cost

We use Claude-3.5-haiku to synthesize the instructions. For each CWE, we synthesize ~10k initial instructions and then cluster them to identify the most diverse 2k instructions. The cost to synthesize instructions for each CWE is around 5 USD.

[1] Ding, Yangruibo, et al. "Vulnerability detection with code language models: How far are we?." arXiv preprint arXiv:2403.18624 (2024).
[2] Google, Project Zero. https://googleprojectzero.blogspot.com/2024/06/project-naptime.html
Symmetric Reinforcement Learning Loss for Robust Learning on Diverse Tasks and Model Scales
Accept (poster)
Summary: This paper proposes a “Symmetric Reinforcement Learning Loss” (SRL) to improve the robustness of policy-gradient algorithms, namely A2C and PPO, when facing noisy or inconsistent advantage estimates. The core idea is to adapt the concept of “symmetric cross-entropy,” originally developed for classification under label noise, to the RL setting. Concretely, the authors add a “reverse RL loss” term that, together with the standard policy-gradient loss, forms a “symmetric RL loss.” They name the resulting methods SA2C (Symmetric A2C) and SPPO (Symmetric PPO).

The paper demonstrates that in both standard control tasks (Atari, MuJoCo, Box2D) and RLHF-style language tasks (IMDB positive-sentiment generation and TL;DR summarization), the symmetric loss improves performance over standard A2C/PPO, especially when reward signals (or advantage estimates) are noisy or suffer from misalignment. The authors attribute the gain primarily to PPO's increased susceptibility to advantage “sign flips” (confusion) and show that introducing the reverse RL term helps correct or dampen these detrimental effects.

Claims And Evidence:

- Main Claim: Adding a symmetric term to the actor loss, which mirrors the standard policy-gradient update in reverse, yields more stable and robust learning, leading to higher final performance across diverse tasks and scales. Instabilities are hypothesized to be mainly caused by advantage “sign flips,” which are emphasized through advantage normalization based on mini-batch statistics.
- Evidence: The paper reports results on 22 Atari games (with and without artificially flipped rewards), several MuJoCo/Box2D continuous-control tasks (with injected noise), and two RLHF tasks (IMDB sentiment and TL;DR summarization). In nearly all tested scenarios with noise or sign confusion, SPPO outperforms PPO, often by a substantial margin. Plots show that advantage sign flips are common, which corroborates the main motivation.
In RLHF experiments, SPPO achieves higher model-based reward scores and higher human-like preference judgments (via GPT-4) than PPO.

A different potential fix to the problem of the high fraction of advantage sign flips would be to use significantly larger mini-batch sizes. To the best of my knowledge, PPO practitioners have found much larger batch sizes of $\gt 100k$ samples to work much better than smaller batch sizes (Rudin et al. 2022, Learning to Walk in Minutes Using Massively Parallel Deep Reinforcement Learning).

While the results are overall convincing, the paper primarily focuses on final performance metrics (rewards, policy returns, or model-based reward scores) and includes standard deviation ranges across multiple seeds. The paper would benefit from presenting the results visually (training curves or bar charts) rather than in tabular form, which is very hard to interpret well. Further, it could benefit from using more thorough metrics like IQM and confidence intervals aggregated over multiple environments/experiments (Agarwal et al. 2022, Deep Reinforcement Learning at the Edge of the Statistical Precipice).

Methods And Evaluation Criteria: -

Theoretical Claims: I did not check the derivations.

Experimental Designs Or Analyses:

- Soundness/Validity: The designs follow standard practice in RL research: multiple seeds, standard baselines (A2C/PPO vs. SA2C/SPPO), well-known benchmark environments. The chosen “noise injection” strategies are straightforward and sensible to highlight the method's robustness. The RLHF experiments use standard reward-model training and GPT-4 preference judgments, which is a recognized approach in alignment research.
- Potential Issues: RLHF tasks rely on pretrained reward models, which can themselves be noisy. While that's precisely the paper's motivation, it slightly complicates attribution of success (i.e., is the improvement from simply ignoring reward-model mistakes or from genuinely mitigating advantage confusion?).
The authors do partially demonstrate examples (e.g., empty summaries yielding higher reward) to show how confusion arises. Overall, the experiments appear valid, though it would be beneficial to see more fine-grained ablations (for example, comparing the effect of bigger vs. smaller batch sizes on advantage confusion).

Supplementary Material: Figure 3.

Relation To Broader Scientific Literature: The work draws on robust supervised learning under label noise (symmetric cross-entropy). It also builds on well-known RL fundamentals (A2C, PPO) and RLHF techniques. The paper cites relevant prior methods for dealing with overestimation, distribution shifts, and training instability (Double DQN, GAE, trust-region methods). This connection to “robust supervised loss” methods is the main novelty. The authors also mention alternative strategies for RLHF like ranking-based or preference-based RL. Overall, the references and positioning in the literature appear reasonable.

In the introduction, the authors further mention the use of supervised learning techniques in RL (ensembles, layernorm, batch norm) without any citations. These works should be cited:
- REDQ: Chen et al. 2021
- DroQ: Hiraoka et al. 2021
- CrossQ: Bhatt et al. 2024

Essential References Not Discussed: -

Other Strengths And Weaknesses:

Strengths:
- Simple, elegant idea that is easy to integrate with existing policy-gradient code.
- Demonstrates robust gains across various tasks (discrete/continuous control, language tasks) and with artificially perturbed rewards and real RLHF-based noise.
- Thorough experiments with multiple seeds and hyperparameter sweeps.

Weaknesses:
- The method adds extra hyperparameters (α, β, Z), which can be a drawback for ease of adoption, though the authors fix α and Z in their experiments.
- While the paper shows that advantage sign flips frequently occur, the discussion is largely heuristic rather than formal.
- For the RLHF tasks, all evaluation rests on an imperfect reward model and GPT-4-based comparison (no direct human eval). Though typical in modern LLM research, more thorough human-based or multi-metric evaluations could be enlightening.

Other Comments Or Suggestions: -

Questions For Authors: -

Code Of Conduct: Affirmed.

Overall Recommendation: 3
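As background for the symmetric construction discussed in this review thread: symmetric cross-entropy for classification under label noise (Wang et al. 2019), which the paper adapts to RL, adds a reverse cross-entropy term in which the label's log(0) entries are clamped to a constant A. A minimal single-example sketch follows; the weights alpha, beta and the clamp value A are illustrative, not the paper's settings:

```python
import numpy as np

def symmetric_ce(probs, onehot, alpha=0.1, beta=1.0, A=-4.0):
    """Symmetric cross-entropy (Wang et al. 2019):
    alpha * CE(label, pred) + beta * reverse CE(pred, label),
    where log(0) in the reverse term is clamped to A."""
    eps = 1e-12
    ce = -np.sum(onehot * np.log(probs + eps))
    log_label = np.where(onehot > 0, 0.0, A)  # log(1)=0, log(0)->A
    rce = -np.sum(probs * log_label)
    return alpha * ce + beta * rce

probs = np.array([0.7, 0.2, 0.1])
label = np.array([1.0, 0.0, 0.0])
print(round(float(symmetric_ce(probs, label)), 3))  # 1.236
```

The reverse term is bounded (here at most -A per example), which is what gives the combined loss its robustness to mislabeled, or in the RL analogy sign-flipped, training signals.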
Rebuttal 1:

Rebuttal: Thank you for your thorough review and constructive feedback on our paper. To address your questions and concerns, we have provided detailed responses below.

**Question 1**

> PPO practitioners have found much larger batch sizes of 100k samples to work much better than smaller batch sizes

=> Thank you for pointing out this work that uses very large batch sizes. However, the effectiveness of batch size remains somewhat controversial, likely depending on the environment, as other studies [1, 2] argue that smaller batch sizes can be more beneficial. The success of the work you mentioned may be due to its specific setting, where over 10,000 agents can generate a massive number of data points. In general, we believe that, given typical computational and cost constraints, updating an LLM with a batch size of 100k is often not feasible in standard setups.

**Question 2**

> Is the improvement from simply ignoring reward-model mistakes or from genuinely mitigating advantage confusion?

=> Rather than focusing on reducing the error rate of a trained reward model (which is certainly one way to improve performance), our method directly targets the mitigation of advantage confusion. Even when there is no reward model error, advantage confusion can still arise due to techniques like advantage normalization (Figure 2). We observe that SPPO still shows improvements in the noise-free setting (Section 5.4).

**Question 3**

> Comparing the effect of bigger vs. smaller batch sizes on advantage confusion

=> Thank you for suggesting this interesting experiment. If the paper is accepted, we will include results evaluating our method across a range of batch sizes. However, we believe the effect is quite task-dependent. Based on our current understanding, Atari games tend to perform better with smaller batch sizes [1, 2], while MuJoCo tasks may benefit from larger batch sizes [3].
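To illustrate the point made in Question 2, that advantage confusion can arise from per-mini-batch normalization alone with no reward-model error, here is a small NumPy sketch on synthetic advantages; the distribution parameters are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic raw advantage estimates for one mini-batch, skewed positive.
adv = rng.normal(loc=0.5, scale=1.0, size=64)

# Standard per-mini-batch advantage normalization, as commonly used in PPO.
norm_adv = (adv - adv.mean()) / (adv.std() + 1e-8)

# Fraction of samples whose advantage sign flips after normalization,
# i.e., whose policy-gradient update direction is reversed.
flips = np.mean(np.sign(adv) != np.sign(norm_adv))
print(f"sign-flip ratio: {flips:.2f}")
```

Whenever the mini-batch mean is nonzero, every sample whose raw advantage lies between zero and that mean changes sign after subtracting it, so some fraction of updates is pushed in the opposite direction.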
**Weakness 1** > The method adds extra hyperparameters $(\alpha, \beta, Z)$, which can be a drawback for ease of adoption => While the symmetric RL loss introduces three hyperparameters, $\beta$ and $Z$ can be treated as a single term. To demonstrate ease of adoption, we conducted a hyperparameter sensitivity analysis for $\beta$ while keeping $\alpha$ and $Z$ fixed—particularly in the PPO setting (see Table 11 in the Appendix). We observed consistent performance improvements across a wide range of $\beta$ values, suggesting that the method is easy to apply in practice. **Weakness 2** > While the paper shows that advantage sign flips frequently occur, the discussion is largely heuristic rather than formal. => We provide empirical evidence of sign flip ratios in Table 2, evaluated across multiple random seeds for a diverse set of environments (5 for Atari games and 30 for the MuJoCo benchmark). We believe that advantage sign changes are obviously expected after advantage normalization, but we will include results from additional environments to further support this observation. **Weakness 3** > For the RLHF tasks, all evaluation rests on an imperfect reward model and GPT-4-based comparison (no direct human eval). Though typical in modern LLM research, more thorough human-based or multi-metric evaluations could be enlightening. => We used GPT-4 as the evaluator and measured win rates based on two comparison examples, following the AlpacaEval [4]. Our experiments involve relatively smaller models and simpler tasks, where GPT-4’s evaluations have been shown to align closely with human judgments. **Thank you for suggesting REDQ, DroQ and CrossQ. 
We will make sure to cite them in our paper.** [1] The Phenomenon of Policy Churn, NeurIPS 2022\ [2] Small Batch Deep Reinforcement Learning, NeurIPS 2024\ [3] Sample Efficient Deep Reinforcement Learning via Uncertainty Estimation, ICLR 2022\ [4] Length-Controlled AlpacaEval: A Simple Way to Debias Automatic Evaluators, COLM 2024
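The advantage-confusion effect raised in Question 2 above can be made concrete with a short numeric sketch (illustrative only, not the authors' code): per-batch advantage normalization can flip the sign of the very same raw advantage depending on which other samples happen to share the batch.

```python
# Illustrative only (not the authors' code): per-batch advantage
# normalization can flip the sign of the same raw advantage depending
# on which other samples happen to share the batch.

def normalize(advs, eps=1e-8):
    mean = sum(advs) / len(advs)
    std = (sum((a - mean) ** 2 for a in advs) / len(advs)) ** 0.5
    return [(a - mean) / (std + eps) for a in advs]

raw = 0.5  # raw advantage of one fixed sample (s, a)

# same sample, two different batch compositions
batch_low = [raw, -2.0, -1.5, -1.0]   # batch dominated by low advantages
batch_high = [raw, 2.0, 1.5, 3.0]     # batch dominated by high advantages

print(normalize(batch_low)[0] > 0, normalize(batch_high)[0] > 0)  # → True False
```

After normalization, the sample's advantage is positive in one batch and negative in the other, which is exactly the sign-flip "noise" the symmetric RL loss is argued to mitigate.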
Summary: The manuscript introduces a symmetric reinforcement learning (RL) loss function designed to improve the robustness of RL algorithms like A2C and PPO. The proposed symmetric RL loss is inspired by reverse cross-entropy (RCE) used in noisy classification tasks, and the authors apply it to both discrete (Atari) and continuous (MuJoCo, Box2D) RL tasks. The experiments demonstrate that Symmetric A2C (SA2C) and Symmetric PPO (SPPO) outperform traditional A2C and PPO, particularly in environments with added reward noise. Claims And Evidence: Strengths 1. Strong Experimental Setup: The manuscript provides comprehensive experiments across diverse environments (Atari, MuJoCo, Box2D) and real-world applications (IMDB, TL;DR summarization), demonstrating the generalizability of the approach. Weaknesses and Areas for Improvement 1. Insufficient Comparison with Other Robustness Methods: While the paper focuses on the symmetric RL loss, it lacks detailed comparisons with other established methods for robust RL, such as Double Q-learning or SAC, which could provide a clearer context for its benefits. 2. Limited Theoretical Justification: The paper would benefit from a more detailed theoretical explanation of why the symmetric RL loss is particularly effective in addressing the noise in reward prediction, especially when compared to existing techniques like Generalized Advantage Estimation (GAE). 3. Hyperparameter Sensitivity: The sensitivity of the performance to hyperparameters such as α and β is not sufficiently explored. More extensive analysis is needed to understand how these hyperparameters affect training and performance stability. 4. Scalability and Computational Cost: The manuscript does not provide enough information regarding the computational complexity and scalability of the proposed method, especially in large-scale environments or real-time applications. 
Methods And Evaluation Criteria: Clear Results: The experiments clearly show the effectiveness of SPPO over PPO, especially in noisy reward environments. Theoretical Claims: 3. Methodology 3.1 Symmetric RL Loss • Section 4, Paragraph 2: The explanation of the reverse RL loss and its application to A2C and PPO is somewhat vague. The connection between the noisy advantage estimates in RL and the reverse cross-entropy loss is not fully established. ○ Suggestion: Provide a more detailed explanation of how the reverse RL loss functions in practice, particularly focusing on how it corrects the advantage sign confusion caused by noisy rewards. 3.2 Symmetric A2C and PPO • Section 4, Equation 9: The use of constants α and β is introduced, but the rationale for their selection is not clear. ○ Suggestion: Discuss how the values of α and β are chosen. Are they determined through cross-validation, or are they fixed based on prior empirical knowledge? A sensitivity analysis of these hyperparameters would be valuable. Experimental Designs Or Analyses: 4. Experiments 4.1 Atari Games • Section 5.1: The results on Atari games show SPPO performing well in noisy settings, but the authors do not discuss the implications of these results for the real-world application of RL algorithms. ○ Suggestion: Provide insights into how the improvements observed in Atari games can be generalized to real-world tasks, particularly those that involve more complex reward models or environments. 4.2 MuJoCo and Box2D Tasks • Section 5.2: While the performance of SPPO is evaluated on continuous action tasks, the impact of reward noise is not thoroughly discussed. ○ Suggestion: Include a more detailed analysis of how the added noise specifically affects the training process and how the symmetric RL loss mitigates these effects in continuous action spaces. Supplementary Material: Yes. Appendix C, experimental setups and results. Relation To Broader Scientific Literature: 5. 
Conclusion • Section 6, Paragraph 2: The conclusion briefly mentions future work but does not propose specific directions for improving or extending the symmetric RL loss method. ○ Suggestion: Discuss potential future improvements, such as exploring the scalability of the method in large-scale environments, or applying the approach to other RL algorithms or tasks (e.g., multi-agent systems). Essential References Not Discussed: Scalability and Computational Cost • Provide a discussion on the computational cost of the symmetric RL loss, especially in large-scale or real-time settings. • Analyze the scalability of the approach across different model architectures. Other Strengths And Weaknesses: Please refer to Claims and Evidence. Other Comments Or Suggestions: 1. Abstract • Suggestion: The abstract outlines the problem and proposed solution but lacks quantitative details on the performance improvements achieved by SPPO over PPO, especially under noisy conditions. ○ Recommendation: Include key findings, such as the percentage improvement in reward scores or stability in noisy environments. 2. Introduction • Section 1, Paragraph 4: The authors mention that "RL methods introduce challenges such as moving targets and high gradient variance," but they do not sufficiently explain how these issues are specifically mitigated by the symmetric RL loss. ○ Suggestion: Provide a clearer link between the inherent challenges of RL and how the proposed symmetric RL loss addresses these challenges. Specifically, discuss how reverse cross-entropy aligns with the noise in reward models. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review and valuable feedback on our paper. We hope the response below addresses your concerns. **Weakness 1** > Insufficient Comparison with Other Robustness Methods: While the paper focuses on the symmetric RL loss, it lacks detailed comparisons with other established methods for robust RL, such as Double Q-learning or SAC, which could provide a clearer context for its benefits. => Rather than expanding to additional types of reinforcement learning algorithms, we focused on those whose working mechanisms align with the symmetric loss formulation in noisy classification settings. Instead, we aimed to demonstrate the effectiveness of the symmetric RL loss across a diverse range of problem domains—including discrete action spaces, continuous action spaces, and NLP tasks—highlighting its applicability even to large language models. **Weakness 2** > Limited Theoretical Justification: The paper would benefit from a more detailed theoretical explanation of why the symmetric RL loss is particularly effective in addressing the noise in reward prediction, especially when compared to existing techniques like Generalized Advantage Estimation (GAE). => In all our experiments, we used Generalized Advantage Estimation (GAE), and we will explicitly clarify this in the experimental section. Since GAE depends on the reward, noise in the reward can lead to confusion in advantage estimation. Our symmetric RL loss acts as an effective accelerator to mitigate this issue. We provide a detailed analysis of how this acceleration benefits the learning process in Section 4.3 and Appendix A.1 to B.2. **Weakness 3** > Hyperparameter Sensitivity: The sensitivity of the performance to hyperparameters such as $\alpha$ and $\beta$ is not sufficiently explored. 
=> We provide additional hyperparameter sensitivity analysis in Table 11 of the Appendix for SPPO, both with and without reward noise, showing consistent improvements across a range of values. Note that $Z$ and $\beta$ can be treated as a single hyperparameter (see the first paragraph of Section 4.1 for details). **Weakness 4** > Scalability and Computational Cost: The manuscript does not provide enough information regarding the computational complexity and scalability of the proposed method, especially in large-scale environments or real-time applications. => The symmetric RL loss is scalable, even for large language models. When selecting the next action (token) via softmax, the model also produces probabilities for non-selected tokens. Computing gradients with respect to these non-selected probabilities introduces some additional overhead, but this cost is roughly equivalent to adding a single MLP layer (we will include this explanation in the paper). Given the overall size and structure of LLMs, this additional cost does not significantly affect training speed. In fact, SPPO showed faster training than PPO for some seeds—i.e., within the overhead of adding one MLP, GPU memory allocation conditions had a greater effect. We briefly address training speed in the last paragraph of Section 5.5.
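For readers following the loss under discussion: below is a minimal sketch of the classification-side symmetric cross entropy that the symmetric RL loss adapts (reference [1] in the rebuttals, Wang et al., ICCV 2019). It is illustrative only and is not the paper's Equations 8–9; the function name and default values are assumptions for the sketch.

```python
import math

def sce_loss(q, label, alpha=0.1, beta=1.0, Z=-4.0):
    """Symmetric cross entropy (Wang et al., ICCV 2019):
    alpha * CE(p, q) + beta * RCE(q, p), where p is the one-hot target
    and log 0 in the reverse term is replaced by the constant Z.
    Illustrative sketch only, not the paper's RL formulation."""
    ce = -math.log(q[label])
    # reverse CE swaps the roles of prediction q and target p:
    # log p_k is 0 for the true class and Z (standing in for log 0) otherwise,
    # so RCE = -Z * (1 - q[label]) >= 0 for negative Z
    rce = -sum(q[k] * (0.0 if k == label else Z) for k in range(len(q)))
    return alpha * ce + beta * rce

# The RCE gradient w.r.t. the true-class softmax logit has magnitude
# |Z| * q[label] * (1 - q[label]): a parabola peaking at q[label] = 0.5,
# matching the "accelerator" behavior described in the rebuttals.
print(round(sce_loss([0.7, 0.2, 0.1], 0), 4))  # → 1.2357
```

Note how the reverse term vanishes once the prediction is confident and correct, and grows as the true-class probability drops, which is what pushes the policy away from the ambiguous 50% region.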
Summary: Reinforcement learning (RL) training is inherently unstable due to factors such as moving targets and high gradient variance. Reinforcement Learning from Human Feedback (RLHF) and Reinforcement Learning from AI Feedback (RLAIF) introduce additional challenges. For instance, diverse preferences complicate the alignment process, and prediction errors in a trained reward model can become more severe as the LLM generates unseen outputs. These RL challenges create confusion about whether the probability of an action for a given state should be increased or decreased, similar to the noise in labels for classification tasks. In this work, the authors focus on RL algorithms that share learning difficulties with cross-entropy loss, especially for low-probability predictions. To enhance stability, they adapt reverse cross-entropy (RCE) from supervised learning for noisy data, defining a symmetric RL loss. They demonstrate performance improvements across various tasks and scales. # Update after rebuttal All my concerns have been addressed thanks to the authors' efforts in rebuttal. In all, I decided to raise my score to weak accept. Claims And Evidence: SPPO applies a symmetric loss by combining the traditional RL loss with a reverse RA2C or RPPO loss. The result in Table 1 somewhat validates the claim that SPPO avoids value estimation errors arising because, under advantage normalization, the sign of the advantage depends on how the batch is composed. Methods And Evaluation Criteria: I don't understand why adding the reverse advantage loss could help relieve the problem of estimation error resulting from noisy human feedback and multiple sources of scaled models. Theoretical Claims: The authors use gradient analysis to formulate the policy update and show the convergence guarantee in the appendix. Experimental Designs Or Analyses: Experimental design is valid and analysis is comprehensive. Supplementary Material: I didn't read the Supplementary Material. 
Relation To Broader Scientific Literature: It has a large impact on the domain of RLHF with noisy reward labels and non-stationary feedback signals. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The formulation is clear, and the idea combines symmetric cross entropy with the A2C/PPO framework. - It provides a theoretical gradient analysis of the RL loss and the reverse RL loss. Weaknesses: - The motivation is unclear: why use a symmetric CE loss rather than directly training on large amounts of data from diverse tasks? - Baselines are too few for comparison in Table 2. Other Comments Or Suggestions: Add more recent RL baseline comparisons in Tables 2 and 3. Questions For Authors: - What is the role of Z in Equation 7? - Why does SPPO still perform well in the noise-free setting? - What are the benefits of advantage normalization with small batch sizes? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review and valuable feedback on our paper. We address your questions and provide additional details below. **Please let us know if our responses resolve your questions and concerns. If so, we would greatly appreciate your consideration in updating your score. We’re also happy to continue the discussion if you have further questions. Our answers are as follows:** **Question 1** > I don't understand why adding the reverse advantage loss could help relieve the problem of estimation error resulting from noisy human feedback and multiple sources of scaled models. => In the Gradient Analysis section, we describe how the reverse RL loss helps the learning procedure in the presence of noise, acting as an accelerator. We believe you would agree that when noise is present, the policy lacks certainty about which action to take. Since the reverse RL loss gradient has its maximum magnitude at 50\% (a parabolic function) and its direction aligns with the original RL loss, it helps the policy move away from this ambiguous state. Please refer to further details in Section 4.2 and Sections A.1 to A.4 in the Appendix. We provide a comprehensive derivation and analysis for all cases. **Question 2** > Why bother to use symmetric CE loss other than directly train on large amounts of data sets of diverse tasks? => Increasing the data amount is one potential solution, but Reinforcement Learning (RL), especially with large models, carries burdens such as generating actions and obtaining the corresponding rewards. For the rewards, we would need a highly engineered reward function, human, or AI evaluators (which can also have noise). Therefore, achieving better performance using the same amount of data is obviously advantageous. The symmetric RL loss (or symmetric CE loss) facilitates this and is easy to implement. 
Since action (or token) sampling is done through the softmax, the other probabilities can be obtained naturally for symmetric losses. **Question 3** > What is the role of $Z$ in equation 7? => $Z$ was introduced in previous work [1] on noisy classification datasets to address the issue of $\log 0 = -\infty$, which cannot be used when updating a neural network. To resolve this numerical problem, $-\infty$ is replaced with a negative value $Z$, which has been shown to still satisfy the conditions of a robust loss function. In our case, we also use $Z$ to handle negative advantages (see Section 4.1 for more details). **Question 4** > Why does SPPO still perform well in the noise-free setting? => Compared to A2C, PPO has off-policy update parts and typically uses smaller batch sizes (e.g., 64), whereas A2C processes all data points at once (i.e., no batch). Additionally, PPO applies advantage normalization by default, which can introduce sign changes in the advantages. While PPO is more sample-efficient than A2C, these characteristics introduce sources of noise—even in noise-free settings. Our symmetric RL loss helps mitigate these noisy factors, which can lead to confusion in advantage estimation, thereby improving performance even in clean environments (see Section 5.4 for details). **Question 5** > What are the benefits of advantage normalization with small batch sizes? => While some studies [2, 3] suggest that smaller batch sizes can be more beneficial than larger ones, our main point is not directly about batch size. In practice, we commonly use batch sizes such as 32, 64, or 512 for PPO, whereas A2C processes the entire dataset without batching. However, after applying advantage normalization, the signs of the advantages can flip depending on how the batch is sampled. For example, a given sample $(s,a)$ might end up with a positive or negative advantage solely based on the composition of the batch. This introduces confusion, which can be interpreted as a form of noise. 
In such cases, the symmetric loss can help mitigate the impact of that noise. That said, this doesn’t mean advantage normalization is harmful. Although it can cause sign changes in advantages, it also helps limit the influence of extremely large advantage values, thereby stabilizing the policy update process (please refer to Section 5.4). In summary, the symmetric RL loss retains the benefits of advantage normalization while further alleviating the instability caused by sign changes. **Weakness 1** > Baselines are too few for comparison in Table 2. => We further provide results for PPO in a noise-free setting in Table 12, including an additional environment. Additionally, we report A2C results in Tables 8 and 9 to support further comparison. [1] Symmetric Cross Entropy for Robust Learning with Noisy Labels, ICCV 2019\ [2] The Phenomenon of Policy Churn, NeurIPS 2022\ [3] Small Batch Deep Reinforcement Learning, NeurIPS 2024 --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed explanation and for providing additional comparison results. All my concerns have been resolved, and I have decided to raise my score accordingly.
Summary: The paper proposes a new family of losses, symmetric A2C and symmetric PPO, for RL tasks. Claims And Evidence: Claims: The paper asserts that its policy gradient formulation with the newly proposed loss achieves superior or at least competitive performance in Atari benchmarks, and that using GPT-J with RLHF on TLDR/IMDB datasets demonstrates improved alignment. Evidence: For Atari, the evidence primarily comes from charts showing improved scores vs. baseline agents, which seem reasonably convincing if one accepts the standardness of Atari benchmarks. For RLHF, the evidence includes limited experimental data on GPT-J with relatively small and somewhat outdated datasets, making it less clear if the approach scales to more advanced language models and richer feedback datasets. Methods And Evaluation Criteria: Methods: The main methodological innovation is the introduction of new loss functions (Equations 8 and 9) that factor in reward signals and the policy gradient. However, the approach enumerates all possible actions in the loss, which can be computationally feasible for small/medium discrete action spaces (e.g., Atari), but becomes very large and inefficient for language modeling. Evaluation Criteria: Benchmarking on Atari is fairly standard; the paper also attempts to assess alignment improvements on TLDR/IMDB datasets. However, these datasets and the GPT-J baseline may not fully reflect current state-of-the-art for language model RLHF, limiting the generalizability of the reported results. More recent studies on LLM alignment typically use newer models (e.g., Llama 2 / 3, Qwen 2.5 or beyond) and richer feedback datasets (e.g., Ultrafeedback, Chatbot Arena open data, Nectar etc.). Incorporating these newer data sources and more modern baselines would better situate the method in the current literature and could potentially unlock more significant empirical gains. 
Theoretical Claims: The derivation looks good to me Experimental Designs Or Analyses: The experiments on Atari appear consistent, using standard protocols (training steps, seeds, reported scores). For the RLHF experiments, the design is somewhat limited: the use of GPT-J with TLDR/IMDB is not clearly motivated given the more up-to-date large language models and more comprehensive feedback datasets available. Supplementary Material: Yes Relation To Broader Scientific Literature: The paper builds on standard RL methods and extends them with an additional symmetric term. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Please see above Other Comments Or Suggestions: Please see above Questions For Authors: Please see above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review and valuable feedback on our paper. We hope the response below addresses your concerns. **Weakness 1** > The approach enumerates all possible actions in the loss, which can be computationally feasible for small/medium discrete action spaces (e.g., Atari), but becomes very large and inefficient for language modeling. => In large language models, the next token (action) is sampled via softmax, which already provides probabilities for all other (non-chosen) tokens—so there's no need for additional computation to obtain them. Incorporating the probabilities of other tokens during backpropagation introduces some additional computation, but it is roughly equivalent to adding a single MLP layer. Given the overall size and structure of LLMs, this does not significantly impact training speed. In fact, SPPO (the symmetric RL for PPO) showed faster training than PPO for some seeds—i.e., within the overhead of adding one MLP, GPU memory allocation conditions had a greater effect. We briefly address training speed in the last paragraph of Section 5.5. **Weakness 2** > More recent studies on LLM alignment typically use newer models (e.g., Llama 2 / 3, Qwen 2.5 or beyond) and richer feedback datasets (e.g., Ultrafeedback, Chatbot Arena open data, Nectar etc.). => These methods were motivated from the perspective of noisy rewards in standard deep RL tasks. We show that they perform well across a range of RL tasks and limited experiments in RLHF. It is difficult to cover every setting---we think that the promising nature of this approach is established by our experiments. We promise that if the paper is accepted we will add Qwen2.5 on the TLDR task to the results table.
Lexico: Extreme KV Cache Compression via Sparse Coding over Universal Dictionaries
Accept (poster)
Summary: The paper introduces Lexico, a KV cache compression method using sparse coding over universal dictionaries. By leveraging Orthogonal Matching Pursuit for sparse approximation, Lexico provides flexible compression while maintaining high performance. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: no Relation To Broader Scientific Literature: yes Essential References Not Discussed: nothing specific Other Strengths And Weaknesses: Strengths: 1. The sparse dictionary learning for approximating KV cache compression appears new. 2. Extensive experiments are conducted. Lexico is tested on various LLMs, including Mistral-7B, Llama-3-8B, Llama-3.1-8B-Instruct, Llama-3.2-1B/3B-Instruct, and Qwen2.5-14B-Instruct. Lexico is benchmarked against prior KV cache compression techniques including Per-Token Q, ZipCache, PyramidKV, KIVI, and SnapKV and performs consistently better Weakness: 1. The paper argues that key vectors lie in several low-rank subspaces, as demonstrated in Figure 2. However, it does not explicitly verify whether this property also holds for value vectors. It remains unclear whether the proposed dictionary-based sparse representation is equally effective for both keys and values. 2. A figure similar to Figure 2 for value vectors would strengthen the hypothesis by showing whether value vectors exhibit the same behavior. The paper does not benchmark dictionary training time across different sparsity levels. Other Comments Or Suggestions: nothing in particular Questions For Authors: Questions: 1. Would task-specific dictionaries improve performance? In Figure 6 for MMLU Pro Law, Lexico does not outperform competing methods. Is this due to the input distribution being significantly different from WikiText-103 where the dictionary was trained? 2. In Table 10, the computation only considers the key but does not include the value. 
Why is the computational cost for value vectors omitted in this analysis? 3. Please see Weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for recognizing the novelty of applying sparse dictionary learning for KV cache compression and our extensive experimental evaluation—a strength also highlighted by reviewers Vzmg and EtKf. We address each concern below. ***Q1: Task-specific dictionaries*** Yes, using task-specific dictionaries may improve task accuracies, but would incur the additional training cost of custom fine-tuning. While we find this a promising direction to improve performance, our motivation to employ a universal dictionary was to allow it to be used off-the-shelf for various tasks without the need for additional training. That said, we can see that one may want to explore task-specific dictionaries to further enhance the performance of Lexico in specific domains. As MMLU-Pro Law does not have a training split, we tested task-specific dictionaries for GSM8K. Specifically, we present empirical results with the following dictionaries: 1. **Universal dictionary:** Dictionary trained on WikiText-103. 2. **Dictionary pre-trained on GSM8K:** Dictionary trained from scratch on the GSM8K training set. 3. **Universal dictionary fine-tuned on GSM8K:** Dictionary trained on WikiText-103 and then fine-tuned on GSM8K. | Sparsity | KV size | Universal dictionary | Dictionary pre-trained on GSM8K | Universal dictionary finetuned on GSM8K | | -------- | -------- | -------- | -- | -- | | $s = 24$ | 36.9% | 76.88 | 77.63 | **79.68** | | $s = 14$ | 26.1% | 75.06 | 70.51 | **75.89** | | $s = 10$ | 22.7% | **69.67** | 59.59 | 67.25 | From the experimental results, we observe that training a dictionary from scratch solely on GSM8K significantly underperforms compared to the universal dictionary, especially at low sparsities. This indicates overfitting to the small GSM8K training set, failing to generalize even to its test set. 
In contrast, fine-tuning a universal dictionary on GSM8K sometimes provides marginal improvements but adds the additional overhead of task-specific fine-tuning. Therefore, we may expect a slight increase in Lexico's performance on MMLU-Pro given a dataset with similar text distributions. ***Q2: Table 10 computation of keys and values*** Apologies for the confusion. To clarify, the reported latency for the Lexico forward pass in Table 10 represents the entire process of generating one token. The row stating "Lexico: forward pass using $q (K_\text{csr}D_k^\top ) ^\top$" should be corrected to "Lexico: forward pass using $q (K_\text{csr}D_k^\top ) ^\top$ and $V_\text{csr}D_v^\top$", as the overall latency measurement indeed includes the additional compute for value vector reconstruction. We will revise the manuscript to explicitly note that the Lexico forward pass timing encompasses both the key and value reconstruction computations. ***W1: Do value vectors also lie in low-rank subspaces?*** Yes, our findings show that value vectors also lie in a low-rank subspace. We will include a figure analogous to Figure 2 for value vectors. Below, we computed the average relative reconstruction errors over 1k tokens from the WikiText test split using a universal dictionary of size $N=4096$. | sparsity | $s=4$ | $s=8$ | $s=12$ | $s=16$ | | -------- | -------- | -------- | -------- | -------- | | Keys | 0.158 | 0.091 | 0.058 | 0.038 | | Values | 0.410 | 0.264 | 0.171 | 0.110 | Although the sparse approximation for values is less effective than for keys, we believe that our empirical results on real-world benchmarks (LongBench, GSM8K) clearly demonstrate the effectiveness of our dictionary-based sparse representation. ***W2: Dictionary training time across different sparsity levels*** Training dictionaries is cheap because they only need to be trained once and can be provided by a third party. 
As we report in L203, a sparsity of s=32 and a dictionary size of N=1024 requires only 2 hours on an A100 GPU. We provide the training runtime for different sparsities and dictionary sizes for Llama-3.1-8B-Instruct below and will incorporate these results in a revised edition. | - | $s=4$ | $s=8$ | $s=16$ | $s=32$ | | -------- | -------- | -------- | -------- | -------- | | $N=1024$ | 37m | 52m | 1h 10m | 1h 59m | | $N=4096$ | 1h 18m | 1h 40m | 2h 40m | 5h 22m |
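The compressed-domain multiplication referenced in the Table 10 discussion (and highlighted by the reviewer's L216 comment below) follows from simple algebra: if the keys are reconstructed as $K \approx S D$ for sparse codes $S$, then $q K^\top = (q D^\top) S^\top$, so the dense keys never need to be materialized. A minimal numpy sketch with toy dimensions (S is stored densely here purely for illustration; Lexico uses a CSR layout):

```python
import numpy as np

rng = np.random.default_rng(1)
m, N, T, s = 8, 32, 5, 4        # head dim, dictionary atoms, tokens, sparsity

D = rng.normal(size=(N, m))      # shared dictionary (trained offline in Lexico)
S = np.zeros((T, N))             # per-token sparse codes, s nonzeros each
for t in range(T):
    idx = rng.choice(N, size=s, replace=False)
    S[t, idx] = rng.normal(size=s)

q = rng.normal(size=m)

# naive route: reconstruct keys K = S @ D, then score q @ K.T
scores_naive = q @ (S @ D).T
# compressed-domain route: q @ D.T is computed once, then hits the sparse codes
scores_compressed = (q @ D.T) @ S.T
print(np.allclose(scores_naive, scores_compressed))  # → True
```

Because $q D^\top$ has a fixed size independent of sequence length, the per-token cost is dominated by the sparse product with $S^\top$, which scales with the sparsity $s$ rather than the head dimension.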
Summary: LLMs that process long contexts need to store token embeddings in a KV-cache: two matrices K and V. The KV cache can grow arbitrarily large, and thus should be compressed. Lexico compresses the KV-cache using sparse coding where K is decomposed as D * S where D is a dictionary matrix of fixed size and S a sparse matrix (and similarly for V). Lexico uses a fixed dictionary D trained offline and offers a heuristic algorithm to estimate S (computing the optimal S is NP-hard). ## update after rebuttal I read the other reviewer comments and the rebuttals. Although the authors did not do some requested experiments, I think the paper has enough merit to be published with the current experimental body. Claims And Evidence: see below Methods And Evaluation Criteria: The evaluation seems fine. Theoretical Claims: there are none Experimental Designs Or Analyses: Experiments are sufficient. The latency should go into the main paper IMO. Supplementary Material: Parts A and D Relation To Broader Scientific Literature: The related work section is very short and not properly related to Lexico. How does sparse coding compare to quantization methods like, e.g., Product Quantization? Essential References Not Discussed: None Other Strengths And Weaknesses: S1 the first application of sparse coding to KV-cache compression S2 properly optimized: computation in the compressed domain and very cheap sparse encoder S3 easy-to-read paper W1 The paper is sometimes organized in a weird way -- see detailed comments below W2 judging from algorithm 1 (in the appendix) the OMP sparse encoding is a greedy approximation of sparse coding. Since it is an approximation to an NP-hard problem, there should be a tradeoff between encoding time and accuracy. It would be interesting to see how the same algorithm would perform with, e.g., a larger or smaller beam size of encodings Other Comments Or Suggestions: There are some paragraphs that have been compressed to fit in 8 pages, which looks ugly. 
The results for different LLMs are largely redundant, so some could be moved to the appendix. The OMP algorithm and the latency numbers *are* important, so they should be in the main text. Detailed comments: L149 what does overcomplete mean? sec 2: There is one KV cache per head, while the exposition mentions one per layer all the time. It is unclear if the compression is performed for all heads of one layer (considering the keys and vals are of size d) or separately for each head (keys and vals of size m) Fig 2: unclear what's on the x and y axis of these plots. IIUC 49 tokens from the same input sequence? Again is this for the whole layer (size d) or for one head of layer 10 (size m) L198 the expectation maximization algorithm is described in the caption of fig 3 -- move to the main text Tab 1: it is unclear what the error is measured on (and what the standard deviation refers to) L216 right: the implementation uses a compressed domain multiplication, which is an interesting optimization that the authors could emphasize more L230 right: AFAICS, n_a has not been introduced (it's described in the supp material) Questions For Authors: The dictionary training algorithm (caption of fig 3) is an EM that alternates between encoding the training set and estimating the dictionary. Estimating the dictionary can be done in closed form (linear least squares to minimize the reconstruction error). Why do the authors perform gradient steps instead? Does it perform better? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We are glad that you appreciated the novelty of Lexico, our effort in optimizations, and the overall readability of the paper. We address the remaining weaknesses and questions below. ***W1: Paper organization*** We greatly appreciate the reviewer’s feedback regarding the organization of our paper. In our revision, we plan to: - Reorganize the Layout: We will reposition diagrams and tables so that they enhance readability, moving tables to the top of pages rather than interspersed between paragraphs. This can be done without difficulty in the case of acceptance given the additional ninth page. - Move Mistral results for readability: We will move the Mistral 7B results to the supplementary material, addressing concerns of redundancy while still providing experiments sufficient for our message. The original intent was to demonstrate that Lexico is not tailored to any particular model. - OMP Algorithm and Latency Measurements: We agree that these are important and will adjust the layout to fit them in the main text. ***W2: Tradeoff between encoding time and accuracy*** We compared OMP with alternative approaches—Iterative Hard Thresholding (IHT) and Lasso—on the WikiText test set using our universal dictionary (size N=4096). While IHT and Lasso are iterative methods that can achieve better accuracy with additional iterations (trading off computation time for improved performance), OMP’s greedy nature does not allow for such tuning. In our experiments, even after increasing the number of iterations (and thus compute time) for IHT and Lasso, they still produced higher reconstruction errors than OMP across various sparsity levels. Moreover, OMP was the fastest since its complexity scales with the sparsity s, which in our case is much smaller than N. 
|Sparsity|OMP|IHT|Lasso|
|-|-|-|-|
|s=4|0.28|0.81|0.57|
|s=8|0.11|0.57|0.43|
|s=16|0.05|0.35|0.28|
|s=32|0.01|0.28|0.16|

If the reviewer is aware of alternative approaches that may offer better tradeoffs, we would be eager to explore them.

***Q1: Estimating the dictionary via least squares***

Indeed, a closed-form least-squares solution exists at each update. However, this approach proved prohibitively slow for large training sets. The solution requires forming and inverting an $N \times N$ matrix, an operation that scales on the order of $\mathcal{O}\left ( N^3 \right )$. With $N = 4096$ in our experiments, this step quickly dominates overall runtime and overshadows any advantage of jumping straight to the global minimizer.

***Related work: Comparing sparse coding to product quantization***

Thank you for the suggestion. We will add the reference and comparison to our Related Work. Product quantization and sparse coding both use a form of "codebook" for vector compression. Sparse coding represents each vector as a sparse linear combination of learned dictionary atoms, while product quantization partitions vectors (e.g., the first k coordinates) and clusters each sub-vector, concatenating the nearest centroids. PQCache (https://arxiv.org/abs/2407.12820) learns its centroids from input tokens; our method trains the dictionary offline. Despite these differences, both exploit the key vectors' clustering structure for significant compression.

***L149: overcomplete definition***

It means that the dictionary has more atoms (number of vectors) than the dimensionality of the vectors. This allows multiple ways to represent the same signal, making it easier to find a good sparse representation.

***Clarification on head-level compression***

The compression is performed on key vectors of size $m$ (not $d$). The "one per layer" phrase in the paper refers to the dictionary rather than the KV cache.
For each token, we have $L \times H$ key vectors of size $m$, while we have $L$ key dictionaries of shape $m \times N$. Thus, all $H$ heads within a layer share the same dictionary. An analogous setup applies to the value cache and dictionary. ***Figure 2 axes*** In Figure 2, each axis represents 7 tokens across all 8 heads of layer 10, yielding 7×8=56 elements (with some elements cropped for clarity). The plotted values are from key vectors for each head (with dimensionality m). In the left panel, both axes correspond to the first 7 tokens from the same sentence, whereas in the right panel each axis represents the first 7 tokens from two different sentences. The elements are sorted by similarity to highlight clustering. ***Table 1 error*** We measure the relative reconstruction error (RRE) for each key or value vector as $\text{RRE}=\frac{\|\mathbf{k}-\hat{\mathbf{k}}\|_2}{\|\mathbf{k}\|_2}$, where $\mathbf{k}$ is the original vector and $\hat{\mathbf{k}}$ its reconstruction. In Table 1, each reported value is the average RRE across 8k key/value vectors—corresponding to 1k tokens, each producing 8 head vectors—obtained from forward passes on the specified datasets. The standard deviation is computed over those 8k samples. --- Rebuttal Comment 1.1: Comment: W2: About the tradeoff experiment: in Algorithm 1, the encoding is greedy, ie. at each step i, the dictionary entry that maximizes the dot product with the current residual is selected. A straightforward extension is to keep not the single max but a set of top-B elements (the beam) and compute the encoding using these B elements. After s iterations, only the best out of B encodings is kept. This is guaranteed to give a better encoding since the top-1 is included in the top-B. --- Reply to Comment 1.1.1: Comment: Many thanks for clarifying your suggestion on extending OMP. We agree that selecting the top-B atoms at each iteration is guaranteed to provide a better (or at least no worse) sparse representation. 
Our primary motivation for using standard OMP in this work lies in its simplicity and efficiency for memory-constrained settings. While a beam search would likely improve reconstruction performance, it comes with additional computation and memory overhead proportional to the beam size. Moreover, in many cases, greedy OMP has well-established theoretical guarantees and performs near-optimally, especially when the dictionary is highly overcomplete, as in our case (Donoho et al., 2006, https://doi.org/10.1109/TIT.2005.860430; Elad, 2010, *Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing*, Theorem 4.3). Nonetheless, we appreciate your suggestion and are interested in exploring how beam search extensions might further enhance OMP’s reconstruction accuracy. We will include a discussion of this direction in the revised manuscript.
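The greedy selection discussed in this thread can be sketched as follows (our own simplified reconstruction, not the authors' Algorithm 1 verbatim): at each step, the atom most correlated with the current residual is added to the support, and the coefficients on the support are re-fit jointly by least squares.

```python
import numpy as np

def omp(D, x, s):
    """Greedy Orthogonal Matching Pursuit: approximate x with at most
    s columns (atoms) of the dictionary D (shape m x N)."""
    residual = x.astype(float).copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(s):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit all selected coefficients jointly (least squares)
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    code = np.zeros(D.shape[1])
    code[support] = coeffs
    return code

# toy orthonormal dictionary: recovery of an exactly sparse vector is exact
D = np.eye(8)
x = np.zeros(8)
x[2], x[5] = 3.0, -1.5
code = omp(D, x, s=2)
print(np.allclose(D @ code, x))  # True
```

A beam-search variant as suggested in the comment would keep the top-B candidate atoms per step instead of a single argmax, at B times the bookkeeping cost.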
Summary: This paper proposes a novel KV cache compression method based on sparse coding over a learned "universal" dictionary. The key idea is to represent KV cache vectors with a small number of dictionary "atoms," reducing memory costs while preserving model performance across different tasks and domains. The paper reports encouraging results on select language tasks (e.g., GSM8K, LongBench), demonstrating near-lossless performance at significant compression ratios. ## update after rebuttal The authors have addressed most of my questions. I maintain my opinion that this paper leans toward acceptance. The additional results and clarifications should be integrated into the manuscript. Claims And Evidence: The authors make the following claims: - Near-lossless performance: Supported by performance experiments (but only with the safest settings) - Compression rates beyond 2 bits: Partially supported by theoretical KV cache size analysis (no hardware peak-memory report) - Universality: Supported by experimental results across different datasets Methods And Evaluation Criteria: Overall, these make sense. Comments on evaluation datasets: - Use LongBench V2 - Consider long-output tasks such as reasoning Theoretical Claims: N/A Experimental Designs Or Analyses: The experiment design is valid. Peak memory and throughput/latency need to be reported to show efficiency. Supplementary Material: Yes, all. Relation To Broader Scientific Literature: This work is about KV cache compression. It is related to literature on quantization, token eviction, etc. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I summarize all strengths and weaknesses here. Strengths - This method is intuitive and flexible, allowing fine-grained trade-offs by adjusting sparsity. - The performance is good; this method often achieves near-lossless performance on standard benchmarks.
- The dictionary-based approach is an interesting alternative to quantization or token eviction, and the universality claim is an advantage. Weaknesses - The paper lacks real hardware measurements (latency, throughput) to show efficiency benefits. - Each model needs its own dictionary, potentially adding overhead in multi-model or highly specialized scenarios. - The authors do not benchmark on more advanced long-form tasks like MATH or AIME, where extended CoT generation can place more burden on memory. Other Comments Or Suggestions: N/A Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive feedback. We are pleased that you appreciated Lexico’s flexible design, near-lossless performance on benchmarks, and the advantage of using a dictionary for KV cache compression. It is also encouraging that reviewers Vzmg and EhUD similarly highlighted our method's strong performance, particularly regarding its improved memory vs. performance trade-offs compared to other compression methods. We address the weaknesses and questions below. ***Hardware measurements*** Section E of the paper already presents a latency analysis and, hence, claiming we "lack real hardware measurements" is misleading. In light of the reviewer's emphasis on this point, we will move that section into the main body in the revised manuscript. We have also extended the same setup described in Section E to include varying sparsity levels for a more fine-grained analysis; these results will be incorporated into the revised edition. |||N=1024||||N=4096|||| |-|-|-|-|-|-|-|-|-|-| ||Full KV|s=4|s=8|s=16|s=24|s=4|s=8|s=16|s=24| |**Forward pass**|48.39 ms|53.90 ms|54.67 ms|55.46 ms|55.56 ms|57.24 ms|56.64 ms|57.13 ms|56.35 ms| |**Sparse approximation via OMP**|-|6.03 ms|10.16 ms|18.31 ms|26.57 ms|9.37 ms|15.55 ms|28.37 ms|40.58 ms| Although Lexico may incur higher latency at larger sparsities, it is designed to address memory-constrained scenarios where a modest latency trade-off is acceptable, and hardware or library optimizations (e.g., support for FP8 multiplications) can further narrow this gap while preserving substantial memory savings. ***Lexico for CoT/reasoning tasks*** We are glad the reviewer noted the potential for Lexico in long-form reasoning tasks. To investigate this, we tested a reasoning model, DeepSeek-R1-Distill-Llama-8B, on the AIME 2024 exam. 
We evaluated the performance trade-off when using Lexico to compress the KV cache memory of long reasoning traces: |Compression method|KV Size|AIME 2024 (pass@1)| |-|-|-| |Full KV Cache|100%|30.0| |KIVI (4 bits)|31.8%|30.0| |Lexico ($s=24$)|28.6%|30.0| |KIVI (2 bits)|16.3%|23.3| |Lexico ($s=14$)|17.0%|26.7| At a moderate compression level ($s=24$), Lexico retains full-precision performance (30%). When the compression becomes more aggressive, as with Lexico ($s=14$) and KIVI (2 bits), the model's score declines, though Lexico still scores higher than KIVI at similar compression levels. One approach to improve performance is to train task-specific or reasoning-specific dictionaries. Recent reasoning models generate traces that extend well beyond the prompt, so much of the KV cache is used for these model-generated traces. Because these traces may inhabit a different subspace than the prompt KV cache, training the Lexico dictionary specifically on output generations could be a promising direction for improving the performance-memory trade-off in long-form reasoning tasks. We demonstrate this potential for task-specific dictionaries and relevant ablations in the "Dictionary sources and adaptability across domains" section in our response to reviewer Vzmg. --- Rebuttal Comment 1.1: Comment: Thanks for sharing the additional results. Could you provide a throughput comparison under different batch sizes and sequence length settings? This information would be particularly useful because KV cache compression is especially beneficial when serving large batch sizes and longer sequences. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's suggestion to include a throughput comparison. We would like to emphasize that our primary focus is on addressing extremely memory-constrained scenarios, where even a small batch size may exceed device limits. 
Such scenarios are increasingly relevant in real-world applications, especially with the emergence of reasoning models with lengthy CoT generation and LLM agents operating with very large contexts. Nevertheless, we acknowledge that throughput is an important consideration for many use cases. We will include throughput experiments in the revised manuscript to provide a more comprehensive evaluation of our approach. Thank you for highlighting this.
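As a rough sanity check on the KV-size percentages quoted in these rebuttals, the footprint of a sparse code can be estimated against a dense fp16 vector. The sketch below is our own back-of-envelope calculation, assuming s fp16 coefficients plus s atom indices of ceil(log2 N) bits each, and a hypothetical head dimension of 128; the paper's actual storage format (e.g., CSR metadata) may differ.

```python
import math

def sparse_vs_dense_ratio(m, s, N, value_bits=16):
    """Approximate size of an s-sparse code relative to a dense
    fp16 vector of dimension m."""
    index_bits = math.ceil(math.log2(N))   # bits to address one of N atoms
    dense_bits = m * 16
    sparse_bits = s * (value_bits + index_bits)
    return sparse_bits / dense_bits

# e.g. head dim m=128 with dictionary N=4096 (12-bit indices):
print(sparse_vs_dense_ratio(128, 8, 4096))   # 0.109375 -> roughly 11% of dense size
```

Under these assumptions, s=8 lands near the ~12% KV-size regime reported for Lexico, and the ratio scales linearly in s, which matches the fine-grained sparsity trade-off the authors emphasize.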
Summary: Lexico compresses the KV cache by finding a sparse representation of each key and value vector using a pre-trained dictionary. Instead of storing the full high-dimensional vectors, Lexico approximates them as a sparse linear combination of a small number of “atoms” (basis vectors) from this dictionary. It uses a technique called Orthogonal Matching Pursuit to efficiently select only a few atoms that best reconstruct the original vector, minimizing the reconstruction error. These sparse representations are much smaller in size and can be stored efficiently using formats like CSR, reducing memory usage. Claims And Evidence: The submission's claims are generally well-supported by clear experimental results across multiple benchmarks and models. The evidence convincingly demonstrates Lexico’s compression effectiveness and performance retention. No major claims appear problematic. Methods And Evaluation Criteria: Yes, the methods and evaluation criteria are appropriate for the problem. The use of sparse coding with universal dictionaries is well-justified, and the evaluation on standard benchmarks like LongBench and GSM8K effectively demonstrates performance and memory trade-offs. Theoretical Claims: The paper does not present formal theoretical proofs, but relies on established techniques like Orthogonal Matching Pursuit and sparse coding. The theoretical framing is sound, and no correctness issues were identified in the described methodology. Experimental Designs Or Analyses: Yes, the experimental design appears sound. The authors compare Lexico against strong baselines across diverse models and tasks, using consistent settings and fair memory budgets. The analyses of compression vs. performance trade-offs are thorough, and no major issues were found. Supplementary Material: No. Relation To Broader Scientific Literature: The paper builds on prior work in KV cache compression, sparse coding, and dictionary learning. 
It extends ideas from compressed sensing and applies them in a novel way to LLM inference. Compared to quantization and eviction-based methods, Lexico offers finer-grained control and better performance in low-memory regimes, addressing limitations in previous approaches. Essential References Not Discussed: The paper overlooks an essential reference: **QJL: 1-Bit Quantized JL Transform for KV Cache Quantization with Zero Overhead by Zandieh et al.** This work proposes a highly efficient 1-bit quantization method for KV cache compression, offering strong compression with minimal overhead. Given that Lexico aims to outperform quantization-based baselines, omitting QJL misses an important point of comparison—especially in the context of extreme low-memory regimes, where QJL sets a relevant benchmark. Including this citation would strengthen the discussion of related methods. Other Strengths And Weaknesses: Weaknesses include a limited evaluation on LongBench: only a subset of tasks is tested, which may not fully capture long-context challenges. The paper also lacks a "needle-in-a-haystack" style experiment, which is important for stress-testing memory compression in precision-critical scenarios. Additionally, while Lexico is positioned as efficient, a more detailed analysis of training runtime, dictionary update cost, and decoding latency (especially under different dictionary sizes and sparsity levels) would strengthen the claims. The absence of ablations on different dictionary sources or adaptability across domains is another gap worth addressing. Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We are pleased that you appreciated the clarity of our experiments, Lexico's effectiveness, and our use of sparse coding. EtKf and EhUD similarly recognized the soundness of our experimental design. We address your concerns below, and would be grateful if you reevaluate our work in light of them.

***LongBench***

We do not agree that our evaluation is limited. We have covered a comprehensive set of task categories such as document QA, summarization, few-shot tasks, and code completion; this aligns with prior work (e.g., KIVI) that evaluated on the same subset. For completeness, we have evaluated Lexico on all LongBench tasks with Llama-3.1-8B-Instruct below.

||Avg.|NarrativeQA|Qasper|MultiFieldQA|HotpotQA|2WikiMultihopQA|MuSiQue|GovReport|QMSum|MultiNews|TREC|TriviaQA|SAMSum|PassageCount|PassageRetrieval|LCC|RepoBench-P|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|Full KV|39.02|23.43|22.54|28.42|16.63|15.3|9.05|35.04|24.57|27.44|72.5|91.65|43.47|1.01|93.33|63.15|56.76|
|s=16|38.33|23.02|15.45|28.62|16.39|13.8|10.56|31.36|23.13|25.78|72.5|92.25|42.02|1.47|98.33|63.01|55.58|
|s=8|34.23|14.29|11.66|22.04|13.89|12.45|9.74|23.41|21.04|22.35|60.0|91.01|40.30|0.89|93.62|59.60|51.46|

The newly added tasks demonstrate a similar trend as the other tasks. The performance remains nearly lossless when $s=16$ (21% of the original KV cache size). Our results remain consistent with our subset of tasks in Table 2. Even at $s=8$ (12% of the original), our approach maintains a strong performance-memory tradeoff; note that 12% is a regime 2-bit quantization cannot achieve. These results show Lexico's strength in handling long contexts.

***Needle In A Haystack (NIAH)***

We show that Lexico is resilient to such stress-testing. Using Llama-3.1-8B-Instruct with an 8k context size, we followed the settings in PyramidKV. We compared against token eviction methods—which are well-suited for tasks that require only a few tokens from the context—and report ROUGE-1 multiplied by 10.
Note that the columns represent the needle depth ratio. We find that Lexico outperforms competitors on a 20% KV cache budget, while it delivers similar performance on a 10% budget. Note that the 10% budget is not feasible for 2-bit quantization methods. While we believe that NIAH is a valuable synthetic benchmark, performance on the task does not strongly reflect that of practical performance, as exemplified in the underperformance of eviction-based methods on GSM8K (Figure 4). |Method|KV Cache|0.0|0.2|0.4|0.6|0.8|1.0| |-|-|-|-|-|-|-|-| |Full KV|100%|7.06|7.06|9.30|7.06|7.06|7.06| |Lexico$_{s=16}$|20%|**7.06**|**7.06**|**9.09**|**7.06**|**7.06**|7.06| |PyramidKV|20%|6.06|4.12|4.24|4.78|4.18|**7.41**| |SnapKV|20%|6.06|4.12|4.24|5.15|3.94|**7.41**| |Lexico$_{s=8}$|10%|3.87|3.61|**5.88**|4.48|**4.26**|4.75| |PyramidKV|10%|**6.06**|**4.78**|4.18|4.12|4.18|**7.55**| |SnapKV|10%|**6.06**|4.18|4.85|**4.62**|3.94|7.41| ***Dictionary training cost*** Training dictionaries is cheap because they only need to be trained once and can be provided by a third-party. As we report in L203, sparsity of s=32 and a dictionary size of N=1024 require only 2 hours on an A100 GPU. We provide the training runtime for different sparsities and sizes for Llama-3.1-8B-Instruct. | |s=4|s=8|s=16|s=32| |-|-|-|-|-| |N=1024|37m|52m|70m|119m| |N=4096|78m|100m|160m|322m| ***Latency*** We kindly direct the reviewer to the *Hardware measurements* section in our response to reviewer EtKf (R2). ***Dictionary sources and adaptability across domains*** Per your suggestion, we investigated whether training a dictionary on a specific domain enhances the performance. In our ablation below, we trained a dictionary on different sources: 1) Dictionary trained on WikiText-103. 2) Dictionary trained on GSM8K training set. 3) Dictionary trained on WikiText-103 then finetuned on GSM8K. 
|Sparsity|KV size|Trained on WikiText|Trained on GSM8K|Finetuned on GSM8K|
|-|-|-|-|-|
|s=24|36.9%|76.88|77.63|**79.68**|
|s=14|26.1%|75.06|70.51|**75.89**|
|s=10|22.7%|**69.67**|59.59|67.25|

We observe that training a dictionary from scratch on GSM8K significantly underperforms, especially at low sparsities. A dictionary overfit to GSM8K fails to generalize even to GSM8K's own test set. In contrast, finetuning the universal dictionary on GSM8K provides marginal improvement but adds the additional overhead of custom fine-tuning.

***Missing reference***

We appreciate the reviewer's suggestion and will incorporate QJL into our related work. While QJL introduces a 1-bit quantization approach for KV cache compression, its experiments are confined to 3- and 5-bit schemes. In contrast, our work benchmarks Lexico against 2- and 4-bit state-of-the-art methods, and further pushes the efficiency by exploring memory regimes lower than 2 bits. Whereas QJL applies its transformation solely to keys (with per-token quantization for values), our approach compresses both keys and values, offering a more comprehensive and efficient solution.
Large Language Models are Demonstration Pre-Selectors for Themselves
Accept (poster)
Summary: This paper introduces FEEDER, a demonstration pre-selection framework designed to improve the efficiency and effectiveness of large language models (LLMs) in in-context learning (ICL) and fine-tuning tasks. FEEDER identifies a representative subset of training data using two new metrics: "sufficiency" and "necessity." By leveraging this pre-selected subset, the approach reduces computational costs while enhancing model performance. Experimental results show that FEEDER can effectively enhance both ICL and fine-tuning. Claims And Evidence: The authors validate the effectiveness of the method using a small dataset and a small-scale model. However, the current experiments are not sufficient, and larger-scale models need to be used. Additionally, the performance improvement of the model is not significant enough. Methods And Evaluation Criteria: The authors propose a new method for selecting data. While it has some scalability, the performance is not particularly impressive. Additionally, the size of the tested models is still too small. Theoretical Claims: I checked the soundness of the theoretical claims and have not found unreasonable aspects so far. Experimental Designs Or Analyses: I carefully examined the validity of the experiments, and the current experimental setup is reasonable. Supplementary Material: I primarily examined the experimental section in the appendix, focusing on the results presented in Tables A1 to A9. Relation To Broader Scientific Literature: The key difference between this work and previous studies is the introduction of sufficiency and necessity metrics for data selection. This approach effectively identifies examples that are both representative and maximize information coverage, thereby improving the model's efficiency and performance. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1.
The introduction of FEEDER as a demonstration pre-selection framework, along with the novel sufficiency and necessity metrics, offers a fresh perspective on data selection. 2. By reducing redundant data, FEEDER enhances the efficiency of ICL while improving performance. 3. The study evaluates multiple LLMs ranging from 300M to 8B parameters across multiple tasks (text classification, reasoning, semantic parsing), making the results convincing. 4. FEEDER is compatible with various existing demonstration selection strategies. Weaknesses: 1. The current comparisons primarily involve Random, Similarity, and Diversity selection strategies. However, it remains unclear whether combining FEEDER with previous ICL approaches would yield better results. A more informative comparison would involve integrating FEEDER with advanced ICL techniques instead of only evaluating it against basic selection methods. 2. While FEEDER shows performance improvements in most cases, the reported gains are not particularly significant. The paper should further analyze whether these improvements justify the additional computational complexity. 3. This study evaluates models with up to 8B parameters, but does not include state-of-the-art LLMs such as GPT-3 (175B), GPT-4, or the larger LLaMA model. Testing on larger models would better demonstrate the effectiveness of FEEDER. Other Comments Or Suggestions: 1. The paper primarily compares FEEDER with a few selection methods (e.g., similarity and clustering-based approaches). Including comparisons with more recent approaches, such as reinforcement learning-based selection methods, would provide a more comprehensive evaluation. 2. The concepts of sufficiency and necessity are somewhat abstract, and the paper could benefit from more intuitive, visual examples to help readers understand their impact on demonstration selection. Questions For Authors: 1. 
Does FEEDER maintain its effectiveness as dataset sizes increase, or does its selection quality degrade with larger datasets? 2. How would FEEDER perform on significantly larger models like GPT-4 or LLaMA-3 65B? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank Reviewer mKZf for recognizing our novelty, acknowledging our method's efficiency and performance improvement, appreciating our comprehensive evaluations on multiple LLMs and tasks, and noting its compatibility with existing demonstration selection strategies.

**Q1. More comparison beyond basic selection methods (Random, Similarity, and Diversity)**

**R1.** Thank you for raising this issue. We have compared FEEDER with advanced ICL techniques, specifically including the uncertainty-based, the clustering-based, and the recent latent variable-based demonstration selection method proposed in [1]. This method is explicitly denoted as "Latent" in our paper (Section 5.1, lines 317-319; Section A5.1, lines 1186-1188 left). Corresponding experimental results are reported in Appendix Tables A2, A4, and A6, clearly demonstrating FEEDER's performance relative to this advanced baseline. To further address your suggestion, we will highlight this comparison more explicitly in the revision and include additional similarity-based methods.

[1] Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning. NeurIPS 2023

**Q2. While FEEDER shows performance improvements in most cases, the reported gains are not particularly significant.**

**R2.** We have added a significance test for the main results on the GSM8K dataset using the Llama-3 8B model in the ICL setting, as shown below. Results for other datasets and LLMs will be included in our revision.

**Table R2**. * indicates *p* < 0.05 in significance tests compared to using TRAIN.
| Model | Mode | n | GSM8K (Random) | GSM8K (Similarity) | GSM8K (Diversity) |
|--|--|--|--|--|--|
| Llama-3 8B | TRAIN | 1 | 78.24 | 79.56 | 79.56 |
| Llama-3 8B | TRAIN | 2 | 79.55 | 83.40 | 83.67 |
| Llama-3 8B | TRAIN | 5 | 81.45 | 83.47 | 84.52 |
| Llama-3 8B | TRAIN | 10 | 82.31 | 84.42 | 84.53 |
| Llama-3 8B | FEEDER | 1 | **80.23*** | **81.21*** | **81.21*** |
| Llama-3 8B | FEEDER | 2 | **82.13*** | **84.43*** | **83.88** |
| Llama-3 8B | FEEDER | 5 | **82.55*** | **85.03*** | **84.77** |
| Llama-3 8B | FEEDER | 10 | **84.56*** | **85.79*** | **85.43*** |

**Q3. Larger models (>8B)**

**R3.** We add the **Qwen 2.5 32B** model as our base model on **GSM8K** and **GPQA**; please refer to R1 in our response to Reviewer oHZY. Results for the other benchmarks and LLMs will be included in the revision.

**Q4. Whether these improvements justify the additional computational complexity.**

**R4.** Thanks for your question. We have included the runtime of running FEEDER in Figure A6. Running FEEDER only requires pre-running inference over half of the training data points (33.45s for GPT-neo 1.3B on the COLA dataset). For fine-tuning, training GPT-neo 1.3B on the COLA dataset for one epoch on the entire set of data points costs 867.93s, whereas training on our FEEDER-selected data costs 707.79s. As training is significantly more costly than inference, performing pre-selection is always cost-effective if around 10% of the data can be filtered out. Furthermore, we would also like to emphasize that FEEDER operates at the pre-selection stage, which allows for pre-computation over the entire training dataset and serves various test data.

**Q5. The concepts of sufficiency and necessity are somewhat abstract, and the paper could benefit from more intuitive, visual examples to help readers understand their impact on demonstration selection**

**R5.** We have provided detailed descriptions of sufficiency and necessity, along with examples, in Appendix A2.
Additionally, we also presented a case study in A11.2 to illustrate the impact of sufficiency and necessity on LLM performance. **Q6. Does FEEDER maintain its effectiveness and selection quality degrade as dataset sizes increase?** **R6.** Thanks for your question. To examine the impact of dataset size, we evaluate the effect of randomly reducing the SUBJ dataset by 20%, 40%, 60%, and 80% using the Similarity demonstration selector. Results are summarized below. **Table R3**. | Model | Mode | n | SUBJ (20%) | SUBJ (40%) | SUBJ (60%) | SUBJ (80%) | SUBJ (100%) | |--|--|--|--|--|--|--|--| | Llama-2 (7B) | TRAIN | 1 | 45.72 | 47.21 | 48.25 | 48.44 | 48.50 | | Llama-2 (7B) | TRAIN | 2 | 87.58 | 88.96 | 89.80 | 90.53 | 90.76 | | Llama-2 (7B) | TRAIN | 5 | 84.25 | 85.46 | 86.09 | 86.56 | 86.88 | | Llama-2 (7B) | TRAIN | 10 | 80.10 | 80.68 | 81.52 | 81.45 | 81.37 | | Llama-2 (7B) | FEEDER | 1 | **47.50*** | **48.95*** | **49.25*** | **49.50*** | **49.73*** | | Llama-2 (7B) | FEEDER | 2 | **89.80*** | **90.94*** | **91.06** | **92.17*** | **92.54*** | | Llama-2 (7B) | FEEDER | 5 | **86.28*** | **87.05*** | **87.56*** | **87.89*** | **87.95*** | | Llama-2 (7B) | FEEDER | 10 | **84.50*** | **85.14*** | **85.67*** | **85.88*** | **85.87*** | The results above demonstrate that our FEEDER is effective across different dataset scales. Please let us know if you have further questions -- thank you so much!
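One simplified, purely illustrative reading of sufficiency and necessity (our own sketch, not the paper's formal definitions from Appendix A2, which are more nuanced; `answers_correctly` is a hypothetical stand-in oracle for LLM correctness):

```python
def is_sufficient(demos, query, answers_correctly):
    """A demonstration set is 'sufficient' if, conditioned on it,
    the model answers the query correctly."""
    return answers_correctly(demos, query)

def is_necessary(demo, demos, query, answers_correctly):
    """A demonstration is 'necessary' if dropping it flips a correct
    answer into an incorrect one."""
    without = [d for d in demos if d is not demo]
    return answers_correctly(demos, query) and not answers_correctly(without, query)

# toy oracle: the model succeeds iff some demonstration shares the query's label
oracle = lambda demos, q: any(d["label"] == q["label"] for d in demos)
demos = [{"label": "pos"}, {"label": "neg"}]
q = {"label": "pos"}
print(is_sufficient(demos, q, oracle))           # True
print(is_necessary(demos[0], demos, q, oracle))  # True: it is the only 'pos' demo
```

Under this reading, a pre-selected subset should remain sufficient for the queries of interest while containing few unnecessary (redundant) demonstrations.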
Summary: The paper introduces FEEDER, a pre-selection framework designed to improve in-context learning (ICL) in large language models by identifying a representative subset of training data demonstrations. FEEDER uses "sufficiency" and "necessity" metrics to balance representativeness with redundancy, and employs a tree-based algorithm to efficiently identify the pre-selected examples. This pre-selection set replaces the full training set to improve example selection for ICL or for fine-tuning through a bi-level optimization method. Experimental validation across multiple different LLMs with sizes ranging from 300M to 8B parameters demonstrates that FEEDER can reduce training data size by over 20% while maintaining ICL performance, integrate with other ICL selection strategies for further improved performance, and improve fine-tuning performance compared to other methods. The authors also conduct several ablations on selection strategy and number of rounds to identify the optimal design. ## Update after rebuttal The authors addressed most of my major concerns, so I have increased my score accordingly. Claims And Evidence: The paper claims that FEEDER can significantly reduce the size of training data required for selecting examples for ICL without compromising performance. These claims are substantiated by experiments showing consistent results across different model sizes and ICL strategies. However, it does not provide runtime comparisons, which would help to fully validate the efficiency improvements. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the LLM ICL and fine-tuning setting. However, the work would benefit from benchmarking on other types of tasks besides text classification. Theoretical Claims: I briefly reviewed all three proofs which seemed correct but it is possible that I missed something. 
Experimental Designs Or Analyses: The experimental design appears sound, with FEEDER tested across a spectrum of LLMs and ICL strategies. The results are extensively documented, particularly in the appendix, which includes detailed methodology and additional results. However, the paper would be strengthened by including actual runtime data and comparisons to fine-tuning on the full dataset, which are crucial for evaluating the practical efficiency gains of the proposed approach (I imagine this is not the case, but if somehow full fine-tuning is faster than searching for this pre-selector set in a large dataset, the only benefit I see to this method is for improving proprietary model performance where fine-tuning cannot be done). Supplementary Material: I read the proofs and skimmed the rest of the Appendix. It provides extensive details that provide more information about the experimental setup and validate the methodology. This level of detail is commendable and helps in assessing the robustness and replicability of the results. Relation To Broader Scientific Literature: The key contributions of the paper have important implications on how to improve example selection for ICL in LLMs, which is a topic of broad interest in the AI community due to the massive increase in interest in and use of LLMs. Their proposed pre-selector method in combination with selection strategies is considerably better than naive selection strategies on the full dataset, which is a new finding that may be of interest in the field. Essential References Not Discussed: The paper does not sufficiently discuss its relationship with established active learning techniques, particularly around the concept of representativeness and informativeness, which have been studied for many years. This might obscure potential similarities and differences that could be crucial for contextualizing the methodological contributions. Other Strengths And Weaknesses: Strengths: 1. 
Novel approach for improving ICL in LLMs. 2. Strong results across multiple LLM architectures, sizes, and datasets. 3. Solid theoretical contributions to back methodological design and results. 4. The paper is well-written and accessible. Weaknesses: 1. The paper is not contextualized well within the active learning literature, which has explored similar ideas well before LLMs. 2. Performance of the approach is only benchmarked on text classification tasks. 3. No runtime numbers are presented, which makes it difficult to assess the practicality of the method. Other Comments Or Suggestions: Table 2 should bold all highest numbers, not just the highest numbers produced by their own method. Questions For Authors: 1. Do the authors know why COLA+LAG+2 performance is worse with an increasing number of rounds? 2. How do runtimes compare to fine-tuning on the whole dataset? Strong responses to the weaknesses as well as these questions could result in an improvement in my evaluation of the paper. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer S3Tb for recognizing our novelty, solid theoretical contributions and good writing. Below, we respond to each of your questions in detail. **Q1. The paper is not contextualized well within the active learning literature.** **R1.** We clarify the relationship between our FEEDER method and the active learning research line as follows: 1. Similarity: Both FEEDER and active learning methods focus on selecting representative and informative samples. 2. Difference: Active learning typically selects samples for annotation during training, while FEEDER selects a representative subset from already labeled data to reduce computation. 3. Our novelty: FEEDER proposes sufficiency and necessity metrics tailored specifically to LLMs, efficiently identifying critical demonstrations and removing redundancy. We listed related active learning works and discussed some of them in Appendix A1.3; we will highlight this and add more discussion in the revision. **Q2. Performance of the approach is only benchmarked on text classification tasks.** **R2.** 1. As mentioned in our experiment section, in addition to 6 text classification tasks, **we have included the reasoning task GSM8K and the semantic parsing task SMCALFlow,** with corresponding results shown in Tables 2 and A6. 2. Furthermore, we add the science QA task GPQA in our revision. Partial results with the Qwen2.5 32B model are available in Table R1 below. These results further demonstrate FEEDER's ability to enhance LLM performance in math and science tasks. **Table R1**. Performance comparisons on GSM8K and GPQA datasets are conducted in the ICL setting. * indicates *p* < 0.05 in significance tests compared to using TRAIN.
| Model | Mode | n | GSM8K (Random) | GSM8K (Similarity) | GSM8K (Diversity) | GPQA (Random) | GPQA (Similarity) | GPQA (Diversity) |
|-----------------|--------|----|---------------|----------------|----------------|--------------|----------------|----------------|
| Qwen 2.5 32B | TRAIN | 1 | 81.0 | 82.1 | 82.1 | 40.1 | 42.0 | 42.0 |
| Qwen 2.5 32B | TRAIN | 2 | 83.2 | 84.2 | 84.7 | 42.0 | 44.5 | 45.2 |
| Qwen 2.5 32B | TRAIN | 5 | 85.2 | 89.5 | 89.6 | 43.3 | 46.2 | 46.8 |
| Qwen 2.5 32B | TRAIN | 10 | 86.2 | 90.4 | 89.9 | 43.2 | 46.8 | 46.7 |
| Qwen 2.5 32B | FEEDER | 1 | **81.8** | **83.5*** | **83.5*** | **40.5** | **42.7** | **42.7** |
| Qwen 2.5 32B | FEEDER | 2 | **84.5*** | **85.7*** | **86.0*** | **43.5*** | **45.8*** | **45.6** |
| Qwen 2.5 32B | FEEDER | 5 | **86.7*** | **90.2** | **90.1** | **44.9*** | **48.0*** | **48.0*** |
| Qwen 2.5 32B | FEEDER | 10 | **87.6*** | **91.2*** | **90.7** | **44.5*** | **47.8*** | **47.9*** |

**Q3. No runtime numbers. How do runtimes compare to fine-tuning on the whole dataset?**

**R3.** Thanks for your question. We have included the runtime of running FEEDER in Figure A6. Running FEEDER only requires pre-running inference over half of the training data points (33.45s for GPT-neo 1.3B on the COLA dataset). For fine-tuning, it costs 867.93s to train GPT-neo 1.3B on the COLA dataset for one epoch on the entire dataset, whereas training GPT-neo 1.3B on our FEEDER-selected data costs 707.79s. As training is significantly more costly than inference, performing pre-selection is always cost-effective if around 10% can be filtered out. Furthermore, we would also like to emphasize that our FEEDER operates at the pre-selection stage, which allows for pre-computation over the entire training dataset and serves various test data. **Q4.
Do the authors know why COLA+LAG+2 performance is worse with an increasing number of rounds?** **R4.** We apologize for mixing the results of different demonstration retrievers when drawing this subfigure in Figure 3. The corrected results for COLA+LAG+2 should be 0.389 (for #Round = 0), 0.412 (for #Round = 1), 0.438 (for #Round = 5), and 0.430 (for #Round = 10), which follow the same trend as other settings. Please let us know if we have properly addressed your questions and we are more than happy to discuss more!
Summary: This submission presents a pre-selection framework for in-context learning designed to identify a representative subset of examples from the training set. The proposed framework FEEDER evaluates demonstration examples based on their sufficiency and necessity. Aside from benefiting in-context learning, the framework can also benefit model training, particularly by accelerating the fine-tuning process with the selected subset. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed method and the evaluation criteria make sense for the problem. Theoretical Claims: No issue found. Experimental Designs Or Analyses: ### Strengths This submission conducts extensive experiments on several commonly used benchmarks and applies the proposed framework to various LLMs, including GPT-2/3, Gemma-2, and Llama-2. The experimental results demonstrate mostly consistent improvements, providing strong evidence of the framework's effectiveness. ### Weaknesses This submission is closely related to the paper *"Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning" (NeurIPS 2023)*, with significant overlap in the selection of datasets and LLMs for the experiments. However, there is no direct comparison between the two methods regarding their effectiveness for ICL. It would be valuable to include a more detailed discussion and comparison between this submission and the related work. Supplementary Material: Yes, I read the discussions in the supplementary material on the connections to related work, and additional experimental results. Relation To Broader Scientific Literature: This work builds upon existing research on enhancing ICL through improved sample selection. Its key contribution lies in introducing "sufficiency" and "necessity" metrics to guide the demonstration pre-selection process.
Additionally, it proposes a novel tree-based algorithm to efficiently identify optimal demonstration examples, distinguishing it from prior approaches in the field. Essential References Not Discussed: As mentioned in the "Experimental Designs Or Analyses" section above, the authors should include more discussion with *"Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning"*. This work is listed in the literature, but without thorough discussion/comparison. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: * In Table 1, for the COLA results on Gemma-2, the 1-shot results under Similarity and Diversity should highlight the D_train values in bold for clarity. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer s2sv for recognizing our extensive experiments across multiple benchmarks and LLMs, showing consistent improvements of our framework. Below, we respond to each of your questions in detail. **Q1.1 The difference between this submission and related work [1]; there is some overlap in the selection of datasets and LLMs for the experiments.** **R1.1** Thanks for pointing it out. **Our approach differs significantly from theirs in both theoretical analysis and experimental implementation.** 1. Theoretically, their work investigates the causal relationship between the input X and the output Y, whereas ours analyzes the causal relationship between the input X and the selected demonstration C. 2. Empirically, their method operates at the demonstration selection stage, while our approach introduces a **pre-selection** stage before demonstration selection. This allows us to pre-compute a core subset of the training dataset as the demonstration selection pool. 3. Furthermore, their approach focuses on the demonstration selection stage and requires fine-tuning the embeddings of latent variables, which limits its applicability to smaller LLMs. In contrast, our method, which operates at the **pre-selection** stage, does not require fine-tuning LLMs. **Q1.2 No direct comparison and more detailed discussion between the two methods (FEEDER and [1]) regarding their effectiveness for ICL.** **R1.2** We directly compared FEEDER with [1] in our experiments: we included [1] as one of the baseline demonstration selectors, denoted as **"Latent"** (Section 5.1, lines 317-319; Section A5.1, lines 1186-1188, left). We provided the corresponding results **in Appendix Tables A1&A2, A3&A4, 2&A6**; the main findings on effectiveness are listed below: 1.
Comparing FEEDER (ours) + SUBJ (Similarity) against TRAIN + SUBJ (Latent [1]), we observed that applying a similarity-based demonstration selector on our FEEDER-generated core subset outperforms their advanced demonstration selector applied to the entire dataset. One possible explanation is that their method does not explicitly consider the relationships among candidate demonstrations. 2. Comparing FEEDER (ours) + SUBJ (Latent [1]) with FEEDER [1] + SUBJ (Similarity), we find that our method also improves LLM performance. However, the improvements brought by our method and theirs would have some overlap, as both approaches condition the selected demonstrations on the LLMs being used. **Q2. Clear writing suggestion:** In Table 1, COLA results on Gemma-2, the 1-shot results under Similarity and Diversity should highlight the D_train values in bold. **R2.** Thanks for your suggestion. We will address these issues in our revision. **Reference:** [1] Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning. NeurIPS 2023 Please let us know if you have further questions -- thank you so much! --- Rebuttal Comment 1.1: Comment: Thank you for your responses. The authors have addressed my concerns. I keep my recommendation of weak accept. --- Reply to Comment 1.1.1: Comment: We are glad that all your concerns have been addressed! Thank you for supporting the acceptance of our paper.
Summary: This paper introduces FEEDER (FEw yet Essential Demonstration prE-selectoR), a novel pre-selection framework designed to improve In-Context Learning (ICL) and fine-tuning in large language models (LLMs). The key contribution of FEEDER is a pre-selection stage, where a representative subset of training data is selected based on two new metrics: sufficiency (how well a demonstration represents other samples) and necessity (whether removing a demonstration leads to loss of critical information). The paper proposes a tree-based algorithm to efficiently identify such representative subsets, reducing the computational cost of ICL while maintaining or even improving performance. Claims And Evidence: The claim of effectiveness is validated in the experiments. Methods And Evaluation Criteria: The method proposed is applicable to various tasks. Theoretical Claims: No theoretical analysis. Experimental Designs Or Analyses: Experiments are solid across various datasets. Supplementary Material: Appendix Relation To Broader Scientific Literature: The proposed framework is inspiring and insightful for various tasks in the literature. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: 1. One of the major limitations of ICL is its high computational overhead, as each new query requires retrieving demonstrations from a large dataset. FEEDER reduces this cost without sacrificing accuracy, making ICL more practical for real-world deployment. 2. The sufficiency and necessity framework provides a formal and interpretable way to measure the contribution of a demonstration. Weaknesses: 1. Limited Evaluation of Models: The work mainly considers several LLMs that are very small in parameter size, with the largest being 8B. This is particularly insufficient for the evaluation of ICL tasks, as the more practical LLMs are generally larger. 2. The paper writing could be further improved. There are many blank spaces in the paper between sections.
3. Lack of Evaluation of Tasks. For evaluation, the authors mainly consider 6 text classification tasks. However, many ICL methods are evaluated on a variety of tasks, including NLI or translation. The authors should consider adding more tasks. Other Comments Or Suggestions: NA Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We would like to thank Reviewer oHZY for recognizing that our method is efficient and practical for real-world deployment, and that we propose an interpretable way to explain the demonstrations. Below, we respond to each of your questions in detail. **Q1. Should consider larger LLMs.** **R1.** We add the **Qwen 2.5 32B model** as our base model. The corresponding results on **GSM8K** and **GPQA** are listed below. Results for the other benchmarks and LLMs will be included in the revision.

**Table R1**. Performance comparisons on GSM8K and GPQA datasets are conducted in the ICL setting. * indicates *p* < 0.05 in significance tests compared to using TRAIN.

| Model | Mode | n | GSM8K (Random) | GSM8K (Similarity) | GSM8K (Diversity) | GPQA (Random) | GPQA (Similarity) | GPQA (Diversity) |
|-----------------|--------|----|---------------|----------------|----------------|--------------|----------------|----------------|
| Qwen 2.5 32B | TRAIN | 1 | 81.0 | 82.1 | 82.1 | 40.1 | 42.0 | 42.0 |
| Qwen 2.5 32B | TRAIN | 2 | 83.2 | 84.2 | 84.7 | 42.0 | 44.5 | 45.2 |
| Qwen 2.5 32B | TRAIN | 5 | 85.2 | 89.5 | 89.6 | 43.3 | 46.2 | 46.8 |
| Qwen 2.5 32B | TRAIN | 10 | 86.2 | 90.4 | 89.9 | 43.2 | 46.8 | 46.7 |
| Qwen 2.5 32B | FEEDER | 1 | **81.8** | **83.5*** | **83.5*** | **40.5** | **42.7** | **42.7** |
| Qwen 2.5 32B | FEEDER | 2 | **84.5*** | **85.7*** | **86.0*** | **43.5*** | **45.8*** | **45.6** |
| Qwen 2.5 32B | FEEDER | 5 | **86.7*** | **90.2** | **90.1** | **44.9*** | **48.0*** | **48.0*** |
| Qwen 2.5 32B | FEEDER | 10 | **87.6*** | **91.2*** | **90.7** | **44.5*** | **47.8*** | **47.9*** |

These results demonstrate FEEDER can be extended to LLMs larger than 8B. **Q2. Paper writing issue: lots of blank spaces between sections.** **R2.** Thanks for your suggestion. We will address these issues in our revision. **Q3. Lack of Evaluation of Tasks.
Authors should consider evaluating ICL methods on a variety of tasks; currently only 6 text classification tasks are considered.** **R3.** As mentioned in our experiment section, in addition to 6 text classification tasks, we have included the reasoning task GSM8K and the semantic parsing task SMCALFlow, with corresponding results shown in Tables 2 and A6 in the paper. Furthermore, we add the science QA task GPQA in our revision. Partial results with the Qwen2.5 32B model are available in Table R1 above, which further demonstrate FEEDER's ability to enhance LLM performance in math and science tasks. We are eager to hear your feedback. We would deeply appreciate it if you could let us know whether your concerns have been addressed.
TUMTraf VideoQA: Dataset and Benchmark for Unified Spatio-Temporal Video Understanding in Traffic Scenes
Accept (poster)
Summary: The paper introduces TraffiX-VideoQA, a benchmark for evaluating spatio-temporal video understanding in traffic scenes. It provides 1,000 videos, 85,000 QA pairs, 2,300 object descriptions, and 5,700 grounding annotations, covering diverse traffic conditions. The authors propose TraffiX-Qwen, a baseline model leveraging multi-resolution visual token sampling to improve temporal reasoning. Experiments compare LLaVA-OneVision, Qwen2-VL, and VideoLLaMA2, showing that TraffiX-Qwen achieves superior performance, especially in multi-choice VideoQA. The paper contributes a new dataset, benchmarks existing models, and introduces an improved VideoQA method for real-world traffic scenarios. ## Update after rebuttal The authors addressed most of the issues mentioned in my initial review, so I am maintaining my original positive rating. Claims And Evidence: The paper makes several strong claims, particularly regarding the effectiveness of TraffiX-Qwen and the value of the TraffiX-VideoQA dataset. The dataset is well-constructed, covering diverse traffic conditions, and the model comparison with existing vision-language methods is comprehensive. However, some claims lack sufficient evidence or require further clarification: 1) TraffiX-Qwen processes more frames per video than competing models, which could naturally give it an advantage. It would be more convincing if additional comparisons were made with the same number of frames across all methods. 2) The paper does not introduce new evaluation metrics, even though it discusses performance differences in detail. If the authors claim methodological novelty in evaluation, this should be clarified. 3) While the paper uses SOTA models (YOLOv8, DETR, ByteTrack), it does not analyze their individual contributions. A proper ablation study would strengthen the claim that these choices are optimal for this dataset.
4) The paper asserts that the dataset captures various real-world conditions, but lacks quantitative statistics (e.g., distribution of intersection types). Providing these details would improve the credibility of this claim. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are generally well-suited for the task. TraffiX-Qwen’s multi-resolution visual token sampling is a reasonable approach to improve spatio-temporal reasoning while managing computational cost. The TraffiX-VideoQA dataset is well-structured and relevant for evaluating VideoQA in traffic scenes. However, there are some aspects that need further clarification: 1) TraffiX-Qwen sees more frames than competing models, making it difficult to isolate the method’s true impact. A fairer comparison with a fixed number of frames across all models would strengthen the results. 2) While the dataset covers various traffic scenarios, it lacks quantitative details (e.g., the proportion of different road types or intersection structures). Providing such statistics would enhance its credibility. 3) The paper adopts standard metrics but does not introduce new ones. This is fine, but given the complexity of the task, a discussion on potential evaluation limitations (e.g., how frame count affects results) would be valuable. Theoretical Claims: The paper does not contain theoretical claims or formal proofs, as its contributions focus on dataset construction, model development, and empirical evaluation. The proposed multi-resolution visual token sampling is validated experimentally rather than theoretically. While this is reasonable for the scope of the work, a theoretical discussion on how the sampling strategy impacts spatio-temporal reasoning could strengthen the justification. Experimental Designs Or Analyses: The experimental setup is mostly well-structured, and the benchmarking of multiple vision-language models provides useful insights. 
However, there are some concerns regarding experimental fairness and completeness: 1) TraffiX-Qwen processes more frames per video than other models, which could give it a natural advantage. A comparison where all models use the same number of frames would make the results more conclusive. 2) The impact of the multi-resolution visual token sampling strategy is not explicitly studied. Ablation studies would help clarify these factors. 3) The paper presents numerical comparisons but lacks discussions on where and why models fail. A qualitative analysis of failure cases would provide valuable insights. Supplementary Material: I reviewed the appendix, and it provides useful details on dataset statistics, experimental analysis, and evaluation metrics. However, some critical aspects are still missing: 1) While the appendix includes weather and time-of-day distributions, it does not provide statistics on different intersection types or traffic participant distributions. Adding these would strengthen the dataset’s representativeness. 2) The appendix includes an analysis of frame count impact, but it lacks ablation studies on object detection & tracking models and multi-resolution token sampling impact. These are necessary to verify their individual contributions. 3) The appendix does not include enough qualitative examples of where and why models fail. Adding specific failure cases with explanations would improve the understanding of model limitations. Relation To Broader Scientific Literature: The paper provides a solid discussion of prior work, particularly in the areas of VideoQA datasets (NuScenes-QA, DRAMA, DriveLM) and vision-language models (LLaVA, VideoLLaMA2, Qwen2-VL). This establishes a clear foundation for its contributions. Essential References Not Discussed: The paper provides a solid literature review, but some essential references are missing. 
For example, the dataset comparison could be expanded to include related benchmarks from autonomous driving and traffic analysis (e.g., Ego4D, BDD100K, Waymo Open Dataset). While these are not VideoQA datasets, they are relevant for understanding real-world traffic dynamics. Other Strengths And Weaknesses: Beyond the points discussed earlier, there are additional strengths and weaknesses worth mentioning: Strengths: ● The paper is well-structured and written clearly, making it easy to follow, even for researchers unfamiliar with VideoQA. ● The VideoQA task in traffic scenarios is logically structured, making the dataset suitable for future extensions, such as integrating multimodal sensor data (e.g., LiDAR). ● Given the focus on traffic scenarios, this dataset has strong potential for real-world applications in intelligent transportation systems and automated video monitoring. Weaknesses: ● The proposed TraffiX-Qwen model, while effective, appears computationally expensive due to the multi-resolution visual token sampling strategy. The paper does not discuss training or inference efficiency, which may affect deployment feasibility. ● The dataset is sourced from fixed camera locations, which may introduce a bias toward certain traffic patterns (e.g., urban over rural areas). A discussion of dataset biases and their impact on generalization would be valuable. Other Comments Or Suggestions: Here are some additional comments and suggestions for improving the paper: 1) Some figures, particularly Fig 6 (dataset distribution visualization), could be clearer. Improving resolution or providing more readable labels would help. 2) The paper does not specify the training hyperparameters, batch size, or computational resources required for TraffiX-Qwen. Adding this information would improve reproducibility. 3) The definition of spatio-temporal grounding could be made clearer, particularly in terms of how annotations were generated and whether human verification was involved. 
4) Some important dataset details (e.g., intersection type distributions) that are currently missing might be better suited for the appendix. Questions For Authors: 1) TraffiX-Qwen processes more frames per video than competing models. Could you provide additional results where all models use the same number of frames? This would help determine whether the performance improvement comes from the model design or simply from having access to more temporal information. 2) The paper uses YOLOv8, DETR, and ByteTrack for object detection and tracking, but does not analyze their individual contributions. Have you conducted ablation studies to evaluate how each of these components affects the final performance? 3) The paper mentions that TraffiX-VideoQA covers diverse traffic environments, but does not provide specific statistics on intersection types, traffic conditions, or vehicle categories. Could you share these statistics to support the claim of dataset diversity? 4) The paper does not discuss the computational cost of TraffiX-Qwen. What are the training time, required GPU resources, and inference speed? This information would help assess the feasibility of deploying the model in real-world settings. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, We appreciate your valuable feedback and address your concerns as follows. Q1: Clarification on whether TraffiX-Qwen’s performance gain stems from more input frames. A1: As open-source VideoQA models often adopt model-specific frame sampling strategies, which are tightly coupled with architecture and training, some models (e.g., LLaVA-OneVision, Video-LLaMA2) are not designed to handle longer frame sequences. Hence, we used different frame settings that were in line with their original paper and codebase. We also fully agree that comparisons under a unified frame setting would strengthen the results. Therefore, we conduct extra experiments with 101-frame inputs for the models used in our paper. (Qwen2-VL, whose model structure imposes constraints on input frame numbers, is adapted to 96 frames.)

|**Model**|**Size**|**BLEU_4**|**ROUGE_L**|**CIDEr**|**Temp.E↓**|**Spa.E↓**|**ST.E↓**|**Pos.**|**Count.**|**Motion**|**Class**|**Exist.**|**Overall**|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|LLAVA-OneVision|0.5B|0.0|0.9|0.0|1.0|1.0|1.0|25.6|25.1|11.9|12.4|0.6|15.1|
||7B|0.0|0.5|0.0|1.0|1.0|1.0|26.4|24.1|13.6|13.0|0.8|15.6|
|Qwen2-VL|2B|3.8|15.1|0.18|0.65|0.68|0.73|31.2|38.4|58.2|55.4|74.5|51.5|
||7B|5.0|14.9|0.14|0.70|0.70|0.76|31.8|55.3|54.3|50.0|75.8|53.4|
|VideoLLAMA2|7B|0.0|7.1|0.0|1.0|1.0|1.0|27.6|27.5|26.3|18.7|52.2|30.4|
|**TraffiX-Qwen**|0.5B|35.0|50.4|2.52|0.12|0.19|0.26|72.0|80.6|82.6|69.8|89.2|78.8|
||7B|36.7|52.0|2.56|0.11|0.17|0.24|76.6|81.9|84.3|73.4|89.4|81.1|

Results show that: 1. LLaVA-OneVision and Video-LLaMA2 perform far worse with 101 frames, likely due to the lack of long-range temporal modeling. 2. Qwen2-VL shows improved performance with more frame inputs, suggesting that additional temporal information can indeed be beneficial. This also highlights Qwen2-VL’s stronger temporal reasoning capability.
Importantly, even under the same frame input setting, TraffiX-Qwen still consistently outperforms all baselines, confirming that the performance gain stems from multiple aspects, not just access to more temporal information. These results will be added to the revised paper. Q2: The role of YOLOv8, ByteTrack, etc. in the pipeline. A2: We use 2D detectors and trackers only during the TraffiX-VideoQA dataset construction to generate meta-information aligned with human annotations, improving quality and consistency. They are not used in TraffiX-Qwen’s training or inference. Our baseline is a fully end-to-end VLM that processes raw videos and generates answers directly, without external modules. This design simplifies the pipeline and promotes research toward integrated video understanding in the traffic domain. Q3: Statistics supporting the claim of dataset diversity. A3: We add 3 figures illustrating the dataset distribution at: https://imgur.com/a/eoZXJEb. The figures statistically support the dataset’s diversity. We group scenes in our dataset into 3 key types: highways (rural), urban intersections (city), and country roads (rural/urban). They show that the dataset aligns well with real-world traffic distributions. The figures will be added in the revised version. Q4: Missing information on model efficiency and computational cost. A4: We provide the training and inference details in the paper:

|**Version**|**#Vision**|**#Projector**|**#LLM**|**Inf Speed/QA**|**#Trainable**|**Train Hour**|
|-|-|-|-|-|-|-|
|**0.5B**|397.8M|1.8M|493.8M|~1.6s|495.6M|28h|
|**7B**|397.8M|17.0M|7612.6M|~3.8s|7629.6M|36h|

Inference time is measured as the average time to process a 10s–1min video with autoregressive decoding on a single A100 GPU (no acceleration). We believe further optimization (e.g., quantization, pruning, distillation) is a promising direction for improving deployment efficiency in traffic monitoring. We will include this table and discussion in the revised version.
Response to other suggestions: 1. We have included an initial discussion regarding how sampling strategy impacts spatiotemporal reasoning in Sec 5. We will expand this with new experimental results and discussions. 2. Qualitative results for ST-OG and V-ROC have been added to Appendix B.5, B.6, covering both successful and failure cases. For the MC-QA task, we will provide extra qualitative visualizations and analyses. 3. Initial analysis of frame count effects on TraffiX-Qwen is provided in Appendix B.1. We will expand this with an extra discussion of how frame sampling impacts temporal reasoning capabilities. 4. We agree that including extra AD benchmarks (e.g., Ego4D) brings valuable context and will add them. 5. We will add more details on model complexity and potential acceleration techniques to support practical deployment. 6. We agree that using fixed camera views may introduce distributional bias in traffic patterns and will include a discussion on this in the limitations section. 7. We will improve Fig. 6, refine the definition of spatiotemporal grounding, and emphasize the role of human verification during the annotation process. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will keep my positive rating. Additionally, I recommend including this supplementary information in the revised version. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you very much for your positive rating and valuable feedback. We appreciate the recommendation, and we will include the additional experiments and corresponding details based on your suggestions in the revised version to strengthen the paper further.
Summary: The paper presents a comprehensive video-language dataset designed for complex traffic video understanding, named TraffiX-VideoQA. Meanwhile, a benchmark is provided, including multiple-choice video question answering, referred object captioning, and spatiotemporal object grounding tasks. Experimental results demonstrate that TraffiX-VideoQA is challenging.

Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem or application at hand.

Theoretical Claims: I have checked the theoretical claims. For example, the paper states that "most existing efforts primarily focus on driving scenarios and are typically constrained to individual tasks such as question answering, video grounding, or referred multi-object tracking". Table 1 supports this statement with data.

Experimental Designs Or Analyses: I have checked the experimental designs and analyses. A comprehensive analysis was conducted for the three key tasks in the paper.

Supplementary Material: I have reviewed the supplementary material. It includes more detailed TraffiX-VideoQA dataset statistics, benchmark analysis, and dataset examples.

Relation To Broader Scientific Literature: The paper facilitates further advancements in traffic video analysis and contributes to the development of next-generation traffic foundation models.

Essential References Not Discussed: There is a need to incorporate the recent work [1] for a more comprehensive analysis.

[1] Towards Surveillance Video-and-Language Understanding: New Dataset, Baselines, and Challenges. CVPR 2024

Other Strengths And Weaknesses: The writing and organization are good, and the proposed dataset and benchmark are valuable.
Other Comments Or Suggestions: Why only focus on multiple-choice video question answering, referred object captioning, and spatiotemporal object grounding tasks? Wouldn't it be better to cover all the tasks of existing video understanding?

Questions For Authors:
1. Why only focus on multiple-choice video question answering, referred object captioning, and spatiotemporal object grounding tasks? Wouldn't it be better to cover all the tasks of existing video understanding?
2. Compared to the reference [1], what are the strengths of the paper?

[1] Towards Surveillance Video-and-Language Understanding: New Dataset, Baselines, and Challenges. CVPR 2024

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Dear Reviewer, Thank you very much for your positive feedback and valuable reviews. Our responses to your comments are detailed below:

Q1. There is a need to incorporate the recent work [1] for a more comprehensive analysis.

A1. Thank you for pointing out this important related work. We have provided a detailed comparison with it in our response to Q3 below, and we will also include this reference in our final manuscript to improve completeness and introduce our work more clearly.

Q2. Why only focus on multiple-choice video QA, referred object captioning, and spatiotemporal object grounding?

A2. Thank you for the insightful suggestion. We fully agree that video understanding covers many valuable tasks beyond our current focus, such as action recognition, event localization, video captioning, and anomaly detection. Following the survey [2], we broadly categorize video understanding tasks based on the granularity of required information into abstract, temporal, and spatiotemporal levels. In traffic scenarios, high-level events are typically composed of interactions among traffic participants. As such, fine-grained spatiotemporal understanding and reasoning serve as a crucial foundation for interpreting complex traffic events. Based on this, our benchmark focuses first on spatiotemporal understanding through three core tasks in traffic scene comprehension: multiple-choice video QA, spatiotemporal object grounding, and referred object captioning. We sincerely appreciate your suggestion and agree that using VLMs for higher-level semantic tasks such as traffic event recognition is a promising direction. We will add this to the Future Work section of our paper.

Q3. Compared to [1], what are the strengths of the paper?

A3. Thank you for raising this important question. We have reviewed this work and highlight the following strengths of our dataset and benchmark compared to [1]:

1. **Data Source and Data Novelty**.
Ref [1] is built upon existing datasets, providing additional textual caption annotations without contributing new video data or organizing raw footage. Moreover, most of the videos used in [1] are sourced from the internet. In contrast, our dataset is collected from real-world traffic scenes over 2 years, using multiple intelligent infrastructure systems installed at over 20 diverse perspectives. From over 1,000 hours of raw video data, we curate our benchmark through a semi-automatic data annotation process. Compared to internet videos, our data more accurately reflect the distribution and characteristics of real-world traffic scenarios, offering higher data novelty and practical value. We include figures to support this point as suggested by Reviewer mL1S, A3: https://imgur.com/a/eoZXJEb

2. **Inherent Limitation for Fine-Grained Spatiotemporal Tasks**.

Ref [1] focuses on video understanding tasks that require abstract semantic information, such as high-level event descriptions. However, such language-based object references are not well-suited for fine-grained spatiotemporal tasks in traffic scenes. This limitation arises from several factors: the inherent ambiguity of descriptive natural language expressions, the modality gap between visual and linguistic representations, and the high visual similarity of traffic participants in surveillance footage. This limitation is also clearly illustrated in Ref [1], Figure 1 (top video). The sentence query states: “A black car drove past the corner, and a gray car followed closely behind it.” However, the video scene contains multiple black cars and multiple gray cars, making the reference inherently ambiguous. Such ambiguity prevents the benchmark from supporting more fine-grained VideoQA tasks, particularly those requiring precise object grounding and temporal reasoning.

3. **Unified Benchmark with Structured Object Representation**.
Our benchmark unifies three fine-grained tasks, i.e., multi-choice QA, referred object captioning, and spatiotemporal object grounding, via a tuple-based spatiotemporal object expression. This design enables benchmarking on more complex tasks, such as relative spatial reasoning, which are critical in the traffic domain but missing from [1].

4. **Unified Model for VideoQA Benchmark**.

In [1], each proposed task is evaluated separately using existing models like CoCap or UEDVC. While valuable, these approaches are separate for each task and do not support unified understanding. In contrast, we propose TraffiX-Qwen, a unified VLM-based model trained with multi-task learning across all three tasks. This contributes a solid baseline toward generalizable traffic visual foundation models.

[1] Towards Surveillance Video-and-Language Understanding: New Dataset, Baselines, and Challenges. CVPR 2024
[2] Video Understanding with Large Language Models: A Survey. arXiv:2312.17432, 2024.
Summary: This paper proposes a new traffic VQA dataset captured from the roadside. The paper proposes three tasks based on the dataset, including Multi-Choice Question Answering, Video Referred Object Captioning, and Spatio-Temporal Object Grounding. The authors further propose a new, unified method to tackle all three tasks. The established benchmark offers a comprehensive analysis and demonstrates the strong performance of the proposed method.

Claims And Evidence: L55 “they often face significant challenges in scalability, generalization to diverse traffic conditions, and real-world deployment.” Unclear why? Although the authors attempt to explain this with the following sentence, it is still unclear why recent large models can address the mentioned challenges, especially scalability, generalization to diverse traffic conditions, and real-world deployment. In other words, the statements justifying the importance of QA in traffic scenes are very vague and not convincing. Note that I am not against the importance of the topic but feel the statement could be improved.

Methods And Evaluation Criteria: The performance of the method is strong and evaluated fairly.

Theoretical Claims: Yes, I have checked the correctness of all proofs or equations in the main paper.

Experimental Designs Or Analyses: The experimental design is very thorough and comprehensive.

Supplementary Material: I have reviewed Appendix A for the dataset statistics.

Relation To Broader Scientific Literature: The proposed traffic VQA dataset can offer new opportunities for studying more fine-grained traffic understanding topics and facilitate the development of AV.

Essential References Not Discussed: Missing highly relevant discussion of work proposing a description for traffic event recognition [1,2] in Related Work - Fine-Grained Video Understanding or Language-Based Object Referring.
[1] Agarwal and Chen, Ordered Atomic Activity for Fine-grained Interactive Traffic Scenario Understanding, ICCV 2023
[2] Kung et al., Action-slot: Visual Action-centric Representations for Multi-label Atomic Activity Recognition in Traffic Scenes, CVPR 2024

Other Strengths And Weaknesses:

**Strengths:**
- The paper is well-written and easy to follow.

**Weaknesses:**
- It is unclear what the difference is between the proposed Multi-Choice Question Answering and existing VQA work, besides the camera view.
- It is unclear why the authors want to investigate different strategies of token sampling, as they all perform very similarly across different tasks.
- It is unclear why the authors highlight “traffic anomalies” (L40 in the abstract) and “critical corner cases such as traffic accidents” (L96 in the contributions). This would mislead readers to expect benchmark results analyzing performance under these anomaly scenarios.

Other Comments Or Suggestions:
- It would be great to have a detailed definition of the difficulty level of questions, i.e., single-hop or multi-hop, in the supplementary material.
- It is unclear why there are multiple bars for a class of questions in Figure 4(a).

Questions For Authors: Please see the comments.

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1:

Rebuttal: Dear Reviewer, Thank you very much for your positive feedback and valuable comments. Below, we provide point-by-point responses to the concerns raised.

Q1: What are the differences between the proposed multi-choice QA and existing VQA work, aside from the camera view?

A1: Thank you for the thoughtful question. Beyond the camera perspective, our work differs from previous VQA works in several key aspects:

1. The object reference expressions in existing video QA works cannot be directly applied to the domain of spatiotemporal traffic scene understanding. We also discuss this in our reply to **Reviewer z4TA, A3-2**. This is due to the inherent ambiguity of natural language, the modality gap between visual and linguistic representations, and the need for accurate and unique object referring in traffic scenarios. Purely language-based descriptions often fail to achieve precise cross-modal object association. Hence, our work introduces a structured tuple-based object representation, enabling accurate and interpretable fine-grained analysis of traffic videos.

2. Our benchmark is specifically designed for unified intelligent traffic monitoring and scene understanding, which is currently missing in existing works. We aim to bridge this gap by providing a dataset and benchmark tailored to traffic scene analysis needs. Our dataset reflects real-world traffic distributions, and we include additional figures to support this point in response to **Reviewer mL1S, A3**: https://imgur.com/a/eoZXJEb

3. We unify three previously disjoint tasks in this domain, i.e., multiple-choice video QA, referred object captioning, and spatiotemporal object grounding, into a single benchmark. This positions our work beyond traditional VQA, offering a more comprehensive and domain-relevant challenge.

Q2: Why investigate different token sampling strategies when they show similar performance across tasks?

A2: Thank you for raising this question.
In traffic scenes, particularly in the context of intelligent traffic monitoring and scene understanding, a key domain-specific characteristic is that the visual background tends to remain relatively static within a video. For existing VLMs, efficient visual representation is critical to overall performance. Therefore, exploring different visual token sampling strategies that remove redundant information while preserving key content is particularly important in this domain. Our study investigates whether tailored visual token strategies can leverage this static background nature to improve efficiency and performance. This motivation is also consistent with prior works on roadside visual-centric tasks [3][4], where techniques are applied to suppress background redundancy and enhance downstream task performance.

Q3: Why emphasize “traffic anomalies” in the abstract and contributions without dedicated benchmark analysis?

A3: Our intention was to reflect the real-world distribution, where traffic anomalies (e.g., accidents) are rare but critical cases. While our benchmark does not focus solely on anomaly analysis, we still provide high-level annotations indicating whether each video contains anomalies. This supports valuable future research focused on anomaly detection and emphasizes the diversity and realism of our dataset. We will revise the content to clarify this.

Q4: Define the difficulty levels of QA questions (e.g., single-hop vs. multi-hop) in the supplementary material.

A4: Thank you for highlighting this point. We currently include a subset of QA templates in Supplementary Material C.2. To improve clarity, we will introduce an additional subsection that clearly defines and categorizes the difficulty levels of QA questions.

Q5: Clarify why there are multiple bars for a class of questions in Figure 4(a).

A5: In Figure 4(a), we plot the distribution of question word counts for each question task/type.
Since questions within the same task type can vary significantly in length and structure, we chose to visualize this distribution to better reflect the linguistic diversity of the dataset, rather than simply reporting the total number of questions per type. We will revise the caption to make this clearer.

Q6: L55 - The statement justifying the importance of QA in traffic scenes is unclear.

A6: Thank you for pointing out the unclear phrasing. We will revise this part to emphasize the motivation and importance of introducing LLMs/VLMs for the intelligent traffic scene understanding domain and the development of AVs in our revised version.

Q7: Missing highly relevant work on traffic event recognition [1,2] in the Related Work section.

A7: We appreciate your suggestion. We will include the missing references and expand the section to properly acknowledge prior work on traffic event recognition.

[3] Zekun Zhang et al., Object Detection With Self-Supervised Scene Adaptation, CVPR 2023.
[4] Shuo Wang et al., The 8th AI City Challenge, CVPR-W 2024.

---

Rebuttal Comment 1.1:

Comment: I appreciate the authors' very comprehensive response. All my concerns are addressed well. I encourage the authors to revise the final version accordingly for better readability and clarity. I believe the proposed dataset and study can offer good insights and a platform for future traffic scene understanding, and thus I am willing to raise my rating from accept to strong accept.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer, We sincerely appreciate the valuable feedback you provided and your recognition of our work! We will revise the final version of the paper based on your suggestions to further improve its readability and clarity.
Summary: This paper provides a comprehensive dataset tailored for multiple tasks in traffic scenarios. It includes QA such as predicting the weather, counting objects, providing motion status, spatio-temporal grounding, and more. It consists of 1,000 videos, with 85,000 QA pairs, 2,300 object captioning annotations, and 5,700 object grounding annotations. The benchmark tasks include video QA, object captioning, and grounding. The accompanying baseline model is based on Qwen-2 plus multiple important visual modules such as multi-resolution token sampling and temporal/spatial/token pooling. The paper provides results from both the baseline model and other open-sourced multimodal LLMs.

Claims And Evidence: The paper’s data analysis, model comparisons, and ablation studies show how existing open-sourced models perform relatively poorly on fine-grained spatio-temporal tasks, and the authors’ enhanced baseline with multiple techniques still shows the dataset’s complexity.

Methods And Evaluation Criteria: Yes. The tasks—multi-choice QA, spatio-temporal grounding, and referred object captioning—directly address real roadside surveillance needs like identifying moving vehicles, counting objects, and localizing them over time. The chosen metrics (QA accuracy, spatio-temporal error, and NLG measures on captions) are standard and align with these tasks.

Theoretical Claims: No theoretical claims are provided in this paper.

Experimental Designs Or Analyses: One problem with this dataset is that the baseline models already score relatively high on the QA task (e.g., 81.95% for overall QA accuracy in Table 3). Would the authors provide a brief discussion on how this dataset, especially the QA part, will remain challenging for upcoming new multimodal LLMs?

Supplementary Material: No supplementary material is provided.
Relation To Broader Scientific Literature: This dataset is a great comprehensive video dataset that is positioned as a challenging benchmark for advancing research in intelligent transportation systems.

Essential References Not Discussed: NA.

Other Strengths And Weaknesses: No assets are provided in the submission. The reviewer recommends that the authors release the codebase as well as the dataset upon acceptance.

Other Comments Or Suggestions: NA

Questions For Authors: See Section Experimental Designs Or Analyses.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Dear Reviewer, Thank you very much for your positive feedback and insightful questions. We provide detailed responses to your concerns below.

Q1: Would the authors provide a brief discussion on how this dataset, especially the QA part, will remain challenging for upcoming new multimodal LLMs given the high baseline results?

A1: During our literature review, we also observed that some VQA benchmarks in vertical application domains, such as earth observation, autonomous driving, and smart cities, tend to achieve relatively high baseline accuracy (generally > 60%) after fine-tuning (e.g., EarthVQA [1], nuScenes-QA [2], City-3DQA [3]). Our dataset shows a similar trend, indicating that large VLMs can more easily adapt to constrained, domain-specific tasks. However, we believe that the high baseline accuracy does not indicate that the task is simple or lacking in meaningful challenges. Our benchmark involves multiple complex tasks that remain crucial for developing future multimodal LLMs, especially those for traffic-related applications. Below, we provide a detailed discussion:

1. Our benchmark unifies three core tasks, i.e., multi-choice QA, referred object captioning, and spatiotemporal object grounding, through tuple-based spatiotemporal object expressions for traffic scene understanding. While the QA task demonstrates relatively high accuracy, the other two tasks still exhibit substantial room for improvement. Importantly, as discussed in the paper, techniques that enhance QA performance, such as visual token strategies, do not consistently benefit the other tasks. This reveals that optimizing for QA alone is insufficient, and highlights the need for holistic model designs that jointly address all tasks.

2. Even within the multi-choice QA task, simply scaling model size yields diminishing returns. Although the 7B model achieves higher accuracy, the improvement over smaller models (e.g., 0.5B) is marginal.
This suggests that simply increasing LLM size or replacing it with newer variants alone is unlikely to bring large improvements in this domain. Together, these findings emphasize the importance of domain-specific model design and training strategies that consider multiple tasks tailored to traffic-centric understanding.

3. In real-world traffic applications, where computational resources are often limited, developing lightweight models becomes a key priority. Rather than focusing solely on improving benchmark scores, we encourage future work to explore trade-offs between performance and efficiency with lightweight LLMs and techniques (i.e., pruning, quantization, and knowledge distillation). Our benchmark and baseline models provide practical references to guide and evaluate such efforts toward building efficient, real-time multimodal systems.

In summary, we believe our dataset and benchmark remain highly important for the development and evaluation of upcoming multimodal LLMs, particularly those tailored to traffic domains. It presents practical challenges, encourages efficient model development, and supports a holistic evaluation across multiple multimodal tasks.

[1] Junjue Wang, et al., “EarthVQA: Towards Queryable Earth via Relational Reasoning-Based Remote Sensing Visual Question Answering”
[2] Tianwen Qian, et al., “NuScenes-QA: A Multi-modal Visual Question Answering Benchmark for Autonomous Driving Scenario”
[3] Penglei Sun, et al., “3D Question Answering for City Scene Understanding”

---

Rebuttal Comment 1.1:

Comment: The reviewer is satisfied with the rebuttal and keeps the positive rating.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer, Thank you very much for your positive rating and for taking the time to review our rebuttal. We’re glad to hear that our responses addressed your concerns, and we sincerely appreciate your recognition of our work.
Disentangling Invariant Subgraph via Variance Contrastive Estimation under Distribution Shifts
Accept (poster)
Summary: This paper presents VIVACE for learning invariant subgraphs under distribution shifts using variance contrastive estimation. The authors propose a three-module framework to disentangle invariant and variant subgraphs, estimate the impact of spurious correlations, and employ inverse propensity weighting for predictions. The framework's effectiveness is validated by the experiments. The experimental results across multiple benchmarks demonstrate the method's superiority over existing approaches. The results indicate that VIVACE achieves better robustness to distribution shifts and effectively captures invariant subgraphs while mitigating the influence of spurious correlations.

## update after rebuttal

Thanks for the response. I will keep my score.

Claims And Evidence: The claims are supported with extensive experimental comparisons. The authors compare VIVACE against baseline GNNs, showing consistent improvements in OOD generalization. The ablation studies provide further evidence for the effectiveness of the proposed modules. To be specific, removing the variant subgraph contrastive module or the inverse propensity weighting module leads to a substantial drop in performance, indicating the important role of these components. The hyperparameter sensitivity analysis further demonstrates that VIVACE is robust across a wide range of settings.

Methods And Evaluation Criteria: The framework combining self-supervised contrastive learning and causal inference techniques is sound. The evaluation criteria are appropriate. The authors report classification accuracy for synthetic datasets (e.g., CMNIST, CFashion, CKuzushiji) and ROC-AUC for real-world datasets (e.g., MOLSIDER, MOLHIV), ensuring fair comparisons with prior works. The inclusion of multiple runs and standard deviation reporting strengthens the statistical significance of the results.

Theoretical Claims: The theoretical foundations of the method are grounded in causality.
The authors formalize the problem using causal invariance assumptions, ensuring that the learned subgraphs have stable predictive power across environments. However, a deeper theoretical analysis of the rationale behind the approach should be provided.

Experimental Designs Or Analyses: The experiments are comprehensive, covering both controlled and real-world scenarios. The authors ensure significant performance gains by including multiple runs. The method shows consistent improvements on those datasets.

Supplementary Material: I reviewed the supplementary material, which provides further details on experimental setups and pseudocode. These additions enhance reproducibility.

Relation To Broader Scientific Literature: It builds upon prior work on graph neural networks and causality. The discussion of related work is comprehensive and contextualizes the contributions well.

Essential References Not Discussed: No critical references appear to be missing.

Other Strengths And Weaknesses:

Pros:
- The paper is novel overall; it introduces variance contrastive estimation as a self-supervised learning technique to explicitly model and quantify spurious correlations in graph data.
- Unlike prior work that either assumes predefined environments or relies on heuristic-based disentanglement, VIVACE directly estimates the variant subgraphs, making the process adaptive to various real-world datasets.
- This methodology advances existing invariant learning approaches by integrating contrastive learning with causal inference, effectively modeling and mitigating the impact of spurious correlations.

Cons:
- Detailed analysis of some experimental results is weak (e.g., hyperparameter sensitivity).
- Since VIVACE introduces additional modules (e.g., variant subgraph contrastive estimation, inverse propensity weighting), it is unclear how much extra computation is required.
- Some designs in the method lack detailed explanations (e.g., the GCE loss in Eq. 7).
Other Comments Or Suggestions:
- Clarifying the computational cost compared to baselines would be helpful.
- Eq. 7 introduces the Generalized Cross-Entropy (GCE) loss, but the reason for using GCE instead of standard cross-entropy is not fully discussed.

Questions For Authors:
- Could you discuss more on the computational cost compared to baselines?
- Why was the Generalized Cross-Entropy (GCE) loss chosen rather than the Cross-Entropy (CE) loss?
- What impact does the hyperparameter q in GCE have on model performance?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal:

- **Q1. Theoretical analysis on the rationale of the approach.**

We would like to clarify that the rationale of our method is to achieve OOD generalization by **accurately disentangling invariant and variant subgraphs**. We have added the following theorem.

**Theorem 1.** Denote by $\Phi^*$ the optimal invariant subgraph generator that disentangles the ground-truth invariant subgraph $G_I^*$ and variant subgraph $G_V^*$ given the input graph $G$, where $G_I^*$ satisfies Assumption 2.1 and the complement is $G_V^* = G\backslash G_I^*$. Assume the second variance term of Eq. (5) is minimized; then the first contrastive loss term is minimized *iff* the invariant subgraph generator $\Phi$ equals $\Phi^*$.

*Proof.* Denote the first contrastive loss term of Eq. (5) as $L_{ssl}$ and the second variance term as $L_{var}$.

$\Leftarrow$: To prove $\Phi^* = \arg\min_{\Phi} L_{ssl}$, assume there exists another $\Phi^\prime \neq \Phi^*$ such that $L_{ssl}(\Phi^\prime) < L_{ssl}(\Phi^*)$. This implies that $G_V^\prime$ includes ground-truth invariant subgraph information,
$$
G_V^\prime \cap G_I^* = G_V^\prime \cap G\backslash G_V^* \neq \emptyset.
$$
Therefore, the contrastive loss $\ell_{ssl}\left(\Phi, h_V; k\right)$ among the environments partitioned by the graph label $k = 1,\dots,|Y|$ is dependent on the label itself, i.e.,
$$
\exists\, k_1, k_2 = 1, \dots, |Y|,\ k_1 \neq k_2, \text{ such that } \ell_{\text{ssl}}(\Phi^\prime, h_V; k_1) \neq \ell_{\text{ssl}}(\Phi^\prime, h_V; k_2).
$$
Thus
$$
L_{var}(\Phi^\prime) = \mathrm{Var}\left(\ell_{ssl}\left(\Phi^\prime, h_V; k\right)\right) > 0.
$$
However, $G_V^*$ excludes all the ground-truth invariant information that is sufficiently predictive of the graph label. We have
$$
L_{var}(\Phi^*) = 0.
$$
Thus,
$$
L_{var}(\Phi^\prime) > L_{var}(\Phi^*).
$$
This contradicts the assumption that the variance term $L_{var}$ is minimized.
Hence the optimal invariant subgraph generator $\Phi^*$ minimizes the contrastive loss $L_{ssl}$.

$\Rightarrow$: To prove that minimizing the contrastive loss $L_{ssl}$ yields $\Phi = \Phi^*$, assume there exists another invariant subgraph generator $\Phi^\prime$ derived by minimizing $L_{ssl}$, i.e., $\Phi^\prime \neq \Phi^*$, where $G = (G_I^\prime, G_V^\prime)$ and $\Phi^\prime = \arg\min_{\Phi} L_{ssl}$. Because $L_{ssl}$ is minimized, $G_V^\prime$ preserves all the intrinsic features of the ground-truth variant subgraph, i.e., $G_V^* \subseteq G_V^\prime$. Since the second variance term $L_{var}$ is minimized, only ground-truth variant patterns can be included in $G_V^\prime$, i.e., $G_V^\prime \subseteq G_V^*$. Therefore, we have $G_V^\prime = G_V^*$, so that $\Phi^\prime = \Phi^*$, which contradicts the assumption $\Phi^\prime \neq \Phi^*$. Hence there exists a unique $\Phi^*$ that minimizes the first contrastive loss term of Eq. (5). $\square$

- **Q2. Detailed analysis on hyper-parameter sensitivity.**

We clarify that $\alpha$ is the coefficient controlling the balance between the contrastive loss and the invariance regularizer. A large $\alpha$ encourages invariance among different training environments, while a small $\alpha$ allows learning informative representations but may not be sufficient to encourage invariance. $q$ is a hyperparameter in the GCE loss which controls the degree of fitting spurious correlations. A small $q$ pays more attention to correctly classified samples and suffers more from noisy samples, while a large $q$ makes the model less sensitive and prevents overfitting to the spurious correlations. Fig. 4 shows our method outperforms the best baselines within a wide range of hyperparameter choices.

- **Q3. Computation cost of the modules.**

Denote the number of nodes and edges of the input graph as $|V|$ and $|E|$, the representation dimensionality as $d$, and the batch size as $B$. Our method mainly consists of three modules: 1.
For the invariant and variant subgraph identification module, the time complexity is $O(|E|d+|V|d^2)$ from the GCN component. 2. For the variant subgraph contrastive module, the time complexity is $O(B^2d)$, which mainly comes from the pair-wise similarity calculation within a batch of graphs. 3. For the inverse propensity weighting based invariant prediction module, the time complexity is $O(B)$.

Overall, the time complexity of our method is dominated by the message-passing GNN; the variant subgraph contrastive estimation and inverse propensity weighting modules do not introduce higher-order terms.

- **Q4. Reason to use GCE loss.**

We adopt the GCE loss to fit the spurious correlations, as it has been shown that the GCE loss emphasizes spurious correlations.

- **Q5. Impact of $q$.**

As shown in Figure 4, $q$, the hyperparameter in the GCE loss that controls the degree of fitting the spurious correlations, has a moderate impact on model performance, but our method is not very sensitive to it, outperforming the best baselines within a wide range of hyperparameter choices.
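For context on Q4 and Q5: the Generalized Cross-Entropy loss discussed above is commonly defined as $L_q(p, y) = (1 - p_y^q)/q$ (Zhang & Sabuncu, 2018), which is the formulation typically meant when GCE is used to amplify easy-to-fit spurious correlations. The following minimal NumPy sketch is ours, not the paper's Eq. (7); the function and variable names (`gce_loss`, `probs`, `labels`) are illustrative.

```python
import numpy as np

def gce_loss(probs, labels, q=0.7):
    """Generalized Cross-Entropy: L_q(p, y) = (1 - p_y^q) / q.

    As q -> 0 this recovers standard cross-entropy -log(p_y);
    q = 1 gives the (shifted) mean absolute error 1 - p_y.
    """
    p_y = probs[np.arange(len(labels)), labels]  # probability of the true class
    return (1.0 - p_y ** q) / q

# Why GCE emphasizes spurious correlations: dL_q/dp_y = -p_y^(q-1), so each
# sample's gradient is the cross-entropy gradient (-1/p_y) scaled by p_y^q.
# Confidently fitted samples are relatively up-weighted, so patterns that are
# easy to fit (often the spurious ones) dominate training.
```

This also makes the role of $q$ concrete: larger $q$ flattens the loss for low-confidence samples, reducing sensitivity to hard or noisy examples, consistent with the sensitivity discussion in Q5.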
Summary: The submission explores the challenge of out-of-distribution generalization in GNNs. The authors propose a novel model that enhances out-of-distribution generalization by explicitly identifying invariant subgraphs and leveraging contrastive learning on variant subgraphs. Their approach consists of three key components: (1) distinguishing invariant and variant subgraphs, (2) applying contrastive learning to variant subgraphs to estimate the degree of spurious correlations, and (3) predicting invariant subgraphs with inverse propensity weighting to mitigate these spurious correlations. The model is evaluated on multiple benchmark datasets under varying degrees of distribution shifts, demonstrating its superiority over existing methods. Claims And Evidence: The authors claim that this is the first work to explicitly utilize variant subgraphs to help capture invariant subgraphs under distribution shifts and that the three mutually promoted modules significantly enhance performance over state-of-the-art baselines. This claim is supported by their experimental results, which consistently show the superiority of their method across various datasets with varying bias levels. Methods And Evaluation Criteria: The method aligns well with existing literature on invariant learning and causality based OOD generalization. The three proposed modules are motivated, particularly the introduction of variance contrastive estimation for variant subgraph learning, which differentiates this work from previous approaches that only focus on learning invariant representations. Theoretical Claims: The problem formulation, which defines the OOD generalization objective and the role of invariant subgraphs, is clearly stated and follows a well-defined causality framework. The use of inverse propensity weighting is a reasonable and theoretically sound approach to mitigating spurious correlations. 
Experimental Designs Or Analyses: The experimental design is well-structured and includes strong baseline comparisons. The authors compare against graph learning methods, including standard GNN architectures (GCN, GIN), pooling-based methods (DiffPool), and recent invariant learning approaches (DIR, LDD, DisC). Supplementary Material: Yes, all supplementary material is checked. Relation To Broader Scientific Literature: The work builds upon prior research (invariant learning) effectively. Essential References Not Discussed: The authors should cite or discuss more recent graph OOD papers. Other Strengths And Weaknesses: This work addresses an important problem in graph machine learning and presents a novel solution with strong theoretical and empirical backing. The proposed approach is innovative in its explicit estimation of spurious correlations and its use of contrastive learning to refine invariant subgraph identification. The extensive experimental validation strengthens its contribution. The method is relatively efficient, as shown by the complexity analysis, which indicates that VIVACE maintains comparable computational cost to existing baselines. The scalability of the method makes it suitable for real-world applications beyond the datasets tested. One potential limitation is the reliance on the accuracy of the variant subgraph identification step. If the model fails to accurately disentangle variant subgraphs, the effectiveness of the entire approach could be questioned. While the authors' ablation studies suggest that their method is robust, additional discussion of how this challenge is addressed would be useful. Another limitation is that the authors could further elaborate on the underperformance of the baselines in the experiments. A more detailed discussion of this issue would provide valuable insight into the limitations of existing methods.
Other Comments Or Suggestions: In addition to addressing these limitations, the authors should provide more explicit explanations of the training objective in the main text, particularly for the contrastive learning module. Questions For Authors: Could you clarify the mechanism that ensures the accuracy of the variant subgraph identification step? Could the approach be extended to node or link prediction? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: - **Q1. Clarification on the reliance on accurate variant subgraph identification.** We would like to clarify that our method can provably learn accurate variant subgraphs with a theoretical guarantee.

**Theorem 1.** Let $\Phi^*$ denote the optimal invariant subgraph generator that disentangles the ground-truth invariant subgraph $G_I^*$ and variant subgraph $G_V^*$ given the input graph $G$, where $G_I^*$ satisfies Assumption 2.1, and denote the complement as $G_V^* = G\backslash G_I^*$. Assuming the second variance term of Eq. (5) is minimized, the first contrastive loss term is minimized *iff* the invariant subgraph generator $\Phi$ equals $\Phi^*$.

*Proof.* Denote the first contrastive loss term of Eq. (5) as $L_{ssl}$ and the second variance term as $L_{var}$.

$\Leftarrow$: To prove $\Phi^* = \arg\min_{\Phi} L_{ssl}$, assume there exists another $\Phi^\prime \neq \Phi^*$ such that $L_{ssl}(\Phi^\prime) < L_{ssl}(\Phi^*)$. This implies that $G_V^\prime$ includes the ground-truth invariant subgraph information,
$$ G_V^\prime \cap G_I^* = G_V^\prime \cap G\backslash G_V^* \neq \emptyset. $$
Therefore, the contrastive loss $\ell_{ssl}\left(\Phi^\prime, h_V; k\right)$ among the environments partitioned by the graph label $k = 1,\dots,|Y|$ is dependent on the label itself, i.e.,
$$ \exists\, k_1, k_2 = 1, \dots, |Y|, k_1 \neq k_2, \text{such that } \ell_{\text{ssl}}(\Phi^\prime, h_V; k_1) \neq \ell_{\text{ssl}}(\Phi^\prime, h_V; k_2). $$
Thus
$$ L_{var}(\Phi^\prime) = \mathrm{Var}\left(\ell_{ssl}\left(\Phi^\prime, h_V; k\right)\right) > 0. $$
However, $G_V^*$ excludes all the ground-truth invariant information that is sufficiently predictive of the graph label. We have
$$ L_{var}(\Phi^*) = 0. $$
Thus,
$$ L_{var}(\Phi^\prime) > L_{var}(\Phi^*). $$
This contradicts the assumption that the variance term $L_{var}$ is minimized. This proves that the optimal invariant subgraph generator $\Phi^*$ minimizes the contrastive loss $L_{ssl}$.
$\Rightarrow$: To prove that minimizing the contrastive loss $L_{ssl}$ yields $\Phi = \Phi^*$, assume there exists another invariant subgraph generator $\Phi^\prime$ derived by minimizing $L_{ssl}$, i.e., $\Phi^\prime \neq \Phi^*$, where $G = (G_I^\prime, G_V^\prime)$ and $\Phi^\prime = \arg\min_{\Phi} L_{ssl}$. Because $L_{ssl}$ is minimized, $G_V^\prime$ preserves all the intrinsic features of the ground-truth variant subgraph, i.e., $G_V^* \subseteq G_V^\prime$. Since the second variance term $L_{var}$ is minimized, only ground-truth variant patterns can be included in $G_V^\prime$, i.e., $G_V^\prime \subseteq G_V^*$. Therefore, we have $G_V^\prime = G_V^*$, so that $\Phi^\prime = \Phi^*$, contradicting the assumption. Thus, there exists a unique $\Phi^*$ that minimizes the first contrastive loss term of Eq. (5). We will add more detailed proofs in the revised paper.

- **Q2. Additional discussions on ablation studies.** We have added additional discussions on the ablation studies. The first two ablated versions, "w/ GCNII" and "w/ GIN", denote replacing the backbone with GCNII and GIN. The next two ablated versions, "w/o Var." and "w/o IPW", denote removing the variant subgraph contrastive module and further removing the inverse propensity weighting module. Fig. 3 shows that the performance remains nearly unchanged for the first two ablated versions and drops significantly for the next two, indicating that (1) our method is compatible with the other popular GNNs, and (2) it is important to explicitly identify variant subgraphs, estimate their degree of spurious correlations, and remove them by reweighting. More details will be added in the revised paper.
- **Q3. Why the baselines perform worse.** We have revised the discussion in Sec. 4.2 as follows. Some baselines (e.g., GCN and GIN) include no specific designs for generalization under distribution shifts.
Some baselines (e.g., FactorGCN and DIR) also performed poorly, since their assumptions may be invalid under severe bias. In addition, some debiased methods (e.g., LDD and DisC) did not explicitly capture the spurious correlations for each input graph. Therefore, the baselines perform worse than our method. We will add the discussion above to the revised paper.

- **Q4. Explanations of training objective.** We would like to clarify that our method is one joint framework that optimizes Eq. (11), including (1) the self-supervised contrastive objective in Eq. (5), which ensures the accurate identification of invariant and variant subgraphs, (2) the objective in Eq. (7) for accurately estimating the degree of the spurious correlations, and (3) the objective in Eq. (8) to learn the predictions on the invariant subgraph after reweighting.
- **Q5. Could the approach be extended to node or link prediction?** In this paper, we mainly focus on the graph-level prediction task, but our method can be easily extended to node or link prediction tasks, which we leave for future work.
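The variance term $L_{var}$ that drives the Theorem 1 argument above can be sketched in a few lines; the environment partition and the toy loss values below are illustrative assumptions, not results from the paper.

```python
import numpy as np

def variance_regularizer(env_losses):
    """L_var = Var_k( l_ssl(Phi, h_V; k) ): the variance of the
    contrastive loss across label-partitioned environments.
    It is zero iff the loss is identical in every environment.
    """
    return float(np.var(np.asarray(env_losses, dtype=float)))

# If the variant representation leaks label information, the
# per-environment contrastive losses differ, so L_var > 0; a
# label-independent variant representation gives L_var = 0.
leaky = variance_regularizer([0.9, 0.3, 0.6])
invariant = variance_regularizer([0.5, 0.5, 0.5])
```

This mirrors the proof's contradiction step: a generator whose variant subgraph absorbs invariant information produces label-dependent per-environment losses and hence a strictly positive regularizer.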
Summary: This study addresses a critical problem in GNNs regarding their limited generalization capabilities under distribution shifts. Current approaches mainly use correlations in graph patterns rather than discovering fundamental causal substructures for predictions. To overcome this limitation, the paper jointly considers the identification of both invariant and variant subgraphs. Specifically, the method estimates the impact of spurious correlations induced by variant subgraphs and leverages this estimation to enhance the learning of invariant subgraphs. The proposed model demonstrates substantial performance gains over representative baselines, and comprehensive ablation studies confirm the effectiveness of each designed module. ## update after rebuttal I will keep my positive opinion towards the paper after rebuttal. Claims And Evidence: The paper's claims about improved generalization are clear and convincing. The paper claims that the method improves out-of-distribution generalization in graph classification tasks by explicitly modeling spurious correlations through contrastive learning and mitigating their impact via inverse propensity weighting. The empirical results support the claims by outperforming existing graph OOD generalization baselines across five datasets, including CMNIST, CFashion, CKuzushiji, MOLSIDER, and MOLHIV. Methods And Evaluation Criteria: The proposed method and evaluations make sense for the graph OOD problem. The use of contrastive learning for estimating variant subgraph effects is novel and well-motivated. Traditional methods only focus on learning invariant subgraph. But the proposed method focuses on learning both the variant subgraph and invariant subgraph. The key idea is inspiring. As said above, the evaluations included existing graph OOD generalization baselines and five common datasets (CMNIST, CFashion, CKuzushiji, MOLSIDER, and MOLHIV). 
Theoretical Claims: The application of inverse propensity weighting is grounded in causal inference literature and is employed in a principled manner to correct for spurious correlations. The theoretical foundations of the method align with prior works on invariant learning and causal representation learning. Experimental Designs Or Analyses: I have checked the soundness/validity of the experimental designs or analyses. I think the experimental designs are good but the analyses are a little limited. Supplementary Material: I reviewed supplementary material mainly on sections B and C. Relation To Broader Scientific Literature: The paper belongs to the literature on OOD generalization. Essential References Not Discussed: There are no additional references that need to be discussed. Other Strengths And Weaknesses: The other strengths are summarized as follows: - The work made strong methodological contributions with an interesting idea. - The method effectively disentangles invariant and variant subgraphs, which is a novel approach to handling distribution shifts. - The empirical results show consistent improvements across diverse datasets. The other weaknesses are summarized as follows: - Figure 1 does not clearly show the method's training procedure. - The discussion of the related works is cursory. In some places, the authors simply list the references without the necessary discussion. - Typos: line 412, the '5.1' should be removed; line 322, "well handling distribution shifts" should be "well handle distribution shifts". Other Comments Or Suggestions: The authors should address the weaknesses above. The model framework in Figure 1 can be clearer. The discussion of the related works is also cursory: in the OOD generalization part of the related work, the authors merely list the relevant works (sometimes several per line), which should be revised; the differences among these works should be explained.
Questions For Authors: What are the computational trade-offs of using inverse propensity weighting? What are the differences among the listed OOD generalization methods from line 392 to line 400. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. We have addressed all the comments. Please kindly find the detailed responses below.

- **Q1. Figure 1 is not clear to show the method's training procedure.** Thank you for this comment. We have followed your suggestion and improved Figure 1 by incorporating additional details to better illustrate our method. Specifically, we have made the following three improvements: (1) We have included the key equations used in the method directly in the figure, so that readers can connect the pipeline in the figure with the corresponding details in the text. (2) We have refined the pipeline by adding more detailed step-by-step flows indicated by arrows, making the process clearer and easier to follow. (3) We have highlighted more technical details with concrete examples and included a small legend that explains the meaning of specific symbols, colors, and arrow styles. We will update Figure 1 following your suggestion in the revised paper.
- **Q2. Differences among the listed related works (line 392-400).** Thank you for this question. We have revised the discussion of the related works (line 392-400) as follows: "Several notable works have been proposed to tackle this problem on graphs by learning subgraphs backed by different theories or assumptions, including causality [1-2], invariant learning [3-6], disentanglement [7], and the information bottleneck [8]. Different from these works, which output explainable or invariant subgraphs under distribution shifts, some works directly learn generalizable graph representations for problems where distribution shifts exist in graph size [9-10] or other structural patterns [11-12], and the learned representations are expected to remain invariant across different environments." We will also add more discussion of related works in the revised paper.
- **Q3. There are some typos.** Thank you for this comment.
We have carefully proofread the paper and fixed the following typos: 1. Line 412: we have revised the section name "5.1. Disentangled Graph Neural Network" to "Disentangled Graph Neural Network". 2. Line 322: we have revised the expression "well handling distribution shifts" to "well handle distribution shifts". We will update the revised paper.

- **Q4. What are the computational trade-offs of using inverse propensity weighting?** Thank you for this question. We would like to clarify that using inverse propensity weighting introduces no significant additional computation cost. Specifically, in the variant subgraph contrastive estimation module, the time complexity of estimating the degree of the spurious correlations is $O(B^2d)$, which is mainly from the pair-wise similarity calculation within a batch of graphs, where $d$ is the representation dimensionality and $B$ is the batch size. After that, we calculate the inverse propensity weights, whose time complexity is $O(B)$. So the overall time complexity of inverse propensity weighting is $O(B^2d)$, which is significantly lower than the complexity of the message-passing GNNs used in our invariant and variant subgraph identification module as well as in the other baselines, whose time complexity is $O(|E|d+|V|d^2)$, where $|E|$ and $|V|$ denote the numbers of edges and nodes. We will add these detailed analyses in the revised paper to clarify the efficiency of our method.

**References:** [1] Discovering Invariant Rationales for Graph Neural Networks [2] Causal attention for interpretable and generalizable graph classification [3] Handling Distribution Shifts on Graphs: An Invariance Perspective. [4] Learning Invariant Graph Representations Under Distribution Shifts [5] Empowering Graph Invariance Learning with Deep Spurious Infomax [6] Does invariant graph learning via environment augmentation learn invariance?
[7] Debiasing Graph Neural Networks via Learning Disentangled Causal Substructure [8] Interpretable and generalizable graph learning via stochastic attention mechanism [9] Size-invariant graph representations for graph classification extrapolations [10] Sizeshiftreg: a regularization method for improving size-generalization in graph neural networks [11] Graph out-of-distribution generalization with controllable data augmentation [12] Graphmetro: Mitigating complex distribution shifts in gnns via mixture of aligned experts --- Rebuttal Comment 1.1: Comment: Thank you for the clarification. I will keep my positive score unchanged.
Summary: This manuscript studies the out-of-distribution generalization issue in graph neural networks. The authors propose learning invariant subgraphs via variant subgraph contrastive estimation, which can handle graph distribution shifts with severe bias. The key innovation is leveraging contrastive learning on variant subgraphs to estimate spurious correlations, which are then mitigated using inverse propensity weighting. This method explicitly addresses the scenario where environment labels are either unavailable or unreliable, significantly enhancing the robustness of GNNs to severe biases in datasets. Claims And Evidence: The claims made by the authors regarding the benefits of leveraging contrastive learning to estimate and mitigate spurious correlations for better generalization are convincing. The provided empirical evidence supports these claims effectively. Methods And Evaluation Criteria: The proposed approach is reasonable. The inverse propensity weighting, coupled with self-supervised contrastive learning, effectively addresses the identified issues. The evaluation criteria using established datasets and metrics are appropriate and effectively highlight the strengths of the method for the problem. Theoretical Claims: The assumptions made for invariant subgraph learning are acceptable given the problem setting. However, detailed theoretical discussion and proofs of why this method can solve the OOD problem are missing. Experimental Designs Or Analyses: The experiments provide strong empirical validation for the proposed method. Extensive experiments on several graph classification benchmark datasets demonstrate the superiority of the proposed method over baselines. However, the analyses of the important hyperparameters are weak. Supplementary Material: I checked the supplementary materials; the additional details on implementation are helpful.
Relation To Broader Scientific Literature: The paper is well-situated within the literature on graph learning and OOD generalization. Essential References Not Discussed: The important references are fully discussed from my point of view. Other Strengths And Weaknesses: ### Strengths (1) The research problem is interesting and important to the community. As the real-world applications of GNNs continue to expand, I think improving robustness against real-world out-of-distribution scenarios becomes increasingly crucial. (2) The proposed method is technically sound, and the technical details are easy to understand. The use of variance contrastive estimation and inverse propensity weighting to mitigate spurious correlations is particularly novel. (3) The comparative results against baselines validate the effectiveness of the approach. The presented experimental results indicate consistent improvements over several state-of-the-art baselines, validating the effectiveness and robustness of the proposed approach under varying degrees of bias. ### Weaknesses (1) While the paper is methodologically solid and clearly explained, the authors should provide more detailed theoretical analyses of the methods. (2) The discussion of the experimental results in the experiment section could also be more detailed. Deeper insights into hyperparameter analyses and ablation studies would improve clarity. (3) The discussion of the time complexity is limited. A more comprehensive analysis of each module is missing. Other Comments Or Suggestions: I would suggest that the authors incorporate more rigorous theoretical analyses of the proposed method. I also strongly encourage the authors to incorporate more details on the time complexity of each module. Questions For Authors: Can you provide more theoretical analyses of the method to explain why it works? Can you provide the time complexity of each module?
Do the variant subgraph contrastive estimation and inverse propensity weighting modules introduce unacceptable time complexity? Can you explain why the hyperparameters $\alpha$ and $q$ influence the performance? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: - **Q1. Theoretical analyses.** We have added **Theorem 1** to show that our method can accurately identify the invariant and variant subgraphs for OOD generalization.

**Theorem 1.** Let $\Phi^*$ denote the optimal invariant subgraph generator that disentangles the ground-truth invariant subgraph $G_I^*$ and variant subgraph $G_V^*$ given the input graph $G$, where $G_I^*$ satisfies Assumption 2.1, and denote the complement as $G_V^* = G\backslash G_I^*$. Assuming the second variance term of Eq. (5) is minimized, the first contrastive loss term is minimized *iff* the invariant subgraph generator $\Phi$ equals $\Phi^*$.

*Proof.* Denote the first contrastive loss term of Eq. (5) as $L_{ssl}$ and the second variance term as $L_{var}$.

$\Leftarrow$: To prove $\Phi^* = \arg\min_{\Phi} L_{ssl}$, assume there exists another $\Phi^\prime \neq \Phi^*$ such that $L_{ssl}(\Phi^\prime) < L_{ssl}(\Phi^*)$. This implies that $G_V^\prime$ includes the ground-truth invariant subgraph information,
$$ G_V^\prime \cap G_I^* = G_V^\prime \cap G\backslash G_V^* \neq \emptyset. $$
Therefore, the contrastive loss $\ell_{ssl}\left(\Phi^\prime, h_V; k\right)$ among the environments partitioned by the graph label $k = 1,\dots,|Y|$ is dependent on the label itself, i.e.,
$$ \exists\, k_1, k_2 = 1, \dots, |Y|, k_1 \neq k_2, \text{such that } \ell_{\text{ssl}}(\Phi^\prime, h_V; k_1) \neq \ell_{\text{ssl}}(\Phi^\prime, h_V; k_2). $$
Thus
$$ L_{var}(\Phi^\prime) = \mathrm{Var}\left(\ell_{ssl}\left(\Phi^\prime, h_V; k\right)\right) > 0. $$
However, $G_V^*$ excludes all the ground-truth invariant information that is sufficiently predictive of the graph label. We have
$$ L_{var}(\Phi^*) = 0. $$
Thus,
$$ L_{var}(\Phi^\prime) > L_{var}(\Phi^*). $$
This contradicts the assumption that the variance term $L_{var}$ is minimized. This proves that the optimal invariant subgraph generator $\Phi^*$ minimizes the contrastive loss $L_{ssl}$.
$\Rightarrow$: To prove that minimizing the contrastive loss $L_{ssl}$ yields $\Phi = \Phi^*$, assume there exists another invariant subgraph generator $\Phi^\prime$ derived by minimizing $L_{ssl}$, i.e., $\Phi^\prime \neq \Phi^*$, where $G = (G_I^\prime, G_V^\prime)$ and $\Phi^\prime = \arg\min_{\Phi} L_{ssl}$. Because $L_{ssl}$ is minimized, $G_V^\prime$ preserves all the intrinsic features of the ground-truth variant subgraph, i.e., $G_V^* \subseteq G_V^\prime$. Since the second variance term $L_{var}$ is minimized, only ground-truth variant patterns can be included in $G_V^\prime$, i.e., $G_V^\prime \subseteq G_V^*$. Therefore, we have $G_V^\prime = G_V^*$, so that $\Phi^\prime = \Phi^*$, contradicting the assumption. Thus, there exists a unique $\Phi^*$ that minimizes the first contrastive loss term of Eq. (5). We will add more detailed proofs in the revised paper.

- **Q2.1. Hyperparameter analyses.** We would like to clarify that $\alpha$ is the coefficient controlling the balance between the contrastive loss and the invariance regularizer. A large $\alpha$ encourages invariance among different training environments, while a small $\alpha$ allows learning informative representations but may not be sufficient to encourage invariance. $q$ is a hyperparameter in the GCE loss that controls the degree of fitting spurious correlations. A small $q$ pays more attention to correctly classified samples and suffers more from noisy samples, while a large $q$ makes the model less sensitive and prevents overfitting to the spurious correlations. Fig. 4 shows our method outperforms the best baselines within a wide range of hyper-parameter choices.
- **Q2.2. Ablation studies.** The first two ablated versions, "w/ GCNII" and "w/ GIN", denote replacing the backbone with GCNII and GIN. The next two ablated versions, "w/o Var." and "w/o IPW", denote removing the variant subgraph contrastive module and further removing the inverse propensity weighting module. Fig. 3 shows that the performance remains nearly unchanged for the first two ablated versions and drops significantly for the next two, indicating that (1) our method is compatible with the other popular GNNs, and (2) it is important to explicitly identify variant subgraphs, estimate their degree of spurious correlations, and remove them by reweighting.
- **Q3. Time complexity of each module.** Denote the numbers of nodes and edges of the input graph as $|V|$ and $|E|$, the representation dimensionality as $d$, and the batch size as $B$. Our method mainly consists of three modules: 1. For the invariant and variant subgraph identification module, the time complexity is $O(|E|d+|V|d^2)$, from the GCN component. 2. For the variant subgraph contrastive module, the time complexity is $O(B^2d)$, mainly from the pair-wise similarity calculation within a batch of graphs. 3. For the inverse propensity weighting based invariant prediction module, the time complexity is $O(B)$. Overall, the time complexity of our method is $O(|E|d+|V|d^2)$.

---

Rebuttal Comment 1.1: Comment: Thanks for addressing my concerns. I'd like to raise the score to 4.
Ca2-VDM: Efficient Autoregressive Video Diffusion Model with Causal Generation and Cache Sharing
Accept (poster)
Summary: This paper introduces a diffusion-based method for video generation via causal transformers. The main idea is to apply kv-caching (a technique widely used for AR transformers in NLP) to a causal diffusion transformer. This leads to faster video generation and potentially enables streaming scenarios. The method is based on an OSS text/image-to-video model (OpenSora) and results are demonstrated on two video generation benchmarks. ## Update after rebuttal I am inclined to keep my original score (weak reject). Evaluation is limited, and qualitative results are unsatisfactory (e.g. 19uZ shares this viewpoint). Similarly, evaluation of the cyclic TPE is not complete: I am not convinced by the argument that comparison to a model w/o cyclic TPEs is infeasible - one can probably come up with a smaller-scale experiment to validate it. Claims And Evidence: - The claim on applicability of the method to long-term or live-stream video generation is not validated. Methods And Evaluation Criteria: - The set of benchmarks is quite limited (UCF101, MSR-VTT); there are more recent and relatively widely adopted benchmarks: https://github.com/Vchitect/VBench Theoretical Claims: N/A Experimental Designs Or Analyses: - Validation on longer sequence generation or streaming seems missing. - Benchmarks are limited. Supplementary Material: I did review the three video samples attached to the supplementary. Relation To Broader Scientific Literature: This paper is similar in spirit to LVDM-like AR diffusion but applies a common solution for transformers to improve speed. Essential References Not Discussed: - Auto-regressive diffusion models [Hoogeboom'22] is one of the first works to combine AR and DM into one model. Other Strengths And Weaknesses: - Clarity: there is a bit of confusing text in the intro about transformer-based VDMs - temporal attention can certainly be causal and need not necessarily be bidirectional - and in practice it is a matter of applying a different mask?
- Quality: The resulting videos do not look particularly temporally consistent, which makes it hard to assess the effect of using causal attention (in contrast to the dense attention more frequently used in recent models such as MovieGen). - Originality: the main contribution of this work is applying auto-regressive modeling and kv-caching, which have been widely used in transformer architectures. It is unclear whether applying these to VDMs is a significantly original contribution, especially given that the visuals do not look convincing. - Limited results: three videos seem insufficient to really understand the quality of the model.
Rebuttal 1: Rebuttal: ## Anonymous link for additional experiment results https://anonymous.4open.science/r/additional-exp-results-for-anonymous-github-F6EB/readme.md This includes: Table_R1, Table_R2, Figure_R1, and Figure_R2 $~$ ## Q1: Evaluation on VBench VBench is primarily designed for text-to-video evaluation. For our assessment, we selected four metrics: aesthetic quality, imaging quality, motion smoothness, and temporal flickering. The first two measure spatial (appearance) quality, and the last two assess temporal consistency. We compared Ca2-VDM and OS-Ext on the Skytimelapse test set. As shown in Table_R2, Ca2-VDM achieves comparable performance in both appearance quality and temporal consistency. Given our primary focus on efficiency, we conclude that Ca2-VDM matches the bidirectional baseline while being more efficient in both computation and storage for autoregressive video generation. $~$ ## Q2: The claim on applicability of the method to long-term or live-stream video generation is not validated. Long-term video generation remains an open problem, with numerous challenges to be solved, including short-term frame quality, long-term content consistency, and generation efficiency. For our Ca2-VDM, as claimed in the introduction, **the primary focus is to improve the generation efficiency (both computation and storage) of auto-regressive video generation**. Providing superior long-term video generation performance (surpassing SOTAs) is not the primary focus of Ca2-VDM. For the visual quality, we achieve comparable visual quality to SOTA methods, as evidenced by the FVD results on MSR-VTT and UCF-101 datasets (Tables 1 and 2), as well as the results on VBench (Table_R2). For the long-term quality, we conducted additional experiments to show long-term content drift, as in Figure_R1. In fact, OS-Ext and our Ca2-VDM show comparable visual quality. Both models exhibit a similar degree of error accumulation over time. 
$~$
## Q3: Essential References Not Discussed
> Hoogeboom, Emiel, et al. "Autoregressive Diffusion Models." in ICLR, 2022

Thank you for the suggested reference. This work proposes the Autoregressive Diffusion Model (ARDM), which is built on autoregressive models and incorporates techniques from **discrete diffusion models**. It offers efficient training without the need for causal masking, and parallel generation at test time. In contrast, our Ca2-VDM is built on **continuous latent diffusion models** for video generation, and enables efficient autoregressive generation with causal attention and cache sharing. We will add more discussion in the revised paper.
$~$
## Q4: Clarity: Confusion in the Introduction about the temporal attention
Thank you for your suggestion. We agree that temporal attention does not necessarily have to be bidirectional. Our introduction aims to highlight that, in current video diffusion models, temporal attention is commonly used and is typically applied in a bidirectional manner, in both UNet and DiT structures. We will improve the writing of the introduction to avoid any potential confusion in the revised paper.
$~$
## Q5: Quality: Unsatisfactory temporal consistency, limited qualitative results.
We would like to clarify that the causal attention is not intended to improve temporal consistency. The causal attention is designed to work together with the kv-cache queue and cache sharing to improve generation efficiency. For temporal consistency, as evidenced in Table 3 and Figure 5, our Ca2-VDM offers extendable conditions and achieves better temporal consistency than fixed-length-condition SOTA models (StreamT2V, Gen-L-Video, and OS-Fix). We also conducted an additional evaluation in terms of frame differencing, as shown in Figure_R2. The results show that our method has good temporal consistency, while the other three have periodic content mutations (at the edge of each autoregression chunk), especially GenLV.
Due to limited computational resources and the tight schedule of the rebuttal period, we regret that we currently cannot conduct additional large-scale training to further improve the quality of the videos in the supplementary material.
$~$
## Q6: Originality: Concerns about the contribution
We would like to clarify that applying auto-regressive modeling and kv-cache to VDMs is not trivial (as acknowledged by Reviewer-dKiu). Our original contributions are: 1) the kv-cache queue boosted with cyclic-TPE, and 2) cache sharing.
1. KV-cache queue with cyclic-TPE: It addresses the cases where positional indices grow beyond the training length while keeping training-inference alignment. More detailed analysis can also be found in the **Q4 of Reviewer-19uZ**.
2. Cache sharing: It enables the model to store only the KVs of clean frames, leading to much less GPU memory usage than concurrent work (e.g., Live2diff). More detailed analysis can also be found in the **Q4 of Reviewer-7MWe**.
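The queue behaviour behind contribution 1 can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation; the class name `KVCacheQueue`, the parameter `max_prefix_len`, and the string placeholders are hypothetical:

```python
from collections import deque

# Illustrative sketch (hypothetical names): a fixed-size queue of
# clean-frame KV features. New clean frames are enqueued after each
# autoregression step; once the maximum prefix length is exceeded, the
# oldest entries are evicted, since new frames depend mostly on the most
# recent context frames.
class KVCacheQueue:
    def __init__(self, max_prefix_len):
        # a deque with maxlen drops the oldest entry automatically on append
        self.queue = deque(maxlen=max_prefix_len)

    def enqueue(self, frame_kv):
        """Store the key/value features of one newly generated clean frame."""
        self.queue.append(frame_kv)

    def context(self):
        """Cached KVs reused by causal temporal attention at the next step."""
        return list(self.queue)

cache = KVCacheQueue(max_prefix_len=4)
for frame_id in range(6):          # generate 6 frames autoregressively
    cache.enqueue(f"kv_frame_{frame_id}")
print(cache.context())             # only the 4 most recent frames remain
```

Because the cached entries are KVs of fully denoised (clean) frames, the same queue can be reused at every denoising step, which is what distinguishes this design from per-step caching.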
Summary: This work proposes an optimized AR video diffusion model, Ca2-VDM, which aims to enable efficient long-term, real-world video generation. In Ca2-VDM, **causal generation** is proposed to reduce the redundant computations of previous conditional frames, and **cache sharing** is proposed to reduce the storage cost during inference. The effectiveness of Ca2-VDM is validated on three different benchmarks, compared with two different conditional generation baselines.
## update after rebuttal
I have read the response from the authors. My concerns are addressed. Some other reviewers are still concerned about the overall poor generation results. I will keep my initial rating.
Claims And Evidence: Yes, the proposed Ca2-VDM is validated by thorough experimental results on different benchmarks.
Methods And Evaluation Criteria: Yes, the evaluation uses metrics commonly used in the generation community, like FVD.
Theoretical Claims: No theoretical claims in this work.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes, video comparisons are provided in the supplementary material.
Relation To Broader Scientific Literature: The core contributions of this work, i.e., causal AR temporal generation with cache sharing to significantly improve the efficiency of AR video diffusion models, are deeply related to several lines of research in the broader scientific literature.
Essential References Not Discussed:
- From Slow Bidirectional to Fast Autoregressive Video Diffusion Models
Other Strengths And Weaknesses: Strengths:
1. This work identifies a limitation of existing AR VDMs, the repeated computation of conditional frames, which clearly motivates the proposed work to address real-world applications of video generation like long-term video generation and live-stream scenarios.
2. The proposed causal attention and cache sharing are both innovative for long-term AR video generation.
3.
The paper is well-written, and the figures help clarify the proposed method.
Other Comments Or Suggestions: No
Questions For Authors:
1. The model is initialized from Open-Sora v1.0, which is a bidirectional video generation model, so there exists a mismatch with causal video generation. Did the authors observe any negative impact or instability during finetuning?
2. One strength of causal video generation is that it can be generalized to longer-time video generation, but there is no discussion of this point in this work. The evaluation is conducted under the same settings as training. Does the content drift over longer times, like hundreds of frames?
3. The authors claim that cache sharing helps avoid the huge cache storage cost of previous works, but it remains unclear how much actual memory is saved empirically. Could the authors provide detailed memory-footprint comparisons between Ca2-VDM and baselines or other methods?
4. The proposed cyclic TPEs are an interesting contribution, but the paper does not fully explore their importance separately. Is the model sensitive to the TPEs, and how sensitive is it to the embeddings?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal:
## Anonymous link for additional experiment results
https://anonymous.4open.science/r/additional-exp-results-for-anonymous-github-F6EB/readme.md
This includes: Table_R1, Table_R2, Figure_R1, and Figure_R2
$~$
## Q1: Essential References Not Discussed
> Yin, Tianwei, et al. "From slow bidirectional to fast causal video generators." CVPR 2025.

Thank you for your suggestion. This work also introduces causal attention into autoregressive video diffusion models (AR-VDMs). They use distribution matching distillation to align the visual quality of the causal student model with the bidirectional teacher model. This referenced work is concurrent with our paper, i.e., our initial version of Ca2-VDM was developed in parallel with it. Our approach distinguishes itself through the kv-cache queue (with cyclic-TPE) and cache sharing mechanisms, which provide a framework for AR-VDMs that is efficient in both computation and storage. We will include this referenced work and add additional discussions in our revised paper.
$~$
## Q2: Any negative impact or instability observed during finetuning from the bidirectional VDM?
We did not observe any obvious negative impact or instability during our fine-tuning. Indeed, as verified by the aforementioned reference work (Yin, Tianwei, et al., 2025), distilling the knowledge from a pre-trained bidirectional teacher model to the causal student model brings quality improvements over direct fine-tuning. Additionally, as discussed in the "Possible Future Directions" section of the Appendix (Sec. F), pretraining the causal attention from scratch might bring potential improvements. We will explore these two strategies in our future work.
$~$
## Q3: Content drift over longer generated videos
As discussed in our Introduction section, bidirectional models are also capable of generating longer videos beyond the training video length, and this is not a unique strength of causal models.
Also, we did not claim long-video quality improvement over the bidirectional baseline (OS-Ext) as our contribution. We conducted additional experiments to show long-term content drift. As shown in Figure_R1, OS-Ext and our Ca2-VDM show comparable visual quality. Both models exhibit a similar degree of error accumulation over time. Nevertheless, long-term video generation remains an open problem. Both models face challenges in this regard. We can apply the aforementioned distillation technique to enhance the visual quality.
$~$
## Q4: GPU memory usage between Ca2-VDM and baselines
We conducted empirical GPU memory statistics, as shown in Table_R1. We compare Ca2-VDM with a concurrent work, Live2diff [1]. It stores the kv-cache for every denoising step (with a different noise level $t$ and thus different KV features), which costs much more GPU memory than ours. Live2diff uses StreamDiffusion [2]'s pipelined denoising, which puts frames with progressive noise levels into a batch and generates one frame at each autoregression step. So, its batch size in the model forward pass equals the number of denoising steps, i.e., $B=T$. Note that cache sharing is an inherent property of Ca2-VDM, and it cannot be evaluated without cache sharing. Instead, we demonstrate that Ca2-VDM's KV-cache memory cost is independent of the number of denoising steps, as its fixed shape $(1, 25, hw, C)$ ensures stable memory usage. In contrast, Live2diff's memory scales with $T$ (e.g., from 1.42 GB at $T=4$ to 17.70 GB at $T=50$), confirming that cache sharing saves $T\times$ GPU memory. As a result, Ca2-VDM requires only 0.86 GB (w/ PE) or 0.77 GB (w/o PE), with the difference due to the spatial KV-cache for prefix enhancement (PE). While Live2diff uses distillation (e.g., LCM) to reduce $T$, existing few-step acceleration methods still require at least 4 steps to generate frames with acceptable quality. This means we can save at least $4\times$ GPU memory.
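The $T\times$ scaling argument reduces to simple arithmetic, sketched below. The token count `hw` and channel width `C` are hypothetical placeholders (the rebuttal does not state them), and fp16 storage is assumed; only the relative scaling with the number of denoising steps $T$ matters:

```python
# Back-of-the-envelope sketch of the KV-cache memory comparison.
# Per-denoising-step caching (Live2diff-style) stores T copies of the KV
# features, one per noise level t, while cache sharing stores a single
# copy of the clean-frame KVs.

def kv_cache_bytes(batch, frames, tokens, channels, bytes_per_elem=2):
    # keys + values -> factor of 2; fp16 -> 2 bytes per element
    return 2 * batch * frames * tokens * channels * bytes_per_elem

hw, C = 32 * 32, 1152                  # hypothetical spatial tokens / channels
shared = kv_cache_bytes(1, 25, hw, C)  # shared cache, fixed shape (1, 25, hw, C)

for T in (4, 50):                      # numbers of denoising steps
    per_step = T * shared              # one cache per noise level t
    print(f"T={T}: per-step caching needs {per_step // shared}x the shared cache")
```

Whatever the absolute sizes, the per-step scheme grows linearly in $T$ while the shared cache stays constant, which matches the 1.42 GB vs. 17.70 GB figures quoted above.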
In addition, Live2diff cannot run 50-step denoising with KV-cache when GPU memory is limited or for high-resolution videos. This prevents proper evaluation of the teacher model before distillation, further restricting its applicability.

[1] Xing, Zhening, et al. Live2diff: Live stream translation via uni-directional attention in video diffusion models. 2024.
[2] Kodaira, Akio, et al. Streamdiffusion: A pipeline-level solution for real-time interactive generation. 2023.
$~$
## Q5: How sensitive is the model to cyclic TPEs?
The cyclic TPE in Ca2-VDM is specifically designed to enable the model to generate videos that exceed the training length. In other words, a direct comparison of model performance with and without cyclic TPEs is not feasible. Also, it is not intended to enhance visual quality. In terms of the model's sensitivity to cyclic TPE, we discussed the impact of training with cyclic TPE in the "Limitations" section of the Appendix (Sec. F). It requires the model to learn all possible situations during training, which makes it more difficult to converge.
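A rough sketch of how cyclic indexing can keep positional indices within the trained range, based on the description of wrap-around indices with a randomized training offset. The function name and table size here are hypothetical, not taken from the paper's code:

```python
import random

# Hypothetical sketch of cyclic temporal positional indexing: frame
# positions wrap around a fixed embedding table of size num_tpe, so frames
# generated beyond the training length still receive embeddings that were
# seen during training. Randomizing the offset at training time exposes the
# model to every wrap-around phase.
def cyclic_tpe_indices(start_frame, num_frames, num_tpe, offset=0):
    return [(offset + start_frame + i) % num_tpe for i in range(num_frames)]

# Inference: an autoregression step starting at frame 30 with a 32-entry
# table wraps back to the beginning of the table.
print(cyclic_tpe_indices(start_frame=30, num_frames=4, num_tpe=32))

# Training: a random offset per sample covers all cyclic positions.
offset = random.randrange(32)
assert all(0 <= i < 32 for i in cyclic_tpe_indices(0, 16, 32, offset=offset))
```

The training-time random offset is what forces the model to "learn all possible situations", which is consistent with the convergence difficulty noted above.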
Summary: Ca2-VDM is an autoregressive video diffusion model designed to generate long videos more efficiently. The paper identifies that existing autoregressive video diffusion models (VDMs) suffer heavy redundant computation when generating videos in chunks: overlapping frames between successive clips are repeatedly processed, leading to quadratic time complexity as more clips are generated. To address this, Ca2-VDM introduces two key innovations: (i) a causal generation mechanism with unidirectional temporal attention, so each frame only attends to previous frames. This allows the model to cache intermediate features (keys/values) of past (conditional) frames and reuse them in subsequent generation steps, eliminating redundant computations. (ii) a cache sharing strategy that uses a fixed-size KV-cache queue along with cyclic temporal positional embeddings (Cyclic-TPEs) to recycle the cache across all diffusion denoising steps. By sharing cached features rather than storing separate caches per denoising step, memory usage is kept in check. Using these techniques, Ca2-VDM can autoregressively extend video length with only a slight increase in computation per step (linear in the number of steps, rather than quadratic). The model is built on a spatial-temporal Transformer (initialized from Open-Sora v1.0, a pre-trained video diffusion model) and is evaluated on both text-to-video generation and video prediction tasks. Experiments show that Ca2-VDM maintains state-of-the-art video generation quality while significantly improving inference speed, enabling generation of longer videos (dozens of frames) more practically.
In summary, the paper’s primary contributions are: (1) a causal generation architecture for VDMs that enables reusing past frame features via caching, (2) a cache-sharing mechanism (with a cyclic positional encoding scheme) that provides long-term context without prohibitive memory growth, and (3) an implementation that achieves comparable or better video quality than prior state-of-the-art models but at much lower computational cost and faster inference.
Claims And Evidence: The claims in the paper are generally well-supported by empirical evidence. The authors claim that Ca2-VDM eliminates redundant computation and achieves a significant speedup in autoregressive video generation. This is convincingly demonstrated by detailed runtime comparisons. For example, to generate an 80-frame video at 256^2 resolution (with 100 denoising steps), Ca2-VDM required only ~52 seconds on an A100 GPU, whereas a comparable baseline with extendable context (OS-Ext) took 130 seconds and a prior streaming method took 150 seconds. This confirms a 2.5–3× speedup in practice, matching the claim of improved efficiency. The claim of reduced computational complexity (from quadratic to linear scaling with the number of autoregressive steps) is backed by analysis of FLOPs: as the number of autoregressive steps grows, baseline models see dramatic increases in computation at each new step, while Ca2-VDM’s computation grows only slightly in temporal attention and stays constant in other parts. The paper also claims state-of-the-art video generation quality (or at least comparable to SOTA). This is supported by quantitative results on standard benchmarks: for text-to-video on MSR-VTT, Ca2-VDM achieves an FVD of 181, matching the best prior model (SEINE) and substantially better than older models (e.g. PixelDance 381, ModelScope 550).
For video prediction on UCF-101, after fine-tuning, Ca2-VDM attains an FVD of 184.5, outperforming previous methods like VDT (225.7) and VideoFusion (220) by a large margin. These results substantiate the claim that efficiency was achieved without sacrificing quality. In fact, Ca2-VDM often slightly improves quality in long video settings due to its extended temporal context: the paper shows it yields lower FVD between early and later video chunks compared to baselines, indicating better temporal consistency over long generations. The qualitative examples (Figure 7) further illustrate that Ca2-VDM avoids the serious frame discontinuities (“mutations” between consecutive frames) that occur in other methods at clip boundaries. All these observations give clear and convincing evidence for the paper’s claims.
Methods And Evaluation Criteria: The proposed methods are well-chosen for the problem of long video generation. The use of causal (unidirectional) temporal attention is a natural solution to avoid re-computation: it’s analogous to how Transformer language models cache past token embeddings to generate long sequences efficiently. Adapting this idea to video diffusion is appropriate and non-trivial – the authors had to modify the video UNet/Transformer to mask temporal attention so that each frame attends only to previous frames. This ensures that features for past frames (the “conditional” context frames) can be computed once and reused. The introduction of a KV-cache queue for those features, combined with cyclic temporal positional embeddings, is an effective architectural innovation to handle unlimited sequence length. The cyclic positional encoding scheme addresses the fact that positional indices will grow beyond the model’s training length; by wrapping around and randomizing the offset during training, the model learns to handle very long sequences without divergence.
This is an appropriate solution to maintain alignment between training and inference when extending to longer video than seen during training. Additionally, the paper introduces a “prefix-enhanced spatial attention” (a method to feed the past frames’ information into spatial attention layers) – this is a sensible design to strengthen temporal continuity, and the ablation study confirms it modestly improves performance. Overall, the methodology directly tackles the identified inefficiencies in a principled way and is well-grounded in Transformer design practices.
The evaluation criteria and experiments are appropriate for the application domain. The authors evaluate on standard video generation benchmarks for both text-to-video (MSR-VTT, UCF-101 with text prompts) and unconditional video prediction (Sky Timelapse for extrapolating future frames) – covering both major use cases. They use FVD as the primary quantitative metric, which is standard for generative video quality, and they follow common practice by computing FVD with a pre-trained I3D model. This choice is appropriate, as FVD captures both visual fidelity and temporal coherence. The paper also reports inference time and breakdowns of computation (FLOPs) to support the efficiency claims, which are crucial given the paper’s focus. The combination of quality metrics (FVD), speed measurements, and qualitative examples covers the relevant evaluation angles for a generative model. One could argue for additional metrics like text-to-video relevance (e.g., CLIP score for text alignment) or human evaluation for visual quality, but FVD and the provided visuals are generally accepted indicators for this task. The evaluation protocol is fair: Ca2-VDM is compared against strong baselines, including existing SOTA models (e.g. ModelScope, VideoFusion, SEINE) and carefully constructed ablations (OS-Fix, OS-Ext) that isolate the impact of their contributions.
The authors generate a sufficient number of samples (e.g., 2990 videos for MSR-VTT, 2048 for UCF101) for reliable FVD estimation, and even evaluate temporal consistency by measuring FVD across chunks within a generated video, which is a thoughtful evaluation of long-term quality. In summary, the methods are well-suited to the problem and the evaluation methodology is rigorous and appropriate for validating both the performance and the efficiency of the proposed approach.
Theoretical Claims: The paper does not heavily focus on new theoretical derivations.
Experimental Designs Or Analyses: The experimental design in this paper is solid and thorough. The authors conduct experiments on multiple datasets covering different scenarios, which strengthens the validity of their findings. For text-conditioned video generation, they evaluate on MSR-VTT (a large benchmark with diverse videos and captions) and UCF-101 (using class names or prompts from prior work for text, ensuring a broad range of actions). For unconditional video prediction, they use the Sky Timelapse dataset, which tests the model’s ability to continue a given video. These choices cover both open-domain content (MSR-VTT) and structured motion (Sky Timelapse’s moving skies), demonstrating the model’s generality. The experiments are designed to isolate the effects of the proposed innovations. Specifically, they compare Ca2-VDM against two controlled baselines built on the same architecture: OS-Fix (which uses fixed-length context like conventional models) and OS-Ext (which allows extended context but without caching, akin to a naive autoregressive extension). This is a critical comparison because it shows what benefits come purely from the new design. Indeed, Ca2-VDM vs. OS-Ext reveals the efficiency gains at similar quality, and Ca2-VDM vs. OS-Fix reveals the benefit of having extendable context.
They also compare to external baselines like Gen-L-Video (a tuning-free method using overlapping denoising) and StreamingT2V (another autoregressive diffusion approach), which demonstrates where Ca2-VDM stands relative to contemporary solutions. The inclusion of these baselines indicates a conscientious experimental design aimed at covering all relevant alternatives. The analysis of results is detailed and convincing. The paper reports quantitative metrics (FVD scores) for each dataset and method, and breaks down results by scenario (zero-shot vs. fine-tuned) to ensure fairness. Notably, the authors include an ablation study (Table 3) on the Sky Timelapse video prediction task, examining the effect of the prefix-enhanced spatial attention and different maximum prefix lengths. This ablation shows, for instance, that using a longer prefix (more past frames) and the prefix enhancement yields better FVD in later chunks of the video, validating those design choices. They also specifically analyze temporal consistency using their chunk-wise FVD metric (Table 4): this analysis revealed that methods with fixed context (OS-Fix, GenLV) or naive streaming can accumulate error and cause noticeable distribution shift in later frames (higher FVD against the first chunk), whereas Ca2-VDM (with extended context) maintains lower drift. Such an analysis directly addresses the long-term quality claim of the paper. Additionally, the authors present runtime analysis: a table of cumulative inference time per autoregressive step and a FLOPs breakdown by component. The time analysis (also visualized in Figure 10) clearly demonstrates the linear vs. quadratic growth of time cost, which is central to the paper’s thesis. It’s commendable that they tested on the same hardware and settings for all methods, even quoting StreamingT2V’s numbers from its GitHub under the same conditions – this lends credibility to the comparisons. Overall, the experiments are sound and well-controlled.
The paper uses large sample sizes (e.g., generating 512 videos for each condition in consistency tests) to ensure statistical reliability. Where direct comparison wasn’t feasible (e.g., some baselines not available for certain datasets), the authors either cite published results or reasonably adapt the methods. No obvious flaws or missing analyses were noted. One might have wanted to see a qualitative user study or per-frame perceptual scores, but given the consistency of FVD and the provided visuals, the conclusions seem trustworthy. The only slight issue is that memory usage (GPU memory) for caching vs. not caching is not quantified in the main text – the authors qualitatively claim huge memory savings due to cache sharing, which is logical. It could have been interesting if they had reported actual VRAM usage for generating long videos with and without cache sharing. However, this omission does not undermine the results; the focus was clearly on computation time, which they addressed thoroughly. In summary, the experimental design is comprehensive, and the analyses directly support the paper’s conclusions without glaring omissions or confounding factors.
Supplementary Material: The paper provided a few qualitative video comparisons.
Relation To Broader Scientific Literature: This work is well-situated in the context of the broader literature on video generation and diffusion models.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The biggest weakness of the paper is its scale. This is inevitable considering that not all researchers have sufficient GPU resources, but one may always question the results, especially the videos the authors provided in the supplementary material, which are not really convincing: first, there are only four of them; second, they all appear blurry and have significant artifacts. Quantitatively, the paper is sound and complete, but these qualitative results do not convince me.
Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:
## Anonymous link for additional experiment results
https://anonymous.4open.science/r/additional-exp-results-for-anonymous-github-F6EB/readme.md
This includes: Table_R1, Table_R2, Figure_R1, and Figure_R2
$~$
## Q1: Additional metrics for per-frame perceptual evaluation
We conducted additional evaluations on the VBench (https://github.com/Vchitect/VBench) benchmark. It is primarily designed for text-to-video evaluation. For our assessment, we selected four metrics: aesthetic quality, imaging quality, motion smoothness, and temporal flickering. The first two measure spatial (appearance) quality, and the last two assess temporal consistency. We compared Ca2-VDM and OS-Ext on the Skytimelapse test set, as shown in Table_R2. The results show that Ca2-VDM achieves comparable performance in both appearance quality and temporal consistency. Given our primary focus on efficiency, we conclude that Ca2-VDM matches the bidirectional baseline while being more efficient in both computation and storage for autoregressive video generation.
$~$
## Additional evaluation on temporal consistency
We also conducted an additional evaluation in terms of frame differencing, as shown in Figure_R2. The results show that our method has good temporal consistency, while the other three have periodic content mutations (at the edge of each autoregression chunk), especially GenLV.
$~$
## Q2: GPU memory usage for w/ cache sharing vs. w/o cache sharing
Cache sharing is integral to Ca2-VDM due to the clean-prefix condition design, meaning it cannot be evaluated without cache sharing. Instead, we compared Ca2-VDM with a concurrent work, Live2diff [1], which also uses kv-cache during autoregressive video generation. We conducted empirical GPU memory statistics, as shown in Table_R1. Live2diff stores the kv-cache for every denoising step (with a different noise level $t$ and thus different KV features), which costs much more GPU memory than ours.
Live2diff uses StreamDiffusion [2]'s pipelined denoising, which puts frames with progressive noise levels into a batch and generates one frame at each autoregression step. So, its batch size in the model forward pass equals the number of denoising steps, i.e., $B=T$. Benefiting from cache sharing, Ca2-VDM’s KV-cache memory cost is independent of the number of denoising steps, as its fixed shape $(1, 25, hw, C)$ ensures constant memory usage. In contrast, Live2diff's memory scales with $T$ (e.g., from 1.42 GB at $T=4$ to 17.70 GB at $T=50$), confirming that **cache sharing saves $T\times$ GPU memory.** As a result, Ca2-VDM requires only 0.86 GB (w/ PE) or 0.77 GB (w/o PE), with the difference due to the spatial KV-cache for prefix enhancement (PE). While Live2diff uses distillation (e.g., LCM) to reduce $T$, existing few-step acceleration methods still require at least 4 steps to generate frames with acceptable quality. This means we can save at least $4\times$ GPU memory. In addition, Live2diff cannot run 50-step denoising with KV-cache when GPU memory is limited or for high-resolution videos. This prevents proper evaluation of the teacher model before distillation, further restricting its applicability.

[1] Xing, Zhening, et al. Live2diff: Live stream translation via uni-directional attention in video diffusion models. 2024.
[2] Kodaira, Akio, et al. Streamdiffusion: A pipeline-level solution for real-time interactive generation. 2023.
$~$
## Q3: Unsatisfactory qualitative results of supplementary videos.
We acknowledge that the current qualitative results are not ideal. However, our primary focus is on improving the generation efficiency of video diffusion models, providing a computation- and storage-efficient framework. The visual quality improvement over the bidirectional-attention baseline (OS-Ext) is not claimed as our contribution.
Due to limited computational resources and the tight schedule of the rebuttal period, we regret that we could not conduct additional large-scale training to further improve the qualitative results in the supplementary material. Instead, we conducted experiments on the Sky-Timelapse dataset, as shown in Figure_R1. We can observe that Ca2-VDM achieves comparable qualitative performance to OS-Ext, both in terms of single-frame quality and long-term visual content drift (with a similar degree of error accumulation during long-term video generation). Nevertheless, we can still apply distillation techniques to improve the quality, e.g., distilling from a bidirectional teacher model to enhance the generation results of our causal generation model (as the student). Additionally, as discussed in the "Future Directions" section of the Appendix (Sec. F), pretraining the causal attention from scratch might have potential improvements.
Summary: This paper introduces Ca2-VDM, an autoregressive video diffusion model tailored for generating long videos efficiently. The core idea is to eliminate redundant computation of conditional (overlapped) frames when chaining multiple short clips. To achieve this, the model applies causal generation, which replaces standard (bidirectional) temporal attention layers with causal temporal attention, and cache sharing, which implements a queue-like caching system that holds key/value features from prior frames. Empirical results on text-to-video and video prediction tasks (e.g., MSR-VTT, UCF-101, SkyTimelapse) show that Ca2-VDM achieves SOTA generation quality.
Claims And Evidence: Overall, the evidence is consistent with the main claims. The ideas and methods are straightforward to understand.
Methods And Evaluation Criteria: The proposed approach makes sense for scenarios where longer or continuous video generation is needed.
Theoretical Claims: No particularly novel theoretical results (in terms of closed-form proofs or new sampling theorems) are emphasized. The standard diffusion framework is used (Ho et al., 2020-style training objectives; reparameterization with the noise-prediction network $\epsilon_\theta$), and the model architecture reconfigures the attention mechanisms in a causal manner.
Experimental Designs Or Analyses: The primary design tests text-to-video (MSR-VTT, UCF-101) and unconditional video prediction (SkyTimelapse). The experimental designs are convincing. The analysis is straightforward and aligns well with prior video diffusion literature. The chosen metrics (FVD, speed) are standard for generation tasks. Repeated runs or more diverse metrics (like SSIM or a user study) could add to robustness, but the chosen metrics are acceptable.
Supplementary Material: The authors mention additional design details and further ablations in the appendix. The supplementary material only contains three demo videos. The quality of the videos is not satisfactory.
The supplementary material is not enough to replicate or closely approximate the method.
Relation To Broader Scientific Literature: This work is situated in the line of latent diffusion models (Rombach et al., 2022) and follows expansions to video (e.g., Imagen Video, Make-A-Video, LVDM, and so on). It closely resembles other methods that attempt to handle long video generation by chaining short segments (e.g., Gen-L-Video, StreamT2V).
Essential References Not Discussed: It might be beneficial to include more discussion of modular approaches (e.g., “ControlNet” or “T2I-Adapter” style methods adapted to video) if relevant for controlling domain shift.
Other Strengths And Weaknesses: Strengths:
1. The causal generation approach plus key/value caching substantially reduces time and GPU cost for longer sequences.
2. Extending the conditional prefix in an autoregressive way often yields more stable transitions between chunks.
3. Multiple standard datasets, thorough runtime analysis, and FVD-based evaluation.
Weaknesses: The paper is difficult to follow. The writing needs significant improvement. While I appreciate the authors’ efforts in implementing and evaluating Ca2-VDM, the design appears ad hoc/incremental relative to existing techniques.
Other Comments Or Suggestions: Further exploration into concurrency or distributed generation strategies might be helpful, especially for generating extremely long or high-resolution videos. More user-centric evaluations (e.g., user preference tests or other human metrics) could highlight how visually consistent the transitions are perceived to be.
Questions For Authors: Cache sharing is a popular design for video generation frameworks; could you highlight/discuss the differences from existing works?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Anonymous link for additional experiment results https://anonymous.4open.science/r/additional-exp-results-for-anonymous-github-F6EB/readme.md This includes: Table_R1, Table_R2, Figure_R1, and Figure_R2 $~$ ## Q1: Unsatisfactory supplementary video quality We acknowledge that the current qualitative results are not ideal. However, our primary focus is on improving the generation efficiency of VDMs, providing a computation- and storage-efficient framework. Due to limited computational resources and the tight schedule of the rebuttal period, we regret that we cannot conduct additional large-scale training to further improve the qualitative results in the supplementary material. Instead, we conducted experiments on SkyTimelapse, as shown in Figure_R1. It shows that Ca2-VDM achieves comparable qualitative performance to OS-Ext, both in terms of single-frame quality and long-term visual content drift (with a similar degree of error accumulation). In addition, we can still apply distillation techniques to improve the quality, e.g., distilling from a bidirectional teacher model to enhance the visual quality of Ca2-VDM (as the student). Also, as discussed in the "Future Directions" of the Appendix (Sec. F), pretraining the causal attention from scratch might bring further improvements. $~$ ## Q2: Essential References Not Discussed > Modular approaches (ControlNet or T2I-Adapter style methods) for domain shift ControlNet and T2I-Adapter are methods designed to introduce additional conditional signals to VDMs. They provide structural guidance like edges, depth maps, or segmentation maps. However, our work does not focus on domain adaptation or structure-guided generation. Instead, we generate future frames autoregressively conditioned on previous frames, rather than conditioning on external guidance to synthesize realistic frames from sketches or animate realistic videos.
$~$ ## Q3: Highlight the differences from existing works about cache sharing In our work, "cache sharing" specifically refers to sharing the kv-cache across denoising steps, not "sharing the cache across autoregressive steps". **To this end, we are the first to introduce cache sharing to autoregressive VDMs.** To the best of our knowledge, there is also a concurrent work, Live2diff [1], that introduces kv-cache to autoregressive VDMs. However, it stores a kv-cache for every denoising step (with different noise level $t$ and thus different KV features), which costs much more GPU memory than ours. In contrast, Ca2-VDM enables cache sharing and only stores the KVs of clean frames. This is enabled by our "clean prefix" conditional frames, the corresponding timestep sampling strategy, and the training objectives (cf. Eq.2). We further provide a GPU memory comparison in Table_R1. More detailed analysis can be found in Q4 of Reviewer-7MWe. $~$ ## Q4: The design of Ca2-VDM compared to existing techniques We would like to clarify our original contribution. Our model is the first to introduce 1) a kv-cache queue boosted with cyclic-TPE and 2) cache sharing to VDMs. 1. KV-cache queue with cyclic-TPE: It's non-trivial to apply LLMs' kv-cache to VDMs. In LLMs, all tokens from the beginning are maintained during all autoregression steps. However, in VDMs, early conditional frames can be (or should be) removed due to the large memory cost of visual data, as the appearance and motion of new frames are primarily influenced by the most recent frames. Due to the kv-cache queue, early TPEs have been bound to previous KV-caches and cannot be reassigned (as discussed in Figure 4(c)). This motivates us to propose Cyclic TPEs to keep training-inference alignment while enabling generation beyond the training length. 2. Cache sharing: As answered in Q3. $~$ ## Q5: Exploration on distributed generation strategies Thank you for your suggestion. We surveyed some related works.
E.g., Video-Infinity [2] offers a distributed approach across multiple devices, where spatial modules operate independently, whereas temporal modules synchronize context via collaborative communications. We plan to incorporate this kind of method into Ca2-VDM as future work to improve the scalability of video generation. $~$ ## Q6: More evaluations for visually consistent transitions We evaluated Ca2-VDM and OS-Ext on the SkyTimelapse test set using VBench evaluation, as shown in Table_R2. The last two metrics (motion smoothness and temporal flickering) measure consistent transitions. They show that Ca2-VDM achieves comparable performance to OS-Ext. $~$ ## Q7: Writing improvement Thank you for your suggestion. We will revise the paper to improve the writing and incorporate the discussions and clarifications in the above answers. $~$ [1] Xing, Zhening, et al. Live2diff: Live stream translation via uni-directional attention in video diffusion models. 2024. [2] Tan, Zhenxiong, et al. Video-infinity: Distributed long video generation. 2024.
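For illustration, the kv-cache queue with cyclic TPEs described in Q4 can be sketched as follows. The class name, window size, and eviction policy below are simplified assumptions for exposition, not our actual implementation:

```python
from collections import deque


class KVCacheQueue:
    """Minimal sketch of a bounded kv-cache queue with cyclic temporal
    position embeddings (TPE). Illustrative only: real caches hold
    per-layer key/value tensors, not strings."""

    def __init__(self, window: int, num_tpe: int):
        self.window = window    # max number of conditional frames kept
        self.num_tpe = num_tpe  # TPE table size; positions wrap cyclically
        self.queue = deque()    # holds (tpe_index, key, value) per frame
        self.next_pos = 0

    def push(self, key, value):
        # Assign the next cyclic TPE index. Old entries keep the TPE they
        # were bound to, so cached KVs never need to be recomputed.
        tpe = self.next_pos % self.num_tpe
        self.next_pos += 1
        self.queue.append((tpe, key, value))
        # Evict the oldest conditional frame once the window is exceeded,
        # since new frames depend mostly on the most recent frames.
        if len(self.queue) > self.window:
            self.queue.popleft()

    def context(self):
        # KVs (with their bound TPEs) attended to by the next chunk.
        return list(self.queue)
```

Because `num_tpe` is bounded and positions wrap cyclically, generation can extend beyond the training length while each cached KV keeps the TPE it was originally bound to.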
Test-Time Immunization: A Universal Defense Framework Against Jailbreaks for (Multimodal) Large Language Models
Reject
Summary: This paper focuses on the task of jailbreak detection, motivated by the concern over large language models' vulnerability to jailbreaking attacks. The paper proposes a method that is universal against different types of attacks. The assumption held by this paper is that detection is easier to implement than direct defense. The proposed method is tested on large benchmarks like MM-SafetyBench. Claims And Evidence: Yes. The paper uses experimental results on large benchmarks to support the claims. Methods And Evaluation Criteria: The proposed method makes sense. The proposed detection-based method is helpful for building a universal defense strategy. Theoretical Claims: N/A. This paper does not include proofs. Experimental Designs Or Analyses: The paper is tested on large benchmarks like MM-SafetyBench, which is helpful for verifying the soundness of the proposed mechanism. Supplementary Material: Yes. I have reviewed all parts provided for the supplementary material. Relation To Broader Scientific Literature: Despite the performance reported by the authors, the current contribution is incremental because jailbreaking attacks have been extensively studied by experts in this domain. The proposed method mainly improves the defense method in the original setting. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: Please refer to comments left for previous discussions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your reviews. > Despite the performance reported by the authors, the current contribution is incremental because jailbreaking attacks have been extensively studied by experts in this domain. The proposed method mainly improves the defense method in the original setting. We first build a framework that adaptively defends against jailbreak attacks in an online manner, instead of static defense, which differs substantially from previous works. Moreover, we build an efficient detector and a dataset to train it, making our defense more practical. We sincerely thank you for your kind reviews.
Summary: This paper proposes Test-Time Immunization, a defense framework against jailbreak attacks for LLMs and multimodal LLMs. Specifically, this method actively collects jailbreak instructions during model deployment, then continues to improve the defense performance during deployment. Extensive experiments demonstrated that the proposed method achieves state-of-the-art performance against jailbreak attacks while maintaining utility. Claims And Evidence: Yes, the design goal is achieved by experiments. Methods And Evaluation Criteria: Overall, the proposed method makes sense. However, I have a question related to the pipeline. The success of the defense appears to be largely dependent on the initial dataset D_d. If the proposed method cannot detect a sophisticated jailbreak attack, then it might never be able to capture it. Conversely, if there is too much data in the initial dataset, the system might become overly sensitive to benign prompts. Additional ablation on the choice of the initial dataset D_d could help address this concern. Theoretical Claims: N/A. There is no theoretical claim. Experimental Designs Or Analyses: Yes, I checked the soundness and validity of the experimental designs and analyses. There are two potential issues: First, the number of jailbreak attacks tested is limited (one for MLLM and one for LLM), raising questions about the method's effectiveness against more advanced jailbreak techniques. Second, the ASR calculation method using prefix matching is known to lack accuracy. It is suggested to implement LLM-based ASR calculation methods to validate the results more reliably. Supplementary Material: Yes, all parts. Relation To Broader Scientific Literature: Compared to existing literature, this paper proposes a new test-time defense framework, which adaptively defends against jailbreak attacks. This method can inspire the community in developing safer language models.
Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Strength: The experiments and ablation analysis are extensive, demonstrating the method's strong performance. Other Comments Or Suggestions: All the details in Section 4 appear to be compressed into a confined space, making it difficult to follow. I recommend restructuring this section to improve clarity and readability. Questions For Authors: How does the pipeline respond when it encounters a previously unseen jailbreak attack? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your kind reviews. We provide additional experimental results in the link https://anonymous.4open.science/r/ICML109sda-E0E4/tab_and_fig.pdf. We will address your concerns one by one. > If the proposed method cannot detect a sophisticated jailbreak attack, then it might never be able to capture it. Conversely, if there is too much data in the initial dataset, the system might become overly sensitive to benign prompts. Additional ablation on the choice of the initial dataset D_d could help address this concern. We designed D_d to detect harmful responses instead of the jailbreak instruction. Although the attacker's jailbreak attack may be complex, the answer is not particularly difficult to identify. Moreover, as long as a small number of samples can be identified, we can use these samples to implement our defense effectively according to our experiments. > The number of jailbreak attacks tested is limited (one for MLLM and one for LLM), raising questions about the method's effectiveness against more advanced jailbreak techniques. We supplemented the results of the white-box attack, GCG, in Table 3 in the link for reference. TIM still shows its effectiveness on GCG. With this addition, we evaluate two jailbreaks for LLM (i.e., GCG, I-FSJ) and two for MLLM (i.e., Figstep, MM-SafetyBench). > The ASR calculation method using prefix matching is known to lack accuracy. It is suggested to implement LLM-based ASR calculation methods to validate the results more reliably. We report the ASR evaluated by LLM in Table 3. Indeed, prefix matching is more accurate than LLM-based evaluation, according to our observations. For example, VLGuard successfully refused to answer the jailbreak question, but most samples were still judged to have generated harmful content under LLM evaluation. Moreover, the attack success rates of the other three methods are reported lower than the actual values.
Methods based on large model evaluation are subject to fluctuations from many factors, whether it is the large model used, the prompt template, or the decoding strategy. > How does the pipeline respond when it encounters a previously unseen jailbreak attack? All of our experiments are conducted under previously unseen jailbreak attacks; the response pipeline can be seen in Algorithm 1 in the appendix. If you are asking about static transferability, as Reviewer 77PY did, please refer to our rebuttal to that reviewer. > Rewriting of Section 4. Thanks for your advice. We will rewrite Section 4 in our manuscript to make it clearer and more readable.
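For reference, the prefix-matching ASR evaluation discussed above is commonly implemented along the following lines. The refusal-phrase list below is an illustrative assumption, not the exact list used in our experiments:

```python
# Common refusal phrases used in the literature; this list is an
# illustrative assumption, not the one used in the paper.
REFUSAL_PREFIXES = [
    "I'm sorry", "I am sorry", "I cannot", "I can't",
    "As an AI", "I apologize", "It is not appropriate",
]


def is_jailbroken(response: str) -> bool:
    """Matching heuristic: an attack counts as successful if the response
    does not contain any known refusal phrase (case-insensitive)."""
    lowered = response.lower()
    return not any(p.lower() in lowered for p in REFUSAL_PREFIXES)


def attack_success_rate(responses):
    """ASR = fraction of responses judged jailbroken by the heuristic."""
    return sum(is_jailbroken(r) for r in responses) / len(responses)
```

As the exchange notes, this heuristic is cheap and deterministic, whereas LLM-based judging depends on the judge model, template, and decoding settings.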
Summary: The authors propose a novel test-time defense framework for mono- and multi-modal generative models, positioned as an alternative to LLM-classifier defenses. The core novel contribution is their development of a detector for harmful outputs which uses an optimised gist token inserted at the end of an input + output pair to summarise whether the model's output was harmful. They train a binary classifier to recognise harmful gist tokens, and inputs that caused harmful model completions are stored in memory for future safety finetuning rounds. Claims And Evidence: * The paper claims that its method is more efficient than employing another LLM as input / output classifier, or augmenting prompts in other ways. In fact, they claim that such measures are "time-consuming and impractical." I think they're correct that there are important latency and cost considerations, but the paper does not provide sufficient comparative evidence of how much more costly SOTA/similarly well-performing classifiers would be, compared to their method. This leaves open the question of how effective their method truly is, which seems to me to be a core claim of why their method is worth employing. I'd expect this method to be more efficient, but evidence of this seems important. * I'm also confused about the paper's focus on the efficiency of detection methods, given a comparative lack of focus on the efficiency of the test-time training they suggest to actually defend models. The paper does not sufficiently address whether test-time training is more or less efficient than its alternatives - I would be interested in discussion of factors such as latency, compute requirements, and perhaps number of tokens. * I'll address their claims about successful classification and defense in the next section.
Methods And Evaluation Criteria: * I think it's hard to evaluate the results in this paper without discussion of the capability retention of these models when re-finetuned on the jailbreak samples that their classifier discovers. The paper does not sufficiently address the likely increase in false positive refusals, both in what is classified as harmful and in the subsequent outputs of the re-finetuned model. * Given that their method relies on access to model weights, it's hard to evaluate how successful this defense would be when defending larger SOTA models. The authors only evaluate jailbreaks against three models (LLaVA-v1.6-Vicuna-7B, LLaVA-v1.6-Mistral-7B, and LLaMA2-7B-chat). These are large and capable open-source models, but I don't think there is enough evidence that this is a practical defense method for frontier settings from just defending these three models (especially only one text model). This is a challenge for all papers that work on open-weight models, but to be strongly convinced of the results in this paper I would at least need to see results on a wider array of models (and perhaps model sizes - to see if there might be trends or differences at different scales). * Even if the authors couldn't have defended e.g. GPT-4 with their method, I would have been curious to hear how well their finetuning defense dataset could transfer. Theoretical Claims: No Experimental Designs Or Analyses: I'm a little uncertain of how much the authors verified the accuracy of their binary harmfulness classifier post-training: I don't see mention of e.g. manual verification, or testing on held-out examples from the training sets they use (AdvBench and MM-SafetyBench). Supplementary Material: Yes, all appendices. Relation To Broader Scientific Literature: The result of defending successfully (and classifying harmful outputs efficiently) is a reasonably compelling first result for a method that I'd need to see applying to more realistic jailbreak scenarios.
The core idea of leveraging gist tokens to more efficiently train harmfulness classifiers seems like a promising direction for jailbreak defense systems to attempt to implement. They mention it in the conclusion briefly, but I would be interested in more discussion about the realism of this defense method against more complex (and, in my opinion, more harmful) jailbreaking methods. I'd appreciate, for example, discussion of how this method performs against multi-turn jailbreaks (e.g. PAIR, TAP, MSJ), or what their defense does to change the scaling laws of the Best-of-N Jailbreaking paper from Anthropic (though I recognise that the last of these might not have been released when this paper was written). Essential References Not Discussed: Overall, as a discussion of defense against static, single-turn jailbreaks I think the literature cited is sufficient. I'm a little surprised that they don't make reference to papers such as Anthropic's Rapid Response paper (Peng et al., 2024, "Rapid Response: Mitigating LLM Jailbreaks with a Few Examples"), which is the main recent paper I'm aware of that addresses similar concerns about measuring the efficiency of different test-time jailbreak defense methods. A key metric that paper used to measure efficiency was how many jailbreak samples are needed for each method, which would have been helpful for me to see more detailed discussion of in this paper (the authors include some discussion of ASR-50, and the comments on the number of jailbreaks needed to be seen from FigStep to learn to reject that type of attack). Other Strengths And Weaknesses: The gist token approach for detecting harmful content is a novel contribution that seems to offer efficiency benefits compared to using separate classifier models. The approach works on both text-only and multimodal models, which helps me to feel optimistic about transfer between very different kinds of model.
Their method seems to learn effective defenses after seeing relatively few examples (as shown by their ASR-50 results). Other Comments Or Suggestions: I'm not sure the biological immunisation framing was very helpful for me to understand the paper, it took me a while to actually understand your method, and you could use this immunisation analogy even for classifier defenses that don't rely on gist tokens and internal states (which seems to me to be the main novel contribution here). Questions For Authors: 1. Did you perform any manual verification of your classifier's outputs? I'm curious about whether you analyzed false positives/negatives beyond just reporting accuracy metrics. 2. I'd be curious about how well your fine-tuned models perform on new jailbreaks (do they get more robust even to held-out jailbreaks when finetuned against e.g. FigStep?) 3. How practical do you think your test-time training approach would be on frontier models? I'd be interested in understanding how you think your method would affect inference latency and training costs compared to alternatives, if you'd like to make claims about efficiency. 4. Do you have any evidence or theoretical basis for how your approach might perform on much larger models (e.g., 70B+ parameters)? 5. Have you measured how your fine-tuning process affects the model's ability to handle legitimate requests? Specifically, do you observe an increase in false positive refusals after adaptation? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for your valuable suggestions. The figures and tables of the additional results are provided in the link https://anonymous.4open.science/r/ICML109sda-E0E4/tab_and_fig.pdf. We will do our best to address your concerns. > Concerns about the efficiency and effectiveness of our detector, and the cost of our defense training. To demonstrate the efficiency of our methods, we report the time cost in Table 1 (see the link). TIM takes a short time for detection (about 0.3% of inference cost), and the latency caused by detection is extremely low. For effectiveness, we compare metrics such as Accuracy, True Positive Rate (TPR), and False Positive Rate (FPR) with detection methods like Self-Defense and LVLM-LP in Table 3 of the original paper. Additionally, we report the detection metrics in Figures 3, 4, and 5b of the original paper. Our method shows effective detection performance (90+% TPR, and FPR less than 1%) compared to other methods and our variants. We also add the accumulated TPR and FPR during training in Figures 1 and 2 (in the link) for further analysis. According to the results in Table 1, the training time accounts for 12.2% of the total time used. The testing process of TIM is more efficient than the vanilla model because the fine-tuned model can generate short rejections instead of long harmful responses for jailbreak instructions. For computation requirements, we only use one RTX A6000 for training. In practical applications, defense training can be conducted on the training GPU instead of the inference device. > The paper does not sufficiently address the likely increase in false positive refusals, both in what is classified as harmful and in the subsequent outputs of the re-finetuned model. We report the Over Defense Rate (ODR) to assess false positive refusals. Our method exhibits relatively few false rejections compared to other methods, as shown in our paper.
In the experimental results, there are usually two types of samples that are judged as harmful: malicious instructions with harmful responses, or rejection responses. Normal samples are rarely judged as harmful. Based on the ODR of our experiments, most samples can still be answered normally by our fine-tuned model. > Manual verification and testing on held-out examples. We think the detector performance can be validated by Acc, TPR, and FPR. For manual verification, the false positive examples are more likely to be the samples mentioned above, and normal samples are rarely misjudged as jailbreak samples (see the FPR of TIM-NA). Moreover, we include the held-out accuracy of the validation set from detector training (99+% for all experiments) in Table 6 (in the link). > Our method against more complex jailbreaks. The mentioned jailbreak methods (i.e., PAIR, TAP) perform poorly (<10% ASR) on LLaMA2-7B-chat, according to the original papers and [1]. MSJ is a similar attack to I-FSJ (which we used). Indeed, I-FSJ is a complex method including techniques like in-context jailbreak, greedy searching, and token replacement. We hope this explanation can address your concerns. Furthermore, we supplemented the results of the white-box attack, GCG, in Table 3. [1] https://www.harmbench.org/results > How well your fine-tuned models perform on new jailbreaks (do they get more robust even to held-out jailbreaks when fine-tuned against?). It's worth noting that our method is an online adaptive defense method. New types of jailbreaks will be adaptively defended against as they emerge. Nevertheless, we demonstrate the static transferability of the fine-tuned model in Table 5 in the link. It is effective when migrating from a more complex attack (MM) to a simpler one (Figstep), but its effectiveness is limited in the reverse direction. > How practical do you think your test-time training approach would be on frontier models. The inference latency is extremely low, as stated in the first response.
The training cost is only 12.2% of the total computing cost. Training occurs only when jailbreak activities are successful. Moreover, the defense training can be submitted as a background task. As a result, the latency is just the detection latency. > Do you have any evidence or theoretical basis for how your approach might perform on much larger models? We have conducted additional experiments using LLaVA1.6-Vicuna-13B, as presented in Table 4. TIM remains effective on the 13B model. However, due to computational resource limitations, we are unable to apply our method to 70B+ models (we only have access to an RTX A6000). > How does the fine-tuning process affect the model's ability to handle legitimate requests? Do you observe an increase in false positive refusals? We report ODR in our paper. TIM shows the fewest false positive refusals compared to other methods. For more details, you can refer to the change in ODR shown in Figures 1 and 2 in the link. There are increases, but they are acceptable. Moreover, we will discuss Anthropic's paper in the manuscript.
| Model | ASR | ODR |
| -------- | -------- | -------- |
| Vanilla | 94.3 | 0.2 |
| TIM | 1.0/0.0 | 0.2 |

*** *As the other reviewers have not yet commented on our rebuttal responses, we apologize for taking up this space to provide an additional response to them.* + @reviewer ydQk We have provided **the results you mentioned about modern architectures such as LLaMA3** in the table above. The results demonstrate that TIM is still effective for LLaMA3-8B-Instruct. We hope this further enhances your recognition and confidence in the generalization of our work. Furthermore, we think your major concerns lie in 1) **the scalability of the gist token** and 2) **the potential degradation of TIM**. We have provided the avg. token length in training and test (296 for training and 1720 for test) to show that **the gist token can extend to longer contexts**. Besides, we provide the performance curve during testing to demonstrate that **our method (TIM) doesn't suffer from performance degradation**. + @reviewer iXZS We think your main concerns lie in 1) the evaluation metrics and 2) the number of jailbreak attacks. **We have provided additional experiments on the GCG attack and reported the ASR evaluated by LLM**, and we are looking forward to receiving your additional comments on our response. *** Once again, we would like to thank the AC and each reviewer for their efforts in the review process of this work.
Summary: In a nutshell, this paper introduces Test-Time Immunization (TIM) as a universal defense against jailbreak attacks on large language models. Specifically, the authors insert a special "gist token" which is used for binary classification of harmful outputs, i.e., the sequence (question, answer, gist token) is used to predict whether the output is harmful. Once an attack is detected, the model is fine-tuned online with safe refusal responses using LoRA to preserve its regular performance (i.e., to prevent overfitting). Claims And Evidence: The overall claim is clear and sound. Methods And Evaluation Criteria: One of the major principles of adversarial robustness evaluation is to attack the defense system itself [1,2,3]. However, from my understanding, this paper did not consider an 'adaptive attack', i.e., attacking the proposed method. [1] Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples, ICML 2018 [2] On Evaluating Adversarial Robustness, arXiv 2019 [3] On adaptive attacks to adversarial example defenses, NeurIPS 2020 Theoretical Claims: The paper does not provide a theoretical claim (I don't think theory is necessary in this case). Experimental Designs Or Analyses: I have checked the experimental designs and analyses, which look reasonable and clear. I think it would be great to add some modern architectures (e.g., Llama3, Gemma2, Qwen2.5) rather than Llama2. But I don't think this is a major issue. Supplementary Material: I have read all of the Appendix. Relation To Broader Scientific Literature: The paper tackles an interesting direction by test-time fine-tuning the model to prevent jailbreaking attacks online. Essential References Not Discussed: I think it is not a weakness (it is more like a suggestion), but it would be great to discuss the differences from recent test-time defense methods [1,2] in the final revision. Especially [1] shares a very similar idea from my perspective.
[1] Backtracking improves generation safety, ICLR 2025 [2] Trading Inference-Time Compute for Adversarial Robustness, arXiv 2025 Other Strengths And Weaknesses: **Strengths** The overall paper is well-written except for the mathematical formulation (see below). The overall method is sound and using a gist token makes sense. --- **Weaknesses** This question may be related to my misunderstanding (also see the question part regarding adapter training). I wonder how much the model degrades over time, i.e., over multiple attack detections. Since the model is trained for every detected jailbreaking attack, I am concerned about catastrophic forgetting. In the rebuttal, it would be great if the authors could report the performance change throughout the fine-tuning. I do agree that LoRA prevents this to some extent, but I still think this part should be highlighted. One concern is the scalability of gist tokens in multi-turn scenarios or generalizations (e.g., long contexts). Specifically, we need to insert the gist token every time to detect jailbreaking. While this paper primarily focuses on single-turn scenarios, it is somewhat unclear how to train or how to insert the gist token for multi-turn cases (e.g., insert it after every user query?). Also, one question will be: does the gist token work for longer context sizes than it was originally trained on? I think the paper could provide a better mathematical formulation for test-time training (Section 4.3). Since there is no objective or mathematical formulation, it is somewhat hard to understand. I had to read the Algorithm table in the Appendix to fully understand the method. In the revision (or in the final revision), I kindly ask the authors to re-write this part. Other Comments Or Suggestions: I think this paper is slightly on the acceptance side. It would be great if the authors could address some concerns and questions during the rebuttal.
Questions For Authors: When training with the detection loss function in Equation 4, do the authors (i) only train the gist token or (ii) the full network? Do we need to train the adaptor from scratch every time an attack is detected, or is it reused (continuously trained)? If the LoRA is trained from scratch every time, I have some concern regarding training cost (as the memory bank grows). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the constructive feedback. Below we provide point-by-point responses with methodological clarifications and supplementary experimental evidence. All referenced figures/tables are available in our link https://anonymous.4open.science/r/ICML109sda-E0E4/tab_and_fig.pdf. > Regarding adaptive attacks against our method. We acknowledge the reviewer's concern about possible adaptive attacks targeting our defense approach. Indeed, adversaries could attempt to craft adversarial samples to mislead our detector and degrade its performance. However, in practical deployment scenarios, our detector operates as a black box: its internal decisions are not directly exposed to users. This inherent opacity increases the difficulty of mounting successful attacks, as adversaries lack direct feedback to refine their adversarial inputs. Nevertheless, we are considering designing a targeted attack in future work to verify whether our method is robust enough. > Concerns about performance during fine-tuning and catastrophic forgetting. In Figures 1 and 2 in the provided link, we report key metrics, including accumulated ASR, ODR (Over Defense Rate), TPR, and FPR, for our method during the fine-tuning process. If the reviewer is referring to the model's ability to answer normal questions, the changes in ODR provide a clear indication of its performance in this regard. While our method did cause the model to reject some normal answers during testing, this effect was manageable and did not lead to significant degradation in overall performance. If you mean the ability to reject jailbreak attacks encountered before, we reported this result in Figure 4 of the original paper. Despite experiencing other attacks, we still maintained our defense against the previous jailbreak attack. No obvious catastrophic forgetting of defense capability against previous jailbreaks is observed. > Scalability of the gist token.
For multi-turn scenarios, the purpose of the gist token is to detect whether the model generates malicious answers, so in multi-turn conversations, we only need to insert a gist token after each answer generated by the model, and remove it from the KV cache after the detection is completed. For detector training, we can also build a multi-turn conversation detection dataset to ensure that we can still maintain effective detection when facing multi-turn data. Regarding generalization to longer contexts, in our experiments, I-FSJ is an in-context jailbreak attack that uses multiple demonstrations to induce our model to generate malicious answers. The average length of its jailbreak instructions is 1720 tokens, while in our training set, the average length of questions and answers is only 15.2 and 281.4 tokens. However, we can still effectively detect jailbreak samples in I-FSJ; we hope this addresses your concern about the generalization of the gist token. > Training parameter of the detection loss. We solely train the gist token and the binary classifier. During the training of the detector, the LLM is frozen to prevent any impairment to the model's question-answering ability. > Question about the defense training. The LoRA is continuously trained. We have attached the training and detection costs in Table 1, which can be found via the provided link. The results indicate that our method is highly efficient and thus practical for application. > Modern architecture. We have added the results of LLaVA-v1.6-Vicuna-13B in Table 3, which can be found in the link. LLaVA 1.6 is among the most state-of-the-art MLLMs. Currently, we are attempting to conduct experiments on LLaMA3-8B-Instruct, and we will update the results as soon as they are ready. > Related works and writing. Thank you for your valuable advice. In the final version, we will carefully consider and discuss the differences between the works you have mentioned. 
Additionally, we will rewrite Section 4.3 to enhance its clarity and comprehensibility. We sincerely hope that the above rebuttal effectively addresses your concerns. If you have additional questions or we have misunderstood your review, please feel free to respond.
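The multi-turn protocol described in this rebuttal (insert a gist token after each model answer, run the detector, then drop the token from the KV cache) can be sketched as follows. This is a heavily simplified illustration, not the authors' implementation: `GIST`, `detect`, and `run_conversation` are hypothetical stand-ins, and the real detector is a trained binary classifier on the gist token's hidden state, not a keyword match.

```python
# Minimal sketch (not the authors' code) of the multi-turn gist-token
# detection loop. The KV cache is simulated as a plain list of strings.
GIST = "<gist>"

def detect(answer: str) -> bool:
    # Hypothetical keyword-based stand-in for the learned binary classifier.
    return "malicious" in answer.lower()

def run_conversation(turns):
    """turns: list of (question, answer) pairs. Returns indices of flagged answers."""
    kv_cache, flagged = [], []
    for i, (question, answer) in enumerate(turns):
        kv_cache += [question, answer, GIST]   # insert a gist token after each answer
        if detect(answer):
            flagged.append(i)
        kv_cache.remove(GIST)                  # drop the gist token once detection is done
    return flagged
```

Removing the gist token after every check keeps the simulated cache free of detection-only tokens, mirroring the rebuttal's claim that detection does not pollute the conversation state.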
Thickness-aware E(3)-Equivariant 3D Mesh Neural Networks
Accept (poster)
Summary: This paper considers the thickness of the mesh when predicting the deformation. T-EMNN is proposed to preserve E(3)-equivariance and invariance. Experiments show that the proposed method outperforms baselines and the introduction of thickness benefits the baselines as well. ## Update after rebuttal I appreciate the authors' efforts to address my concerns and I have updated my score to 3. Claims And Evidence: The claims are generally clear. Methods And Evaluation Criteria: My main concern is about the limited dataset. Is there any other dataset? More samples or different kinds of objects would strengthen this paper. Theoretical Claims: Please refer to "Claims And Evidence". Experimental Designs Or Analyses: The evaluations seem correct. Supplementary Material: I reviewed the appendix. Relation To Broader Scientific Literature: Please refer to "Essential References Not Discussed". Essential References Not Discussed: Since “E(3)-equivariance and invariance” is one of the core ideas in this paper, the authors may discuss neural networks with the property of “E(3)-equivariance and invariance” in a separate paragraph. For example, the authors may present L85-106 as part of the related work. Other Strengths And Weaknesses: The E(3)-equivariance is achieved by adjusting the inputs, while the GNN part does not contribute to this property. The proposed method mainly adopts the basic GNN module and introduces several conditional embeddings through concatenation. The main contributions mentioned above seem limited. Other Comments Or Suggestions: Please refer to the "Questions For Authors". Questions For Authors: I summarize my main concerns as follows: 1. Limited dataset. (Methods And Evaluation Criteria) 2. Please provide more details to emphasize the novelty of this paper. (Other Strengths And Weaknesses) 3. Reorganization of related work. (Essential References Not Discussed) Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. For a comprehensive response, please refer to the attached [link](https://shorturl.at/gOsz6). All materials within the link are indexed starting with the letter ‘L’ (e.g., Fig. L1). --- **A1. Novelty of Our Work.** We would like to organize our contributions into two key aspects: **1)** Thickness-Aware Framework and **2)** Data-driven Coordinates, which effectively achieve E(3)-equivariance. Given your concern about the novelty of the GNN architecture, ___we will focus on emphasizing the Thickness-Aware Framework in this rebuttal___, rather than the data-driven coordinates. Perhaps our architecture may appear simple, but we would like to emphasize that the ___input meshes do not inherently contain any information regarding the "thickness."___ As a result, a basic GNN without any thickness processing faces challenges in interacting between opposing surfaces, despite their high correlation, as shown in Fig. 1 in the introduction. Our novelty lies in how the GNN model ___adaptively addresses the "thickness,"___ even though this information is not provided in the input data. To achieve this, __we define the "thickness edge" from scratch and address potential issues__, such as the negative correlation between opposing surfaces when the thickness is too large to share the correlative reaction. To overcome this challenge, we propose a __learnable thickness threshold__ that adaptively identifies the optimal thickness edges. As a result, our method demonstrates strong performance, particularly on the $R^2$ metric, showcasing its ability to predict more realistic behavior. We would like to emphasize that our method is not distinguished from others simply by utilizing several conditional embeddings through concatenation. 
In fact, __the architecture of the baseline methods was modified to allow for the use of exactly the same conditional embeddings, including both condition and coordinate embeddings, as in our method, ensuring a fair comparison.__ Our distinct contribution lies in whether the model can handle opposing surfaces with high correlation, and our model successfully achieves this by defining and processing the thickness in a learnable way. --- **A2. Evaluation on Four Additional Datasets.** We evaluate our method on four additional datasets (using 5 seeds for reliability): - **Basket** (Graph-level prediction - Max pressure) - **Circular Plate** (Node-level prediction - Deflection of each node) - **SimJEB** [1] (Node-level prediction - Magnitude of the displacement) The dataset and results are presented in Table L2 and Figures L1-L5. In Table L2, our method demonstrates strong performance on both the Basket (graph-level prediction) and Circular Plate (node-level prediction) datasets. Regarding the SimJEB dataset, although our data-driven coordinate system does not perform well under the "In Dist." setting (where the original coordinate system is well-established but our system faces challenges in aligning coordinates between diverse shapes), it performs effectively in the "Out-of-Dist." setting. In this setting, our method successfully addresses the misalignment issue in the original coordinate system between shapes, ensuring E(3)-invariance. To qualitatively demonstrate our model’s ability to process thickness, we include the learned valid thickness edges in Fig. L4 and L5. As shown in these figures, our thickness processor effectively connects opposing surfaces (left), facilitating beneficial interactions, while filtering out opposing surfaces that negatively impact performance (right). 
- **Deforming Plate** (Node-level prediction - Position of Next timestep) Moreover, to evaluate the dynamic capabilities of our framework—specifically the thickness processor—we perform next-timestep deformation prediction using the **Deforming Plate** dataset (as described in the MGN paper) to demonstrate how our thickness processor helps the model handle dynamic scenarios. Since our data-driven coordinate system is designed for static analysis to prevent misalignment from affecting the results, we use the original coordinate system for dynamic analysis. The same mesh edge and node features as in the MGN paper, along with an additional node feature—the shortest world distance from the actuator to the node—are used to model interactions with the external environment. For detailed results, please refer to Table L1 and Fig. L6, where our model successfully accounts for the deforming plate’s thickness, leading to improved performance. --- **A3. Reorganization of related work.** Thank you for pointing this out. We will ensure that the related work regarding methods related to E(n)-equivariance/invariance is clearly addressed and reorganized separately. > [1] Whalen et al. "SimJEB: simulated jet engine bracket dataset." Computer Graphics Forum. Vol. 40. No. 5. 2021. --- Rebuttal Comment 1.1: Comment: Thank you for your response and all my concerns have been addressed. I will raise my score to 3. --- Reply to Comment 1.1.1: Comment: Dear Reviewer pfVr, We are very happy to hear that your concerns have been resolved. Thank you for your constructive feedback, and we will ensure to include our rebuttal in the revised version if it is accepted. Best regards, The Authors
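The E(3)-invariance property the rebuttal appeals to in A2 can be illustrated concretely: quantities built from pairwise distances are unchanged under any rotation plus translation, so a model consuming only such quantities is E(3)-invariant by construction. The sketch below is an illustrative check, not the paper's code (which uses a data-driven coordinate system); reflections, which E(3) also contains, preserve distances in the same way.

```python
import math

# Rotate a point about the z-axis, then translate it; pairwise distances
# between the transformed points match the originals exactly.
def rotate_z(p, theta):
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)

def translate(p, t):
    return tuple(pi + ti for pi, ti in zip(p, t))

def pairwise_dists(points):
    return [math.dist(p, q) for i, p in enumerate(points) for q in points[i + 1:]]

nodes = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 1.0)]
moved = [translate(rotate_z(p, 0.7), (3.0, -1.0, 5.0)) for p in nodes]

before = pairwise_dists(nodes)
after = pairwise_dists(moved)
assert all(abs(a - b) < 1e-9 for a, b in zip(before, after))
```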
Summary: This paper presents the Thickness-aware E(3)-Equivariant Mesh Neural Network (T-EMNN), a novel graph neural network designed to efficiently integrate the thickness of 3D objects into mesh-based static analysis. The authors introduce an innovative thickness-aware framework that explicitly considers interactions between opposing surfaces through dedicated "thickness edges". The method employs a learned thickness threshold to distinguish genuine thickness relationships from irrelevant spatial connections (e.g., width) and uses data-driven coordinates to ensure E(3)-equivariance. The authors validate their method using a real-world industrial dataset focused on injection molding applications. T-EMNN significantly outperforms state-of-the-art graph-based methods in terms of prediction accuracy (RMSE, MAE, R²) while maintaining computational efficiency. Claims And Evidence: Good to me. Methods And Evaluation Criteria: The evaluation criteria (RMSE, MAE, R²) and comparison methods (MGN, EGNN, EMNN) used by the authors are appropriate and clearly demonstrate the effectiveness of the thickness-aware design. Visualizations in Figure 7 clearly illustrate the error distributions and improvements achieved by T-EMNN. The visualization in Figure 9 shows the effectiveness of the threshold. All of these are good. However, the practical utility of the method could be strengthened by demonstrating more clearly the application pipeline in industrial scenarios like injection molding. Theoretical Claims: Good to me. Experimental Designs Or Analyses: The experimental designs, particularly the inclusion of in-distribution and out-of-distribution conditions, are sound. The ablation study (Fig. 6, Table 2) effectively demonstrates the contribution and optimization of the thickness threshold and thickness edge features, validating the robustness of the proposed methods. 
Supplementary Material: The supplementary material is well-reviewed, specifically the additional dataset details, hyper-parameter tuning analysis, and more comprehensive result visualizations. Appendix G discusses the pros and cons of using volumetric meshes. One potentially impactful application area is cloth simulation. Typically, cloth simulation models initially start as thin-shell volumes and, under certain assumptions, are approximated by surface meshes with implicit thickness. It would greatly enhance the supplementary discussion if the authors explicitly mentioned or briefly explored the potential applicability of their proposed method to this type of scenario, highlighting how thickness-awareness might influence accuracy or computational efficiency in such practical cases. Relation To Broader Scientific Literature: The proposed work effectively builds upon and extends existing mesh-based neural network methods (MGN, EGNN, EMNN), clearly positioning itself within the broader scientific literature by addressing critical limitations related to thickness modeling and E(3)-equivariance. It offers notable improvements in practical application contexts such as structural engineering and manufacturing. Essential References Not Discussed: The references are good to me. Other Strengths And Weaknesses: The proposed method effectively addresses a significant limitation of existing surface-based mesh methods by integrating thickness-awareness into mesh neural networks. Additionally, the proposed data-driven coordinate transformation notably enhances robustness against spatial transformations, as demonstrated by the clear performance improvements observed when integrating this component into existing baseline methods. The paper further provides comprehensive and thorough evaluations, including meaningful ablation studies that help clarify the contribution of each component. 
However, the paper currently lacks demonstrations of practical applications beyond synthetic validation scenarios. Including concrete examples demonstrating direct applicability in real-world industrial tasks would significantly strengthen the paper. Furthermore, the application pipeline description could be more detailed to better bridge theoretical contributions and practical utility. Lastly, the paper would benefit from a discussion regarding potential extensions to handle non-watertight surfaces (i.e., surfaces with holes), as the current methodology does not appear to strictly require watertight constraints. Other Comments Or Suggestions: Good to me. Questions For Authors: 1. Could you further elaborate on the practical integration of your model into an existing industrial workflow, particularly for injection molding? Would this integration introduce significant overhead? 2. How sensitive is the learned thickness threshold τ across significantly different geometries or scales, and how generalizable is this threshold to other applications beyond injection molding? 3. How are non-watertight inputs handled? For example, a "solid" surface with "small" holes. Is the proposed method robust to that kind of input? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful for your thorough review. For a comprehensive response, please refer to the attached [link](https://shorturl.at/gOsz6). All materials within the link are indexed starting with the letter ‘L’ (e.g., Fig. L1). --- **A1. Practical Utility of the Method (Response to Weakness and Q1).** The pipeline is illustrated in Fig. L10 to aid comprehension of the proposed approach. In the actual industrial process of injection molding product development, the process follows the stages of **design -> analysis -> mass production**. According to this process, designers must verify the product's defect status through analysis before mass production. However, **there is a bottleneck in the design -> analysis phase**, and we aim to address this using T-EMNN. The reason the design -> analysis process is a bottleneck is that the work is divided between the designer, who specializes in design, and the analyst, who specializes in numerical analysis. **The designer does not have numerical analysis expertise and must rely on the expert analyst.** This process requires iteration. When the designer modifies the design, they send it to the analyst for analysis. The analyst then reports the results to the designer, who evaluates whether there are any defects based on the analysis. If defects are suspected, the designer modifies the design and sends it again, creating a repeated cycle (design -> analysis -> design -> analysis). **This iterative process typically takes about a week for a single product** (based on the Basket dataset), leading to significant delays in mass production. We expect that using the **T-EMNN we developed will dramatically reduce this iterative process.** First, **T-EMNN allows designers to self-validate their designs** without requiring any special analysis skills. 
Furthermore, we confirmed that with **T-EMNN, it takes less than 3 minutes** (based on the Basket dataset) to go from preprocessing to checking the analysis results for a single injection molding product. Thus, with T-EMNN, designers can quickly self-validate designs that are unlikely to have defects. This allows designers to filter out designs with low defect potential and send only those to the analyst, significantly reducing the iteration process. This approach will drastically shorten the time to mass production. --- **A2. Generalizability of the Thickness Threshold (Response to Q2)** We evaluate our method on four additional settings (using 5 seeds for reliability): - **Deforming Plate** (Node-level prediction - Position of Next timestep) - **Basket** (Graph-level prediction - Max pressure) - **Circular Plate** (Node-level prediction - Deflection of each node) - **SimJEB** [1] (Node-level prediction - Magnitude of the displacement) Regarding performance, as shown in Table L1, the thickness processor successfully handles even next-timestep prediction. In Table L2, our method demonstrates strong performance on both the Basket (graph-level prediction) and Circular Plate (node-level prediction) datasets. Regarding the SimJEB dataset, although our data-driven coordinate system does not perform well under the "In Dist." setting (where the original coordinate system is well-established but our system faces challenges in aligning coordinates between diverse shapes), it performs effectively in the "Out-of-Dist." setting. In this setting, our method successfully addresses the misalignment issue in the original coordinate system between shapes, ensuring E(3)-invariance. To qualitatively demonstrate our model’s ability to process thickness, we include the learned valid thickness edges in Figs. L4, L5, and L6. 
As shown in these figures, our thickness processor effectively connects opposing surfaces (left), facilitating beneficial interactions, while filtering out opposing surfaces that negatively impact performance (right). --- **A3. Discussion on Handling Non-watertight Surfaces (Response to Q3)** We believe that **T-EMNN can be applied even in the case of non-watertight surfaces.** However, when creating the thickness edge, we utilize a method where a ray is cast from the starting node in the opposite direction of the normal (i.e., into the interior of the object) until it reaches the opposing surface. If the object is not watertight, there may be cases where the ray does not touch the opposite surface. In such cases, if the ray cast from a specific node does not reach an opposing surface, that node's thickness edge can simply be excluded. This should not pose any issues for training and prediction. However, for our T-EMNN, the watertightness of the 3D shape data is not a critical factor. **What matters is whether the physical properties between thickness pairs are similar.** If they are similar, we expect that using T-EMNN will still yield good prediction results.
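The pairing rule from A3 can be sketched on a toy flat-plate geometry (a hypothetical stand-in for the paper's trimesh-based ray casting): a ray is cast from each top-surface node opposite to its normal, a thickness edge is recorded when the ray hits the opposing surface, and nodes whose ray escapes through a hole are simply skipped, as the rebuttal proposes for non-watertight meshes.

```python
# Toy sketch of thickness-edge construction with misses excluded.
# Top surface nodes lie in a plane with normal +z; the opposing surface
# is a finite rectangular patch, so some downward rays miss it (a "hole").
def thickness_pairs(top_nodes, bottom_patch, thickness):
    """top_nodes: (x, y) positions on the top surface (normal = +z).
    bottom_patch: (xmin, xmax, ymin, ymax) extent of the opposing surface.
    Returns {node_index: thickness} for nodes whose ray hits the bottom."""
    xmin, xmax, ymin, ymax = bottom_patch
    pairs = {}
    for i, (x, y) in enumerate(top_nodes):
        # The ray goes straight down, meeting the bottom plane at the same (x, y).
        if xmin <= x <= xmax and ymin <= y <= ymax:
            pairs[i] = thickness          # valid thickness edge
        # otherwise: the ray exits through the hole -> node excluded, no edge
    return pairs

nodes = [(0.5, 0.5), (2.5, 0.5)]          # the second node sits over a hole
pairs = thickness_pairs(nodes, (0.0, 1.0, 0.0, 1.0), thickness=0.3)
assert pairs == {0: 0.3}
```

Excluding missed rays, rather than failing on them, is exactly the behavior the rebuttal argues makes watertightness non-critical.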
Summary: This paper presents a novel graph neural network that incorporates thickness edges—connections between opposite sides of a surface mesh—to enable thickness-aware processing. To maintain E(3)-equivariance, it introduces a data-driven coordinate transformation. The model is evaluated on an injection molding dataset, where it outperforms various baselines. Claims And Evidence: There are two key issues with the claims and supporting evidence in the paper: 1. **Limited dataset evaluation** – The model is evaluated on a single novel dataset. While the results appear promising, testing on an additional dataset (or ideally two) would provide stronger support for the claimed superior performance. Given that the authors used MGN as a baseline, I wonder if they considered the *deforming plate* dataset, where a plate undergoes deformation over time. Applying their method to this dataset, particularly with a surface mesh representation, could further demonstrate its advantages. 2. **Unsubstantiated Equivariance Claim** – The paper states that the model “preserves E(3)-equivariance and invariance,” yet provides no theoretical proof to substantiate this claim. In my view, the lack of formal justification weakens the argument. Additionally, the phrasing raises a conceptual issue: how can the model be both equivariant and invariant? From my understanding, invariance is not merely a special case of equivariance, as it completely removes the output transformation rather than modifying it in a structured way. Instead, I would consider it a related but distinct concept. This imprecision in terminology further underscores the need for a more rigorous theoretical analysis to clarify and support the claim. Methods And Evaluation Criteria: # Method **Thickness Node Pair Definition and Mapping Consistency** In Section 3.3, the paper introduces the concept of thickness node pairs, defined by a transformation $T$ that maps each node $v_i$ to another node on the opposite surface. 
However, the definition provided in Equation (1) does not ensure that applying the transformation twice returns the original node; that is, $T(T(v_i)) \neq v_i$. This issue becomes evident in materials where one surface is skewed relative to the other, resulting in a gradual thinning. The paper does not address how this scenario is handled, which raises concerns about the robustness of the thickness node pairing method in such cases. See the first draft I created for illustration: https://imgur.com/a/nmmI9oI **Ray Projection Distance ($d$) as a Hyperparameter** From my understanding, the ray projection distance $d$ is a crucial hyperparameter in defining thickness node pairs. In materials with multiple layers, determining the appropriate value of $d$ is essential to accurately pair nodes between surfaces. If the thickness varies between different sheets, selecting a single $d$ becomes challenging, potentially leading to incorrect thickness pairings. The paper does not discuss strategies for choosing $d$ in such complex scenarios, leaving a gap in the methodology for materials with non-uniform thickness. In the second image provided in https://imgur.com/a/nmmI9oI , I don't see how the correct thickness pairs can be computed using a global value $d$. **Selection of $\tau$** I like the idea of selecting a data-driven $\tau$, but I am unsure if a global $\tau$ can suffice for non-uniform thickness objects. Going back to the second image in https://imgur.com/a/nmmI9oI , a $\tau$ of 2 would exclude all thickness edges on the left side, while a $\tau$ of 6 would also include the "width" edges on the right side. Is my intuition here correct? **Clarification of Node Features $g_i$ and $r_i$** Table 1 references node features $g_i$ and $r_i$, but there is no explanation of these features in the main text or Appendix A. Clarifying the model inputs is important for understanding the model's functionality. 
Including descriptions of these features in the main body or referencing a specific section in the appendix would enhance the paper's clarity and comprehensibility. **Thickness Activation Function** Regarding the activation function for thickness pairs, the authors write “while edges with $t(v_i) > \tau$ are excluded from the propagation process.” This is not precise: the activation function for $t(v_i) = \tau$ is 0.5, see the graph here: https://imgur.com/a/6Pyfzfr As far as I understand it, the activation function has to be shifted to the left. **Overall Assessment** Despite these concerns, the proposed architecture for handling thickness appears sensible. Addressing the issues related to the transformation mapping, the selection of the ray projection distance $d$, and the clarification of node features would strengthen the paper's contributions and provide a more robust framework for thickness-aware processing in graph neural networks. # Evaluation: I did not fully understand the exact problem being addressed in the dataset. Unless I missed it, this information does not appear to be provided in the main text or in Appendix A. Figure 7a) presents a “Ground Truth” with an output value/vector(?) per node, but it is unclear what the referenced “deformation magnitude” represents. What is the exact challenge of the dataset? What is given and what needs to be predicted? Existing baselines, such as MGN, typically handle deformation tasks over time, addressing challenges like rollout stability by introducing noise. However, I suspect that this is not the case in the current task. If so, does the comparison to MGN only refer to its architecture and encoder-processor-decoder structure rather than its dynamic modeling capabilities? In summary, the goal of the task is not clearly defined, and incorporating a dynamic deformation over time would provide a more appropriate basis for comparison with the baselines. 
Theoretical Claims: The paper does not provide theoretical proofs to support its claims. As mentioned earlier, the claim that the model “preserves E(3)-equivariance and invariance” lacks formal justification, making it difficult to assess its validity. A rigorous proof or at least a more detailed explanation would be necessary to substantiate this claim. Experimental Designs Or Analyses: I encountered difficulties in fully assessing the experimental design due to ambiguities in the dataset's objectives and the lack of precise definitions for the evaluation metrics used. The metrics—Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared (R²)—are mentioned as abbreviations without explicit descriptions of their computation methods. While they are in general well-known, a precise definition of them would strengthen the reproducibility of the paper's results. Additionally, using 3 seeds for repetition is on the lower side, especially if only one dataset is used. Using confidence intervals instead of the standard deviation can also strengthen the statistical significance of the results. Supplementary Material: I read through Appendix A and looked at the qualitative results. To enhance clarity and navigation, reference appendix sections directly within the main text. This practice guides readers to supplementary material and clarifies the relationship between the main content and appendices. Relation To Broader Scientific Literature: The paper introduces a novel "Coordinate Transformation: E(3)-Invariant Data-driven Coordinate System" in section 4.1, which is not addressed in the related work section. This contribution could benefit from contextualization within the broader literature, especially related to coordinate transformations in geometric deep learning and invariant representations. Including references to works that explore coordinate transformations preserving E(3) invariance could strengthen the discussion. 
Additionally, the paper mentions thickness handling, but only indirectly through mesh density and hierarchical pooling. It would be valuable to relate these ideas to prior work that explicitly addresses thickness handling in similar contexts. There may be relevant research on mesh processing or pooling methods that directly deal with the manipulation of thickness or geometric features in 3D data. If present, incorporating such references could provide a clearer connection to existing research and help position the paper's contributions more effectively in the broader field. Essential References Not Discussed: In Section 4.2.2 the authors propose their surface update. These are the exact update rules given in the Message Passing Neural Network used for example by MeshGraphNet and originate from the paper "Graph Networks as Learnable Physics Engines for Inference and Control" by Sanchez-Gonzalez et al., 2018. I suggest citing one of these papers and clearly distinguishing here between the authors' own contribution and existing architectures. Other Strengths And Weaknesses: The general idea of improving the surface mesh by incorporating thickness edges is crucial for learnable physics simulators. I really like the idea and I can clearly see the scientific gap. The data-driven coordinate frame looks promising; I am wondering how this can be extended to multiple objects and their interaction, as well as how it handles deformations over time. Also, if gravity is a concern, does this not break the symmetry? Although this is a general problem with equivariant networks, it would strengthen the paper if this were discussed. Other Comments Or Suggestions: - 094, right: Mention MGN=”Mesh Graph Net” - Inconsistent notation in Eq. 14: Normal $n_i$ uses only the node index $i$, while $n_{T(v_i)}$ uses the complete node $T(v_i)$ as an index. Questions For Authors: Main questions are discussed in "Methods And Evaluation Criteria". Code Of Conduct: Affirmed. Overall Recommendation: 2
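The metrics the review asks to have defined explicitly admit the following standard forms (a sketch of the usual definitions, not necessarily the paper's exact computation):

```python
import math

# Standard definitions of the three reported metrics, written out
# explicitly for a vector of targets y and predictions yhat.
def rmse(y, yhat):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mae(y, yhat):
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def r2(y, yhat):
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

# A perfect prediction gives RMSE = MAE = 0 and R² = 1.
y, yhat = [1.0, 2.0, 3.0], [1.0, 2.0, 3.0]
assert rmse(y, yhat) == 0.0 and mae(y, yhat) == 0.0 and r2(y, yhat) == 1.0
```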
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. For a comprehensive response, please refer to the attached [link](https://shorturl.at/gOsz6). All materials within the link are indexed starting with the letter ‘L’ (e.g., Fig. L1). --- **A1. Problem Definition and Dynamic Analysis** The task is to predict node-wise 3D deformation for 28 unique shapes with thickness, each under 18 condition combinations. In Figure 7, we visualize the deformation by taking the L2-norm of each node's deformation, resulting in a scalar value per node. These shapes are tested under diverse conditions, which are defined as "static analysis," distinct from the dynamic analysis in MGN, which models material interactions. Static analysis focuses on how deformation changes under varying conditions (e.g., pressure), while dynamic analysis, such as the deforming plate in MGN, examines interactions over time (e.g., velocity). Although they serve different purposes, we also perform next timestep deformation prediction with the Deforming Plate dataset to demonstrate how our thickness processor helps the model handle dynamic situations. Since our data-driven coordinate system is designed for static analysis to prevent misalignment from affecting the results, we use the original coordinate system for dynamic analysis. The same mesh edge and node features as in the MGN paper, along with an additional node feature—the shortest world distance from the actuator to the node—are used to model interactions with the external environment. For detailed results, please refer to Table L1 and Fig. L6, where our model successfully handles the deforming plate's thickness, improving performance. --- **A2. Additional Datasets** Please refer to our _rebuttal A2 to reviewer "pfVr"_ due to character limits. --- **A3. Equivariance/Invariance Proof** Please refer to Sec. L1 for the detailed proof. Our model can be either invariant or equivariant, depending on whether the 'inverse transformation' is applied. 
We will make sure to be more rigorous in distinguishing between the terms "equivariance" and "invariance" in the revised version. --- **A4. Thickness Node Pair Definition.** Please refer to Fig. L7 and L8 in Sec. L3 for a comprehensive description. The dot product between the normal vectors of $v_i$ and $T(v_i)$ is included as an attribute of the thickness edge between $(v_i, T(v_i))$ to address the issue you raised. To better account for these non-perpendicular edges, we use the dot product to measure how well the normal vectors are aligned in the edge attribute. **A5. Ray Projection Distance ($d$).** We use the ray.intersects_location function from the trimesh library to find intersections between the mesh and the ray. Since the ray has no distance limit, it may extend to the far opposing side, leading to what is more accurately described as width rather than thickness, as shown in Fig. 4 of the paper. To address this, we introduce the thickness edge feature as a distance measure and apply a learnable thickness threshold to filter out incorrect thickness edges, without requiring a distance limit $d$. **A6. Selection of $\tau$.** The learnable threshold $\tau$ is essentially a thickness threshold that helps with the prediction. It defines the range of thickness edges where message passing is beneficial. As distance increases, the interaction or correlation between opposite nodes tends to decrease. This aligns with the design we intended, and the reason it is made learnable is to allow the model to dynamically adjust this threshold. **A7. Clarification of Node Features $g_i$ and $r_i$** Node features are described in Apx, lines 606-607. We will reference them in the main text for clarity. **A8. Thickness Activation Function.** Please refer to Fig. L9 for details. The activation function in Eq. 15 includes $\alpha$, which controls transition sharpness. 
Increasing $\alpha$ sharpens the influence based on thickness, but the influence is gradually reduced, not abruptly cut off, to ensure stable training. For a detailed analysis of $\alpha$, see Apx. D. The term "exclude" was meant to indicate a gradual reduction, not a hard cutoff. **A9. Regarding MGN.** While our task does not require dynamic modeling, we chose MGN as a baseline for its strong architecture, on which we build by adding a thickness processor and a data-driven coordinate system. As stated in line 142, our method shares the core MGN structure, enabling a fair comparison and emphasizing the impact of our additions. --- **A10. Essential References Not Discussed** We would like to clarify that MGN is mentioned in line 141, where we note that our architecture is derived from the core MGN structure. However, for clarity, we will specifically reference it in Sec 4.2.2. --- **A11. Regarding gravity.** Thank you for your insightful comment. We are indeed considering a coordinate system where equivariance operates beyond the direction of the gravitational field as part of our future work.
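As a concrete illustration of the gating behavior described in A8: a minimal sketch assuming a sigmoid form (the exact Eq. 15 in the paper may differ in detail), with `tau` playing the role of the learnable threshold from A6:

```python
import math

def thickness_gate(d: float, tau: float, alpha: float) -> float:
    """Soft weight on a thickness edge of length d (sketch, not Eq. 15 itself).

    Influence stays near 1 below the (learnable) threshold tau and decays
    gradually, not abruptly, beyond it; larger alpha sharpens the transition.
    """
    return 1.0 / (1.0 + math.exp(alpha * (d - tau)))
```

At `d = tau` the weight is exactly 0.5, and as `alpha` grows the gate approaches a hard cutoff while remaining differentiable, which matches the stability argument above.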
Summary: The paper introduces **Thickness-aware E(3)-Equivariant Mesh Neural Networks (T-EMNN)**, a framework designed to address the limitations of existing mesh-based 3D analysis methods, which often overlook the inherent thickness of real-world 3D objects. The authors argue that thickness plays a critical role in physical behaviors such as deformation, bending, and stress distribution, particularly in objects like plates, baskets, and layered materials. T-EMNN integrates thickness information into the mesh representation while maintaining computational efficiency and preserving E(3)-equivariance (invariance to translations, rotations, and reflections). The paper also introduces a **thickness threshold** to dynamically regulate interactions between opposing surfaces, ensuring that only relevant thickness-related interactions are considered. Experimental results show that T-EMNN outperforms existing methods, including MGN, EGNN, and EMNN, in both in-distribution and out-of-distribution settings. Claims And Evidence: The claims made in the paper are well-supported by both theoretical and empirical evidence. However, the paper could benefit from a more detailed discussion of the limitations of the proposed method, particularly in scenarios where thickness is not the dominant factor in deformation. Methods And Evaluation Criteria: The proposed methods are well-suited for the problem at hand. The experimental design is sound, with a clear comparison to state-of-the-art baselines (MGN, EGNN, EMNN) and ablation studies to validate the contributions of individual components (e.g., thickness edges, dot product of normal vectors). Could the proposed method show more visual examples of thickness reconstruction for challenging meshes, such as leaves, cloth, etc.? Theoretical Claims: The paper does not present formal theoretical proofs but provides a clear explanation of the E(3)-equivariance and invariance properties of the proposed data-driven coordinate system.
The authors justify their approach using geometric principles and demonstrate its effectiveness empirically. Experimental Designs Or Analyses: The experimental design is robust, with the following key strengths: Dataset, Baselines, Ablation studies, Out-of-distribution testing. One potential limitation is the lack of comparison with volume-based methods (DetailRecon: Focusing on Detailed Regions for Online Monocular 3D Reconstruction), which could provide additional context for the performance of surface-based approaches like T-EMNN. Supplementary Material: The supplementary material is comprehensive and supports the main claims of the paper. Relation To Broader Scientific Literature: The paper builds on several key areas of research, Mesh-based 3D representation, E(3)-equivariant networks, and Thickness modeling. The paper situates itself well within the broader context of 3D mesh analysis and equivariant neural networks, but it could benefit from a more detailed discussion of how T-EMNN compares to volume-based methods, which are commonly used in structural analysis. Essential References Not Discussed: It is ok for me. Other Strengths And Weaknesses: **Strengths**: 1. The introduction of thickness-aware modeling and data-driven coordinates is a significant contribution to the field of 3D mesh analysis. 2. The validation on a real-world industrial dataset demonstrates the method's applicability to practical problems like injection molding. 3. The model's performance under out-of-distribution settings highlights its robustness to transformations. **Weaknesses**: 1. The paper focuses on surface meshes but does not compare T-EMNN with volume-based methods, which are commonly used in structural analysis. 2. While the paper provides a clear explanation of the proposed methods, formal theoretical proofs could strengthen the theoretical contributions. Other Comments Or Suggestions: NA Questions For Authors: 1. 
How does T-EMNN compare to volume-based methods (like FEM, DetailRecon: Focusing on Detailed Regions for Online Monocular 3D Reconstruction) in terms of accuracy and computational efficiency? 2. Can the authors provide formal proofs for the E(3)-equivariance and invariance properties of the proposed data-driven coordinate system? 3. How well does T-EMNN generalize to other types of 3D objects beyond the industrial dataset used in the experiments? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work. For a comprehensive response, please refer to the attached [link](https://shorturl.at/gOsz6). All materials within the link are indexed starting with the letter ‘L’ (e.g., Fig. L1). --- **A1. Additional Datasets for Challenging Meshes (Response to Methods And Evaluation Criteria and W3)** We evaluate our method on four additional datasets (using 5 seeds for reliability): - **Deforming Plate** (Node-level prediction - Position at next timestep) - **Basket** (Graph-level prediction - Max pressure) - **Circular Plate** (Node-level prediction - Deflection of each node) - **SimJEB** [1] (Node-level prediction - Magnitude of the displacement) **To visualize the thickness edges on other challenging datasets**, we include the learned valid thickness edges in Figs. L4, L5, and L6. As shown in these figures, our thickness processor effectively connects opposing surfaces (left), facilitating beneficial interactions, while filtering out opposing surfaces that negatively impact performance (right). **Regarding performance**, as shown in Table L2, our method demonstrates strong performance on both the Basket (graph-level prediction) and Circular Plate (node-level prediction) datasets. Regarding the SimJEB dataset, although our data-driven coordinate system does not perform well under the "In Dist." setting (where the original coordinate system is well-established but our system faces challenges in aligning coordinates between diverse shapes), it performs effectively in the "Out-of-Dist." setting. In this setting, our method successfully addresses the misalignment issue in the original coordinate system between shapes, ensuring E(3)-invariance.
Moreover, to evaluate the **dynamic capabilities of our framework**—specifically the thickness processor—we perform next-timestep deformation prediction using the Deforming Plate dataset (as described in the MGN paper) to demonstrate how our thickness processor helps the model handle dynamic scenarios. Since our data-driven coordinate system is designed for static analysis to prevent misalignment from affecting the results, we use the original coordinate system for dynamic analysis. The same mesh edge and node features as in the MGN paper, along with an additional node feature—the shortest world distance from the actuator to the node—are used to model interactions with the external environment. For detailed results, please refer to Table L1, where our model successfully accounts for the deforming plate’s thickness, leading to improved performance. --- **A2. Discussion on Limitations in Scenarios Where Thickness is Not Dominant for Prediction (Response to Claims And Evidence)** Since not all objects have thickness, our method may not be very helpful for such shapes. However, in these cases, we adjust the threshold where thickness influences performance in a learnable manner. In such situations, **the thickness threshold will adapt so that no thickness edge contributes to the prediction.** If thickness is not important, the thickness edge will be ignored, and the model will ultimately be equivalent to the MGN with our proposed data-driven coordinate architecture. --- **A3. Equivariance/Invariance Proof (Response to Theoretical Claims, W2 and Q2)** Please refer to Sec. L1 for the detailed proof. Our model can be either invariant or equivariant, depending on whether the 'inverse transformation' is applied. --- **A4. Surface Mesh vs. Volume Mesh (Response to W1 and Q1)** Please refer to Appendix G for a comparison of GPU memory usage, performance, and inference time between volume-based and surface-based methods. 
We use the same model architecture (MGN) for a fair comparison, with the distinction that the input data is either a volume mesh or a surface mesh. The surface mesh utilizes only 11% of the GPU memory and takes 19% of the inference time compared to the volume mesh, while also delivering better performance. We argue that the performance degradation of the volume mesh arises from the over-smoothing or over-squashing problem that occurs when the mesh density is too high in GNN-based methods. For more details, please refer to Appendix G. Regarding DetailRecon, which was published in January 2025, we were not aware of the paper, but we will address it in the related work section of the revised version. Moreover, it is worth noting that FEM is not a neural network-based method, but rather a traditional numerical technique used for solving partial differential equations. While FEM remains a cornerstone in engineering simulations, it differs fundamentally from neural network-based approaches like ours, which aim to learn patterns from data rather than relying solely on predefined physical models. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response to my concerns, especially for the additional results. The attached results have addressed my concerns, though I still encourage the authors to discuss and show some failure cases (it would be better). I maintain my initial score.
MME-CoT: Benchmarking Chain-of-Thought in Large Multimodal Models for Reasoning Quality, Robustness, and Efficiency
Accept (poster)
Summary: This paper introduces MMECoT, a benchmark designed to evaluate the Chain-of-Thought (CoT) reasoning performance of Large Multimodal Models (LMMs) across six domains: math, science, OCR, logic, space-time, and general scenes. It proposes a comprehensive evaluation suite with three novel metrics to assess reasoning quality, robustness, and efficiency at a fine-grained level. The study analyzes state-of-the-art LMMs and uncovers key findings: 1) Models with a reflection mechanism, like QVQ, show superior CoT quality, approaching GPT-4o; 2) CoT prompting tends to degrade LMM performance on perception-heavy tasks, possibly due to overthinking; and 3) Despite high CoT quality, LMMs with reflection mechanisms are inefficient during both response and self-correction phases. Claims And Evidence: Yes, the claims are well-supported and make sense in light of the experimental results. Methods And Evaluation Criteria: Yes, the proposed CoT benchmark for LMMs is highly meaningful, as it addresses the need for systematic evaluation of reasoning across multiple domains. Theoretical Claims: This work does not include proofs for its theoretical claims. However, based on the results provided by the authors, the findings appear to be reasonable. Experimental Designs Or Analyses: Yes, the experimental designs and analyses appear to be sound based on the reported correlation studies. Supplementary Material: Yes, we reviewed the supplementary material, specifically Parts A through C. Relation To Broader Scientific Literature: The key contributions of this paper build on prior research in Chain-of-Thought (CoT) reasoning for Large Language Models (LLMs) but extend the focus to Large Multimodal Models (LMMs). Previous work on CoT in LLMs (e.g., GPT-4) has shown improved reasoning capabilities. This paper expands that by evaluating how CoT affects LMMs in multiple domains, revealing novel insights. 
Essential References Not Discussed: The references discussed in the paper are sufficient for understanding the context and key contributions. The study thoroughly covers the relevant advancements in Chain-of-Thought (CoT) reasoning and Large Multimodal Models (LMMs). Other Strengths And Weaknesses: **Strengths.** This paper's strengths lie in its comprehensive approach to evaluating Chain-of-Thought (CoT) reasoning in Large Multimodal Models (LMMs). By introducing the MME-CoT benchmark, it provides a detailed and systematic assessment across six diverse domains, offering novel metrics to evaluate reasoning quality, robustness, and efficiency. The in-depth analysis of state-of-the-art LMMs uncovers valuable insights, such as the role of reflection mechanisms in enhancing CoT quality and the potential inefficiencies that arise in self-correction phases. This work fills a gap in multimodal reasoning research, offering both a practical evaluation suite and a foundation for future advancements in the field. **Weaknesses.** 1. For math-related problems in the benchmark, there are often multiple correct solutions, meaning that there are several valid paths. In such cases, it becomes difficult to fully assess the accuracy of the CoT process. 2. Some models tested in the paper, such as LLaVA-OV, Qwen2-VL, and InternVL2.5, do not generate CoT processes autonomously. The paper does not describe how the CoT outputs for these models were obtained. If these models are indeed capable, could other open-source models, like DeepSeekVL2 and GLM-4V, also be tested? 3. The selection process for Key Step Annotation is not provided in the paper, and it is also unclear whether a QA corresponds to a single or multiple Key Step Annotations. Other Comments Or Suggestions: Please see weaknesses and suggestions above. Questions For Authors: Please see weaknesses and suggestions above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments. We find them extremely helpful and will incorporate them in the final version. We address each comment in detail, hoping to address your concerns. > **Q1: Accuracy concern for questions with multiple correct solutions** **We believe this concern is addressed in our paper.** Our methodology explicitly accounts for questions with multiple correct solutions in both annotation and evaluation phases: 1. **Annotation Phase**: As detailed in Lines 203-206 of the main paper, annotators are explicitly instructed to provide all key steps for all possible solution paths. 2. **Evaluation Phase**: Our evaluation framework accommodates multiple solutions: - For recall computation (Lines 245-246 and Equation 1), we compute recall scores for all possible solutions and select the maximum as the final recall score. - For precision, relevance rate, and reflection quality, we provide GPT-4o with key step annotations for all possible solutions in the evaluation prompt, enabling proper assessment of the CoT process considering all valid approaches. > **Q2: How is CoT obtained in models that don't generate it autonomously?** Thanks for your advice. **We have incorporated the illustration of how to obtain the CoT output in Lines 257-261 in the right column of the main paper.** Specifically, we employ a CoT prompt to instruct the model to first perform step-by-step reasoning and finally give the answer. As illustrated in Lines 355-358, the CoT prompt is: > Please generate a step-by-step answer, include all your intermediate reasoning process, and provide the final answer at the end. We empirically find that models that don't generate CoT automatically, like LLaVA-OV, can give a detailed reasoning process before providing the final answer. GLM-4v cannot receive multiple images as input, so we only evaluate DeepSeek-vl2. The result is:

Model|Pre.|Rec.|Eff.|Sta.|Rel.|Ref.
-|-|-|-|-|-|-
DeepSeek-VL2|81.2|43.0|-1.6|-5.1|93.3|100

Please refer to the detailed table in Table 1 in https://anonymous.4open.science/r/reb-3538/README.md > **Q3: Selection process for key step annotations and the number of key step annotations for each QA** Thanks for your advice. We want to clarify the two points below: 1. **We have illustrated how to obtain the key steps in Lines 187-205 of the main paper.** We detail the process below: Key steps are defined as **necessary steps** for answering the question correctly, such as identifying critical values in math diagrams or reaching intermediate conclusions essential to determining the final answer. To efficiently annotate key steps for all questions, we implemented a two-phase process. First, GPT-4o generated draft annotations using the questions, images, and final answers as inputs. Including the final answers significantly improved draft quality compared to using only questions and images. Second, human annotators reviewed these drafts, correcting any errors or developing key steps independently when GPT-4o failed to provide reasonable output. All the key steps fall into two categories: 1. *Inference conclusions*: Necessary conclusions reached through logical inference steps (including the final answer) 2. *Image captions*: Identifications of critical visual information We reduce all steps to their simplest form, preserving only core conclusions and relevant visual element descriptions. For problems with multiple solution paths, annotators are required to provide all possible methods. 2. **Each QA corresponds to multiple key steps.** 1. **We provide a dataset visualisation in the bottom part of Fig. 2**. The key caption and key conclusion correspond to the image captions and inference conclusions of the key steps. We will make this figure clearer in the final version. 2.
**We also provide the statistics of the key step annotations in Table 1.** There are 837 reasoning questions with 3,865 key step annotations in total. On average, each question contains 4.6 key step annotations. This result can also be derived as the sum of the average inference conclusions and image captions listed in the table. We further look into each question and find that all the questions have at least 3 key step annotations. We will make this clearer and explicitly add this result to the table in the final version. --- Rebuttal Comment 1.1: Comment: The authors have addressed my concerns. I will raise my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you very much for your reply! We commit to making the modifications mentioned in the rebuttal in the final version.
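As an illustration of the max-over-solutions recall described in the Q1 response above: a minimal sketch in which the hypothetical `matches` predicate stands in for the GPT-4o per-step judgment (the actual evaluation uses GPT-4o, not string matching):

```python
def solution_recall(pred_steps, key_steps, matches):
    """Fraction of one solution path's key steps covered by the predicted CoT."""
    covered = sum(
        1 for key in key_steps if any(matches(key, step) for step in pred_steps)
    )
    return covered / len(key_steps)

def final_recall(pred_steps, solutions, matches):
    """Recall is computed per annotated solution path; the maximum is kept."""
    return max(solution_recall(pred_steps, sol, matches) for sol in solutions)
```

For a question with two annotated paths, a CoT that fully covers either path scores a recall of 1.0 even if it ignores the other path entirely.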
Summary: This paper introduces MME-CoT, a novel benchmark for evaluating chain-of-thought (CoT) reasoning capabilities in Large Multimodal Models (LMMs). The authors present a comprehensive evaluation framework that assesses three critical aspects of multimodal reasoning: quality, robustness, and efficiency. The insights derived from the experimental results are valuable and helpful for developing better CoT models. ## update after rebuttal The authors' rebuttal has resolved most of my concerns. I hope the rebuttal contents can be added in the final version. I generally think this is a good paper and will leave the acceptance decision to the AC. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. This paper applies an LLM-as-judge evaluation method and also provides the LLM with the reference solution and background knowledge required to make the judgment. Therefore, although there might be concerns about hallucinations in LLM-as-judge, it is overall acceptable. Theoretical Claims: No. There are no theoretical claims in this paper. Experimental Designs Or Analyses: Yes. I checked Section 4 for the details and the setting makes sense, and also Appendix B for the prompt template. Supplementary Material: Yes. I checked the appendix for the prompt template and related works. Relation To Broader Scientific Literature: 1. This paper presents the first benchmark that evaluates the quality of CoT along multiple aspects, including precision and recall as well as robustness. Previous benchmarks only focus on the final answer, which makes this paper's contribution significant. 2. This paper clearly distinguishes the vision perception tasks from the reasoning tasks and analyzes performance on them with different approaches. The insight that overthinking can harm perception is a unique observation. Essential References Not Discussed: No Other Strengths And Weaknesses: ### Strengths Same as "Relation To Broader Scientific Literature" ### Weaknesses 1.
Potential GPT-4o bias: The evaluation relies heavily on GPT-4o for judging various aspects of model outputs. This introduces potential biases, as GPT-4o itself may not be perfect at evaluating reasoning processes. This is a minor concern, as the authors already provide the judge with a reference solution and background knowledge. 2. There can be potential issues in the evaluation setting, where the comparison of CoT reasoning and direct reasoning can contain problems. Although for direct reasoning the prompt is "Please directly provide the final answer without any other output", there is still a great chance that the model ignores this instruction and keeps outputting the CoT, which is the behavior the model is trained on. This could make the comparison between CoT reasoning and direct reasoning meaningless. The authors should analyze and report the percentage of responses under the direct prompt that are actually direct. Otherwise, some of the conclusions, such as the stability on perception and reasoning tasks, could be wrong. Other Comments Or Suggestions: 1. Is there any failure case analysis in terms of the error types of the CoT, like calculation errors or lack of background knowledge? It would be quite helpful for people to understand the behavior of the models in CoT generation. 2. If there are still resources, I would suggest the authors conduct a human verification of the correctness of GPT-4o as a judge. Although it is provided with the solution and each step along with the background knowledge, there is still a possibility that GPT-4o can hallucinate. An ablation could alleviate these concerns. Questions For Authors: 1. What is the cost of evaluating a model, since GPT-4o can be expensive? 2. The reflection quality of most models is 100; only Virgo-72B and QVQ-72B have pretty low scores. Are there any insights behind this result? Is it because we rarely observe reflection in other models? 3.
For OCR tasks, how did GPT-4o judge the answer? The OCR response should only be regarded as correct when it is the same as the reference, otherwise 0. Will GPT-4o also give a score if the response has a pretty minor issue, like a missing character or word, but the meaning is still the same? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments. We find them extremely helpful and will incorporate them in the final version. We address each comment in detail, hoping to address your concerns. > **Q1: Potential GPT-4o bias in evaluation** Thank you for your valuable advice. We want to address your concern from two aspects: 1. **High Human-GPT-4o Alignment** Our human alignment experiment shows strong correlation between GPT-4o evaluations and human judgment, confirming GPT-4o as a reliable tool for CoT evaluation. Specifically, we investigate two perspectives: 1. **Human Agreement Rate**: A binary (yes/no) human evaluation to assess agreement with the model's per-step judgments. 2. **Hallucination Detection**: We assess whether any reflection steps identified by GPT-4o are hallucinated or contain hallucinations. We cover four key metrics: Recall, Precision, Relevance rate, and Reflection quality. We randomly sampled 54 predictions (9 questions from each subject) from QwenVL2-72B and QvQ-72B, totaling 216 predictions and 2,368 steps. The results are as follows:

Metric|Agreement|Hallucination
-|-|-
Recall|98.5%|0%
Precision|94.1%|2.1%
Relevance rate|90.8%|0%
Reflection quality|86.1%|0%

2. **Best Available Automated Evaluation Method** While acknowledging the inherent limitations, GPT-4o represents the current state-of-the-art approach for automatic CoT evaluation. GPT-4o has been widely adopted and validated for evaluation across various multimodal [1] and reasoning tasks [2]. Given our reference solutions and human alignment results, we believe GPT-4o is the most reliable option currently available for this complex evaluation task. > **Q2: Issues of true direct answers** Thanks for your valuable advice.
We report the actual direct answer (defined as less than 20 words) ratio in the table below:

Model|Direct Ans Ratio
-|-
Mulberry|0.2600
LLaVA-OV-7B|0.9712
LLaVA-CoT|0.0000
LLaVA-OV-72B|0.9704
MiniCPM-V-2.6|0.9944
InternVL2.5-8B|0.9920
Qwen2-VL-7b|0.9776
InternVL2.5-8B-MPO|0.9656
InternVL2.5-78B-MPO|0.9992
Qwen2-VL-72b|0.9776
Virgo-72B|0.6728
QVQ-72B|0.0008
GPT4o|0.9648

Four models demonstrate reluctance to provide direct answers (we mark these four models with \* in Table 2). Based on this, we recalculate stability and effectiveness using only questions that receive direct answers. The updated table is shown in Tab. 1 in https://anonymous.4open.science/r/reb-3538/README.md **We would like to emphasize that all conclusions in our current paper remain valid.** Our analysis and conclusions do not consider any results with \*, ensuring validity. Thanks for pointing this out, and we will stress this further in the final version. > **Q3: CoT error analysis** Thanks for your advice. We provide the analysis of CoT error types and reflection error types. + CoT Error Types: Visual Perception, Visual Reasoning, Logical Reasoning, Calculation. + Reflection Error Types: Ineffective Reflection, Repetition, Incompleteness, Interference. Please refer to the examples and error ratios in Figs. 3-6 in the link above. > **Q4: Human verification of GPT-4o correctness** Please refer to our response to Q1. > **Q5: The cost of evaluation with GPT-4o** **The average cost of evaluating one model is 16 dollars.** We have also experimented with evaluating all the tasks with GPT-4o-mini, but the human evaluation shows low agreement. With the recent development of LLMs, open-source models might be used for evaluation in the future. **We also plan to release a testmini set**, which comprises 200 questions with all the subjects included.
**Evaluation using testmini can reduce evaluation costs to around 3.5 dollars.** > **Q6: Most models score 100 in reflection quality** Yes, as illustrated in Line 381-382, **for models not generating reflection, we define their reflection quality to be 100** since the absence of reflection can be viewed as the most efficient approach. > **Q7: How does GPT-4o judge the answer for OCR tasks?** Thanks for your advice. We look into 20 predictions and their corresponding evaluation results. We observe that: 1. The tested models typically only OCR the content directly relevant to the question rather than performing comprehensive OCR on all text. The models tend to summarize other visual information briefly. This results in concise OCR content in predictions. 2. **We observe that GPT-4o demonstrates high accuracy in judging OCR content**, likely because the OCR content is usually highlighted in the key step annotations. Nevertheless, **we will still enhance our evaluation prompt to explicitly instruct GPT-4o to follow the OCR metric design you suggested**, ensuring that OCR responses are only considered correct when they exactly match the reference. [1] Tarsier: Recipes for training and evaluating large video description models [2] Mathverse: Does your multi-modal llm truly see the diagrams in visual math problems?
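As an illustration of the "actual direct answer" statistic in the Q2 response above: a minimal sketch using the under-20-words rule stated there (the word-count criterion is the only part specified in the rebuttal):

```python
def is_direct_answer(response: str, max_words: int = 20) -> bool:
    """A response counts as direct if it contains fewer than max_words words."""
    return len(response.split()) < max_words

def direct_answer_ratio(responses) -> float:
    """Fraction of responses that are actually direct answers."""
    return sum(is_direct_answer(r) for r in responses) / len(responses)
```

Responses that ignore the direct prompt and still produce a long CoT fail the word-count check and lower the ratio, which is how the low ratios for Mulberry, LLaVA-CoT, Virgo-72B, and QVQ-72B arise in the table above.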
Summary: This paper introduces MME-CoT, a new benchmark for evaluating Chain-of-Thought (CoT) reasoning in Large Multimodal Models (LMMs). The work addresses a timely and important gap in the evaluation of multimodal reasoning. The authors have identified key limitations in existing benchmarks and propose a more comprehensive evaluation suite. Claims And Evidence: All the claims are problematic because the metrics themselves are not verified by humans at all. Moreover, without any confidence intervals, it is too early to draw any conclusion in a statistically significant manner. Methods And Evaluation Criteria: The reviewer is really concerned about the validity of the current evaluation suite. 1. The evaluation relies heavily on GPT-4o(mini) for assessing various aspects of CoT (e.g., Recall, Precision, Relevance). While GPT-4o(mini) is a strong LLM, it is not perfect. Analysis of the agreement between the LMM judge's judgments and human evaluations is necessary. For example, performing human-agreement studies similar to those in [a,b] could make the work much more sound. 2. Confidence intervals are necessary to draw meaningful conclusions that illuminate future research. 3. Current metrics have a rather small separation between models. 4. The current experimental setup is too simple, lacking sufficient variants to perform in-depth analysis. For example, to understand the importance of reflection/length of CoT, it is expected to at least try different prompts to encourage or discourage certain behaviors and perform more controlled experiments. [a] MLLM-as-a-Judge: Assessing Multimodal LLM-as-a-Judge with Vision-Language Benchmark [b] Personalized Video Comment Generation [Update]: The reviewer appreciates the rebuttal and has raised the score accordingly based on the promised results. However, given the current agreement values and the separation between models, the reviewer is still concerned about how these metrics effectively reflect the actual ranking between models and would like to call this out for the ACs to consider. Theoretical Claims: N/A Experimental Designs Or Analyses: Please check details in Methods And Evaluation Criteria. Supplementary Material: All. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: As mentioned in Methods And Evaluation Criteria, references providing approaches to verify agreement with humans should be leveraged and cited. Other Strengths And Weaknesses: Lack of evaluation of other highly capable models such as Gemini and Claude. Other Comments Or Suggestions: NA Questions For Authors: Please check above for details. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments. We find them extremely helpful and will incorporate them in the final version. We address each comment in detail, hoping to address your concerns. > **Q1: Need studies on human agreement** Thank you for your advice. As you suggested, we conduct additional human evaluations to verify the validity of the GPT-4o assessment from two perspectives [1, 2]: 1. **Human Agreement Rate**: A binary (yes/no) human evaluation to assess agreement with the model's per-step judgments. 2. **Hallucination Detection**: We assess whether any steps identified by GPT-4o are hallucinated or contain hallucinations. Our human agreement study covers four key metrics: Recall, Precision, Relevance rate, and Reflection quality. We randomly sample 54 predictions (9 questions from each subject) from QwenVL2-72B and QvQ-72B, totaling 216 predictions and 2,368 steps. The results are as follows:

Metric|Agreement|Hallucination
-|-|-
Recall|98.5%|0%
Precision|94.1%|2.1%
Relevance rate|90.8%|0%
Reflection quality|86.1%|0%

These results demonstrate a high correlation between GPT-4o evaluations and human judgment, indicating that GPT-4o is a reliable tool for CoT evaluation. This result also indicates that all of our analyses and conclusions are valid. We will incorporate references to the papers you suggested in our final version. > **Q2: Confidence interval is not provided** Thanks for your advice. Our default experiment setting uses temperature 0 for GPT-4o to ensure reproducible results. To address your concern, we set the temperature to 1 and evaluate 50 predictions 5 times. The resulting 95% confidence interval widths are below:

Pre.|Rec.|Eff.|Sta.|Rel.|Ref.
-|-|-|-|-|-
0.018|0.026|0.004|0|0.049|0.069

These narrow confidence intervals demonstrate the statistical reliability of our findings.
The largest interval width is only 0.069 for Reliability, indicating that our results are stable and reproducible even with stochastic sampling.

> **Q3: Small separation between models under the current metric**

We respectfully disagree with the reviewer's assessment for the following reasons:

1. **MME-CoT separation is comparable to other benchmarks.** The differences are listed below:

Benchmark|Difference
-|-
F1-score (MME-CoT)|3.05
MMMU|0.83
MathVista|1.43
MathVerse|3.47

2. **The robustness and efficiency scores differ from traditional metrics, so the scale of the models' separation is also different.**
   + The robustness score measures the performance difference across different prompts. This difference results from intrinsic model attributes.
   + The efficiency score specifically identifies models with excessively long CoT and reflection steps (e.g., QvQ and Virgo). As shown in Table 2, these two models score over 20 points lower in CoT efficiency compared to others, demonstrating significant separation.

> **Q4: Experimental setup is too simple**

We respectfully disagree that our experimental setup is too simple:

1. **Our experimental setup is sophisticated and comprehensive.** Compared with previous multimodal evaluation works [3, 4], our study explores multiple dimensions: different prompt strategies (CoT prompt vs. direct prompt), evaluation methods (CoT evaluation vs. direct evaluation), and diverse evaluation aspects (quality, robustness, and efficiency).
2. **Our focus is assessing natural CoT behavior, not improving it.** The primary objective of our paper is to evaluate how current LMMs perform reasoning when confronted with problems and to assess the quality of their reasoning processes. We specifically examine how models behave with standard prompting approaches rather than engineering prompts to encourage specific behaviors. The latter, while valuable, is beyond our current scope and left for future research.
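To make the repeated-runs computation in Q2 above concrete, here is a minimal sketch of how a 95% confidence-interval width over repeated evaluation runs can be obtained; the scores below are illustrative placeholders, not the actual MME-CoT results:

```python
import statistics

def ci95_width(scores):
    """Width of a 95% confidence interval for the mean of repeated runs.

    Uses the two-sided t critical value for n-1 degrees of freedom
    (t = 2.776 for n = 5 runs, the setup described in Q2 above).
    The scores here are illustrative, not the actual evaluation data.
    """
    n = len(scores)
    t_crit = {4: 2.776}[n - 1]  # hard-coded for the 5-run case only
    sem = statistics.stdev(scores) / n ** 0.5  # standard error of the mean
    return 2 * t_crit * sem  # full interval width (upper minus lower bound)

runs = [0.80, 0.81, 0.79, 0.80, 0.82]  # hypothetical scores from 5 runs
width = ci95_width(runs)
```

A width on the order of a few hundredths, as in the table above, indicates the metric barely moves across stochastic samples.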
> **Q5: Results of Claude and Gemini**

Thanks for your advice. The results of Claude and Gemini are listed below:

Model|Pre.|Rec.|Eff.|Sta.|Rel.|Ref.
-|-|-|-|-|-|-
Gemini-2.0-Flash|80.3|52.9|6.6|5.9|95.5|100
Claude-3.5|77.2|48.2|9.9|11.0|91.0|100

Please refer to the updated table in Table 1 in https://anonymous.4open.science/r/reb-3538/README.md

[1] MLLM-as-a-Judge: Assessing Multimodal LLM-as-a-Judge with Vision-Language Benchmark
[2] Personalized Video Comment Generation
[3] Mathverse: Does your multi-modal llm truly see the diagrams in visual math problems?
[4] Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi

--- Rebuttal Comment 1.1: Comment: The reviewer appreciates the rebuttal and has raised the score accordingly based on the promised results. However, given the current agreement values and the separation between models, the reviewer is still concerned about how this metric effectively reflects the actual ranking between models and would like to call this out for the ACs to consider.

--- Reply to Comment 1.1.1: Comment: Thank you very much for your reply! We commit to including the experiments conducted and citing the related work from the rebuttal in the final version.
Summary: The paper introduces MME-CoT, a specialized benchmark evaluating the CoT reasoning performance of LMMs. It is the first comprehensive study of LMM CoT evaluation: it spans six domains (math, science, OCR, logic, space-time, and general scenes) and proposes a thorough evaluation suite incorporating novel metrics for reasoning quality, robustness, and efficiency. In-depth analyses of state-of-the-art LMMs are presented, e.g., CoT performance on perception-heavy tasks and the inefficiency of the reflection mechanism. Claims And Evidence: Main claim: existing evaluation of LMM CoT is insufficiently systematic and thorough, while the proposed MME-CoT is a comprehensive and specialized benchmark for evaluating the CoT reasoning skills of LMMs. The claim is clear and convincing. MME-CoT spans six fundamental domains and introduces comprehensive metrics. Also, diverse typical LMMs are evaluated on MME-CoT and analyzed. Methods And Evaluation Criteria: The proposed metrics are sound for CoT evaluation. The curated dataset spans six important domains and contains large-scale test data (1,130 questions with 3,865 key reasoning steps). Experiments have been conducted for several typical LMMs. **Questions:** 1. For CoT quality evaluation, it is difficult to understand L186-204 and to identify the difference compared with existing metrics. 2. For CoT robustness evaluation, the paper proposes a stability score based on perception tasks and an efficacy score based on reasoning tasks. What about tasks involving both perception and reasoning? For example, counting red balls in a picture of balls with multiple colors, or inferring the relationship of two people from their activity, expressions, and the scene. Such visual reasoning tasks are typical and important. 3. For CoT efficiency evaluation, a reflection should be "valid", either correctly pointing out previous mistakes or verifying the previous conclusion with a new method. How is its validity judged? 
It requires comprehensive judgement and seems beyond the capabilities of automatic evaluation. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: I have checked the experimental designs and analyses. Supplementary Material: I have viewed all parts of the supplementary material. Relation To Broader Scientific Literature: The key contributions of the paper are related to LMM CoT evaluation. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: The paper is well-organized. The figures and tables are clear and easy to understand. Other Comments Or Suggestions: 1. Comparison with existing metrics could be shown in figures to better illustrate the novelty of the MME-CoT benchmark. 2. More visualizations and examples could be shown in the supplementary material to support the analysis and conclusions. Questions For Authors: My questions are mainly about metric design, and are listed in the "Methods And Evaluation Criteria" part. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments. We find them extremely helpful and will incorporate them in the final version. We address each comment in detail, hoping to resolve your concerns.

> **Q1: Difficulty in understanding Sec 2.2 and difference with existing metrics**

Thanks for your valuable advice. **We improve the clarity of Sec 2.2 below:** For CoT evaluation, we provide *key steps* and *reference image captions* for all questions.

+ *Key steps* are defined as **necessary steps** for answering the question correctly. All the key steps fall into two categories:
  1. *Inference conclusions*: Necessary conclusions reached through logical inference steps (including the final answer)
  2. *Image captions*: Identifications of critical visual information

  To efficiently annotate key steps for all questions, we implement a two-phase process. First, GPT-4o generates initial versions of the key-step annotations (containing both inference conclusions and image captions) with the questions, images, and final answers as inputs. Including the final answers significantly improves the quality of the initial versions compared to using only questions and images. Second, human annotators review these initial versions, correcting any errors or developing key steps independently when GPT-4o fails to provide reasonable output. We reduce all steps to the simplest form, preserving only core conclusions and relevant visual element descriptions. For problems with multiple solution paths, annotators are required to provide all possible methods.
+ *Reference image captions* are visual information not covered by the image captions in the key steps. These reference captions are used mainly for the calculation of precision. We use the same method as for key steps to obtain the annotation: GPT-4o first generates an initial version, and then annotators review and correct the errors.

**Difference with existing metrics:** Few works evaluate the multimodal CoT process.
MathVerse [1] represents one such effort. MME-CoT differs in several key aspects:

1. **MME-CoT evaluates CoT quality in two ways: precision and recall**, corresponding to CoT faithfulness and informativeness. In contrast, [1] only instructs GPT-4 to judge the correctness of each step, which can be viewed as evaluating precision alone.
2. **MME-CoT contains ground-truth key-step annotations**, enabling more reliable evaluation, especially for tasks beyond GPT-4o's ability. [1] contains no such GT reference.
3. **MME-CoT considers both inference conclusions and image captions in CoT on all visual reasoning tasks**. [1] only targets math tasks without identifying different types of CoT steps.

> **Q2: Need to consider tasks involving both perception and reasoning**

Sorry for the confusion caused. We'd like to clarify our task definitions:

+ The perception tasks are tasks that primarily test visual recognition abilities or require very minimal reasoning.
+ The reasoning tasks additionally require logical inference steps on top of visual perception.

Therefore, **tasks requiring both perception and reasoning are exactly the reasoning tasks in MME-CoT**. The examples in Fig. 2 (bottom section) of our paper all showcase these two requirements: first perceiving visual cues, and then reasoning based on the perception.

+ **For your examples**: Counting red balls belongs to the perception tasks since it requires minimal reasoning. Determining the relationship between people belongs to the reasoning tasks. A similar reasoning task also occurs in MME-CoT, as shown in Fig. 1 in https://anonymous.4open.science/r/reb-3538/README.md

> **Q3: The reflection evaluation seems to be beyond the model's capability**

Thanks for your valuable advice. We identify GPT-4o as a well-qualified evaluator for assessing reflection quality: 1.
**GPT-4o shows competitive results in reflection quality evaluation in the human agreement experiments.** We conduct additional human evaluations to verify the validity of the GPT-4o assessment from two perspectives:

1. **Human Agreement**: A binary (yes/no) human evaluation to assess agreement with the model's per-step judgments.
2. **Hallucination Detection**: We assess whether any reflection steps identified by GPT-4o are hallucinated or contain hallucinations.

We randomly sample 54 predictions (9 questions from each subject) from QvQ-72B. The results are below:

Agreement|Hallucination
-|-
86.1%|0%

2. **Additional instruction for the reflection quality evaluation prompt.** Acknowledging the challenges in identifying valid reflections, we incorporate a specialized prompt design. We identify and list common reflection errors to better guide the validity assessment (detailed in Lines 906-910).

> **Q4&5: Comparison with existing metrics and more examples**

We provide the comparison in Fig. 7 and more examples in Figs. 8-18 in the link above.

[1] Mathverse: Does your multi-modal llm truly see the diagrams in visual math problems?
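As a rough illustration of the step-level precision/recall evaluation discussed in this rebuttal (precision over judged prediction steps, reflecting faithfulness; recall over annotated key steps, reflecting informativeness), here is a hedged sketch; the function name, inputs, and numbers are hypothetical, and in the actual benchmark the per-step judging is performed by GPT-4o against the key-step annotations:

```python
def cot_precision_recall(step_judgments, matched_key_steps, total_key_steps):
    """Simplified step-level CoT quality metrics.

    step_judgments: per-predicted-step booleans (True = judged correct/grounded)
    matched_key_steps: number of annotated key steps the CoT covered
    total_key_steps: number of annotated key steps for the question
    """
    precision = sum(step_judgments) / len(step_judgments)  # faithfulness
    recall = matched_key_steps / total_key_steps           # informativeness
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical prediction: 8 of 10 steps judged correct, 3 of 4 key steps covered
p, r, f1 = cot_precision_recall([True] * 8 + [False] * 2, 3, 4)
```

The F1 aggregation matches the "F1-score (MME-CoT)" quantity referenced in the separation discussion above.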
Beyond Low-rank Decomposition: A Shortcut Approach for Efficient On-Device Learning
Accept (poster)
Summary: The paper proposes a novel shortcut approach—termed ASI (Activation Subspace Iteration)—that aims to improve the efficiency of on-device learning by addressing the activation memory bottleneck during backpropagation. The key idea is to perform a single subspace iteration with a “warm start” for low-rank decomposition of activation maps, accompanied by a rank selection strategy based on an activation perplexity measure. Experiments on compact networks like MCUNet and other small architectures demonstrate substantial reductions in memory usage and FLOPs compared to vanilla training and traditional HOSVD-based methods. Claims And Evidence: Yes Methods And Evaluation Criteria: 1. The paper’s method centers on a single subspace iteration with a warm start for decomposing activation maps, along with a rank selection mechanism guided by activation perplexity. 2. Evaluation criteria include reductions in activation memory, computational FLOPs, and training latency. 3. A concern is that while the approach shows promise on compact models, the paper does not sufficiently clarify its applicability or scalability when deployed on much larger models such as transformers or large language models (LLMs). Theoretical Claims: 1. The authors derive and analyze the computational complexity of ASI compared to HOSVD-based methods, providing equations for memory savings and speedup ratios. 2. Although the theoretical framework is solid, the paper’s writing is dense and could benefit from clearer explanations to enhance reader comprehension. Experimental Designs Or Analyses: 1. Experiments are conducted on several standard benchmarks and on a Raspberry Pi 5 to validate the on-device feasibility. 2. However, the experiments focus on MCUNet and similar small networks, leaving open the critical question of whether the rank selection and decomposition strategy can scale to transformer-based models or LLMs with billions of parameters. 
Supplementary Material: This submission has no supplementary material. Relation To Broader Scientific Literature: 1. The work builds on prior research in low-rank decomposition for weight and activation compression, including methods like HOSVD and LoRA. 2. It contributes by proposing a shortcut approach (ASI) that potentially reduces computational overhead. 3. However, similar techniques have been explored in previous works, and the novelty may be limited if the approach does not generalize beyond small-scale networks. Essential References Not Discussed: The paper might benefit from discussing more recent works on scaling low-rank decomposition techniques to transformer architectures and large language models, as these are critical for assessing the broader impact of the proposed method. Other Strengths And Weaknesses: Strengths: 1. The idea of using a single subspace iteration with a warm start for activation compression is interesting and shows promising efficiency gains on small networks. 2. Experimental results demonstrate significant improvements in memory and computational efficiency on resource-constrained devices. Weaknesses: 1. The abstract and overall writing need improvement; the description of the method is not as clear or detailed as it should be, making the paper hard to read. 2. The applicability and scalability of the approach are not clearly addressed, particularly when considering transformer-based models or large language models. If the rank selection strategy does not scale to LLMs, the impact of the work would be considerably diminished. Other Comments Or Suggestions: 1. Consider revising the abstract to include more specific methodological details about ASI and the rank selection strategy. 2. Improve the overall writing style and structure to enhance clarity and readability. 3. 
It is crucial to discuss the limits of the method’s applicability; a dedicated discussion on whether and how the approach could extend to transformer architectures would greatly strengthen the paper. Questions For Authors: 1. Could you elaborate on how your rank selection strategy might scale to transformer-based models or LLMs with billions of parameters? 2. Have you conducted any preliminary experiments on larger-scale models beyond MCUNet? If so, what were the outcomes; if not, what are your expectations? 3. The paper would benefit from a clearer description in the abstract regarding the methodological details of ASI—could you provide a more detailed outline of the steps involved? 4. Do you foresee any challenges in applying your approach to models with significantly different architectures than those tested in your experiments, and how might you address them? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Question 1: How our rank selection strategy might scale to transformer-based models or LLMs?** Our rank selection strategy is fully applicable to transformer-based models and LLMs with billions of parameters, and it incurs only a **one-time cost**. The main idea of our strategy involves performing a brute-force search over a 2D search space, where one dimension represents the number of layers to fine-tune and the other represents the number of explained variance values considered. Increasing or decreasing either of these factors directly impacts the search time for the optimal solution. The total number of parameters or model architecture does not directly affect our rank selection strategy. As long as backpropagation is required for training, it is still applicable. Furthermore, the choice of rank selection algorithm is not the key point of ASI (*see our response to Weakness 2 of Reviewer dju2*). **Question 2, Weakness 2 and Suggestion 3: Apply ASI to transformer models and LLMs.** We have conducted additional experiments as follows: - **Transformer models**: We applied the same training strategy with ASI implemented on the linear layers within the MLP blocks: - **Swin Transformer for classification tasks**: We used Swin Transformer pretrained on ImageNet-1K from the PyTorch library and evaluated it on five different downstream datasets: CIFAR-10, CIFAR-100, Pets, Flowers, and CUB. - **TinyLlama on BoolQ dataset**: We extended our technique to TinyLlama, a large language model (LLM) with 1.1B parameters, using the BoolQ dataset, which consists of yes/no questions. Since TinyLlama is a very large model, applying $\text{HOSVD}_\varepsilon$ directly would be computationally infeasible given our available resources, making it impossible to construct a budget for comparison. Therefore, instead of using the proposed rank selection strategy, we set a fixed expected rank of **20** for ASI and compared it against vanilla training. 
- **Image segmentation**: We applied the same training policy and downstream datasets as used in *(Nguyen et al., 2024)* and *(Yang et al., 2023)*. These results are available at the following anonymized link: [https://imgur.com/a/qOsfYU5](https://imgur.com/a/qOsfYU5). Overall, ASI gives similar results to those in our submission. It outperforms other methods in both computational complexity and activation memory consumption. We will add these new results to the camera-ready version. Please note that in addition to MCUNet, we also conducted numerous experiments with MobileNetV2, ResNet18, and ResNet34. The results of these experiments are presented in Tables 1 and 2 of our submission. **Question 3, Weakness 1, Suggestion 1 and Suggestion 2: Writing style and how ASI works.** - **Writing style:** Thank you for your feedback. We will carefully consider it and make appropriate revisions. - **How ASI works:** Regarding the steps in our method, as described in Fig. 1: - **Step 1:** Before the training begins, we measure the perplexity $\mathcal{P}$ (defined in Eq. (7)) of the pretrained model for each predefined explained variance by feeding a minibatch of pretrained data. We then save $\mathcal{P}$ to a file. - **Step 2:** Using the saved file, we run a brute-force search algorithm to find the optimal ranks for each fine-tuned layer. The optimal rank is the one that satisfies Eq. (8) and Eq. (9), i.e., the rank that minimizes the total perplexity while ensuring the memory does not exceed the given budget $\mathcal{B}$. - **Step 3:** We use ASI to "compress" the activation map of each fine-tuned layer into a subspace with the rank corresponding to the one found in Step 2. These ranks remain fixed throughout the training process. **Question 4: Challenges in applying ASI to other architectures.** We do not foresee any issues applying ASI to models architecturally different than those we have tested. 
As long as the model requires backpropagation and includes convolutional/linear layers, ASI can still be applied. Would you suggest some architectures where you think ASI might face challenges? We would be happy to hear your thoughts.
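As a hedged illustration of the brute-force rank selection described in Steps 1-2 above, the following sketch searches all per-layer rank combinations for the assignment that minimizes total perplexity within a memory budget; the perplexity table, memory-cost model, and numbers are hypothetical stand-ins for the quantities defined in Eqs. (7)-(9) of the paper, not the authors' implementation:

```python
from itertools import product

def select_ranks(perplexity, memory_cost, budget):
    """Brute-force rank selection over per-layer candidate ranks.

    perplexity[l][r]: precomputed activation perplexity of layer l at rank r
      (a stand-in for Eq. (7), measured once before training).
    memory_cost(ranks): activation memory of a rank assignment (stand-in).
    Returns the assignment minimizing total perplexity within the budget,
    or None if no assignment fits.
    """
    layers = sorted(perplexity)
    best, best_score = None, float("inf")
    for ranks in product(*(sorted(perplexity[l]) for l in layers)):
        if memory_cost(ranks) > budget:
            continue  # violates the memory budget constraint
        score = sum(perplexity[l][r] for l, r in zip(layers, ranks))
        if score < best_score:
            best, best_score = dict(zip(layers, ranks)), score
    return best

# Hypothetical 2-layer example: lower rank -> higher perplexity, lower memory
ppl = {0: {4: 2.0, 8: 1.2}, 1: {4: 3.0, 8: 1.5}}
cost = lambda ranks: sum(ranks)  # toy memory model
best = select_ranks(ppl, cost, budget=12)
```

The exponential search is feasible here precisely because, as noted above, on-device models have few fine-tuned layers and few candidate explained-variance values.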
Summary: The authors focus on the problem of reducing activation memory usage and computational complexity during on-device learning. The authors try to deploy learning tasks on resource-constrained edge devices while maintaining acceptable performance. The evaluation based on the MCUNet model shows the performance on image classification tasks. Claims And Evidence: Yes. The claims are easy to follow and the evidence is clearly supported. Methods And Evaluation Criteria: Yes. The authors use typical on-device learning models and image classification tasks. Theoretical Claims: Yes. The rank selection and backward pass are clearly formulated. Experimental Designs Or Analyses: Yes. The experiments are correctly configured and the insights obtained from the experiments are clearly explained. Supplementary Material: Yes. I have read the appendix in the main submission. Relation To Broader Scientific Literature: This paper is strongly related to the on-device learning design and its deployment. Essential References Not Discussed: No. I think the references are adequately covered. Other Strengths And Weaknesses: The authors propose a rank selection strategy to determine the most suitable ranks for each fine-tuned layer under a given memory budget constraint before training begins. This is a practical idea to reduce activation memory usage and overall training FLOPs. Other Comments Or Suggestions: Overall, I think this paper is interesting and its technical depth is fine in most aspects. Questions For Authors: In experiments, the proposed method reduces overall training FLOPs up to 1.86× compared to vanilla training,. Could you please give more details on how to measure the training FLOPs? Ethical Review Concerns: N/A. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your review, below is the answer to your only question. **Question: How do we calculate training FLOPs?** Currently, we measure training FLOPs based on theoretical calculations, which consist of the sum of the FLOPs required for both the forward and backward passes. The necessary formulas for a convolutional layer (with similar derivations for linear layers) are derived in equations (13)–(17) of our submission: - **Vanilla training:** - Forward FLOPs: Defined in Eq. (17). - Backward FLOPs: Defined in Eq. (16). - **ASI:** - Forward FLOPs: $O_{\text{vanilla}} + O_{\text{ASI}}$, where $O_{\text{ASI}}$ is defined in Eq. (14). - Backward FLOPs: Defined in Eq. (15). - **$\text{HOSVD}_\varepsilon$:** - Forward FLOPs: $O_{\text{vanilla}} + O_{\text{HOSVD}\varepsilon}$, where $O_{\text{HOSVD}\varepsilon}$ is defined in Eq. (13). - Backward FLOPs: Defined in Eq. (15), which is identical to ASI. The key reason ASI achieves lower training FLOPs compared to vanilla training is that its low-rank gradient calculation significantly reduces computational costs. This reduction more than compensates for the minor overhead introduced by mapping activation maps to subspaces during the forward pass (see Fig. 5).
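The exact formulas are Eqs. (13)-(17) of the submission, which are not reproduced here; as a hedged illustration only, the sketch below tallies theoretical training FLOPs for one convolutional layer using the standard textbook estimate (forward MACs x2; backward roughly twice the forward cost, for the input and weight gradients). Function names and values are illustrative, not the paper's implementation:

```python
def conv2d_forward_flops(c_in, c_out, k, h_out, w_out):
    """Forward-pass FLOPs for one conv layer, per sample.

    Standard estimate: one multiply-accumulate (2 FLOPs) per output
    element per input-channel tap; bias and batch size omitted.
    """
    macs = c_out * c_in * k * k * h_out * w_out
    return 2 * macs

def conv2d_training_flops(c_in, c_out, k, h_out, w_out):
    """Forward + backward FLOPs for vanilla training of one conv layer.

    The backward pass computes gradients w.r.t. both the input and the
    weights, each costing roughly one forward pass, hence the factor 3.
    """
    fwd = conv2d_forward_flops(c_in, c_out, k, h_out, w_out)
    return fwd + 2 * fwd  # forward + (input-grad + weight-grad)

total = conv2d_training_flops(c_in=16, c_out=32, k=3, h_out=28, w_out=28)
```

Under this accounting, a method that computes gradients in a low-rank subspace shrinks the dominant backward term, which is how ASI's overall FLOP reduction arises despite its small forward-pass overhead.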
Summary: This paper proposes Activation Subspace Iteration (ASI), a novel technique to address memory bottlenecks in on-device learning. The method compresses activation maps in neural networks using low-rank decomposition strategies. The key innovations include: (1) a perplexity-based rank selection strategy that identifies optimal compression rates for each layer under a given memory budget constraint before training begins, (2) a single subspace iteration with "warm start" to replace traditional HOSVD-based compression methods, and (3) computation of gradients directly in the compressed space. The authors empirically demonstrate that ASI can reduce activation memory usage up to 120.09× compared to vanilla training while reducing training FLOPs up to 1.86×. On a resource-limited device (Raspberry Pi 5), ASI achieves significant speedups compared to alternative methods (91.0× faster than HOSVD, 1.86× faster than gradient filtering). Claims And Evidence: The major claims of the paper are generally well-supported by evidence: 1. The activation memory reduction claim (up to 120.09×) is substantiated through extensive experimentation across multiple datasets (CIFAR-10/100, CUB, Flowers, Pets, ImageNet) and architectures (MCUNet, MobileNetV2, ResNet-18/34). 2. The computational efficiency claim (up to 1.86× reduction in FLOPs) is backed by both theoretical analysis (Section 3.5) and empirical measurements (Section 4.3 and 4.4). 3. The accuracy claim (comparable performance to vanilla training and HOSVD) is well-supported through tables and figures showing performance across multiple settings. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem of on-device learning: 1. The use of real-world datasets (including ImageNet) is appropriate for evaluating model accuracy. 2. The measurement of memory usage and FLOPs provides clear metrics for resource efficiency. 3. 
The on-device measurements on Raspberry Pi 5 provide real-world validation of the approach's practicality. 4. The comparison against baselines (HOSVD and gradient filtering) gives context to understand the relative advantages of ASI. Theoretical Claims: see Strengths And Weaknesses. Experimental Designs Or Analyses: see Strengths And Weaknesses. Supplementary Material: see Strengths And Weaknesses. Relation To Broader Scientific Literature: see Strengths And Weaknesses. Essential References Not Discussed: see Strengths And Weaknesses. Other Strengths And Weaknesses: Strengths: 1. The paper addresses a critical practical challenge (memory bottleneck) in on-device learning with a novel solution. 2. The paper provides a comprehensive evaluation across multiple datasets, model architectures, and settings. 3. The real-world validation on Raspberry Pi 5 demonstrates that the approach is immediately applicable. Weaknesses: 1. Authors should use \citet when the citation is part of a sentence. 2. The rank selection approach relies on a brute-force backtracking algorithm, which the authors acknowledge as a limitation in Appendix C. Are there other possible alternative methods? 3. The paper focuses exclusively on convolutional networks and CV tasks; it's unclear how well ASI would work for other tasks or architectures like transformers. 4. While memory and computation reductions are significant, the accuracy sometimes drops substantially when fine-tuning deeper networks, suggesting scalability limitations. 5. The warm-start strategy assumes that activation maps are stable across training iterations, but this assumption may not hold during early training or when learning rates are high. I am not sure whether GaLore could help deal with this. Other Comments Or Suggestions: see Strengths And Weaknesses. Questions For Authors: see Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Weakness 1: Use of \citet.** Thank you for this note. We will revise it in the camera-ready version. **Weakness 2: Other rank search algorithms besides brute-force?** Yes, there are certainly alternative methods—for example, using dynamic programming, we might reduce the computational complexity from exponential to linear, at the cost of employing more memory. However, in the context of on-device learning, the models used are typically small with a limited number of layers, and we are heavily constrained by memory. As a result, brute-force search remains computationally feasible while ensuring an optimal solution. That said, rank selection itself is not the core contribution of ASI. It would be interesting to explore smarter methods for rank selection, relying on the learning dynamics of deep models (e.g., leveraging something like the information bottleneck); however, this still remains an open research question. We leave further exploration of rank search algorithms for future work. **Weakness 3: Apply ASI to transformer models and LLMs.** We appreciate your feedback. To address this, we expanded our experiments to Swin Transformer, image segmentation, and an LLM task using TinyLlama with 1.1B parameters. *For further details, please refer to Question 2 of Reviewer N3Rd*. **Weakness 4: The problem of accuracy loss.** We acknowledge that accuracy drops due to compression—this is the tradeoff. This phenomenon also occurs with all other related state-of-the-art techniques, including $\text{HOSVD}_\varepsilon$, Gradient Filter, LoRA, and its variants. The key point is that ASI achieves a superior Pareto curve compared to other methods (Fig. 4). **Weakness 5: Stability of activation map and combination with GaLore.** Thank you for suggesting GaLore. Our response is as follows: - *Stability of activation maps:* - ASI is specifically designed for fine-tuning, where large learning rates are typically not used. 
As a result, the input to the activation function does not change drastically across training iterations, i.e., it remains stable within a small number of iterations. - Moreover, *(Virmaux & Scaman, 2018)* indicated that most commonly used activation functions have a Lipschitz constant of 1, meaning that if the input remains stable, the output (activation map) remains stable as well. - *Effect of GaLore:* - Based on our understanding, GaLore does not directly affect the stability of activation maps. - GaLore is a gradient compression technique, while ASI specifically works on activation maps. In principle, both methods could be combined to maximize efficiency. - However, GaLore is particularly useful for Adam, where the optimizer state becomes a concern. In contrast, for on-device learning, which is our primary focus, SGD without momentum is preferable since it eliminates the need to store optimizer state. Consequently, GaLore offers little benefit in this setting. We acknowledge that GaLore is an interesting approach, and leveraging it could lead to promising results; we will add it to the related works section. However, we have not observed how it influences the stability of activation maps in fine-tuning. Could you please clarify your thoughts on this?
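The 1-Lipschitz argument above can also be checked numerically; this small sketch perturbs the input of ReLU and verifies that the output never moves more than the input does (illustrative only; the cited result of Virmaux & Scaman, 2018 concerns general activation functions and networks):

```python
import random

def relu(x):
    return max(0.0, x)

# Empirical check that ReLU is 1-Lipschitz: a small input perturbation
# (a "stable" input across iterations) can only produce an equally
# small, or smaller, change in the activation output.
random.seed(0)
violations = 0
for _ in range(10_000):
    x = random.uniform(-5, 5)
    y = x + random.uniform(-0.01, 0.01)  # perturbed input
    if abs(relu(x) - relu(y)) > abs(x - y) + 1e-12:
        violations += 1
```

With a 1-Lipschitz activation, stability of the pre-activation directly bounds the drift of the activation map that ASI's warm start relies on.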
Adaptive Message Passing: A General Framework to Mitigate Oversmoothing, Oversquashing, and Underreaching
Accept (poster)
Summary: This work addresses the challenge of modeling long-range interactions in deep graph networks, which are often hindered by oversmoothing, oversquashing, and underreaching in message passing. The authors propose a variational inference framework that adaptively adjusts message depth and filters information to mitigate these limitations. The approach is theoretically and empirically validated, demonstrating superior performance on five node and graph prediction datasets. This method enhances the ability of deep graph networks to capture long-range dependencies without explicitly modeling interactions. Claims And Evidence: The content has already been provided in the subsequent subsections. Methods And Evaluation Criteria: The content has already been provided in the subsequent subsections. Theoretical Claims: The content has already been provided in the subsequent subsections. Experimental Designs Or Analyses: The content has already been provided in the subsequent subsections. Supplementary Material: The authors provide code, configurations, and data in the supplementary materials. Relation To Broader Scientific Literature: The content has already been provided in the subsequent subsections. Essential References Not Discussed: The content has already been provided in the subsequent subsections. Other Strengths And Weaknesses: Advantages: 1. The paper is well-written, clear, and easy to read. 2. The experiments are thorough and directly correspond to the target problems introduced, making the findings highly convincing. 3. The paper extends variational methods from neural architecture search to GNNs and provides detailed theoretical proofs. 4. The authors have released the source code, data, and configuration files, ensuring high credibility and reproducibility. Weaknesses: 1. The introduction focuses heavily on the limitations of standard GNNs, but issues like oversmoothing, oversquashing, and underreaching have already been extensively studied. 
Given the comparative experiments provided, what is the paper’s unique motivation in this context? 2. Based on Table 2, AMP's performance is comparable to IPR-MPNN, with the key difference being whether rewiring is used. Has the paper clearly explained the advantage of ‘not requiring rewiring’? 3. Has the impact of AMP’s dynamic depth on efficiency been sufficiently evaluated in the results section? Similarly, how does its efficiency and complexity compare with other methods? For instance, if AMP tends to choose deeper networks for certain tasks, does this lead to a significant drop in overall efficiency? Other Comments Or Suggestions: The content has already been provided in the subsequent subsections. Questions For Authors: 1. Could the authors clarify the rationale behind selecting these specific tasks as benchmark objectives? Given that they differ from the benchmarks used for comparison with baselines like IPR-MPNN, I have concerns about the representativeness and competitiveness of these tasks in evaluating the proposed method. 2. Have the authors considered including domain-specific models designed to handle long-range interactions as additional baselines? While I am not certain whether such methods exist for all tasks in this paper (e.g., peptide property prediction), there has been extensive research in machine learning force fields addressing similar challenges. One relevant example is: [1] Li Y, Wang Y, Huang L, et al. *Long-short-range message-passing: A physics-informed framework to capture non-local interaction for scalable molecular dynamics simulation*. arXiv preprint arXiv:2304.13542, 2023. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing that the paper is well written and the findings are convincing. We will comment below on the questions raised. **Other Strengths And Weaknesses:** 1. While oversmoothing, oversquashing, and underreaching are topics of great interest in the community, which are far from being fully understood or solved, our paper’s unique motivation is that, by combining the ability to learn the depth with the adaptive filtering of messages, we want to determine *i)* the number of required layers and *ii)* which messages should be propagated. In other words, our contribution is unique in that it determines, *during training*, *how much and when* to send information. We hope to have clarified this point. 2. In Appendix D, we provide a critical view on how the oversquashing term is defined and understood by our community. In particular, we argue that if oversquashing refers to “informational bottlenecks”, then additive rewiring may hurt performance if the same number of layers is kept; in contrast, oversquashing defined as sensitivity is always improved by additive rewiring. For this reason, we believe that AMP and rewiring-based approaches are different but equally valid approaches to achieve long-range propagation. We will clarify these aspects in the paper, and we hope the reviewer appreciates our considerations. Thanks to the suggestion of Reviewer ZB4Z, we now have a new Theorem inspired by Di Giovanni et al., 2023 that exposes this inconsistency in the literature! 3. In terms of the impact of depth on efficiency, we refer to the computational complexity analysis written in Reviewer XraS’s response. We will add these considerations to the paper. Overall, the complexity of using a deeper network increases due to i) the higher parametrization, as usual; ii) the need for a layer-wise readout in AMP, as also done in previous works [JK-Net, Xu et al., 2018]. 
In addition, we prepared an ablation study where we try different (fixed) depths for AMP, maintaining the other architectural changes intact. In particular, we allow AMP to learn a normalized importance over the fixed number of layers. Here, we want to see the impact of dynamically learning the depth on effectiveness. The depths we tried depend on the range shown in Figure 4 (so, up to 45 different depths for a single model and dataset), while the other hyper-parameters were fixed to the best ones we found for the task. The table is shown below for the peptides datasets:

|              | Func            | Struct          |
|--------------|-----------------|-----------------|
| AMP_GCN      | 0.7076 (0.0059) | 0.2497 (0.0009) |
| AMP_GINE     | 0.6999 (0.0041) | 0.2481 (0.0014) |
| AMP_GATEDGCN | 0.6750 (0.0029) | 0.2493 (0.0013) |

It appears that learning the importance of fixed-depth network layers does not yield better performance than the fully adaptive AMP, and it potentially requires many more configurations to be tried, which is the main disadvantage of considering the depth as a hyper-parameter rather than a learnable parameter (as we do). **Questions For Authors:** The specific tasks are chosen because there is reason to believe long-range information plays an important role. In other words, all our tasks require the ability to “reason” globally about the graph and have been specifically selected for these reasons. In addition, we used the molecular tasks of the LRGB paper because it is possible to perform a fair evaluation on these tasks, thanks to the robust re-evaluation of Tönshoff et al. IPR-MPNN’s evaluation on additional benchmarks is necessary to empirically validate their increased expressive power compared to 1-WL MPNNs, whereas we believe we already have sufficient empirical evidence that AMP can be beneficial to improve the performance of MPNNs on long-range tasks; AMP always improves the results compared to any of its base versions on all datasets. 
**An important clarification about AMP’s competitiveness:** our goal is *not* to demonstrate that AMP surpasses the state of the art, as AMP **is not a model but a framework** that can be wrapped around most MPNNs. In our paper, we used simple MPNNs for their ease of use, showing the performance gains that AMP grants without changing a single line of code of these convolutional layers. In our view, the correct metric to assess AMP’s effectiveness is the gap in performance compared to the base versions AMP is wrapped around, showing that it helps to better capture long-range dependencies. Finally, thank you for the reference; we will include it in the revised manuscript. We were unable to find such models for the considered datasets, but we should definitely mention architectures specific to molecular tasks in our work, as it is relevant due to the electrostatic interactions necessary to model energy and forces. -------- **Conclusion:** We will report these clarifications and additional statements in the paper. We hope the reviewer appreciates our response and will consider increasing the score. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' reply; I will raise my recommendation to 3.
Summary: This paper introduces Adaptive Message Passing (AMP), a novel approach to enrich GNNs with learnable depth and message filtering distributions. A variational inference framework is adopted to jointly train networks representing these distributions, with both mechanisms being applied to a range of existing GNN baselines as wrapper-style enhancements. The paper also "introduce[s] new families of distributions with specific properties to overcome previous limitations" in order to facilitate the variational framework, which are outlined in the appendix. The main aim of the work is to dynamically learn these distributions such that models overcome oversquashing, under-reaching, and oversmoothing (OUO). To demonstrate the effectiveness of this approach, AMP is tested on 2 datasets that represent tasks with long-range dependencies. An analysis of the depth and filtering distributions shows the distributions dynamically adapt for the task and various baseline GNNs. Claims And Evidence: The paper claims to "provide(s) a general framework for improving the ability of any message passing architecture to capture long-range dependencies", which is empirically shown on the selected datasets. It also claims the framework "mitigates" OUO, which is somewhat supported by the analysis in Figure 3; however, I would say this could be improved with a more consistent colouring scheme for GCN vs AMP_GCN for each task. It might also be more informative to show the standard GCN DE decay for a larger number of layers. It is also unclear why the Dirichlet becomes negative. There is an extensive discussion on OUO in Section App.D with some conjecture, but I think the paper is missing a theoretical analysis in the style of the cited papers (more "suggested" below). One step in this direction is Theorem 3.1, but in my opinion the property shown in Theorem 3.1, "AMP’s ability to propagate a message unchanged from two connected nodes", is potentially trivial and not useful in the learning setting. 
It just shows the existence by construction of a message passing scheme for one graph sample between 2 nodes, essentially choosing the necessary weights to maintain the feature vector within the ball. Methods And Evaluation Criteria: As the paper claims to introduce a framework to overcome OUO on tasks requiring long-distance information propagation, I would say the chosen datasets fit the task. However, this might indicate bias in selecting datasets for which the framework has been designed to perform well. I would like to see experiments performed on classical homophilic and heterophilic node classification tasks (i.e., Cora/Texas) to show the scheme is universal. Theoretical Claims: As above, the proof of Theorem 3.1 seems correct, but the property seems unusable in practice and relies on construction. To demonstrate mitigation of OUO, it would be better to focus on a gradient-based analysis, as I suggest below. Experimental Designs Or Analyses: Experiments and ablations are suitable and executed well, but see my comments above. Supplementary Material: Yes, I went through the proof of Theorem 3.1 and the discussion on OUO. Relation To Broader Scientific Literature: Due to the extensive literature review, the paper does a good job of positioning itself within the current literature w.r.t. adjustable depth, rewiring, and adaptive architectures, pointing out here that the architecture can be adaptively learned during training due to the variational formulation. Essential References Not Discussed: I am satisfied a comprehensive literature review of related works has been performed. I would note some benchmarks in Table 1 are not cited. Other Strengths And Weaknesses: I am borderline accept given the novel approach to overcoming the common problems of OUO and promising results and informative empirical analysis. However, there are two weaknesses I believe, if addressed, could significantly strengthen the paper. 
A proper theoretical analysis of the message passing scheme (as suggested below) and performing experiments on classical homophilic and heterophilic node classification tasks (i.e., Cora/Texas) to show the scheme is universal. It might be impressive to show where the framework learns whether long- or short-range information is relevant or useful. One might assume short/long distance for homo-/heterophily respectively. Other Comments Or Suggestions: I think the paper is missing a proper analysis that the framework overcomes oversmoothing. One suggestion might be to look at the proof of Theorem B.1 in "On Over-Squashing in Message Passing Neural Networks: The Impact of Width, Depth, and Topology" (Di Giovanni '23) and split the message passing matrix, isolating the "relevant" signal, as done in proof 3.1; being able to show the sigmoid gating sharpens the signal on relevant pathways w.r.t. the node sensitivity might be a way to proceed. In addition, it might also be beneficial to replace the standard symmetric degree normalisation of GCN with a "sum of sigmoid"-based normalisation, which would further sharpen the signal. Questions For Authors: It's unclear to me why the MLPs that output the weights over the layer depths and the sigmoid MLP for message filtering can't just be trained using conventional methods? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the detailed review and the valuable suggestions. We will do our best to clarify doubts and address the points of the reviewer. **Claims And Evidence:** *On Figure 3:* We will improve the coloring scheme as suggested, thank you. Also, the reason why we do not show GCN results for more layers is because we report the best configuration on the validation set, but we can mention in the paper that it has been already proven how GCNs oversmooth in the long run. Finally, the Dirichlet Energy (DE) becomes negative simply because the y-axis is in log scale to aid readability, as DE values and sensitivities vary a lot. *Theoretical Analysis:* We cannot thank the reviewer enough for the suggestion! We indeed extended Theorem 3.2 of Di Giovanni ‘23 to our message filtering scheme, and (in short) we arrived at the conclusion that, if $c_F$ is the Lipschitz constant of our filtering function, $k_F \le 1$ is the maximal element-wise value attained by the Filtering function $F$, and $k_h$ is the maximal element-wise value of a node embedding, we arrive at a similar result for (using the notation of Theorem 3.2) $\boldsymbol{S}_{r,a} := c_r\boldsymbol{I} + c_a(c_F*k_h + k_F)\boldsymbol{A}$ Where $\boldsymbol{S}_{r,a}$ controls the upper bound on sensitivity due to the topology of the graph. We are very happy with the Reviewer’s suggestion because this result formally supports what we argued in Appendix D: if we consider for simplicity a constant filtering function, namely $c_F=0$ and we filter enough, meaning $k_F < 1$, then filtering will decrease the sensitivity’s upper bound. However, this is actually helping to reduce the “informational oversquashing” defined in Alon et al., contradicting the widely used statements that “improving sensitivity mitigates oversquashing”. As a result, we argue that the community should probably distinguish the “informational oversquashing” of Alon et al. 
from the “topological oversquashing” of Topping et al. We will add these results to our paper. Thank you so much for the invaluable suggestion! This will help our community reason more clearly about these issues. **Methods And Evaluation Criteria:** Following the reviewer’s suggestion, we perform additional experiments with GCN and AMP_GCN on homophilic and heterophilic datasets. Risk assessment is performed using 10-fold Cross Validation, with an internal hold-out hyper-parameter tuning for both models. For each of the 10 folds, the best configuration on that fold is re-trained 10 times and the test values averaged. The final test score is the average of the test scores across the 10 folds. This procedure is inspired by [*] and results are shown below.

|         | Cora       | Citeseer   | Pubmed     | Texas      | Wisconsin  | Chameleon  | Squirrel   | Actor      |
|---------|------------|------------|------------|------------|------------|------------|------------|------------|
| GCN     | 85.1 (2.0) | 72.3 (1.9) | 87.9 (0.9) | 51.9 (8.1) | 45.9 (4.1) | 43.4 (1.5) | 27.6 (1.1) | 28.5 (0.7) |
| AMP_GCN | 86.5 (1.7) | 75.3 (2.2) | 89.8 (0.5) | 78.6 (7.0) | 81.3 (7.1) | 49.8 (2.2) | 35.2 (1.8) | 34.8 (1.2) |

It appears that AMP allows GCN to improve results especially on heterophilic datasets. Therefore, the results seem to align with the reviewer’s intuition. We will add other common datasets and baselines in the revised version of the paper. Thank you for suggesting these extra experiments. [*] Errica F., Podda M., Bacciu D., Micheli A., A fair comparison of graph neural networks on graph classification. ICLR 2020 **Essential References Not Discussed:** Thanks for pointing out that some references in the Table are not cited; we will amend that. **Remaining Sections:** We hope to have addressed the weaknesses highlighted by the reviewer, namely the theoretical analysis and the extra experiments on node classification tasks on homo-/heterophilic datasets. 
We will provide a complete proof of the extended version of Theorem 3.2 for AMP in the revised paper. **Questions For Authors:** As a matter of fact, the entire architecture is trained using conventional backprop on a prediction loss + some regularization terms. There is no MLP involved in the distribution of the layers’ importance because we simply need to learn its parameters, but note that the variational formulation merely serves to show that AMP arises from a well defined graphical model, grounding our design choices in a principled formulation. The practical implementation is very simple, thanks to the brilliant work of Nazaret and Blei. Please let us know if we can further clarify this point. ---------- **Conclusion:** Thank you again for the constructive comments, especially those related to the theoretical analysis. We hope the Reviewer appreciates our efforts and that this will lead to a score increase. --- Rebuttal Comment 1.1: Comment: Thanks the authors have sufficiently addressed my points and I think the additional theoretical analysis and experiments will improve the paper enough for me to raise my score to a 4. In particular I'm happy they were able to derive a new theorem in the past days which places them well within the curent literature.
Summary: Graph Neural Networks (GNNs) often struggle to capture long-range dependencies in graphs due to challenges such as oversmoothing, oversquashing, and underreaching. In this work, the authors introduce a variational inference framework that allows GNNs to dynamically adapt their depth and selectively filter message passing. The proposed approach is supported by theoretical insights and empirically validated on multiple graph benchmarks. Claims And Evidence: The claims made in the paper are partially supported by experimental evidence. The results presented in Tables 1 and 2, as well as the synthetic tasks, demonstrate performance improvements when the authors’ approach is integrated with popular GNN variants. On the other hand, the method does not surpass the state-of-the-art (as claimed) in Table 2. It's also not clear how underreaching can be mitigated when messages are filtered, which could rather introduce underreaching, as the propagation of information is thereby prevented. Methods And Evaluation Criteria: The proposed probabilistic message passing framework is well-motivated for the graph-processing methodology, with a clear intuitive rationale for its potential to enhance performance. The selection of benchmark datasets and evaluation metrics is appropriate and aligns with standard practices in assessing graph-based methods. Theoretical Claims: The proof of Theorem 3.1 is intuitive and straightforward. Experimental Designs Or Analyses: The experimental setup appears to be well-structured for evaluating the stated claims. However, I could not find a detailed description of the hyperparameter selection process, including dataset splits, search methodology, and search-grid specification for all tasks and methods. Supplementary Material: I reviewed the theory section to understand the proof of Theorem 3.1. Additionally, the other sections of the Appendix seem to be comprehensive. 
Relation To Broader Scientific Literature: The primary contributions of this paper pertain to probabilistic approaches to graph rewiring [1,2] and established GNN methods, which are utilized as baselines for comparison. [1] Qian et al. Probabilistically rewired message-passing neural networks, 2024 [2] Qian et al. Probabilistic graph rewiring via virtual nodes, 2024 Essential References Not Discussed: To enhance the comprehensiveness of the study, it would be beneficial to discuss related approaches or incorporate them as baseline methods for comparison. For instance, [3] introduced Graph-Mamba, an adapted state-space model designed to facilitate long-range information propagation in graph data. Similarly, theoretically grounded sequence-processing frameworks [4,5], leveraging randomized signatures from rough path theory, demonstrated promising potential in alleviating oversquashing effects in large graphs. [3] Wang et al. Graph-Mamba: Towards Long-Range Graph Sequence Modeling with Selective State Spaces, 2024 [4] Toth et al. Capturing Graphs with Hypo-Elliptic Diffusions, 2022 [5] Gruber et al. Processing large-scale graphs with G-Signatures, 2024 Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: 1. At first glance, [1,2] seem to be conceptually very similar to the approach in this paper. Could the authors emphasize the major differences/novelties compared to these approaches? 2. How expensive is the whole probabilistic modeling process? The modeled distribution has to account for layers, parameters and graphs at the same time, how does the computational complexity compare to GNN variants that do not have to model these components? 3. Since a distribution has to be modeled, is there a trade-off in case of scarce high-dimensional data where distribution-modeling is hard? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. Below, we clarify some of the points raised. **Claims And Evidence:** We apologize for the incorrect statement in the abstract about “surpassing” the state of the art. That statement was true in the past, before we revised Table 2. Indeed, we had fixed our claims elsewhere in the paper (lines 80-83, 242-244, 377-379) by talking about “competitive” performances. However, the goal of the analysis is not to show absolute improvements compared to recent works. Rather, it is to show, as the reviewer correctly identified, that wrapping the AMP framework around basic MPNNs grants a performance improvement in capturing long-range dependencies, due to the particular characteristics of AMP that endow MPNNs with the ability to mitigate oversmoothing and oversquashing, as measured in Figure 3, in addition to the adaptive learning of the depth to mitigate underreaching. It is our belief that our work should be judged in terms of these contributions, and not solely in terms of absolute numbers, since, for ease of prototyping, we chose to wrap AMP around classical MPNNs rather than more recent and convoluted ones. In terms of mitigating under-reaching, it is possible that the network finds a degenerate solution during training that corresponds to an unsatisfactory depth, but this is similar to how MLPs can end up in poor local minima when trying to learn functions. The design of AMP enables learning the depth, and we see that it works empirically. Finally, in our results, it seems the method learns more layers than those tried in previous works using grid search. **Experimental Designs Or Analyses:** Thanks for pointing this out; we partly referred to the hyper-parameter ranges and data splits of Gravina et al. and Tönshoff et al. without explicitly mentioning them in the paper. To improve self-containment and reproducibility, we will add this information to Appendix E. 
The search methodology was a standard grid search with early stopping on validation performance. **Relation To Broader Scientific Literature:** We would like to clarify that our approach does not pertain to probabilistic graph rewiring approaches; it is probabilistic, but it does not perform a topological rewiring of the graph. The graph topology remains the same, and messages are partially filtered to mitigate the oversmoothing and oversquashing issues. Concretely, we never add new edges to the input graph: please also refer to our answer to question 1 below. **Essential References Not Discussed:** Thank you for providing these additional references, we agree with the reviewer’s analysis and we will include and discuss them in the revised manuscript. Briefly, in terms of main differences with AMP, [3] relies on a separate sequence model to develop a node selection mechanism whereas we work on the message passing itself; [4] develops a new graph Laplacian that is better suited for long-range propagation; [5] converts a graph into a latent representation that can be passed to downstream classifiers. **Questions For Authors:** 1. As we also clarified earlier, our approach differs significantly from rewiring approaches, including probabilistic ones. In AMP, the probabilistic formulation is used to dynamically learn the depth (following Nazaret and Blei), something which rewiring approaches cannot do, whereas the “filtering of messages” is not to be intended as a variation to the original graph topology, rather as a “node-based” filtering of outgoing messages. The major difference is that AMP alters the computational graph but not the graph topology, whereas typical graph rewiring methods alter the graph topology. In Appendix D, we have discussed this through the lens of oversquashing. 2. Modeling a one-dimensional distribution leads to no particular overhead during training. 
We point the reviewer to our answer to Reviewer XraS for a discussion on computational complexity. In short, the main complexity is caused by the addition of one readout per layer, something which was also done in previous works such as JK-Net (Xu et al, 2018). In terms of training, optimizing the ELBO reduces to performing backpropagation w.r.t. a standard prediction loss, plus one (optional) weight regularization term and another (optional) depth regularization term. The overall asymptotic complexity is therefore not different from other GNNs that appeared in the literature in previous years. --------- **Conclusion:** We hope to have clarified the goal of our contribution and to have addressed the other questions. We would appreciate it if the reviewer could consider increasing the score in light of these considerations and revisions. We remain available should further information be needed. --- Rebuttal Comment 1.1: Comment: I thank the authors for their reply and raise my score to 3.
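To make the distinction between message filtering and graph rewiring concrete, here is a minimal NumPy sketch of the node-based filtering idea described in the rebuttal above. This is not the paper's actual implementation: the parametrization (`w_msg`, `w_gate`) and the element-wise sigmoid gate are illustrative assumptions; the key point it shows is that filtering scales messages per node while the edge set is left untouched.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def filtered_message_passing(h, edges, w_msg, w_gate):
    """One round of message passing where each node filters its
    *outgoing* message with a sigmoid gate; the edge list itself is
    never modified, so the graph topology stays intact."""
    msg = h @ w_msg             # candidate messages, shape (N, d)
    gate = sigmoid(h @ w_gate)  # per-node filter values in (0, 1)
    filtered = gate * msg       # element-wise message filtering
    out = np.zeros_like(h)
    for src, dst in edges:      # aggregate over the ORIGINAL edges only
        out[dst] += filtered[src]
    return out
```

With `w_gate` pushed toward large negative logits, the gate approaches 0 and messages are effectively suppressed without removing any edge, which is the sense in which the computational graph (but not the topology) is altered.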
Summary: The authors propose a general framework to tackle certain long-range interaction problems in GNNs, namely (1) oversmoothing, (2) oversquashing, and (3) underreaching. Their Adaptive Message Passing (AMP) framework extends the work of Nazaret and Blei on unbounded depth networks to the GNN setting. The idea is to use a variational theory to learn the GNN structure, both in terms of GNN depth as well as message passing (message filtering). This allows for a finer control on the flow of messages across the graph. Claims And Evidence: Yes, the claims are supported by clear and convincing evidence. For oversmoothing/oversquashing, the authors use standard tools such as Dirichlet energy to examine the quality of the embeddings. They also provide ablation studies to examine the effects of message filtering. Methods And Evaluation Criteria: Yes, the proposed methods and criteria are overall justified. The number of real-world datasets considered could be larger, since the paper targets a very broad array of GNN limitations. Theoretical Claims: I did not check all theoretical details: some of the background needed is outside my knowledge scope. I am also not sure how Theorem 3.1 supports the overall theory: What does it even mean to propagate a message unchanged from two connected nodes in a graph? Why is that important? And does this really show that a GNN can learn "asynchronously", as alluded to by the citation [FW]? Is this case observed experimentally? Experimental Designs Or Analyses: Yes, the experimental protocols are fairly thorough, in line with recent hyperparameter considerations for long-range benchmarks [Tönshoff et al.] Supplementary Material: Yes. Relation To Broader Scientific Literature: The paper forms an important contribution in the current research to mitigate oversmoothing/oversquashing/underreaching. The paper brings in novel tools from learning unbounded depth NNs to address these problems simultaneously. 
Essential References Not Discussed: No, as far as I know. Other Strengths And Weaknesses: Strengths 1) The paper employs a very well-motivated theory-driven approach to tackle the target problems. The proposed solution is novel and effective, with potential for further research. Weaknesses 1) One key advantage of blanket message passing is scalability. The authors have not commented clearly on the exact computational costs of setting up AMP on top of a GNN. 2) The number of real-world datasets considered is fairly small, compared to the usual GNN literature. Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the merits of our contribution and for providing constructive criticism. Below we comment on some of the points raised by the reviewer. **Methods And Evaluation Criteria:** Following Reviewer ZB4Z’s suggestion, we will include node classification tasks related to homophily and heterophily and increase the number of real-world datasets considered. Please see our response to Reviewer ZB4Z for a first batch of results. **Theoretical Claims:** AMP focuses on long-range interaction problems, where far-away nodes may need to communicate effectively (for instance, without oversquashing arising). Theorem 3.1 shows that, for a proper parametrization, it is possible to realize such long-range communication between two specific nodes without oversquashing behavior, which would not be possible in classical (synchronous) message-passing neural networks. This is meant to support the design choices made for AMP, although we do not want to claim that AMP is always able to implement such behavior in practice (we do not observe it empirically). At the same time, such behavior is reminiscent of the asynchronous updates of [FW]. We will revise the paper to reflect these considerations, and we hope that this clarifies the reviewer’s doubts. **Other Strengths And Weaknesses:** Thank you for the kind words about our work. We provide an answer for each potential improvement below. - It is true that we did not mention additional costs in detail; thank you for pointing this out. The cost of filtering messages is $O(N)$, with $N$ nodes (lines 184-185). Therefore, the message passing operation is not altered significantly, since it has a cost of $O(N+E)$, with $E$ edges. However, the additional burden introduced by AMP, compared to classical MPNNs, is the layer-wise readout (lines 344-345) that we implemented as an MLP. 
Classical MPNNs employ a single readout with cost $O(N)$ or $O(1)$ depending on the task nature, whereas we use one per layer, so we have $O(NL)$ and $O(L)$ respectively. Note that other popular MPNN architectures, such as JK-Net (Xu et al., 2018), employ a similar scheme by means of concatenating the node embeddings across all layers. In terms of training costs, standard backpropagation with at most two light-weight additional regularizers is employed. We will make this clear in the revised version of the paper. - Please refer to our additional experiments in Reviewer ZB4Z’s response, as mentioned before. -------- **Conclusion:** Thank you once more for the constructive feedback! We hope our clarifications and additional experiments may constitute ground for a score improvement. We remain available for further discussions.
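The layer-wise readout cost discussed above can be sketched concretely. The following is a minimal NumPy illustration, not the paper's implementation: the function name, the mean pooling, and the use of a softmax over depth logits are assumptions made for the example; it only shows why one readout per layer gives $O(NL)$ rather than $O(N)$ cost, with the learned depth distribution weighting the per-layer readouts.

```python
import numpy as np

def depth_weighted_readout(layer_embeddings, depth_logits):
    """Combine one mean-pooled readout per layer, weighted by a
    (softmax-normalized) importance over depths: O(N * L) readout cost
    for N nodes and L layers, versus O(N) for a single final readout."""
    weights = np.exp(depth_logits - depth_logits.max())
    weights /= weights.sum()                    # distribution over layers
    readouts = np.stack([e.mean(axis=0) for e in layer_embeddings])  # (L, d)
    return weights @ readouts                   # (d,) graph-level embedding
```

If the depth distribution concentrates on a single layer, the combination degenerates to an ordinary single-layer readout, which matches the intuition that the learned depth can recover a fixed-depth network as a special case.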
Overtrained Language Models Are Harder to Fine-Tune
Accept (poster)
Summary: This paper investigates the phenomenon of "catastrophic overtraining" in language models, where models trained on significantly more tokens than compute-optimal regimes exhibit degraded performance after fine-tuning, despite showing continued improvement in pre-training loss. The authors present both empirical and theoretical evidence to support this claim. ## update after rebuttal Satisfied with responses to questions: Rating remains "4: Accept" Claims And Evidence: * Claim: Overtrained language models are harder to fine-tune. + This is well supported by experiments * Claim: Catastrophic overtraining exists, characterized by thresholds $T_{ft}$ and $T_{pre}$ beyond which fine-tuning and pre-training performance, respectively, degrade. + $T_{ft}$ well supported by experiments, $T_{pre}$ less so. * Claim: Theoretical characterization in linear models supports the empirical findings. + Given the simplifications involved in the linear models, the theoretical results supply intuitions about the mechanism, rather than directly supporting the observed results Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate, comprehensive, and designed to rigorously investigate the $T_{ft}$ research question. The evaluations for the $T_{pre}$ element are weaker. Theoretical Claims: While I have not rigorously checked the proofs line-by-line, the informal statements and the overall theoretical approach seem reasonable. The theoretical framework provides a valuable conceptual understanding of the overtraining phenomenon. Experimental Designs Or Analyses: * By varying pre-training token budgets and keeping model size constant, the authors effectively isolate the variable of interest (overtraining). Using intermediate checkpoints of pre-trained models and training models from scratch for controlled experiments further strengthens the design. 
* Evaluating on both downstream task performance and generalist performance provides a holistic picture of the impact of overtraining. Using multiple diverse benchmarks strengthens the generalizability of the findings. Supplementary Material: I reviewed the supplementary material on a cursory basis, they seem to provide necessary details to understand and potentially reproduce the experiments and theoretical analysis. Relation To Broader Scientific Literature: * Scaling Laws: The paper investigates a regime that deviates from compute-optimal scaling. * Overtraining and Generalization: The paper shows that overtraining in pre-training can negatively impact transfer learning ability, which is a novel and significant finding in the current paradigm of large-scale pre-training. Essential References Not Discussed: N/a Other Strengths And Weaknesses: * The paper addresses a highly relevant and timely question in the field of large language models. The finding that overtraining can degrade fine-tuning performance is novel and counter-intuitive, challenging conventional wisdom. * The claim for $L_{pre}$ (base model capabilities) is a bigger and less definitively backed claim than the claim for $L_{ft}$ (fine-tuning performance). Other Comments Or Suggestions: N/a - and I usually find typos... Questions For Authors: 1. Mitigation strategies (Section 6 and future work): Beyond regularization, are there other potential techniques that could mitigate catastrophic overtraining and improve the fine-tuning adaptability of overtrained models? For example, could techniques like parameter resetting, learning rate scheduling adjustments during fine-tuning, or architectural modifications be explored? 2. In the "optimal learning rate trend" analysis (Section 3.4, Figure 5): You categorize datasets based on whether the optimal learning rate is constant, slowly decreasing, or quickly decreasing. 
Could you provide further intuition or hypotheses about why certain datasets exhibit each trend? What characteristics of the datasets or fine-tuning tasks might determine the optimal learning rate trend and its relation to overtraining degradation? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed feedback! We are happy to hear your positive comments such as that our methodology is “appropriate, comprehensive, and designed [...] rigorously”, that our paper addresses a problem that is “highly relevant and timely”, and that our results are “novel and counter-intuitive, challenging conventional wisdom”. > Given the simplifications involved in the linear models, the theoretical results supply intuitions about the mechanism, rather than directly supporting the observed results We aren’t aware of any theoretical tools that would enable us to study the dynamics of LLMs without at least some degree of simplification such as the ones we describe. We fully acknowledge that there are limitations that arise from these simplifications, but we hope our analysis is a starting point to understand this phenomenon that may lead to principled mitigation strategies in the future. Additionally, since catastrophic overtraining is so counterintuitive (to us), seeing it emerge from incremental learning in a simpler setting reassures us it's not just an artifact of our LLM setup and offers intuition for a possible mechanism. > The claim for $L_{\mathrm{pre}}$ (base model capabilities) is a bigger and less definitively backed claim than the claim for $L_{\mathrm{ft}}$ (fine-tuning performance). We want to clarify our claim regarding how extending pre-training can reduce the base model capabilities: Our core finding is that there are some important settings where additional pre-training hurts the fine-tuned model’s capabilities, despite helping the base model’s performance. We don’t intend to claim that all settings necessarily exhibit catastrophic overtraining after hyperparameter tuning. In Figure 2, we find that the fine-tuned model’s “general capabilities” (blue lines) are degraded for PIQA, ARC Challenge, and ARC Easy when instruction tuning, and PIQA and ARC Challenge when multimodal tuning. 
In Figure 3, we find that the fine-tuned model’s “general capabilities” (bottom) degrade for GSM8k, Starcoder-Python, MR, and RTE. Hopefully this clarifies our evidence that the general capabilities can degrade with overtraining. > Beyond regularization, are there other potential techniques that could mitigate catastrophic overtraining and improve the fine-tuning adaptability of overtrained models? For example, could techniques like parameter resetting, learning rate scheduling adjustments during fine-tuning, or architectural modifications be explored? Great question! We do explore various learning rate schedules during fine-tuning (constant, constant + warmup, cosine schedules), as detailed in Appendix C.2, and found catastrophic overtraining persisted in each case. Exploring additional mitigation techniques (e.g., parameter resets, architectural modifications) would indeed be valuable future work. > In the "optimal learning rate trend" analysis (Section 3.4, Figure 5): You categorize datasets based on whether the optimal learning rate is constant, slowly decreasing, or quickly decreasing. Could you provide further intuition or hypotheses about why certain datasets exhibit each trend? What characteristics of the datasets or fine-tuning tasks might determine the optimal learning rate trend and its relation to overtraining degradation? Great question! While empirical datasets are difficult to analyze precisely, our theoretical results offer helpful intuition: Theorem 4.3 shows that extending pre-training increases the sensitivity (or forgetting) of pre-trained features during fine-tuning. This sensitivity is larger for tasks whose feature distributions significantly differ from pre-training (i.e., larger eigenvalue differences)—fine-tuning tasks which are much different from the pre-training task are more prone to catastrophic overtraining. In our revised version of the paper, we will clarify this with an appropriately updated theorem statement. 
Our intuition is that this theoretical insight translates roughly to practice. Empirically, tasks very dissimilar to pre-training likely require larger changes in model features, and larger learning rates naturally induce larger weight changes (even controlling for steps). We’ve empirically verified that larger learning rates lead to larger weight changes and will add a figure illustrating it clearly. Thus, tasks needing significant feature adaptation typically benefit from larger learning rates and, as a result, exhibit more pronounced catastrophic overtraining. --- Rebuttal Comment 1.1: Comment: Thanks for the answers to my questions. My rating remains "4: Accept"
Summary: This work challenges the widely held belief in the field that scaling pre-training robustly improves LM performance. The authors find that increasing token budgets during pre-training can actually lead to suboptimal performance on downstream fine-tuned tasks. They leverage popular open-source models and datasets to provide empirical evidence supporting this claim. The authors conjecture that the suboptimal fine-tuning performance can be attributed to overly complex features of the pre-training distribution, which are learned toward the end of training. Drawing insights from the transfer learning literature, they provide theoretical evidence for this hypothesis using small models. Taken together, this work sheds light on how design decisions made during pre-training can have ripple effects on post-training outcomes. Claims And Evidence: The authors provide compelling empirical evidence supporting their claims. In particular, the studies involving intermediate OLMo checkpoints and newly trained models from scratch are especially persuasive. Methods And Evaluation Criteria: The methods and evaluation criteria suit the research questions. Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: NA Relation To Broader Scientific Literature: This paper contributes to the literature on LM pre-training dynamics. There is growing interest in better understanding how decisions made during pre-training affect final model behavior after post-training. Through the lens of fine-tuning performance, this work suggests that overtrained base models may become more difficult to fine-tune for downstream tasks. This paper relates to the literature examining the degree to which post-training robustly modifies LM behavior [1]. The authors' contributions suggest that the robustness of post-training behavior may be improved by mitigating overtraining. [1] - Qi, X., Panda, A., Lyu, K., Ma, X., Roy, S., Beirami, A., Mittal, P., & Henderson, P. (2024). 
Safety Alignment Should Be Made More Than Just a Few Tokens Deep. ArXiv, abs/2406.05946. Essential References Not Discussed: I am not aware of any essential references that have been excluded from this work. Other Strengths And Weaknesses: NA Other Comments Or Suggestions: During pre-training, it may be difficult to identify precisely when overtraining has occurred. One approach practitioners might adopt is applying post-training to intermediate checkpoints, similarly to the OLMo experiments. However, repeated post-training runs might be infeasible or prohibitively expensive. A helpful step the authors could take to operationalize their findings would be to suggest heuristics based on model weights or activations. These heuristics could help practitioners predict whether training should halt due to overtraining, without needing to rely on repeated post-training experiments. In Section 6, the authors discuss their rationale for not conducting any staged pre-training (also known as annealing or mid-training) [1, 2]. It would be valuable to further expand on this point in the discussion or include additional experiments in the appendix that explore scenarios involving additional domain-oriented pre-training. It is possible that concluding pre-training with domain-specific data might mitigate much of the fine-tuning performance penalty associated with overtraining. [1] - OLMo, Team et al. “2 OLMo 2 Furious.” ArXiv abs/2501.00656 (2024): n. pag. [2] - Dubey, Abhimanyu et al. “The Llama 3 Herd of Models.” ArXiv abs/2407.21783 (2024): n. pag. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed feedback! We are happy to hear your positive comments that there is “compelling empirical evidence”, that “the studies involving intermediate OLMo checkpoints and newly trained models from scratch are especially persuasive”. > During pre-training, it may be difficult to identify precisely when overtraining has occurred. [...] A helpful step the authors could take to operationalize their findings would be to suggest heuristics based on model weights or activations. Great suggestion! Developing heuristics based on model weights or activations is indeed a valuable direction for future work. Another promising avenue would be creating scaling laws specifically for catastrophic overtraining. With accurate scaling laws, practitioners could significantly reduce computational overhead by performing fewer post-training experiments to identify optimal training durations. > In Section 6, the authors discuss their rationale for not conducting any staged pre-training (also known as annealing or mid-training) [1, 2]. It would be valuable to further expand on this point in the discussion or include additional experiments in the appendix that explore scenarios involving additional domain-oriented pre-training. It is possible that concluding pre-training with domain-specific data might mitigate much of the fine-tuning performance penalty associated with overtraining. This is a great point. In our current experiments, our smaller-scale models are pre-trained on the C4 dataset (which includes some code), and we demonstrate catastrophic overtraining on Starcoder-Python fine-tuning (Figures 2, 3). This suggests domain-oriented data doesn't completely eliminate the problem. However, systematically understanding how the choice of pre-training distribution affects catastrophic overtraining remains important future work. 
Still, we believe adding domain-specific data cannot universally solve this issue: it’s practically impossible to include data from all potential fine-tuning domains. For example, catastrophic overtraining even arises in multimodal fine-tuning scenarios, where adding image data during pre-training (to a text-only model) simply isn't possible.
Summary: The authors study a phenomenon they observe where the more overtrained a pretrained language model is (as a function of pretraining tokens per parameter, w.r.t. training-compute optimal amounts), the more difficult it is to fine-tune the model. The study is motivated by an initial example for OLMo-1B, where the authors tune the SFT LR to optimize quality for a primary task (e.g. AlpacaEval) and use this LR to SFT other tasks (e.g. ARC, BoolQ). When doing this, they see that the strongest performance on all tasks does not come from the final overtrained checkpoint, but instead comes from earlier checkpoints, calling into question the value of overtraining in the SFT regime. The authors recreate this behavior in smaller-scale controlled settings (up to a 90M-parameter model, 128B tokens) and observe similar behavior, across a large sweep of learning rates, schedules, and batch sizes. Finally, the authors provide a theoretical explanation for the phenomenon proved for a 2-layer linear network. Claims And Evidence: The claims in the paper are clear. This paper makes a relatively bold and counter-intuitive claim: that pretraining more can make SFT harder (or in some cases hurt SFT), and the paper provides a wealth of empirical results that provide significant insights into the interaction between overtraining and SFT. The findings here are interesting and would be of interest to the community. A criticism of the work might be that the authors' claim is too broad and underspecified, as to what "harder" means. I believe that while there's significant insight in this paper, the framing of the work suggests that overtraining might be bad as it can hurt post-training -- but this broad claim is very difficult to exhaustively verify. It is also unclear to what level practitioners already expect to have to tune basic SFT hyperparameters to maximize performance given a fixed pretrained model. Whether this is "harder" than what's expected is ill-defined. 
This paper would in general benefit from a more straightforward framing of the results. An example: In Figure 1, we see that the optimal LR was selected based on AlpacaEval -- which truly benefits from less pretraining under the experimental settings of this paper, at OLMo-1B scale. However, in Figure 7 (Appendix) we get to see the full LR sweep of each task. If the authors decided to pick a different reference task to optimize LR over, the model would have looked significantly more predictable to fine-tune with more pretraining tokens. The choice to optimize AlpacaEval is, on one hand, motivated as general, but ultimately a bit arbitrary -- especially as it uses an imperfect autorater (and maybe one not tuned well for 1B model outputs). Similarly, in Figure 9, we see that this difficult-to-SFT behavior doesn't quite exist for LLM360-7B as the model is less overtrained. However, we don't quite see the same AlpacaEval-based result for OLMo-7B to prove this hypothesis. Is it because of the overtraining, or does this phenomenon improve as we scale to 7B, for these particular datasets? Finally, I particularly take issue with how the authors sometimes conflate SFT for instruction tuning, SFT-until-convergence over a single task, and general "post-training" -- especially as the intention and training dynamics for these vary widely (post-training typically also includes RL these days). Methods And Evaluation Criteria: The methods and evaluation are insightful as we get to see the full behavior of the model and how pretraining and finetuning interact over a wide set of hyperparameters. In some sense, given these empirical results alone -- the reader could come up with their own valuable conclusions as to what the relationship between overtraining and SFT is. Some of the methodology is contentious: Figure 1 selects a checkpoint based on IT performance, with the expectation that it generalizes to SFT-until-convergence settings. 
It's unclear how valid this is, as the intent of the two types of SFT is quite different (IT tries to train to generalize over many topics, SFT on a single task tries to maximize just that task -- in both cases it is typical to assume that they would need different LR / hypers). In Figure 3, the controlled experiment fixes this issue by focusing on single-task settings. However, the "harder to SFT" claim is still hard to verify: for most of the tasks, there does indeed exist an optimal hyper-parameter that maximizes performance, and it is predictable with more pretraining. There are a few where this is not true, and this is interesting -- however it is unclear if this implies that overtraining makes the model harder to SFT in general. Figure 4 basically shows this, and if there's a widely accepted optimal learning rate per task, I'm not sure how much "harder" this is. This seems quite reasonable and standard. Theoretical Claims: I followed the theoretical argument, but did not check the correctness of the proofs. Experimental Designs Or Analyses: Yes, see "Claims And Evidence" and "Methods And Evaluation Criteria" for particular cases I have issues with. Aside from these, the authors do indeed do a very thorough job of providing exhaustive empirical results and provide nice analysis of the results. My issues with this work mostly lay in the claims made about the results, given the experimental settings, rather than the results themselves. Supplementary Material: I reviewed all the empirical results in the supplementary material. Most of the results are there, including very important methodology notes that I would urge the authors to put into the main text (e.g., how checkpoint selection is done can have wide effects on SFT). Relation To Broader Scientific Literature: The authors provide thorough connections to related work. 
So far, no previous work has found that downstream performance could be harmed by overtraining, although there have been works that show that pretraining evaluations can improve with overtraining; in particular, Gadre et al. (2024) show that a scaling relationship does exist between overtraining and downstream performance, but this is more aggregated across a larger set of models and tasks. Essential References Not Discussed: A relevant work might be https://openreview.net/pdf?id=vPOMTkmSiu. Other Strengths And Weaknesses: Other strengths: - The paper is well written and free of mistakes. It is easy to follow and read, although including more pertinent information from the Appendix in the main text would improve the writing and let the reader assess the claims more easily. Other weaknesses: - The theoretical argument is quite limited (two-layer linear model), and it is unclear what value it adds to the work. The space could be better used to provide more detailed discussion of the results, methodology, and relationship to scaling model size. Other Comments Or Suggestions: N/A Questions For Authors: How would you define what it means to be harder to fine-tune, and what is the main evidence in your paper that supports this specifically? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed feedback! We are happy to hear your positive comments that there is “significant insight in this paper”, that it “would be of interest to the community”, and that we include a “wealth of empirical results”. We were also glad to read: > My issues with this work mostly lay in the claims made about the results [...] rather than the results themselves. To address this, we’ve revised our framing to clarify our claims. These changes will be reflected in the final version. Summary of changes: To address your main concerns, we will: (1) rewrite to clarify the definition of “harder” and the scope of our findings; (2) further highlight how checkpoint selection (tuning) affects results; (3) make specific changes based on the comments below. > How would you define what it means to be harder to fine-tune, and what is the main evidence in your paper that supports this specifically? We will clarify in the updated manuscript—by "harder" we mean specifically that extending the pre-training phase, and then fine-tuning, can lead to worse performance in-distribution (fine-tuning task) and also out-of-distribution (unrelated tasks) compared to a shorter pre-training stage. Our evidence:
* After instruction tuning OLMo-1B on Anthropic-HH, models trained on 3T tokens underperform those trained on 2.3T on AlpacaEval (in-distribution) and ARC/PIQA (out-of-distribution) — see Figure 1.
* After multimodal tuning, the 3T-token model also performs worse on ARC and PIQA (Figure 1).
* After instruction tuning on TULU, the 3T-token model scores lower on AlpacaEval (in-distribution) and on multiple OOD benchmarks (ARC, PIQA, Winogrande, HellaSwag, OpenBookQA) — see Appendix Figure 7 (right).
* For OLMo-30M, models trained beyond ~32B tokens show degraded in-distribution performance on several fine-tuning tasks (RTE, TREC) and degraded C4 web-data performance when fine-tuning with GSM8K, Starcoder-Python, MR, RTE — see Figure 2. 
Apologies for the confusion, we will clear this up in our revision. > This broad claim [overtraining => harder to fine-tune] is very difficult to exhaustively verify Thanks for bringing this up—in our revised version we will properly clarify the scope of our paper. To summarize: Our core finding is that there are some important settings where additional pre-training hurts fine-tuned performance, despite helping the base model performance (Figures 1 and 2). We believe that finding *any* example of this phenomenon is surprising. We do not claim that *all* settings exhibit this degradation with overtraining. > If the authors decided to pick a different reference task to optimize LR over [other than AlpacaEval], the model would have looked significantly more predictable to fine-tune with more pretraining tokens. Agreed! A key contribution of our work is precisely to analyze how the optimal learning rate chosen for one task impacts observed degradation on others — this is the focus of Section 3.4. In this section, we argue that we specifically observe performance (ID or OOD) degradation when optimizing for a task that requires a larger learning rate for good performance — empirically, this is the case for AlpacaEval, and many other evaluation settings (see response below). > The choice to optimize AlpacaEval, on one hand is motivated as general, but ultimately a bit arbitrary. For instruction tuning, we pick AlpacaEval because it is a standard benchmark to measure response quality—improving this is our goal during fine-tuning. 
We agree that any single metric could seem arbitrary, but we also evaluate on a collection of other evaluation metrics: We consider 12 total evaluations across various settings:
* AlpacaEval (for instruction-tuned models)
* VLM score (average over 5 VLM benchmarks; for multimodal tuned models)
* 10 individual fine-tuning datasets where we optimize for ID performance: SUBJ, BoolQ, MR, CR, RTE, TREC, Tweet sentiment, GSM8k, SIQA, Starcoder-Python (Figure 2, plus more in Appendix C.2)

We believe that, collectively, these experiments robustly illustrate the occurrence of degradation across many practical fine-tuning scenarios. > This difficult to SFT behavior doesn't quite exist for LLM360-7B [and] OLMo-7B [...]. Is it because of the overtraining? Great question! This difference is likely explained by how many tokens per parameter each model has seen:
* OLMo-1B (3T tokens ≈ 3000 tokens per param) experiences degradation by ~2.3T tokens.
* LLM360-7B (1.3T tokens ≈ 185 tokens per param) and OLMo-7B (4T tokens ≈ 571 tokens per param) have seen significantly fewer tokens per parameter relative to OLMo-1B.

If degradation scales linearly with model size, we would only expect degradation for a 7B model after ~16T tokens (7B × 2300 tokens per param). Thus, current 7B models simply haven't been trained long enough to reach that regime yet. --- Rebuttal Comment 1.1: Comment: Thank you authors for the clarifications and openness in adjusting the framing and claims of the work. I believe if incorporated your proposed changes will strengthen the work and provide more clarity to the reader. I have increased my rating accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for the quick update! Sorry to follow up, we just wanted to check what improvements could turn this into a "4: Accept" recommendation? We would love to get feedback to improve our work going forward! 
So far it looks like the review indicated that we propose a "relatively bold and counter-intuitive claim" that is justified by a "wealth of empirical results", “provide[s] significant insights”, and is “of interest to the community”, and the main weakness was the scoping of our claims, which we believe we have resolved—is there anything else we can improve along this front? ------ Many of your questions revolved around the high level question “when does catastrophic overtraining occur?"—also, you were wondering how our theory fits into our story. We’ve updated our theory section to answer a question that is challenging to answer precisely empirically: which attributes of the fine-tuning (and pre-training) data determine whether or not we observe catastrophic overtraining, and its severity? In particular, we’ve updated our theorem statements to make precise exact conditions on the pre-training and fine-tuning distributions for which catastrophic overtraining occurs. We show that degradation from catastrophic overtraining begins to occur earlier in pre-training, and also to a greater extent, when the eigenvalues of the pre-training and fine-tuning tasks differ more substantially. Our intuition is that this theoretical insight translates roughly to practice. Empirically, tasks very dissimilar to the pre-training distribution likely require larger changes in model features, and therefore will exhibit faster degradation with overtraining. (This is based on comments from Reviewer YWVH, so please refer to our conversation there as well.) ------ Again, we really appreciate you taking the time to write a thoughtful review—our paper has already benefited substantially from it—so we apologize for bothering you about this.
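The tokens-per-parameter reasoning in the rebuttal above can be sketched as a quick back-of-envelope calculation. This is a minimal illustrative script; the token and parameter counts are those quoted in the rebuttal, and the linear-scaling extrapolation is the rebuttal's own hypothesis, not an established result:

```python
# Back-of-envelope arithmetic from the rebuttal: tokens-per-parameter
# ratios for the models discussed, and the extrapolated pre-training
# budget at which a 7B model would reach the regime where OLMo-1B
# begins to show degradation (~2300 tokens per parameter).

def tokens_per_param(tokens: float, params: float) -> float:
    """Pre-training tokens seen per model parameter."""
    return tokens / params

# Figures quoted in the rebuttal (pre-training tokens, parameter count).
olmo_1b = tokens_per_param(3e12, 1e9)      # ~3000 tokens/param
llm360_7b = tokens_per_param(1.3e12, 7e9)  # ~185 tokens/param
olmo_7b = tokens_per_param(4e12, 7e9)      # ~571 tokens/param

# Degradation threshold observed for OLMo-1B: ~2.3T tokens, i.e.
# ~2300 tokens/param. If the threshold scales linearly with model
# size, a 7B model would only hit it after:
threshold_7b = 7e9 * 2300                  # ~1.61e13 tokens (~16T)
```

Under this (hypothesized) linear scaling, current 7B checkpoints sit well below the extrapolated threshold, consistent with the rebuttal's explanation of why LLM360-7B and OLMo-7B show little degradation.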
Summary: The paper demonstrates how overtraining language models (training on more than the compute-optimal number of tokens) affects their ability to be fine-tuned on new data. For example, the authors perform experiments on OLMo-1B models and show that models pretrained on 3T tokens performed 3% worse on AlpacaEval and 2% worse on ARC reasoning after instruction fine-tuning compared to models trained on 2.3T tokens. Similar degradation is observed over several other evals (PIQA, BoolQ, Winogrande) and fine-tuning setups (instruction tuning on TULU and multimodal fine-tuning). They perform further controlled experiments on 15M-90M parameter models and provide a theoretical linear model to characterize "catastrophic overtraining," where fine-tuning performance first improves and then degrades with excessive pre-training. Claims And Evidence: - Experiments on 1B-7B parameter models and two fine-tuning tasks (Anthropic-HH SL on instruction-tuning data, multimodal fine-tuning) support the main finding that overtrained language models can perform worse after fine-tuning. - The authors provide a theoretical model that helps explain the observed phenomenon in a simplified setting. - The paper investigates how the learning rate during fine-tuning affects the degradation, showing that different learning rates are optimal for models with different pre-training durations. - To eliminate the confounder of the learning rate schedule being at different points for the different OLMo model checkpoints, they perform controlled experiments and train models from scratch for various pre-training budgets with a cosine annealing schedule. However, while the authors describe "catastrophic overtraining" as universal, the evidence is limited to only two fine-tuning setups. Both fail to produce consistent improvements in the evals measured compared to the base model, which could indicate issues in the choices of fine-tuning data mix. 
Methods And Evaluation Criteria: Overall the methods and benchmark datasets used make sense. However, the use of perplexity in the small pretraining experiments might be misleading (i.e. the results in Figure 2). For downstream tasks like GSM8K, we don't care about perplexity on the answer but rather accuracy. Because it may be hard to get non-trivial accuracy in small models, you could look at average per-token prediction accuracy instead of loss/perplexity. This would help decouple token prediction calibration/effective temperature from downstream task performance. Theoretical Claims: The theoretical model described in section 4 looks sound. However, the simplification makes a number of assumptions that may not hold for LLMs (e.g. linearity, assuming the pretraining and fine-tuning datasets share the same singular vectors, not accounting for discrete gradient descent algorithm). Experimental Designs Or Analyses: The experimental design is generally sound although I have some concern about the choices of fine-tuning datasets as mentioned in the other sections. Supplementary Material: The supplementary material provides the details of the proof in section 4 and provides additional experiments on LLMs and other instruction tuning datasets. I did not review this in detail. Relation To Broader Scientific Literature: The paper connects to: - Research on pretraining scaling laws (Hoffmann et al., 2022; Kaplan et al., 2020) that established optimal token-per-parameter ratios - Research on model plasticity, particularly studies in reinforcement learning that showed models can become less adaptable after extensive training (Kumar et al., 2020; Lyle et al., 2022) and work on feature rank and inactivity as mechanisms for reduced adaptability (Gulcehre et al., 2022; Dohare et al., 2021) Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: 1. I am confused by the results in Figure 1. 
It looks like for most evaluations, fine-tuning hurts instead of helps performance on the eval measured (solid line is below dotted line). Possibly the fine-tuning datasets used are too narrow and would benefit from additional regularization (e.g. by mixing in a wider range of diverse examples from other distributions). Do you have empirical evidence showing that your results generalize to settings where the fine-tuning datamix robustly improves performance on the target evals beyond the capability of the pretrained model? 2. For your perplexity-based experiments, do your findings still hold if you measure token prediction accuracy instead of perplexity? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed feedback! We are happy to hear your positive comments that our experiments “support the main finding” and that our theory "helps explain the observed phenomenon”. Summary of changes: To address your main concerns: (1) we clarify that our fine-tuning setups do lead to consistent improvements on the target task; (2) we will rewrite to clarify the scope of our findings; (3) we will add the per-token accuracy experiment you propose; (4) we will make specific changes as outlined below. Please find our responses to each concern: > The authors describe "catastrophic overtraining" as universal Thanks for bringing this up—in our revised version we will clarify the scope and remove the use of “universal”. To summarize: Our core finding is that there are some important settings where additional pre-training hurts fine-tuned performance, despite helping the base model (Figures 1 and 2). We believe that finding *any* example of this is surprising. We do not claim that *all* settings exhibit this degradation with overtraining. > The evidence is limited to only two fine-tuning setups We want to clarify—we consider more than just two fine-tuning settings. For our large model results (OLMo-1B):
* Anthropic-HH (Instruction tuning)
* Multimodal tuning
* TULU (instruction tuning)
* Alpaca (instruction tuning)

(Figure 1 and Appendix B). We observe catastrophic overtraining for Anthropic-HH, multimodal tuning, and TULU. For the small model results (OLMo-30M):
* GSM8k
* SIQA
* Starcoder-Python
* MR
* RTE
* TREC
* CR
* Tweet sentiment
* SUBJ
* BoolQ

(Figures 2 & 3, plus Appendix C). > Both [IFT & VLM fine-tuning setups] fail to produce consistent improvements in the evals measured compared to the base model. For Anthropic-HH, we aim to improve response quality (measured via AlpacaEval), and fine-tuning boosts scores by ~35–45% over the base model (see table below). 
For VLM fine-tuning, the goal is to enable image input via an adapter—something the base model can’t do. The fine-tuned model reaches ~45% VLM score (Figure 1). We also evaluate OOD tasks (ARC, PIQA, BoolQ, Winogrande, HellaSwag, and OpenBookQA). Fine-tuning on an unrelated task isn’t expected to help performance on these tasks and can often harm performance [1]. Together, these evaluations help us assess how overtraining impacts the ability to learn new tasks (measured by main eval) and the tendency to degrade unrelated capabilities (measured by OOD eval). Here are the base model (and instruction-fine-tuned) scores for AlpacaEval: | Tokens | Base model AlpacaEval (%) | Instruction-tuned AlpacaEval (%) | |---|---|---| | 0.5e12 | 10.57 | **47.10** | | 1.5e12 | 14.43 | **54.97** | | 1.8e12 | 11.94 | **56.03** | | 2.3e12 | 12.39 | **56.05** | | 3e12 | 11.07 | **53.52** | Table: Comparing AlpacaEval for OLMo-1B base model vs instruction tuning with Anthropic-HH. We will add this to the main paper to clarify, apologies for the confusion. > Do your findings still hold if you measure token prediction accuracy instead of perplexity? We ran this experiment, and yes, our findings hold. Below we report per-token accuracy on GSM8k as a function of pre-training tokens (analogous to Figure 3). | Pre-training Tokens (B) | Pre-training data per-token accuracy (%) | GSM8k per-token accuracy (%) | |---|----|---| | 4 | 15.4 | 65.9 | | 8 | 15.8 | 67.2 | | 16 | 16.7 | 67.7 | | 32 | 16.4 | 68.0 | | 64 | 15.4 | 67.8 | | 128 | 13.4 | 67.1 | Note: GSM8k per-token accuracy is more predictable than web data because the examples are quite structured. Results corresponding to other figures & datasets similarly hold true. > The simplification [of the theoretical model] makes a number of assumptions that may not hold for LLMs. To clarify—we do model discrete gradient descent explicitly during fine-tuning (see Appendix A.3). 
We aren’t aware of any theoretical tools that would enable us to study the dynamics of LLMs without at least some degree of simplification such as the ones we describe. We fully acknowledge that there are limitations that arise from these simplifications but we hope our analysis is a starting point to understand this phenomenon that may lead to principled mitigation strategies in the future. Additionally, since catastrophic overtraining is so counterintuitive (to us), seeing it emerge from incremental learning in a simpler setting reassures us it's not just an artifact of our LLM setup and offers intuition for a possible mechanism. [1] Goodfellow, Ian J., et al. An empirical investigation of catastrophic forgetting in gradient-based neural networks. 2013. --- Rebuttal Comment 1.1: Comment: Thank you for the additional information, this clarifies my understanding. I will increase my score.
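For readers curious how the two metrics discussed in this thread relate, here is a minimal sketch of computing per-token accuracy and perplexity from the same logits. This is not the authors' evaluation code; the array shapes and function name are illustrative.

```python
import numpy as np

def per_token_metrics(logits, targets):
    """Compute per-token accuracy and perplexity from raw logits.

    logits: (T, V) array of unnormalized next-token scores.
    targets: (T,) array of ground-truth token ids.
    """
    # Numerically stable log-softmax over the vocabulary axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    # Accuracy: fraction of positions where the greedy prediction matches.
    accuracy = float((logits.argmax(axis=-1) == targets).mean())
    # Perplexity: exponential of the mean negative log-likelihood of the targets.
    nll = -log_probs[np.arange(len(targets)), targets].mean()
    return accuracy, float(np.exp(nll))
```

Both metrics are derived from the same forward pass, but accuracy only asks whether the argmax token is correct, while perplexity is sensitive to the full predicted distribution.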
Transformer-Based Spatial-Temporal Counterfactual Outcomes Estimation
Accept (poster)
Summary: The paper studied counterfactual outcomes with spatial-temporal attributes using transformers and proved consistency and asymptotic normality of the estimator. The authors also conducted synthetic experiments and studied forest loss in Colombia. This paper is generally well-written. *Edit after rebuttal: I changed score from 2 to 3.* Claims And Evidence: The authors claimed that using transformers to estimate the intensity function under the spatial Poisson assumption outperformed baseline estimators. Methods And Evaluation Criteria: The authors tested their method on a synthetic Poisson point process as well as Colombia forest change data and the UCDP georeferenced event dataset. The authors used the relative error rate for the synthetic experiment and checked the consistency of their conclusions with prior work as well as human judgement for the non-synthetic studies. Theoretical Claims: This paper recalled known results on propensity scores, and proved a consistency result for the spatial estimator $N_{\omega}(Y_t)$. While I think recalling the known results is helpful for the structure, I failed to be convinced that there is significant novelty on the theoretical front -- the proof in Appendix B2 is thorough, but Proposition 3 seems to be more like a routine proof. To be fair, the authors did not claim theoretical contributions, so this is fine by me. Experimental Designs Or Analyses: The authors are very clear on the baselines and evaluation metrics. However, I think the lack of validation for the non-synthetic studies makes the story less convincing. I understand that there is no "true answer" for the non-synthetic studies, but the authors could perhaps perform sensitivity analysis or convergence analysis of their estimators. I think it is helpful that the authors discuss the computation cost, but a comparison of computation cost with the baselines might shed more light on the comparison with the proposed method. 
Supplementary Material: I skimmed through the proof and experiment sections. Relation To Broader Scientific Literature: Satisfactory. Essential References Not Discussed: Satisfactory. Other Strengths And Weaknesses: I think, although this paper is generally well-written, the contribution seems to be a bit weak -- there are no significant contributions on the theory front, and this paper has a more application flavor. However, the empirical studies and analyses are relatively limited compared with what I would expect from an application paper. Other Comments Or Suggestions: In the figures/tables of empirical studies, perhaps add error bars and std/confidence bands. Questions For Authors: Did you try to relax the Poisson assumption and see how well this method holds up in the synthetic setup? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Response to reviewer Uw2a We are glad the reviewer found our paper well-written. We would like to respond to each detailed point individually. >1. I failed to be convinced that there is significant novelty on the theoretical front -- the proof in Appendix B2 is thorough, but Proposition 3 seems to be more like a routine proof. To be fair, the authors did not claim theoretical contributions, so this is fine by me. We appreciate that the reviewer found our proofs thorough. Now, we elaborate on our key contributions. One of our **key contributions** is leveraging deep learning to address the challenge of counterfactual outcome estimation for spatial-temporal data. Previous baselines are **unable** to achieve this. Specifically, our proposed method enables the estimation of propensity scores for spatial-temporal data. This **provides a feasible and efficient technical pathway** for causal inference in high-dimensional spatial-temporal settings. In addition, our theoretical analysis provides a guarantee for the proposed deep learning method. --- >2. However, I think the lack of validation for the non-synthetic studies makes the story less convincing. I understand that there is no "true answer" for the non-synthetic studies, but the authors could perhaps perform sensitivity analysis or convergence analysis of their estimators. We appreciate the reviewer's suggestions on sensitivity analysis and convergence analysis. Following your advice, we provide the theoretical sensitivity analysis in the **"sensitivity.pdf"** file in this anonymous [URL](https://anonymous.4open.science/r/DeppSTCI_Release_Version-master-3C91). With regard to the convergence analysis, Proposition 3 demonstrates that as time progresses, our estimator converges to the estimands (ground truth). --- >3. 
I think it is helpful that the authors discuss the computation cost, but a comparison of computation cost with the baselines might shed more light on the comparison with the proposed method. We appreciate the suggestion to compare the computational cost with the baselines. All baselines were also run on an NVIDIA RTX 4090 GPU with an Intel Core i7-14700KF processor. For detailed information, please refer to the table below. Note that these baselines were **not designed for spatial-temporal data** and cannot process such data. Therefore, we convert all spatial-temporal data into scalar values for processing by these baselines (Appendix K.1). As a result, although our method does not significantly outperform these baselines in computation time, it **addresses a problem that they cannot solve.** | Methods | Time required for one setting | | --- | --- | | Ours | about 10 minutes | | LR | about 1.13 minutes | | MSMs | about 1.45 minutes | | RMSNs | about 1.53 minutes | | Causal Forest | about 2.25 minutes | --- >4. I think, although this paper is generally well-written, the contribution seems to be a bit weak -- there are no significant contributions on the theory front. However, the empirical studies and analyses are relatively limited compared with what I would expect from an application paper. We appreciate that the reviewer found our work well-written. Our main contribution lies in **addressing the challenge** of counterfactual outcome estimation for spatial-temporal data, which previous baselines are **unable to solve.** We provide a **feasible and efficient technical pathway** for causal inference in high-dimensional spatial-temporal data. The theoretical section provides the **foundation** for the proposed method. --- >5. In the figures/tables of empirical studies, perhaps add error bars and std/confidence bands. We appreciate the suggestion to include error bars and confidence intervals. 
However, in the additional experiments, we trained the neural networks 25 times with different training data, and the results were consistently superior. Therefore, we believe the additional experiments demonstrate the robustness of our method. --- >6. Did you try to relax the Poisson assumption and see how well this method holds up in the synthetic setup? We acknowledge that the Poisson assumption may be somewhat restrictive. In this work, we use the spatial Poisson point process as a fundamental hypothesis for modeling spatial-temporal data. Therefore, if the Poisson assumption is violated, our theoretical framework would need to be restructured, and its performance may be compromised. In statistics, modeling spatial data distributions without assuming a specific distribution remains an open problem, and the Poisson assumption is widely used to describe spatial events [1-2]. [1] Cressie, N., & Wikle, C. K. (2011). Statistics for spatio-temporal data. John Wiley & Sons. [2] Cressie, N. (2015). Statistics for spatial data. John Wiley & Sons. --- Rebuttal Comment 1.1: Comment: Thank you, authors, for addressing my questions and providing more details on computational costs and the write-up on sensitivity analysis. I still think this manuscript needs more work but won't hold back if other reviewers would like to champion it. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Uw2a, Thanks a lot for your comments, and for raising the score. Now we articulate the key contributions of our work. In this paper, we propose a feasible and efficient technical pathway for causal inference in high-dimensional spatial-temporal data. In addition, we provide the theoretical guarantee for the proposed method. Following your advice, we will **take the following measures to enhance our manuscript.** * First, we will repeat all our experiments twenty times and add the error bars to the final version of our paper. 
* Second, we will relax the Poisson assumption and examine the performance of our method. All results will be added to the final version of our paper. Thank you once again for your invaluable contribution to the enhancement of our research. Best regards, Authors
Summary: This paper introduces a Transformer-based framework for counterfactual outcome estimation in spatial-temporal data. It aims to improve causal inference in settings where treatments and outcomes are structured across both space and time. The authors propose a novel deep-learning-based estimator with a CNN-based propensity score estimation and Transformer-based intensity function estimation. The framework is supported by both theoretical insights and experiments on synthetic and real data. Claims And Evidence: The theoretical evidence is abundant, whereas the empirical results lack sufficient ablation to support the claims such as the improvements from the CNN-based propensity score estimation. Methods And Evaluation Criteria: The methods do make sense, but the evaluation criteria are lacking. Theoretical Claims: I didn't check all the details. The propositions and theorems look good. Experimental Designs Or Analyses: See below. Supplementary Material: Yes, I was looking for model architecture details. Relation To Broader Scientific Literature: Spatio-temporal causal inference is very important and broadly applicable. Essential References Not Discussed: I would encourage including more references to the temporal counterfactual prediction literature as it is an active research area, including (but not limited to): Seedat, N., Imrie, F., Bellot, A., Qian, Z., & van der Schaar, M. (2022). Continuous-time modeling of counterfactual outcomes using neural controlled differential equations. arXiv preprint arXiv:2206.08311. Wu, S., Zhou, W., Chen, M., & Zhu, S. (2024). Counterfactual generative models for time-varying treatments. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 3402-3413). El Bouchattaoui, M., Tami, M., Lepetit, B., & Cournède, P. H. (2024). Causal contrastive learning for counterfactual regression over time. Advances in Neural Information Processing Systems, 37, 1333-1369. 
Other Strengths And Weaknesses: Strengths One thing that stands out is the consistent and rigorous theoretical framework of the work, which can potentially lay the foundation for future works on spatio-temporal causal inference. Weaknesses The experiment section is lacking, especially: 1. The details of the transformer architecture seem to be missing: what positional encoding are you using? What tokenization scheme is used? 2. More ablation is needed to identify the contribution of each innovation, including the CNN-based propensity score estimator. 3. Following my last point, it seems that when comparing to other baselines, it is unclear if the performance gains come from the model backbone (i.e. the transformer) or the proposed spatio-temporal causal inference framework. One way is to keep the model backbone simple or similar to other baselines, and conduct several ablation studies to demonstrate the gains brought by the two major innovations. Other Comments Or Suggestions: No Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Response to reviewer pyND We are glad the reviewer found the theoretical framework rigorous. We would like to respond to each detailed point individually. >1. The theoretical evidence is abundant, whereas the empirical results lack sufficient ablation to support the claims such as the improvements from the CNN-based propensity score estimation. We really appreciate that the reviewer found the theoretical evidence abundant. Now we explain why removing the propensity score estimation is inappropriate in our setting. In our work, the CNN-based propensity score estimation serves as a practical implementation of the propensity score in our estimator framework (Sec. 4.4). Therefore, if we remove it, we effectively eliminate the propensity score from the estimator framework. As a result, the estimator would be **theoretically incomplete** and unable to perform its intended function properly. --- >2. Methods do make sense, but the evaluation criteria are lacking. We are glad the reviewer found the methods reasonable. Now we justify our choice of evaluation criteria. In the simulation experiments, we use the relative error rate as a direct measure of the estimator’s accuracy. In real-world data experiments, since the ground truth is unavailable, we assess our approach by comparing our findings with those of existing studies. --- >3. Spatio-temporal causal inference is very important and broadly applicable. We are glad the reviewer found the studied problem important. --- >4. I would encourage including more references to the temporal counterfactual prediction literature as it is an active research area, including (but not limited to). We really appreciate the reviewer's suggestions on the references. Following your advice, we **have cited and discussed** these excellent works in our paper. **Temporal Counterfactual Prediction** Temporal counterfactual prediction refers to performing counterfactual outcome prediction under time-varying settings. 
Seedat, N. et al. [1] propose TE-CDE, a neural-controlled differential equation approach for counterfactual outcome estimation in irregularly sampled time-series data. Wu, S. et al. [2] introduce a conditional generative framework for counterfactual outcome estimation under time-varying treatments, addressing challenges in high-dimensional outcomes and distribution mismatch. El Bouchattaoui, M. et al. [3] propose an RNN-based approach for counterfactual regression over time, focusing on long-term treatment effect estimation. [1] Seedat, N. et al. Continuous-time modeling of counterfactual outcomes using neural controlled differential equations. [2] Wu, S. et al. Counterfactual generative models for time-varying treatments. [3] El Bouchattaoui, M. et al. Causal contrastive learning for counterfactual regression over time. --- >5. The details of the transformer architecture seem to be missing: what positional encoding are you using? What tokenization scheme is used? We employ the Transformer to capture spatial information within a single time step. As a result, we do not apply fixed positional encoding. For the tokenization scheme, the Transformer processes discrete data points as input. Therefore, we treat each discrete coordinate as a token and map it to a high-dimensional embedding space. --- >6. More ablation is needed to identify the contribution of each innovation, including the CNN-based propensity score estimator. We appreciate the suggestion of the ablation of the CNN-based estimator. However, the ablation of the CNN-based estimator is inappropriate in our setting; for detailed reasons, please refer to response 1. --- >7. It is unclear if the performance gains come from the model backbone (i.e. the transformer) or the proposed spatio-temporal causal inference framework. One way is to keep the model backbone simple or similar to other baselines, and conduct several ablation studies to demonstrate the gains brought by the two major innovations. 
Compared to previous baselines, our work primarily tackles a problem that these methods cannot handle. This is because the frameworks and implementations of these baselines are not applicable to spatial-temporal data. Therefore, we attribute the superior performance of our method to both its theoretical framework and model implementation. We appreciate the reviewer's suggestions on the ablation studies. However, it is not appropriate to maintain the same model backbone as other baselines. For example, we employ a CNN to process the high-dimensional tensor. If we replace the CNN with the simple logistic regression or linear regression used in MSMs, the propensity score in our work would not be estimable. --- Rebuttal Comment 1.1: Comment: I've read the comments and most of my concerns are addressed. Raising the score from 2 to 3. I would still strongly recommend supplementing more experimental results and adding the aforementioned clarification into the main text/supplements. --- Reply to Comment 1.1.1: Comment: Dear Reviewer pyND, Thanks a lot for your comments, and for raising the score. We sincerely appreciate your support for our paper and are particularly grateful for your invaluable comments. Following your advice, we will add experiments **replacing the Transformer with RNNs** to better demonstrate the gains brought by the major innovations. We will also include the results in the final version of the paper. Additionally, we commit to incorporating the aforementioned clarification into the main text or the appendix. Thank you once again for your invaluable contribution to the enhancement of our research. Best regards, Authors
Summary: The paper introduces an approach to estimate counterfactual outcomes in a spatial-temporal setting, where both treatment and outcome may be represented in a high-dimensional space. The proposed method adapts IPW to the spatial-temporal setting, leveraging propensity score. The approach is implemented in practice using a deep learning architecture, leveraging CNNs for computing propensity scores and transformers for encoding the intensity function. Finally, the method is compared empirically with traditional counterfactual outcomes estimation approaches and achieves significantly lower error in high-dimensional settings. Claims And Evidence: The paper makes three claims: (1) they propose an estimator for counterfactual outcomes in the spatial-temporal setting; (2) they use a deep-learning approach to perform this estimation, leveraging CNNs for propensity score calculation and transformers for modeling intensity functions; and (3) the approach is demonstrated to outperform baselines empirically. See below sections for the evaluation of these claims. Methods And Evaluation Criteria: The data is separated into treatment $Z$, outcome $Y$, and covariates $X$. Importantly, each variable is collected across several time steps, and $Z$ and $Y$ are modeled as spatial point processes rather than traditional binary treatments and outcomes. An inverse probability weighting estimator is derived, composed of the propensity score, the counterfactual probability, and the intensity function of the outcome. The estimator estimates the expected number of outcomes in a specific region at a specific time. This estimator (Eq. 3) seems to be the main contribution of the paper, but it is unclear why this is the quantity of interest and how the estimator is derived. The estimator is implemented in practice using a CNN to compute the propensity score and a transformer to compute the intensity function. The architecture seems to make sense for the task. 
Theoretical Claims: The paper proves three propositions in Sec. 4.5 about the estimator in Eq. 3. The proofs seem to hold, but it is generally unclear how to interpret these results and why they are important. Prop. 3 seems to be the most important, as it appears to indicate that the estimator properly converges to the intended value, but there seem to be some subtleties in the result that could be elaborated. Experimental Designs Or Analyses: The proposed architecture is tested on both synthetic and real data, compared to baselines which do not take the spatial-temporal nature of the data into account. The results seem to convincingly show that the proposed architecture works better in these data settings. Supplementary Material: I did not read the supplementary material carefully, only skimming through the proofs. Relation To Broader Scientific Literature: The related works section adequately sets the background of the task. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I appreciate that the assumptions are clearly highlighted. I think that the clarity of the paper is its greatest weakness. The paper uses a lot of convoluted notation and does not explain much of the math. It would be much clearer if the paper walked through a running example, where each term of the estimand is clearly highlighted. Other Comments Or Suggestions: N/A Questions For Authors: Why is the dimensionality reduction in Eq. 5 possible? Are we not losing information? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## Response to reviewer p6Zj We greatly appreciate the reviewer's comments and suggestions. We would like to respond to each detailed point individually. > 1. The estimator estimates the expected number of outcomes in a specific region at a specific time. This estimator (Eq. 3) seems to be the main contribution of the paper, but it is unclear why this is the quantity of interest and how the estimator is derived. Now we explain why the expected number of outcomes in a specific region at a specific time is the quantity of interest. These estimands provide spatial-temporal decision-making evidence for policy makers. For example, during epidemic control, policy makers can evaluate the average number of infections in specific areas at specific times under different isolation strategies, thereby optimizing the allocation of isolation resources. Next, we briefly introduce how the estimator is derived. The estimator is derived based on the Inverse Probability Weighting (IPW) strategy in Marginal Structural Models (MSMs) [1]. Specifically, the $\lambda_{Y_t^{ob}(z_{\leq t})}(s)$ in Eq. 3 is the intensity function of outcomes at time $t$ and location $s$, and $\prod_{j=t-M+1}^{t} \frac{p_{h_j}(z_j)}{e_j(z_j)}$ represents the weights similar to those in the MSMs. In summary, we adapt the weighting strategy of MSMs to the studied spatial-temporal setting. [1] Robins J M, Hernan M A, Brumback B. Marginal structural models and causal inference in epidemiology[J]. Epidemiology, 2000, 11(5): 550-560. --- >2. The proofs seem to hold, but it seems generally unclear how to interpret these results and why they are important. Prop. 3 seems to be the most important as it seems to indicate that the estimator properly converges to the intended value, but there seem to be some subtleties in the result that could be elaborated. Prop. 1 and Prop. 2 are two essential properties of the propensity score. 
Specifically, these propositions demonstrate that the propensity score can make the estimation more **accurate and efficient**. Therefore, by proving Prop. 1 and Prop. 2, we validate that the propensity score used for spatial-temporal data is "reasonable" and can assist in counterfactual outcome estimation. Prop. 3 establishes the **theoretical reliability** of the estimator. It demonstrates that the expected value of the proposed estimator equals the estimands (i.e., unbiasedness) and that the variance of the estimation error converges to a finite value. --- >3. I appreciate that the assumptions are clearly highlighted. We are glad the highlighted assumptions are helpful. --- >4. I think that the clarity of the paper is its greatest weakness. It would be much clearer if the paper walked through a running example, where each term of the estimand is clearly highlighted. We appreciate the reviewer's suggestion on the running example. Following your advice, we provide an example that illustrates the estimands. For simplicity, consider the case of $t=8$ and $M=3$. Then the estimands $$ N^\omega_8(F_H) = \int_{Z^3} |S_{Y_8^{ob}(z_{\leq 8}(F_H))} \cap \omega| dF_H(z_{\left[ \text{6,8} \right]}), $$ represent the expected number of outcomes in $t=8$ and region $\omega$ under distribution $F_H$. Next, we employ a table to elaborate on each term of the estimands. | Term | Interpretation | | --- | --- | | $t$=8, $M$=3 | Evaluates outcomes at time 8, considering 3-times intervention persistence (times 6-8). | | $z_{[6,8]}$ | Sequence of treatment variables over the time window [6,8]. | |$F_H(z_{\left[ \text{6,8} \right]})$ | Joint probability distribution of $z_{[6,8]}$ under counterfactual intervention strategy $F_H$.| | &vert;$S_{Y_8^{ob}(z_{\leq 8}(F_H))} \cap \omega$&vert; | Observed outcome counts in region $\omega$ at time 8, under $F_H$. | | $Z^3$ | All possible values of $z_{[6,8]}$. 
| $\int_{Z^3}\cdot dF_H(z_{[6,8]})$ | Expectation computation over all possible values of $z_{[6,8]}$. | --- >5. Why is the dimensionality reduction in Eq. 5 possible? Are we not losing information? In this work, we mainly estimate the outcome quantities. Therefore, we believe it is possible to reduce the studied variables to their quantity. We acknowledge that this strategy may result in some information loss; however, it is **useful for estimating the number of outcome events**. Moreover, this strategy **has its own theoretical significance** in terms of how to reduce the dimensionality for complicated problems. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my points in your rebuttal. A lot of the results have been made more clear for me. Can you elaborate on "this strategy has its own theoretical significance in terms of how to reduce the dimensionality for complicated problems"? --- Reply to Comment 1.1.1: Comment: Dear Reviewer p6Zj, Thanks a lot for your acknowledgement and reply. Now we briefly explain "this strategy has its own theoretical significance ...". Eq. 5 maps the treatment variable set at time $t$ and region $\Omega$ ($\lbrace Z_t(s) | s \in \Omega \rbrace$) to the count of treated locations (a scalar). This strategy retains sufficient information for estimating outcome quantities (total events) while avoiding intractable computations. This strategy is grounded in our problem. Since we assume treatments are generated by the spatial Poisson process (Assumption 2), the count of treated locations naturally follows a Poisson distribution. Therefore, the distribution of the propensity scores is specified and can be calculated. We hope this clarifies the rationale behind Eq. 5 and its role in our framework. Best regards, Authors
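To make the counting strategy behind Eq. 5 discussed in this reply concrete: since treated locations follow a spatial Poisson process (Assumption 2), the count of treated locations in a region is Poisson-distributed, so the propensity of an observed treatment pattern reduces to a Poisson pmf on its count. A minimal sketch follows; the intensity and area values are illustrative, not from the paper.

```python
import math

def count_propensity(num_treated, intensity, area):
    """Probability of observing `num_treated` treated locations in a region,
    when treatments follow a homogeneous spatial Poisson process with the
    given intensity (events per unit area) over a region of the given area."""
    mean = intensity * area  # expected number of treated locations
    return math.exp(-mean) * mean ** num_treated / math.factorial(num_treated)
```

For example, `count_propensity(0, 1.0, 2.0)` equals `exp(-2)`: the chance of seeing no treated locations in a region of area 2 under unit intensity.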
Imitation Learning from a Single Temporally Misaligned Video
Accept (poster)
Summary: The paper tackles the problem of imitation learning from a single video demonstration that may be temporally misaligned with the learner’s execution (e.g., inconsistent execution speed, pauses, etc.). The authors show that severe misalignments such as those created in their experiments (long pauses or x5 or x10 speedups in parts of the trajectory) can render prior imitation-learning techniques like Optimal Transport (OT) and Dynamic Time Warping (DTW) ineffective. Specifically, they show that OT can match subgoals in the wrong order and DTW can skip goals or not complete the task in these situations. The proposed ORdered Coverage Alignment (ORCA) computes a non-Markovian reward that measures the probability that the learner has “covered” each demonstration subgoal in the correct order. At every timestep $t$, the reward depends on whether the learner occupies (or closely matches) the final subgoal in the demonstration and how well it has covered all preceding subgoals, ensuring the agent stays with the last subgoal once it is covered. Empirically, ORCA can get stuck in local minima if the policy does not have a decent initialization. Hence, the authors first pretrain the agent with a simpler, frame-level alignment reward (specifically TemporalOT) and then switch to ORCA. This yields better coverage of all subgoals in the correct order. Experiments are performed on a set of simulated robotic arm tasks and a set of humanoid movement-to-pose tasks. In all experiments, intentional severe time warping is introduced. 
Comparisons are performed to:
* OT (Optimal Transport with entropic regularization)
* TemporalOT (OT with a diagonal “mask” for partial temporal alignment)
* DTW (Dynamic Time Warping, which enforces monotonic time alignment but not full coverage)
* Threshold (a simpler hand-engineered approach that tracks subgoals via a distance threshold)
* RoboCLIP (a transformer-based model encoding entire demonstration videos)

Results show the robustness of the ORCA approach. Claims And Evidence: The claims are well supported with toy-problem demonstrations of the failure modes of the other methods. Methods And Evaluation Criteria: The methods that are compared against are good representatives of methods in use and help clarify the difficulties that the proposed method can tackle. Theoretical Claims: The proofs in the appendix were read briefly and did not raise any red flags at the level at which they were examined. Experimental Designs Or Analyses: The experiments highlight the strength of the method (covering all goals in the correct order when the demonstration data is very warped). Supplementary Material: I have read through the supplementary material with a focus on the experimentation and additional ablations. Relation To Broader Scientific Literature: The methods compared against are good candidates to compare against, though there are certainly additional methods in this field. Essential References Not Discussed: I am not familiar with particular studies that should have been mentioned in this context. Other Strengths And Weaknesses: The situation in which training is based on a single example trajectory with severe time warping seems very unnatural. In most real-life situations there would be several to large amounts of demonstrations, and time warping would not be so severe (especially with tele-operated robots where robot dynamics are similar in demonstration and operation). 
Therefore it is hard to extrapolate from the situation tackled by this paper to the more commonly encountered use cases. The paper does well to make use of image (as opposed to state) data in calculating the reward. It also does well to perform RL training with standard implementations using a state-based representation for the agent. In this way, the focus of the evaluation is on the fitness of the reward-generating mechanism. Other Comments Or Suggestions: None Questions For Authors: The paper is very clear as it is. Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are excited that the reviewer finds our claims well-supported and that our experiments highlight the strength of ORCA. Below, we address the reviewer's concern. # Concerns ## Clarification on applications of ORCA We clarify that our focus is on learning from temporally misaligned data. In robotics, this is a common problem even when collecting teleoperated demonstrations. Recent works handle the temporal misalignment by filtering trajectories [1,2] and manually removing pauses [3]. Instead of relying on these data collection tricks, we focus on designing an algorithm that directly addresses issues of temporal misalignment. And we find the results exciting: ORCA is the most performant approach given either a temporally aligned (Fig. 4 in the paper) or a misaligned demonstration (Table 1 in the paper). ORCA can also handle multiple demonstrations by calculating the ORCA reward with respect to each demonstration and then performing a max-pool. We show that ORCA improves its performance as the number of demonstrations increases, even when the videos have different levels of temporal misalignment. For details and results, we kindly refer the reviewer to our replies to reviewer YcsA. [1] Chi et al. Universal Manipulation Interface: In-The-Wild Robot Teaching Without In-The-Wild Robots. RSS, 2024. [2] Ghosh et al. Octo: An Open-Source Generalist Robot Policy. RSS, 2024. [3] Kim et al. OpenVLA: An Open-Source Vision-Language-Action Model. CoRL, 2024.
Summary: Reinforcement learning and imitation learning from a single visual demonstration is a challenging problem because the demonstration may be temporally misaligned: the demonstration and online trajectories may differ in timing. Frame-level matching is not adequate because it does not enforce the correct ordering of subgoal completion. ORCA presents a dense reward function that encourages the agent to cover all subgoals from the demonstration in the correct order. Empirical results show that ORCA significantly outperforms existing frame-level alignment methods on meta-world tasks and a simulation humanoid setup. Claims And Evidence: The three key contributions seem to be verified* and supported* (* because of the questions raised in the Methods And Evaluation Criteria and Experimental Designs Or Analyses sections). Methods And Evaluation Criteria: See Experimental Designs Or Analyses for the main concern. In addition, while the paper emphasizes imitation learning from a single temporally misaligned video demonstration, the policy implementation appears to rely exclusively on state-conditioned inputs, rather than directly incorporating video-based conditioning. Given the nature of the proposed ORCA reward function—which explicitly leverages visual alignment at the sequence level—it seems more natural and potentially beneficial for the policy to be conditioned jointly on both state and visual information from the demonstration. Theoretical Claims: The proof seems correct. However, I think the more critical questions are raised in the following section. Experimental Designs Or Analyses: Related to the Q3 below: based on figure 3, section 5.1, it seems that the environment for RL training is not randomized (i.e. for the door-open task, the location of the door isn’t randomized). I am not sure if this assessment is correct.
The key challenge, as identified by other visual imitation learning works, is that the condition only provides a signal of the task definition, and the test-time environment configuration may be quite different. It is unclear if the current experiment setup correctly assesses the model’s capability, especially since the reward function is dependent on explicit state comparisons. Supplementary Material: I reviewed section B (environment details in the appendix). Relation To Broader Scientific Literature: It is broadly related to the visual imitation learning and one- or few-shot imitation learning literature. Essential References Not Discussed: In the robot learning domain, there are a few recent works that worked on learning from a few video demonstrations. It is not necessarily the case that visual imitation learning has to be framed as online RL or IRL. See below: [1] Jain, Vidhi, Maria Attarian, Nikhil J. Joshi, Ayzaan Wahid, Danny Driess, Quan Vuong, Pannag R. Sanketi et al. "Vid2robot: End-to-end video-conditioned policy learning with cross-attention transformers." arXiv preprint arXiv:2403.12943 (2024). [2] Fu, Letian, Huang Huang, Gaurav Datta, Lawrence Yunliang Chen, William Chung-Ho Panitch, Fangchen Liu, Hui Li, and Ken Goldberg. "In-context imitation learning via next-token prediction." arXiv preprint arXiv:2408.15980 (2024). [3] Xu, Mengdi, Yikang Shen, Shun Zhang, Yuchen Lu, Ding Zhao, Joshua Tenenbaum, and Chuang Gan. "Prompting decision transformer for few-shot policy generalization." In international conference on machine learning, pp. 24631-24645. PMLR, 2022. [4] Y. Duan et al., “One-shot imitation learning,” Advances in neural information processing systems, vol. 30, 2017. [5] Di Palo, Norman, and Edward Johns. "Keypoint action tokens enable in-context imitation learning in robotics." arXiv preprint arXiv:2403.19578 (2024). Other Strengths And Weaknesses: N.A. See other sections for a more detailed explanation. Other Comments Or Suggestions: N.A.
Questions For Authors: Q1. Concerns regarding the title of the paper. Imitation learning has a special connotation (supervised learning from (visual)-proprio-action data), where the policy is not updated via online RL. A title like “online RL from a single temporally misaligned video” seems more reasonable. If it is purely imitation learning, then a comparison with Vid2Robot [1] seems necessary. Q2: There’s a mismatch between the demonstration and the policy learning. It is quite weird that while the demonstration only includes visual observation, the learned RL policy has access to full state information and does not take in visual observation: “All policies use state-based input, and in metaworld we include an additional feature that represents the percentage of total timesteps passed” (pg 5). This design seems counterintuitive because state information is quite hard to extract in real-world RL/robotics applications (see Vid2Robot [1], where in certain timesteps object states may be partially/not observable). Q3: Concerns on generalization: in the setting presented in the paper (figure 3, section 5.1), it seems that the environment for RL training is not randomized (i.e. for the door-open task, the location of the door isn’t randomized). Please let me know if this understanding is correct. I think the key challenge, as identified by other visual imitation learning works that are video-, video-action-, or state-action-conditioned [1,2,3], is that the condition only provides a signal of the task definition, and the test-time environment configuration may be quite different. It is unclear how the current version of the method can be applied to those settings. [1] Jain, Vidhi, Maria Attarian, Nikhil J. Joshi, Ayzaan Wahid, Danny Driess, Quan Vuong, Pannag R. Sanketi et al. "Vid2robot: End-to-end video-conditioned policy learning with cross-attention transformers." arXiv preprint arXiv:2403.12943 (2024).
[2] Fu, Letian, Huang Huang, Gaurav Datta, Lawrence Yunliang Chen, William Chung-Ho Panitch, Fangchen Liu, Hui Li, and Ken Goldberg. "In-context imitation learning via next-token prediction." arXiv preprint arXiv:2408.15980 (2024). [3] Xu, Mengdi, Yikang Shen, Shun Zhang, Yuchen Lu, Ding Zhao, Joshua Tenenbaum, and Chuang Gan. "Prompting decision transformer for few-shot policy generalization." In international conference on machine learning, pp. 24631-24645. PMLR, 2022. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful that the reviewer engages closely with our ideas. We thank the reviewer for the detailed suggestions to make our paper better, and we address the concerns below.

# Concerns

## Clarification about paper title

We agree that a portion of the imitation learning literature assumes access to action data. However, imitation learning is a broad category of algorithms that includes inverse reinforcement learning. IRL first estimates "the hidden objectives of desired behavior from demonstrations" before learning a policy to maximize the expected return [1]. Our work specifically studies imitation learning from observation, where demonstrations are just visual observations [6,7]. Without action labels, IRL is a more feasible choice. Our approach closely resembles the IRL framework because the ORCA reward is estimated from demonstrations. Unlike traditional IRL methods, ORCA does not directly learn a reward function (e.g., a linear combination of features) [2,3] or a discriminator [4,5]. Instead, it is similar to works that leverage OT [6,7] to match the learner and demonstration distributions.

## Recent works on learning from video demonstrations

We thank the reviewer for sharing works outside of the IRL paradigm. We will update our related works section to discuss them. We also clarify that all the mentioned works require demonstrations with action data, which ORCA and its baselines do not have access to. Notably, all of them except KAT [11] require more than 10k trajectories of observations and robot actions to train their policy model. KAT requires 10 robot trajectories with action labels as in-context examples. In contrast, ORCA and baselines can estimate the rewards and train the policy with just one action-free demonstration.

## RL Policy Input

ORCA's goal is to estimate rewards from a single demonstration in the observation space before learning a policy. We make no assumptions about the policy itself.
As reviewer gxJg points out, by following prior work's standard RL implementation and using a state-based policy input [7-9], we can focus on evaluating ORCA’s reward formulation. We also trained visual policies using DrQv2 [10] on a subset of the Metaworld tasks. The state and image-based policies trained with ORCA have similar performance (average normalized return: 0.704 ± 0.10 → 0.76 ± 0.06). In contrast, policies trained with TemporalOT perform poorly, regardless of their input.

| Task | TemporalOT (state-based policy) | TemporalOT (image-based policy) | ORCA (state-based policy) | ORCA (image-based policy) |
|---|---|---|---|---|
| Button-press | 0.10 (0.02) | 0.00 (0.00) | **0.62 (0.11)** | **0.60 (0.11)** |
| Door-close | 0.19 (0.01) | 0.12 (0.01) | **0.88 (0.01)** | **0.88 (0.02)** |
| Door-open | 0.08 (0.01) | 0.00 (0.00) | 0.89 (0.13) | **1.31 (0.12)** |
| Window-open | 0.26 (0.05) | 0.28 (0.05) | **0.85 (0.16)** | **0.79 (0.15)** |
| Lever-pull | 0.07 (0.03) | 0.00 (0.00) | **0.28 (0.09)** | **0.19 (0.08)** |
| Total | 0.14 (0.02) | 0.08 (0.01) | **0.71 (0.10)** | **0.76 (0.06)** |

## Clarification about environment randomization

Following prior works [7,13], we use the default setup of Metaworld [12], which randomizes the object of interest locations on each rollout (see page 18 of [12] for exact randomization details). For Humanoid, we use the environment's default randomization scheme, where the initial positions and velocities of the joints are randomly initialized. We evaluate the RL policies on 10 randomly seeded environments, and the training environment has a different seed as well.

[1] An algorithmic perspective on imitation learning. Foundations and Trends® in Robotics, 2018.
[2] Apprenticeship learning via inverse reinforcement learning. ICML, 2004.
[3] Maximum entropy inverse reinforcement learning. AAAI, 2008.
[4] A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models. 2016.
[5] Generative adversarial imitation learning. NeurIPS, 2016. [6] Imitation learning from pixel observations for continuous control, 2022. [7] Robot policy learning with temporal optimal transport reward. NeurIPS, 2024. [8] Graph Inverse Reinforcement Learning from Diverse Videos. CoRL, 2023. [9] What Matters to You? Towards Visual Representation Alignment for Robot Learning. ICLR, 2024. [10] Mastering Visual Continuous Control: Improved Data-Augmented Reinforcement Learning. ICLR, 2022. [11] Keypoint action tokens enable in-context imitation learning in robotics. 2024. [12] Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. CoRL, 2020.
Summary: This paper tries to address the challenge of learning sequential tasks from a single visual demonstration, particularly when the demonstration is temporally misaligned with the learner's execution. This misalignment can arise from variations in timing, differences in embodiment, or inconsistencies in task execution. The authors argue that existing imitation learning methods, which treat imitation as a frame-level distribution-matching problem, fail to enforce the correct temporal ordering of subgoals or ensure consistent progress. They propose a novel approach ORCA that defines matching at the sequence level. The key idea is that successful matching occurs when one sequence covers all subgoals in the same order as the other sequence. The ORCA reward function is recursively defined, considering both the learner's current state and whether it has covered previous subgoals correctly. Experiments on Meta-world and Humanoid-v4 tasks demonstrate that agents trained with the ORCA reward significantly outperform existing frame-level matching algorithms. ## update after rebuttal I still recommend acceptance since I don't have major concerns regarding this paper. Claims And Evidence: I think most of the claims in this submission are supported by evidence. Methods And Evaluation Criteria: Yes, they make sense to me. Theoretical Claims: I briefly checked the proofs in Appendix A.1 and did not identify any significant errors. Experimental Designs Or Analyses: I went through all the experiments, and most of them make sense to me. Supplementary Material: I reviewed the Appendix A. Relation To Broader Scientific Literature: Prior inverse RL methods, especially those leveraging optimal transport primarily focus on frame-level matching. These methods often rely on Markovian distance metrics. Other works use Dynamic Time Warping to align trajectories temporally. ORCA shifts the focus from frame-level to sequence-level matching. 
This is important for tasks where the order of subgoals is critical. Essential References Not Discussed: N/A Other Strengths And Weaknesses: ### Strengths - I appreciate the structure of this paper. It begins by explaining why OT and DTW are insufficient, followed by introducing the proposed method. The examples are intuitive and effectively demonstrate scenarios where OT and DTW fail. This paper is highly engaging to read. - The key insight of this paper is clear and intuitive: matching should be defined at the sequence level instead of the frame level when learning rewards for sequence-matching tasks. This idea is highly logical and resonates well. - The paper provides a theoretical analysis of both the limitations of existing frame-level matching approaches (OT, DTW, TemporalOT) and the properties of ORCA. Though the proofs are actually quite simple and straightforward, they still strengthen the paper's claims. ### Weaknesses - The abstract and introduction mention variations in embodiment, but no experiments are conducted to evaluate performance under this setting. - As noted in Section 4, the ORCA reward is non-Markovian, which violates some assumptions in standard RL settings (MDP). It remains unclear if this could make RL training harder. - Compared to learning from a single temporally misaligned video demonstration, a more common scenario involves multiple demonstrations for a task. Extending the proposed ORCA reward to handle multiple demonstrations appears non-trivial. Other Comments Or Suggestions: - The formatting at line 141 is very weird. Questions For Authors: - The tasks used in the experiments are from Meta-World and MuJoCo, which include many tasks that are not utilized in this paper. Could the authors clarify the rationale for excluding those tasks? Is it because ORCA does not perform well on them? This is not a criticism, as I understand that no method excels across all tasks. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for finding our work clear, intuitive, and engaging. Thank you for the feedback that helps further strengthen it. We will update the paper to fix minor formatting issues, and we address the reviewer's questions and concerns below. # Questions ## Q1: "Could the authors clarify the rationale for the chosen tasks?" For Metaworld, we evaluated ORCA and baselines on all nine tasks shown in TemporalOT [1], the most closely related work. We also added one more task, Door-close, because originally there was only one task (Button-press) in the Easy category. To evaluate ORCA's effectiveness on difficult control tasks, we include the MuJoCo Humanoid Environment, following the setup in prior work [2]. Because the original tasks (walking, standing up) lack demonstrations, we define tasks where we can generate demonstrations via interpolation from the initial to the goal joint state. # Concerns ## Multiple Demonstration Videos Same as prior work TemporalOT [1], ORCA can handle multiple videos by calculating the ORCA reward with respect to each demonstration and max-pooling to obtain the final reward. To study how the number of demonstrations affects performance, we ran TemporalOT and ORCA with 4, 10, and 20 temporally misaligned videos on a subset of Metaworld tasks (Door-open, Window-open, Lever-pull). These demonstrations have variations due to randomly initialized object locations, but they have the same speed. The table below reports the average normalized return with standard error. As the number of demonstrations increases, ORCA's performance also improves. In contrast, although TemporalOT handles multiple videos in the same way and benefits from having more than one demonstration, its performance soon starts decreasing as the number of demonstrations further increases. 
| # Demos | 1 | 4 | 10 | 20 |
|------------|-----------------|-----------------|-----------------|-----------------|
| TemporalOT | 0.14 (0.02) | 0.36 (0.02) | 0.30 (0.02) | 0.25 (0.02) |
| **ORCA** | **0.67 (0.08)** | **1.02 (0.07)** | **1.08 (0.07)** | **1.10 (0.08)** |

In addition, per reviewer SsGe's request, we study how multiple videos of different speeds affect ORCA performance. For each task, we randomly sample one video demonstration from each category of temporal misalignment: Slow (H), Slow (L), Fast (L), Fast (H), as shown in Fig. 4. Policies trained with the ORCA reward and 4 demonstrations with different speeds are still able to improve performance compared to having only 1 video (0.67 -> 0.77), although demonstration videos with different speeds have slightly worse performance than videos with the same speed.

| # Demos | 1 | 4 (Different Speed) | 4 (Same Speed) |
|------------|-----------------|---------------------|-----------------|
| TemporalOT | 0.14 (0.02) | 0.16 (0.02) | 0.36 (0.02) |
| **ORCA** | **0.67 (0.08)** | **0.77 (0.08)** | **1.02 (0.07)** |

## RL Training for Non-Markovian Tasks

We study sequence-matching tasks where it is critical to follow the entire sequence in the correct order, thereby making the true task objective non-Markovian. ORCA and all baselines face this challenge during RL training. Prior works have explored learning policies conditioned on a belief or sequential context [3,4], and we will include these in our related work. We found it sufficient to add an additional feature to the policy input that represents the percentage of total timesteps passed. We leave exploring policy input spaces as interesting future work.

## Clarification about applications of ORCA

We clarify that our focus is on learning from temporally misaligned data. This is a common problem in teleoperation data with the same embodiment, and we mention the cross-embodiment setting simply to provide more context.
Recent works handle temporal misalignment in teleoperation data by filtering trajectories [5,6] and manually removing pauses [7]. In contrast, we focus on designing an algorithm that can directly address issues of temporal misalignment without relying on post-processing tricks.

[1] Fu et al. Robot policy learning with temporal optimal transport reward. NeurIPS, 2024.
[2] Rocamonde et al. Vision-Language Models are Zero-Shot Reward Models for Reinforcement Learning. ICLR, 2024.
[3] Igl et al. Deep Variational Reinforcement Learning for POMDPs. ICML, 2018.
[4] Qin et al. Learning non-Markovian Decision-Making from State-only Sequences. NeurIPS, 2023.
[5] Chi et al. Universal Manipulation Interface: In-The-Wild Robot Teaching Without In-The-Wild Robots. RSS, 2024.
[6] Ghosh et al. Octo: An Open-Source Generalist Robot Policy. RSS, 2024.
[7] Kim et al. OpenVLA: An Open-Source Vision-Language-Action Model. CoRL, 2024.
--- Rebuttal Comment 1.1: Comment: Thank you to the authors for their response. I still recommend acceptance.
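The multi-demonstration extension described in the rebuttal above (compute the reward against each demonstration, then max-pool across demonstrations at every timestep) can be sketched as follows. This is a minimal illustration, not the authors' code: `reward_fn` stands in for any single-demonstration reward (such as an ORCA-style coverage reward), and all names are assumptions.

```python
import numpy as np

def multi_demo_reward(rollout_feats, demo_feats_list, reward_fn):
    """Max-pool a per-demonstration reward across several demonstrations.

    rollout_feats: features of the learner rollout (length T).
    demo_feats_list: one feature array per demonstration video.
    reward_fn: any single-demo reward, returning a length-T array.
    """
    per_demo = np.stack([reward_fn(rollout_feats, demo)
                         for demo in demo_feats_list])
    # at each timestep, keep the reward of the best-matching demonstration
    return per_demo.max(axis=0)
```

Because max-pooling can only keep or raise the per-timestep reward, adding demonstrations never lowers it under this scheme, which is consistent with the reported improvement as the number of demonstrations grows.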
Summary: This paper studies how to provide a policy with rewards using a single video. The paper argues that temporal misalignment may occur due to pauses in the video. The authors design an algorithm that calculates the probability that the policy's frame at time t corresponds to the video's frame at time j, thereby providing better rewards. Experiments on MetaWorld validate the effectiveness of the proposed method. Claims And Evidence: For the results of Metaworld, it seems that none of the baseline methods are effective, and the significant improvement of ORCA appears to be due to the weakness of the baselines. Methods And Evaluation Criteria: In many cases, we have more than one video. The article can be extended to multiple events and situations and compared with a better baseline, such as LIV: Language-Image Representations and Rewards for Robotic Control. Theoretical Claims: No theory in this paper. Experimental Designs Or Analyses: I have reviewed Table 1 and Table 2, and both seem to have a significant advantage over the baseline ToT. However, I am not sure whether the reward proposed in the paper can be widely applied. Supplementary Material: No. Relation To Broader Scientific Literature: The paper may propose a highly general reward function that can provide rewards based on a video. Essential References Not Discussed: Some works that generate rewards based on text-video pairs rather than a single video have not been discussed. In comparison, such works seem to be more scalable. LIV: Language-Image Representations and Rewards for Robotic Control Other Strengths And Weaknesses: Strengths 1. The motivation of the paper is well-developed, with thorough and thoughtful reasoning. Weaknesses 1. The experiments in the paper are not sufficiently comprehensive. I would expect to see the application of the reward model in more realistic scenarios.
For example, in the Libero benchmark, can the reward model be used to perform RL to enhance a pre-trained diffusion policy? You can refer to PARL for performing RL for a diffusion policy. PARL: A Unified Framework for Policy Alignment in Reinforcement Learning from Human Feedback Other Comments Or Suggestions: No Questions For Authors: This method does not seem to require training. Is it scalable? How does it handle multiple videos? Will having more videos enhance the effect? How is the reward calculation time? For very long videos exceeding 300 frames, will computing the distance matrix each time be very time-consuming? How much time does it take? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's valuable feedback on how we can strengthen our work. We answer the questions and address the concerns below. # Questions ## Q: “Can ORCA improve with multiple videos?” ORCA's performance improves with more demonstration videos. We kindly refer the reviewer to our reply to reviewer YcsA for the experiment setup and detailed results. ## Q: “What is the reward calculation time?” ORCA is significantly faster than TemporalOT and Roboclip, and is comparable to all other frame-level matching baselines, even for very long videos. A detailed analysis of ORCA’s runtime is in our reply to reviewer SsGe. ## Q: "Can ORCA enhance a pre-trained diffusion policy via RL?" Yes, ORCA rewards can be applied to any pretrained policy, as shown by its effectiveness at improving policies pretrained with TemporalOT. Prior work [1] has explored using an OT reward to finetune a behavior cloning policy, and ORCA can easily be extended to this framework. However, as we focus on evaluating different reward formulations, we leave different pretraining strategies as interesting future work. # Concerns ## Clarification on existing baselines' performance Our baselines have high normalized return given temporally aligned demonstrations (OT: 0.47, TemporalOT: 0.43). However, these baselines degrade when the demonstration is misaligned. In contrast, ORCA performs the best on both aligned (0.57) and misaligned (0.50) demonstrations. We include additional baselines below, which are more competitive under misalignment. ## Additional Baselines We emphasize that we focus on learning a policy from a single video demonstration, and we consider text-based guidance to be an interesting problem for future work. Based on the reviewer’s suggestions, we introduce two new baselines. ### Text-Conditioned Reward Formulation: LIV [2] LIV is a vision-language model that can be used to calculate rewards for a learner video given a text description. 
For this baseline, we follow the implementation of FuRL [3], which closely resembles our IRL setup. The table below shows the average normalized return of LIV on all easy and medium Metaworld tasks. While outperforming TemporalOT, LIV performs significantly worse compared to ORCA. LIV uses a single text goal to describe the task, whereas ORCA is conditioned on a sequence of images, making ORCA denser and better at capturing details.

| Category | Task | GAIfO [4] (state-based obs) | LIV [2] (same setup as [3]) | TemporalOT | **ORCA** |
|---|---|---|---|---|---|
| Easy | Button-press | 0.12 (0.04) | 0.00 (0.00) | 0.10 (0.02) | **0.62 (0.11)** |
| Easy | Door-close | 0.34 (0.04) | 0.34 (0.09) | 0.19 (0.01) | **0.88 (0.01)** |
| Medium | Door-open | 0.22 (0.05) | 0.00 (0.00) | 0.08 (0.01) | **0.89 (0.13)** |
| Medium | Window-open | 0.27 (0.09) | 0.72 (0.19) | 0.26 (0.05) | **0.85 (0.16)** |
| Medium | Lever-pull | 0.10 (0.03) | 0.00 (0.00) | 0.07 (0.03) | **0.28 (0.09)** |
| | Total | 0.21 (0.03) | 0.21 (0.05) | 0.14 (0.01) | **0.70 (0.05)** |

We also point out ORCA's strength: it works with any visual encoder, including LIV. In our reply to reviewer SsGe, we evaluate ORCA with LIV as the encoder.

### IRL Baseline: GAIfO [4]

We include GAIfO, a classical IRL algorithm. It trains a discriminator, alongside the policy, to differentiate state transitions between the learner and demonstration distributions and uses it as the reward function. Due to compute limitations, we use a state-based demonstration instead of a video. In contrast, ORCA and all existing baselines use video demonstrations. While GAIfO is a competitive baseline (outperforming TemporalOT), it still performs significantly worse compared to ORCA. We hypothesize that, because there is only one demonstration, GAIfO could fixate on inconsequential details instead of estimating the task reward.
## The experiments are not sufficiently comprehensive Prior works use Metaworld for all experiments [3, 5, 6], and we test all of the tasks in [5], the most closely related work. We additionally include 4 more difficult control tasks in the Humanoid environment to demonstrate the viability of ORCA in a variety of environments and conditions. ## Related works that learn from text-video pairs We thank the reviewer for pointing out related work in learning from text-video pairs, and we will update the related work to include LIV, as well as other video learning methods suggested by reviewer xJms. [1] Watch and match: Supercharging imitation with regularized optimal transport. CoRL, 2024. [2] LIV: Language-image representations and rewards for robotic control. ICML, 2023. [3] FuRL: Visual-Language Models as Fuzzy Rewards for Reinforcement Learning. ICML, 2024. [4] Generative Adversarial Imitation from Observation. ICML, 2019. [5] Robot policy learning with temporal optimal transport reward. NeurIPS, 2024. [6] Imitation learning from observation with automatic discount scheduling. ICLR, 2024.
Summary: This paper focuses on learning sequential tasks from a single temporally misaligned video, which belongs to the imitation learning paradigm. They propose a novel reward function, ORCA, which measures matching at the sequence level to ensure that the agent covers all subgoals in the correct order. Experiments show that, compared with the best frame-level methods, ORCA performs better on several tasks from Meta-world and Humanoid. Claims And Evidence: clear Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: yes Relation To Broader Scientific Literature: none Essential References Not Discussed: no Other Strengths And Weaknesses: Paper Strengths This paper proposes a novel reward function in which it provides rewards by calculating the probability that the agent covers all subgoals in the correct order, enhancing the performance of imitation learning with a time-shifted demonstration. Through rigorous mathematical proof, this paper shows that ORCA can overcome the failure cases of traditional methods in subgoal ordering and coverage. This paper achieved superior results on different tasks, and the selection of tasks is sufficient (Easy-Medium-Hard). Paper Weaknesses The effectiveness of ORCA can be further clarified. ORCA is a dense reward function implemented by dynamic programming. Thus, how about the training wall-clock time of ORCA? Is it time-consuming? The importance of pretraining is confusing. In Sec. 4.3, the authors explain that they pretrain the agent with TemporalOT and then train with the proposed ORCA. However, Table 1 and Table 2 show that ORCA without pretraining might outperform ORCA in some tasks, which is inconsistent. Does this mean that ORCA has higher quality requirements for pretraining? It's necessary to analyze when it's better to introduce pretraining; otherwise it may indicate that ORCA would be difficult to deploy in the real world.
The acquisition of the distance function d(∙,∙) can be further explored. In the paper, the authors utilize ResNet50 to extract visual features and then compute cosine similarity. In Fig. 10, it's found that off-the-shelf visual encoders produce noisy rewards in Mujoco, and they clarify that this may require further online fine-tuning. Will different structures or scales of visual encoders (such as CLIP, R3M or MOCO) have an impact on ORCA in Meta-world or Humanoid? Please refer to R1 for details. R1: Hu Y, Wang R, Li L E, et al. For pre-trained vision models in motor control, not all policy learning methods are created equal[C]//International Conference on Machine Learning. PMLR, 2023: 13628-13651. The applicability of ORCA may need further clarification. In Fig. 4, the authors show that ORCA performs worse when the demonstrations are slowed down and longer than the learner trajectory. Does this mean that slow demonstrations would affect the quality of skill learning and lead to slow speed during inference? ORCA could reach SOTA with only one video. So, will more demonstrations lead to better results? As mentioned in R2, the proposed method fails when multiple demonstrations are given. Thus, when more demonstrations are provided, especially when the speeds of different demonstrations are different (slow and fast), will the performance of ORCA be affected? R2: Sontakke S, Zhang J, Arnold S, et al. Roboclip: One demonstration is enough to learn robot policies[J]. Advances in Neural Information Processing Systems, 2023, 36: 55681-55693. The writing can be improved. For example, the reference of 'Table 4.3' in Line 329 and 'Table 2' in Line 311 should be unified. Other Comments Or Suggestions: see Strengths And Weaknesses Questions For Authors: see Strengths And Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
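The distance function d(∙,∙) discussed in this review (ResNet50 features followed by cosine similarity) amounts to a pairwise cosine-distance matrix between frame embeddings. The sketch below is an illustrative reconstruction assuming precomputed embeddings, not the authors' implementation.

```python
import numpy as np

def cosine_distance_matrix(learner_feats, demo_feats):
    """Pairwise cosine distance between frame embeddings.

    learner_feats: (T, D) features for T learner frames (e.g. ResNet50).
    demo_feats: (N, D) features for N demonstration frames.
    Returns a (T, N) matrix; 0 means identical direction, 2 opposite.
    """
    a = learner_feats / np.linalg.norm(learner_feats, axis=1, keepdims=True)
    b = demo_feats / np.linalg.norm(demo_feats, axis=1, keepdims=True)
    return 1.0 - a @ b.T
```

Swapping the encoder (CLIP, R3M, MOCO, etc.) only changes how `learner_feats` and `demo_feats` are produced; the (T, N) matrix itself is what any frame-level matching reward consumes.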
Rebuttal 1: Rebuttal: We are grateful for the reviewer’s deep engagement with our paper, and we thank them for feedback that strengthens its clarity. We will update the paper to unify our references to tables. Please find below our responses to questions and concerns. # Questions ## Q: "When should pre-training be performed with ORCA?" We agree that there is a balance between initializing the policy in a good basin via pretraining and directly optimizing ORCA rewards. We recommend using pretraining because it results in the best and most consistent performance. Given a temporally misaligned demonstration, ORCA outperforms all baselines on all Metaworld and Humanoid tasks, not including the ablation ORCA (No Pretraining). Additionally, given a temporally aligned demonstration, ORCA still consistently outperforms baselines on 8/10 tasks, not including ORCA (No Pretraining). The 2 tasks that ORCA does not work well on (push and basketball) are difficult for all methods. Overall, ORCA is a consistent choice for good performance, given both temporally misaligned and aligned demonstrations. ## Q: "Will more demonstrations lead to better results? What happens when different demonstrations have different speeds?" We show that having access to more demonstrations leads to better ORCA performance, even when these demonstrations have significantly different speeds. We kindly refer the reviewer to our reply to reviewer YcsA for the experiment setup and detailed results. ## Q: "What's the training wall-clock time of ORCA?" Empirically, the latency of ORCA reward calculation is significantly less than both TemporalOT [2] and RoboCLIP [3], and is almost identical to optimal transport, DTW, and threshold. The table below shows the average latency (ms) of different reward computation methods. We evaluated 100 rollouts with a demonstration of length 100 and learner rollouts of length 100 and 300, respectively. Experiments were performed on an NVIDIA RTX A6000 GPU. 
ORCA is 24% faster than TemporalOT and 52% faster than RoboCLIP given rollouts of length 100, and it is comparable to threshold, OT, and DTW. LIV is the fastest method because it does not consider demonstration frames, but this leads to poor performance on most tasks. | method | LIV | OT | Threshold | ORCA | DTW | TemporalOT | Roboclip | |---|---|---|---|---|---|---|---| | 100 rollout frames | 38.03 ± 4.69 | 54.03 ± 0.35 | 54.69 ± 3.81 | 56.93 ± 0.33 | 58.82 ± 0.37 | 75.12 ± 1.15 | 118.21 ± 17.94 | | 300 rollout frames | 129.96 ± 3.29 | 170.17 ± 0.44 | 170.95 ± 4.16 | 179.85 ± 0.62 | 185.27 ± 0.42 | 228.46 ± 1.95 | 300.17 ± 31.30 | ORCA’s time complexity is the learner rollout horizon multiplied by the length of the demonstration, which is also the lower bound time complexity of any method that relies on a frame-level distance matrix (including all OT-based methods). ## Q: "How does ORCA perform with different visual encoders?" The choice of encoder is a practical, task-dependent choice, and not the main focus of this work. In Metaworld, we followed prior work [2] and used a pretrained Resnet50. In Humanoid, we finetuned the encoder on a set of images from the simulation environment. We include below an additional ablation of ORCA with LIV [4], a robotics-specific visual encoder, and DINOv2 [5], a standard vision model. Overall, Resnet50 achieves the best performance, although there is high variability. This substantiates the observations in [1]: "evaluation [of visual encoders] based on RL methods is highly variable." | Task | ORCA+Resnet50 (26M) | ORCA+LIV (100M) | ORCA+DINOv2-L (300M) | |---|---|---|---| | Door-open | **1.71 (0.08)** | 1.10 (0.06) | 0.44 (0.12) | | Window-open | 0.50 (0.14) | **1.17 (0.15)** | 0.65 (0.10) | | Lever-pull | **0.28 (0.09)** | 0.04 (0.01) | 0.16 (0.06) | | Total | **0.83 (0.09)** | **0.77 (0.08)** | 0.42 (0.06) | ## Q: "Do slow demonstrations affect learning and lead to a slow policy?" 
Given slow demonstrations, TemporalOT performs poorly because it distributes the assignment over more frames. Thus, it misses important details, which causes ORCA to have similar failure modes because it is initialized in a worse basin. Nevertheless, there are no temporal constraints in the ORCA formulation, so ORCA policies trained on slow demonstrations complete tasks at a similar speed compared to fast demonstrations ([as shown in the figure from an anonymized link](https://raw.githubusercontent.com/orcaicml/ICML_Rebuttal_2025/refs/heads/main/image.png)). [1] For pre-trained vision models in motor control, not all policy learning methods are created equal. ICML, 2023. [2] Robot policy learning with temporal optimal transport reward. NeurIPS, 2024. [3] Roboclip: One demonstration is enough to learn robot policies. NeurIPS, 2023. [4] LIV: Language-image representations and rewards for robotic control. 2023. [5] DINOv2: Learning Robust Visual Features without Supervision. TMLR, 2024.
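For concreteness, the frame-level distance matrix that underlies the latency and complexity discussion in this rebuttal can be sketched as follows. This is a minimal sketch, not the authors' implementation: it assumes frame features have already been extracted by a visual encoder (e.g. ResNet50), and the function name is hypothetical.

```python
import numpy as np

def frame_distance_matrix(learner_feats, demo_feats):
    """Cosine-distance matrix between learner rollout frames (T, d)
    and demonstration frames (M, d). Any reward that consumes this
    matrix (OT, DTW, sequence-level matching) pays at least O(T * M)
    per rollout, matching the lower bound noted in the rebuttal."""
    a = learner_feats / np.linalg.norm(learner_feats, axis=1, keepdims=True)
    b = demo_feats / np.linalg.norm(demo_feats, axis=1, keepdims=True)
    # cosine similarity via normalized dot products; distance = 1 - similarity
    return 1.0 - a @ b.T  # shape (T, M); 0 means identical feature direction
```

Swapping the encoder (CLIP, R3M, MOCO, etc.) only changes how `learner_feats` and `demo_feats` are produced; the reward machinery on top of the matrix is unchanged.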
EcoMapper: Generative Modeling for Climate-Aware Satellite Imagery
Accept (poster)
Summary: This paper introduces two generative models for the generation of controllable satellite images. The models, based upon Stable Diffusion 3, enable the controlled generation of satellite images with several control types, including image-conditioned generation, spatiotemporal conditioning (location, date, land cover type, clouds), climate control (temperature, radiation, precipitation), and a novel combination of these factors. For training these models, the paper introduces a novel dataset of 2.7 million images based on Sentinel-2, spanning more than 100k geographical locations. The model is evaluated against several baselines, across relevant metrics for image generation quality and alignment. The contributions of this paper are said to advance climate-aware earth observation. ## update after rebuttal I have read the rebuttal and other reviews. The authors have adequately addressed my concerns, and have provided a solid rebuttal overall. I am more positive about this paper and have increased my rating. Claims And Evidence: The paper makes two main claims. First, that it introduces a dataset for the training of satellite image generation models. This claim is supported by evidence, provided particularly in the supplementary material, although more analyses would be welcome. Second, the paper introduces two different generative models to solve the aforementioned tasks. These are presented in the paper, and relatively well evaluated, but I am not convinced that they are particularly novel in the context of generative models. Methods And Evaluation Criteria: The methods and evaluation criteria make sense in the context of the problem; I have no significant concerns about this (see my comments in Experimental Designs Or Analyses for a more thorough analysis of the evaluation part of this section). Nevertheless, I remain unconvinced about the novelty of the model introduced by this paper.
The conditioning mechanisms used in this paper are well studied in the literature, and using them for a new task does not constitute a novel technical contribution, in my opinion. Theoretical Claims: There are no significant theoretical claims in this paper; it is very application-driven. Experimental Designs Or Analyses: I checked the experimental analyses in the paper. I found them to be sound in the context of the problem. The method is evaluated against several relevant baselines, across several metrics. Note that the metrics used in this paper are common for generative models (FID, PSNR, LPIPS, SSIM) and for text-conditioned models (CLIP). However, none of these metrics are particularly well-suited for the task this paper is trying to solve. I would suggest using SatCLIP as an additional metric for enhancing the quantitative results. Further, the paper provides extensive qualitative results across different conditionings, showing the capabilities of the model. In any case, it would be interesting to do a sensitivity analysis: study how much the images in the dataset change when the climate or temporal variables change, and compare with how much the images generated by the model change. Supplementary Material: I reviewed all the supplementary material. It contains valuable information about the characteristics of the dataset, particularly on the distribution of land cover types and continent distributions, as well as their relationship to climate. In it, it is found that the Global South is well represented; however, I am puzzled by the lack of urban and rural but inhabited lands. This somewhat limits the scope of the dataset, in my opinion. The extra details in the supplementary material provide valuable information for reproducibility. Relation To Broader Scientific Literature: To my eyes, this paper introduces a dataset and generative models for a somewhat novel task, namely the climate-controllable generation of satellite imagery.
This was underexplored in the literature. Essential References Not Discussed: I am missing a reference to SatCLIP, which I find to be closely related to the contents of this paper. SatCLIP: Global, General-Purpose Location Embeddings with Satellite Imagery (Klemmer et al., 2023). Other Strengths And Weaknesses: - The paper is well structured, well written, and the quality of the figures is generally high. - Analysis of related work is sound. - My main concern with this paper is the lack of technical novelty. While the dataset may prove valuable, the methods introduced in this paper are hardly novel, and therefore the impact of this paper may be limited to using the dataset or the generative models as tools. This limits the scope of the paper quite strongly, in my opinion. Nevertheless, this is an applications-driven paper and, as such, I believe it could have some impact. Other Comments Or Suggestions: - I suggest the authors expand this dataset with more human-inhabited areas. Questions For Authors: - What is the influence of scale and resolution on the model outputs when using image conditioning? Does it affect the outputs strongly? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Reviewer comment:** I am missing a reference to SatCLIP, which I find to be closely related to the contents of this paper. **Response:** Thank you for this helpful suggestion. We agree that SatCLIP (Klemmer et al., 2023) is highly relevant and will include a citation and brief discussion in the revised version of the paper. We considered using SatCLIP in our evaluation. However, the model is designed for multi-spectral Sentinel-2 inputs (13 channels), while our dataset consists of standard RGB images (3 channels). To make our images compatible, we would need to artificially pad the missing bands, which introduces noise and may result in embeddings that do not reflect SatCLIP’s original intent. We nonetheless tested SatCLIP for comparison, and the results support our concerns. As shown below, even the ground truth RGB images receive low (negative) SatCLIP scores, suggesting a mismatch between the model's input expectations and our dataset. **Table 7: SatCLIP Scores on RGB Imagery** | Model | Avg. SatCLIP Score | |-|-| | SD3_FT_HR | -0.0148 | | **SD3_FT** | **-0.0105** | | Diffsat_FT | -0.0285 | | Ground Truth | -0.0171 | These results suggest that SatCLIP is not well-suited for RGB-only satellite imagery, which is why we chose not to use it as a core evaluation metric. But we are happy to receive the reviewer's feedback on this. **Reviewer comment:** I remain unconvinced about the novelty of the model introduced by this paper. The conditioning mechanisms used in this paper are well studied in the literature, and using them for a new task does not constitute a novel technical contribution, in my opinion. **Response:** Thanks for the fair comment. As an application-focused paper, our aim is to demonstrate the feasibility of climate-aware satellite image synthesis using established models. Our contribution lies in adapting them for multi-conditional generation with structured climate prompts and continuous variables.
We also added more discussion about the prompting strategy as outlined in the reply to reviewer 54H1, as we think this is a valuable contribution to the development of future climate-aware generative models for EO data. **Reviewer comment:** I am puzzled by the lack of urban and rural but inhabited lands. This somewhat limits the scope of the dataset, in my opinion. **Response:** Thank you for raising this point. Our dataset was constructed through globally uniform random sampling and reflects the class distribution of the underlying land cover map, which defines only 16 categories—of which “Urban and Built-Up” is the only inhabited land type explicitly represented. We did not manually adjust class frequencies, and the share of urban areas (~0.7%) aligns with the fraction of the global land surface covered by this class. While inhabited areas are important, our primary focus is on landscapes where climate conditions are a key driver of visual and environmental variation. In urban settings, land cover changes are often dominated by human interventions (e.g., new buildings, roads), which are less predictable from climate variables alone. For this reason, we consider urban and densely inhabited areas to be of secondary relevance in the current version of the dataset. **Reviewer question:** What is the influence of scale and resolution in the model outputs when using image conditioning? Does it affect the outputs strongly? **Response:** Thank you for this technical question. Our ControlNet-based model operates at a fixed resolution of 512×512, which is a downscaled version of SD3’s native 1024×1024. During fine-tuning, we adjusted only the last layers, allowing the model to adapt its pre-trained high-resolution features to the lower resolution. In practice, using higher-resolution conditioning inputs could improve spatial detail and texture fidelity.
However, since the model is trained entirely at 512×512, feeding 1024×1024 inputs without re-training could introduce inconsistencies or degrade visual quality due to the resolution mismatch. We agree that training at higher resolutions would likely enhance fine-grained spatial accuracy and believe it is a promising topic for future research activities. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal and other reviews. The authors have adequately addressed my concerns, and have provided a solid rebuttal overall. I am more positive about this paper and will increase my rating. --- Reply to Comment 1.1.1: Comment: We are pleased that we were able to address the concerns, thank you for the kind feedback and we are happily awaiting the rating upgrade.
Summary: This paper extends previous work on satellite image generation by introducing a larger dataset and considering two generation scenarios - text2img and ControlNet. Quantitative comparison shows superior performance only in FID but not in other metrics (i.e., CLIP, SSIM, PSNR, LPIPS). Claims And Evidence: This paper presents EcoMapper, a large-scale satellite imagery dataset integrated with climate data. While the dataset's size is notable, the paper's primary contribution and the novelty of its proposed prompting setups remain unclear. The prompting methods, as presented, appear incremental. Furthermore, while potential applications like forecasting and scenario analysis are mentioned, the paper lacks concrete evidence demonstrating the utility of the generated imagery for these downstream tasks. Demonstrating the practical benefits of EcoMapper's generative capabilities would significantly enhance the paper's impact. Methods And Evaluation Criteria: Since it is a generative modeling task, the presented evaluation is standard. Theoretical Claims: NA Experimental Designs Or Analyses: One way to "convince" the reader of the contribution of this work in forecasting is predicting held-out or future images. Say the model had stopped training for one month; I would then wonder how good or bad this model is at forecasting future imagery, conditioned on future climate data (which is more mature and well-studied). In addition, a detailed analysis of why the model performs well or poorly can be very helpful for future work on satellite imagery generation. Stratified analysis, such as under what conditions the model performs worse (e.g., extreme hot weather or snowy scenes), can also provide more insights to this community. I am very willing to raise my score if this experiment is provided.
Supplementary Material: All Relation To Broader Scientific Literature: NA Essential References Not Discussed: This work can be helpful for cloud removal in satellite imagery, and the authors could consider adding more references in this domain, such as SEN12MS-CR-TS (TGRS'22), AllClear (NeurIPS'24), or DiffCR (TGRS'24). It would also be helpful if the authors discussed concurrent work such as CRS-Diff (TGRS'24) and MetaEarth (TGRS'24). Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed and constructive feedback. In the following, we address the key points you raised. **Reviewer comment:** While the dataset's size is notable, the paper's primary contribution and the novelty of its proposed prompting setups remain unclear. **Response:** The prompting strategy plays a key role in our approach. We tested several strategies for conditioning the model on climate and spatial context. These experiments are detailed in our response to Reviewer 54H1, where we present a breakdown of different prompt setups and their impact on generation quality. We kindly refer you to that section for a more in-depth explanation. **Reviewer comment:** One way to demonstrate forecasting value would be to evaluate the model on hold-out or future satellite images conditioned on climate data. **Response:** We conducted additional experiments to address this point. We evaluated all models on satellite images from 2023 and 2024, using the same 5,500 test locations as in the original setup. These years were not part of the training period (2017–2022), establishing a strict temporal split. The results (see below) show stable and consistent performance as in the other testings before, indicating that our models generalize well to unseen time periods. We will include this experiment, along with results for the single-image generation setup, in the final version of the paper. The 2023–2024 test data will be added to the dataset. **ControlNet – Year-wise Testset Results** |Year|CLIP↑|SSIM↑|PSNR↑|LPIPS↓|FID↓|Dataset share %| |-|-|-|-|-|-|-| |2017|0.36|0.40|13.33|0.61|102.14|8.5%| |2018|0.35|0.41|14.00|0.58|84.12|12.6%| |2019|0.36|0.41|13.90|0.58|89.52|12.4%| |2020|0.35|0.40|13.40|0.59|86.37|12.5%| |2021|0.33|0.40|13.73|0.58|83.30|12.1%| |2022|0.33|0.40|13.27|0.58|77.47|14.8%| |2023|0.35|0.43|14.09|0.59|93.21|9.8%| |2024|0.32|0.40|13.48|0.59|73.54|17.3%| We want to briefly clarify the use of the Fréchet Inception Distance (FID). 
FID is known to be less reliable on small test sets, as it estimates distribution similarity based on feature mean and covariance from a pre-trained Inception network. Limited sample size can introduce noise and variance in these estimates. We interpret FID cautiously and emphasize the importance of using it alongside other metrics to ensure a more robust evaluation. **Reviewer comment:** A detailed analysis of when and why the model performs well or poorly would be helpful. Stratified evaluation (e.g., under extreme weather conditions) could offer valuable insights. I am very willing to raise my score if this experiment is provided. **Response:** Thank you for this important suggestion. To address this, we performed additional stratified experiments focusing on extreme climate conditions, evaluating model performance across scenarios of high/low temperature, precipitation, and radiation. These weather extremes are closely tied to challenging land cover types (e.g., snowy forests, cloud-covered areas), which tend to be underrepresented in the training set and inherently harder to reconstruct. In contrast, warmer and drier regions — which are more common in the data — yield more accurate generations.
The results below illustrate that low temperatures, high precipitation, and low radiation correspond to significantly lower visual fidelity and structural alignment, while high radiation and dry conditions lead to much better performance **Table 10: SD3_FT Performance Under Extreme Climate Conditions** |Condition|FID↓|SSIM↑|PSNR↑|LPIPS↓|Inception Score↑|CLIP Score↑| |-|-|-|-|-|-|-| |High Temperature|115.33|0.475|15.52|0.640|3.790|0.353| |Low Temperature|145.11|0.260|11.79|0.756|3.745|0.355| |High Precipitation|170.81|0.365|11.93|0.725|3.261|0.403| |Low Precipitation|85.63|0.430|15.10|0.665|4.185|0.339| |High Radiation|107.35|0.469|15.83|0.646|3.909|0.362| |Low Radiation|141.37|0.252|11.65|0.770|3.941|0.327| We also provide a detailed per–land cover class analysis in our response to Reviewer WAR9 (ControlNet) **Reviewer comment:** The reviewer suggests citing additional related work, including SEN12MS-CR-TS, AllClear, DiffCR, CRS-Diff, and MetaEarth. **Response:** We acknowledge the growing number of generative models and datasets in EO and will take up these works in the discussion section of our paper and include references in the revised version. SEN12MS-CR-TS and AllClear focus on cloud removal using multi-modal Sentinel-1/2 data but are limited either temporally (e.g., only 2022) or regionally. DiffCR uses diffusion for cloud removal but lacks temporal and climate conditioning. CRS-Diff introduces multi-modal control but operates at different resolutions and without climate or multi-temporal information. MetaEarth enables large-scale image generation but relies solely on image and geographic metadata. In contrast, our work uniquely combines image and continuous climate conditioning for globally distributed, climate-aware satellite image synthesis. --- Rebuttal Comment 1.1: Comment: The authors have addressed my concerns with the additional experiments on temporally held-out experiments and stratification outcomes. I will raise my rating. 
--- Reply to Comment 1.1.1: Comment: Thank you for your kind feedback and thank you for the rating upgrade.
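As a side note on the FID caveat raised in the rebuttal above: FID is the Fréchet distance between Gaussians fitted to Inception features, so a small test set makes the mean and covariance estimates noisy. The following is a minimal sketch of that computation, not the authors' evaluation code; it assumes the image features have already been extracted, and the function name is hypothetical.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two feature sets
    of shape (n_samples, dim); with Inception-v3 features this is FID.
    The mean/covariance estimates get noisy when n_samples is small,
    which is the small-test-set caveat discussed in the rebuttal."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_a - mu_b
    # ||mu_a - mu_b||^2 + Tr(cov_a + cov_b - 2 (cov_a cov_b)^{1/2})
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

Because the formula depends only on first- and second-moment estimates, two small feature samples from the same distribution can still yield a nonzero distance, which is why the rebuttal pairs FID with SSIM, LPIPS, and other metrics.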
Summary: This paper introduces EcoMapper, which combines climate data with satellite imagery based on Sentinel-2 images. Satellite images often face observational challenges, such as areas affected by cloud cover and inherent resolution limitations that can hinder accurate analysis. To overcome these issues, the authors propose a text-prompt-based generative model that leverages both spatial and climate information to synthesize realistic satellite images. This approach promises to enhance the utility of satellite data in applications like environmental monitoring, climate change prediction, and agricultural planning. Claims And Evidence: The authors claim that satellite images can be generated using text prompts containing spatial and climate information, and that the control image plays a crucial role in preserving the spatial structure. To support this claim, it would be beneficial to include experiments that analyze the generated images based on control images taken from different timelines of the same region. Methods And Evaluation Criteria: The authors conducted experiments using a sufficiently large Sentinel-2 image dataset and evaluated the quality of the generated images based on various quantitative metrics such as FID, PSNR, and SSIM. These performance evaluation methods are appropriate. However, given the unique challenges of the satellite imagery domain, it would be valuable to include assessments that measure how closely the generated images match real-world conditions. Theoretical Claims: In this paper, no new theoretical proofs or claims are directly addressed. Instead, the model is built upon existing research, such as Stable Diffusion, and its performance is verified through experiments. Experimental Designs Or Analyses: The number of experiments seems insufficient to fully validate the model's overall generalization capabilities. 
In particular, there is a lack of analysis for complex regions, such as urban areas, where the data and spatial structures are more challenging. Supplementary Material: The authors did not submit any additional supplementary material. Relation To Broader Scientific Literature: This paper presents a new text prompt-based approach capable of reflecting various environmental conditions. The authors propose a method to generate conditioned satellite images by simultaneously incorporating spatial information and climate data. Essential References Not Discussed: There do not appear to be any essential references missing. Other Strengths And Weaknesses: Strengths: The paper introduces a deep learning model for generating realistic satellite images, which has potential applications for satellite data. Weaknesses: 1. Limited ablation studies on control images: The paper does not provide detailed ablation studies on the selection and impact of different control images. This analysis is essential to determine the robustness of the approach. 2. Text prompt methodology: The authors incorporate text prompts to condition the generative process; however, there is insufficient discussion on how to concretely define effective text prompts, and experiments evaluating which types of text prompts yield the best performance are lacking. 3. Overall structure and generalizability: The overall structure does not appear particularly novel, and there are concerns regarding the generalizability of the approach. Other Comments Or Suggestions: No additional comments. Questions For Authors: Questions are embedded in the above section. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. We would like to address the main criticisms: **Reviewer:** The paper does not provide detailed ablation studies on the selection and impact of different control images. **Response:** While it's not fully clear what is meant by "selection and impact" of control images, we interpret this as a question of robustness. To address this, we conducted further analysis of the ControlNet model's performance across land cover types (2017–2024 test set). Results show that classes with higher representation tend to generalize better, while less frequent classes are more challenging to generate consistently. **ControlNet land cover types** |Land Cover|CLIP↑|SSIM↑|PSNR↑|LPIPS↓|FID↓|Dataset %| |-|-|-|-|-|-|-| |Grasslands|0.33|0.40|14.06|0.60|65.27|23.7%| |Savannas|0.33|0.34|10.55|0.65|94.00|21.1%| |Barren/Sparse Veg.|0.32|0.60|20.18|0.40|67.44|16.1%| |Open Shrublands|0.34|0.41|15.95|0.58|88.25|9.9%| |Evg. Broadleaf Forests|0.37|0.39|9.13|0.68|122.37|9.9%| |Woody Savannas|0.33|0.36|10.32|0.64|109.39|9.1%| |Croplands|0.34|0.29|12.91|0.60|100.48|9.0%| |Mixed Forests|0.36|0.29|11.11|0.65|148.46|3.6%| |Dec. Broadleaf Forests|0.42|0.33|11.81|0.69|172.39|2.1%| |Evg. Needleleaf Forests|0.44|0.27|10.88|0.68|169.07|2.0%| |Crop/Nat. Veg. Mosaics|0.42|0.35|12.16|0.62|214.63|1.1%| |Wetlands|0.50|0.38|8.61|0.67|239.32|0.8%| |Urban/Built-Up|0.66|0.29|12.01|0.55|214.36|0.7%| |Closed Shrublands|0.47|0.18|8.28|0.62|215.98|0.4%| |Dec. Needleleaf Forests|0.52|0.28|12.17|0.65|297.67|0.2%| **Reviewer:** *It would be valuable to include assessments that measure how closely the generated images match real-world conditions.* **Response:** We use established metrics such as FID, SSIM, and LPIPS, which are widely applied in both general and remote sensing image generation tasks — including in DiffusionSat (Khanna et al., 2023), which serves as a foundation for our work. This helps ensure that both visual quality and geographic realism are captured.
We acknowledge that downstream evaluation would be a valuable addition. We kindly refer to our more detailed answer to reviewer WAR9. **Reviewer:** No supplementary material was provided. **Response:** We would like to clarify that we included supplementary material starting on page 12 of the main submission. **Reviewer:** Text prompt methodology: The paper lacks sufficient discussion and comparison of different prompting strategies. **Response:** We appreciate this valid point and have extended our appendix to include details on the development and evaluation of our prompting strategies. Across all experiments, the CLIP encoder input was kept constant using a short spatial prompt to ensure consistent image-text alignment. Variations were introduced only in the T5 encoder input, which handled climate-related information. We evaluated the following strategies: - **Numerical Climate Data + Short Prompt** (**best performance**) Combined a short spatial prompt with continuous climate variables (e.g., temperature, precipitation, solar radiation). - **Categorical Climate Data + Short Prompt** Climate variables were discretized in the prompt into human-readable categories (e.g., “hot,” “moderate,” “extreme precipitation”). - **Numerical Climate Data + Short Prompt with dropout** Introduced stochasticity by randomly dropping parts of the spatial and temporal metadata from the prompt. - **Categorical Climate Data Only** Used only categorical climate descriptors without the spatial prompt, encouraging the model to rely entirely on environmental signals. Due to computational constraints, we leave a broader investigation of prompting strategies to future work and consider our results here as an initial guide for further exploration. Below we report results for the first three prompting strategies under comparable conditions.
| FID | Inception Score | CLIP | LPIPS | PSNR | SSIM | Prompting # | |-:|-:|-:|-:|-:|-:|-| |68|4.7|0.33|0.66|13.11|0.35|1| |72|4.8|0.33|0.69|11.90|0.35|2| |94|4.5|0.35|0.71|13.30|0.35|3| **Reviewer comment:** The number of experiments seems insufficient to fully validate generalization, especially for complex regions like urban areas. The overall structure does not appear novel **Response:** As an application-focused paper, our aim is to demonstrate the feasibility of climate-aware satellite image synthesis using established models. Our contribution lies in adapting them for multi-conditional generation with structured climate prompts and continuous variables. While our dataset includes some urban areas due to global sampling, our focus is on climate-sensitive landscapes where environmental change is linked to climate inputs. Urban dynamics, in contrast, are typically shaped by human decision-making and fall outside the primary scope of this work. For generalization, we refer to our detailed response to Reviewer WAR9, where we introduce a new temporal evaluation on 2023 and 2024 data and report consistent results.
Summary: The paper introduces EcoMapper, a generative modeling framework designed to synthesize climate-aware satellite imagery. It provides two primary contributions: - EcoMapper Dataset: A comprehensive dataset comprising 2.7 million Sentinel-2 RGB satellite images from 104,424 global locations, annotated with climate metadata (temperature, precipitation, solar radiation) and spanning 15 land cover types. - Generative Modeling Approaches: * Text-to-Image Generation: Uses fine-tuned Stable Diffusion 3 models conditioned on structured textual prompts (geographic location, date, and climate metadata) to generate realistic synthetic satellite images. * Conditional Image Generation: Employs ControlNet, enabling guided generation of satellite, allowing realistic representation of climate-driven landscape evolution and seasonal variations. Contributions include: - Demonstrating the feasibility of generating realistic satellite imagery conditioned explicitly on climate information, validated through quantitative metrics (FID, CLIP, SSIM, PSNR, LPIPS) and qualitative examples. - A sensitivity analysis illustrating the generative models' capability to reflect climate-induced visual changes across various land cover types and climate conditions. - Showing improved generative performance through fine-tuning models with climate-specific metadata, especially highlighting the advantage of spatial conditioning via ControlNet for preserving geographical consistency. ## Update after rebuttal Thank you for clearly addressing my main concerns. The authors have provided important clarifications that strengthen the paper: - They've revised the introduction to better articulate the intended applications of their approach, including forecasting models, climate change visualization, and filling observational gaps. 
- They've conducted a new temporal validation experiment using 2023-2024 data (beyond their original 2017-2022 training period), demonstrating good generalization across time periods, which is critical for climate-related applications. These responses address my primary concerns about application clarity and temporal generalization. The EcoMapper dataset and framework represent a valuable contribution to climate-aware satellite imagery generation. Given these clarifications, I updated my recommendation to accept this paper. Claims And Evidence: The paper's main claims—that generative models can produce realistic climate-conditioned satellite imagery—are supported by clear qualitative and quantitative evidence. However, the claim that these synthetic images are directly useful for environmental monitoring, scenario planning, or policy-making lacks explicit evidence from real-world downstream tasks. The evaluation primarily focuses on visual realism rather than practical accuracy or usefulness, making this latter set of claims less convincingly supported. Methods And Evaluation Criteria: The methods used (Stable Diffusion 3 and ControlNet for satellite image generation) and the dataset linking Sentinel-2 images with climate data are appropriate for generating realistic climate-conditioned images. The evaluation metrics (FID, CLIP, SSIM, PSNR, LPIPS) are suitable for measuring visual quality and realism. However, these metrics don't directly measure practical usefulness or accuracy for real-world applications like forecasting, mapping, scenario planning or environmental monitoring. Including additional tests or user evaluations that show practical utility would make the methods more convincing. Theoretical Claims: The paper does not include any theoretical proofs or formal mathematical claims. Experimental Designs Or Analyses: Yes, I checked the experimental design for evaluating generated images. 
The experiments clearly compared different generative models using standard visual metrics, which makes sense for assessing image realism. However, the dataset uses random splits without considering time, making it unclear if the model can actually generalize to future climate conditions. This might overestimate how well the model truly performs in real-world extreme climate scenarios. Supplementary Material: No Relation To Broader Scientific Literature: The paper builds on recent advancements in generative models (like Stable Diffusion) and applies them specifically to remote sensing data. Prior works have used similar models for satellite imagery tasks such as super-resolution, image-to-image translation, and cloud removal. This paper uniquely extends these ideas by conditioning the generative models explicitly on climate variables (temperature, precipitation, solar radiation), enabling visualization of climate-driven environmental changes. It also introduces the EcoMapper dataset, significantly expanding available satellite imagery datasets linked to climate metadata. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: - The paper combines satellite imagery and climate data using generative AI, which is original and timely. - The new large-scale dataset (EcoMapper) is valuable for researchers working on related problems. - The examples provided clearly illustrate the model’s ability to represent climate-driven changes visually, making it useful for education, communication, or scenario planning. Weaknesses: - The intended application (visualization, communication, public awareness) wasn't clearly stated upfront, causing some confusion. - The evaluation doesn't clearly demonstrate real-world usefulness or accuracy beyond visual quality. - The data splitting method doesn't explicitly consider time, limiting confidence in the results for forecasting/scenario planning scenarios. 
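For concreteness about the pixel-level metrics discussed in this review, PSNR is computed directly from the mean squared error; a minimal sketch assuming images scaled to a unit data range (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def psnr(reference, generated, data_range=1.0):
    # peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)
    mse = np.mean((reference.astype(np.float64) - generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# a constant error of 0.5 on a unit range gives MSE = 0.25
reference = np.zeros((8, 8))
generated = np.full((8, 8), 0.5)
print(round(psnr(reference, generated), 2))  # 10 * log10(1 / 0.25) ≈ 6.02 dB
```

By this measure, PSNR values around 13 dB, as reported in the paper's tables, correspond to sizable per-pixel deviations, which is expected for generative rather than reconstructive models.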
Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback and thoughtful evaluation of our work. Below we respond to the key points raised.

**Reviewer comment:** The paper lacks explicit evidence that the synthetic images are directly useful for downstream tasks like environmental monitoring or scenario planning.

**Response:** We acknowledge this important point. Our current work focuses on demonstrating the feasibility of climate-aware generative modeling for satellite imagery — a foundational step for downstream applications such as forecasting, scenario simulation, or visualization. Operational deployment (e.g., crop yield prediction, land use modeling) requires additional task-specific pipelines, benchmark datasets, and a full set of research experiments, which are beyond the current scope. We note that related work such as DiffusionSat similarly does not evaluate downstream utility; such an evaluation would likely constitute a full paper of its own. However, our method extends prior work by enabling explicit control over continuous climate variables, making it well-suited for simulation-based use cases. Prior studies (e.g., Liu et al., 2024; Toker et al., 2024) have shown that synthetic remote sensing imagery can support applications such as cloud-gap filling, semantic segmentation, and training data augmentation. Strong performance across quantitative metrics (FID, SSIM, CLIP, SATCLIP) signals practical utility. Moreover, synthetic imagery becomes especially valuable where real observations do not exist — such as forecasting future landscapes under climate change. We consider this work a foundational step and are actively exploring downstream integrations in follow-up projects.

**Reviewer comment:** The paper’s intended application (visualization, communication, public awareness) is not clearly stated in the introduction.

**Response:** Thank you for this helpful suggestion. We revised the end of the introduction to clarify the core use cases of our approach.
The updated text reads:

> *“In this paper, we introduce a novel approach to generate satellite images conditioned on geographic-climate prompts using Stable Diffusion 3. Our method enables the simulation of how weather and climate affect Earth’s surface — generating synthetic images that can support forecasting models (e.g., crop yield prediction or land cover classification), visualize climate change models under various scenario assumptions, or fill observational gaps in regions affected by persistent cloud cover. The approach is globally applicable and generates realistic images with 10-meter spatial resolution across diverse vegetation types (e.g., cropland, broadleaf forests, savannas), using information about location, land cover type, and climate conditions.”*

**Reviewer comment:** The dataset split does not consider time, limiting confidence in generalization to future conditions.

**Response:** We agree that temporal independence is critical for evaluating generalization. To address this, we added a new experiment in which all models were evaluated on satellite imagery from **2023 and 2024** at the 5,500 test locations. These years were not included in the original dataset (2017–2022), ensuring a strict temporal split. Results show that the model generalizes well across years (see table below). Performance on the new test data does not deviate significantly from the other test settings. We will integrate this experiment into the final version of the paper, and release the test data as part of the dataset upon acceptance. Additionally, our globally sampled dataset reduces the risk of encountering entirely unseen climate scenarios, and our edge-case tests (see Fig. 5) further support the model’s robustness to outlier conditions (please see our response to reviewer vwzA).
**ControlNet Year Metrics**

| Year | CLIP ↑ | SSIM ↑ | PSNR ↑ | LPIPS ↓ | FID ↓ | Dataset share (%) |
|-|-|-|-|-|-|-|
| 2017 | 0.36 | 0.40 | 13.33 | 0.61 | 102.14 | 8.48% |
| 2018 | 0.35 | 0.41 | 14.00 | 0.58 | 84.12 | 12.64% |
| 2019 | 0.36 | 0.41 | 13.90 | 0.58 | 89.52 | 12.38% |
| 2020 | 0.35 | 0.40 | 13.40 | 0.59 | 86.37 | 12.50% |
| 2021 | 0.33 | 0.40 | 13.73 | 0.58 | 83.30 | 12.14% |
| 2022 | 0.33 | 0.40 | 13.27 | 0.58 | 77.47 | 14.83% |
| 2023 | 0.35 | 0.43 | 14.09 | 0.59 | 93.21 | 9.78% |
| 2024 | 0.32 | 0.40 | 13.48 | 0.59 | 73.54 | 17.34% |

**Reviewer comment:** No supplementary material provided.

**Response:** We would like to clarify that supplementary material begins on page 12 of the main submission. It includes additional information about the dataset structure, evaluation metrics, fine-tuning procedures, and our prompting strategy. We hope this addresses the concern.
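The strict temporal split described in this rebuttal reduces to filtering records by acquisition year; a minimal sketch with toy records (the field layout is hypothetical, not the dataset's actual schema):

```python
# toy records: (location_id, year); the real dataset stores richer metadata
records = [(loc, year) for loc in range(3) for year in range(2017, 2025)]

train = [r for r in records if 2017 <= r[1] <= 2022]  # original training period
test = [r for r in records if r[1] >= 2023]           # strictly later years

# no year appears in both splits, so evaluation probes temporal generalization
assert {y for _, y in train}.isdisjoint({y for _, y in test})
print(len(train), len(test))  # 18 6
```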
Controlled Generation with Equivariant Variational Flow Matching
Accept (poster)
Summary: In this paper, the authors present a controlled generation objective in the framework of Variational Flow Matching (VFM), as well as an equivariant formulation of VFM which has applications in 3D molecule generation. For controlled generation, the authors demonstrate that both end-to-end constrained training and post-hoc modification of pretrained models is possible under their methodology. Results are demonstrated on several unconditional and conditional molecular generation tasks, including the QM9, GEOM-Drugs, and Zinc250k datasets. Claims And Evidence: 1. In Table 2, the authors present results on continuous molecular generation with G-VFM as “our variational treatment of flow matching”. However, as I understood, this Unconditional Generation section is simply reproducing/re-implementing the previously described VFM framework on these specific datasets. The authors state this explicitly for Table 1 (discrete generation), but not for Tables 2 and 3. Either the specific contributions from this work for unconditional generation should be made more clear, or this section should be moved to the Appendix, because it is not in service to the story/main contributions of the paper, which I interpreted to be controlled generation and equivariance. 2. D-Flow seems to considerably outperform the proposed G-VFM methods for controlled generation in Table 4. The authors claim that this comes at the expense of increased generation cost for D-Flow, but I didn’t see a specific quantification of this cost (I may have missed it). NFE should be added to Table 4, unless I am missing something. Without this info, it is hard to make a judgment about whether the substantially worse results of G-VFM relative to D-Flow are justified by computational acceleration. Methods And Evaluation Criteria: 1. 
Given the exclusive focus on molecule generation and a methodological section devoted to proving equivariance properties of VFM, I did not find any investigation/mention of equivariance in the experiments/evaluation (for instance, a direct ablation/comparison between VFM with the equivariance properties satisfied vs not satisfied). This begs the question: if you can enforce equivariant/invariant generative dynamics, so what? Is it actually practically useful for, e.g., producing more physical realistic samples and/or achieving better conditioning scores? While the authors perhaps treated this as self-evident, there is growing evidence in adjacent fields that equivariance can be learned from data without explicit enforcement, and that this is actually the preferred approach as models/data scales [1]. I would like to see a more thorough investigation of this. 2. Related to #1, given the lack of focus on equivariance in the results, I would also expect much broader evaluations/demonstration of the proposed method on different modalities in which equivariance is not required such as controlled image generation (with target properties like compressibility, prompt-image alignment, aesthetic quality, etc.) See [2] for details. Without this, I feel that the current scope of evaluation of the paper is too narrow to be of broad interest. 3. Given the explicit mentioning of alternative conditioning strategies like classifier guidance (CG), I would have expected a direct comparison to CG and CFG to empirically demonstrate the benefits. [1] Qu, Eric, and Aditi Krishnapriyan. "The importance of being scalable: Improving the speed and accuracy of neural network interatomic potentials across chemical domains." Advances in Neural Information Processing Systems 37 (2024): 139030-139053. [2] Black, Kevin, et al. "Training diffusion models with reinforcement learning." arXiv preprint arXiv:2305.13301 (2023). 
Theoretical Claims: The primary theoretical claim is that given appropriate choices of the prior, conditional velocity field, and variational posterior, the generative dynamics of VFM are invariant under transformations in SO(n). I did not carefully review the proof. Since I am not very familiar with the literature in this area, can the authors comment on how different/surprising this result is compared to (what I assume are known equivariance properties) of regular FM? Experimental Designs Or Analyses: See above. Supplementary Material: I briefly reviewed the proofs in the SI Relation To Broader Scientific Literature: This work builds upon Variational Flow Matching, first introduced in [1]. [1] Eijkelboom, Floor, et al. "Variational flow matching for graph generation." Advances in Neural Information Processing Systems 37 (2024): 11735-11764. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is well written and easy to follow Other Comments Or Suggestions: Include citations for baselines directly in the Tables. Questions For Authors: How does the performance of the proposed post-hoc modification approach change as a function of how out-of-distribution the target property is for the generative model and/or classifier? Is classification always used as guidance even if the target is continuous-valued (e.g by binning into discrete classes). Or is the approach equally valid for regression as well? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer uXdp, Thank you for the detailed and thoughtful review. We appreciate your close reading and thank you for your constructive suggestions. Below we address the main points raised in your review and clarify several aspects that were not sufficiently emphasized in the original submission. **1. Contribution in Unconditional Generation (Table 2)** We agree that the motivation behind the unconditional generation section was not clearly explained. Our goal with Table 2 was not to introduce a new architecture, but to show that strong performance can be achieved via a simple, unified formulation under the VFM framework. By linearly interpolating between noise and data—whether continuous or discrete—and selecting an appropriate variational distribution (e.g., Gaussian or categorical), one can train a single model with a shared objective across modalities. This avoids the need for specialized architectures or training regimes typically required in standard FM for mixed data types, and even generalizes to other distributions (e.g., Poisson for neural activity). We have clarified this conceptual simplicity in the revision and added citations for baselines directly in the tables. **2. Comparison to D-Flow and Efficiency Tradeoff** While D-Flow performs better in Table 4, it does so by evaluating the forward process over many candidate initializations ($n \times K$ evaluations, e.g., $10 \times 100$). In contrast, our method uses a single forward pass and a short fixed-point calibration for conditional guidance—requiring only $K$ forward evaluations and no additional gradient computations. We prioritize simplicity and scalability, whereas D-Flow trades compute for precision. Both are valid design choices, and we now highlight this tradeoff explicitly in the paper, including a discussion of guidance cost and number of function evaluations (NFE). **3.
Equivariance Evaluation and Practical Utility** You are correct that we do not isolate the empirical effect of equivariance. While the theoretical formulation is a key contribution, we agree that deeper empirical validation would strengthen the work. We now clarify which models enforce which equivariant properties and will provide an ablation in the final version. **4. Generality Beyond Molecular Generation** While molecular generation is a natural testbed due to its symmetry constraints, the proposed methods are general. VFM provides a flexible framework for combining learned dynamics with structural inductive biases. We now emphasize this more clearly in the discussion and are actively exploring applications beyond molecules, including images (to be included in the final version, at least as a proof of concept). **5. Comparison to CG / CFG Methods** You raise a valid point. Our method assumes access to a classifier over $x_1$ rather than a time-dependent $p_t(y \mid x_t)$. While related, these regimes differ in flexibility and cost. CG/CFG typically requires backpropagating through a denoised prediction at every timestep, which is expensive and needs conditional training. Our fixed-point procedure operates post-hoc on the endpoint and does not require joint training. We now clarify this key distinction in the manuscript. **6. Theoretical Novelty of Equivariance Result** While standard flow matching methods can exhibit equivariant behavior under certain conditions, our contribution is to show that equivariance can be guaranteed through the variational distribution—without requiring the velocity field itself to be equivariant. This decoupling is novel and enables greater modularity: symmetry constraints can be enforced directly through the variational family, independent of the learned dynamics. This allows for clean, flexible design of symmetry-preserving generative models and simplifies implementation in practice. 
We now clarify this point more explicitly in the final version. Once again, we thank the reviewer for their careful and constructive feedback. Your comments helped us significantly improve the clarity, presentation, and positioning of our contributions. --- Rebuttal Comment 1.1: Comment: I thank the authors for their clarifications. I have a few more questions. 1. I still don't understand the contribution of Table 2. What is the precise difference between "the simple, unified formulation of VFM" and the original VFM paper by Eijkelboom et al.? And I don't see how mixed modalities/data types factor in here. Is it the case that the original paper only considered discrete data, and you are now showing that it works well on continuous data as well? If so, this should be made more clear in the presentation of Table 2. 2. Is eqn 15 (fixed point refinement) applied after every step of the flow matching reverse process? Or only at the end after the unconditional samples have been generated? If the former, then how is it justified to use a classifier trained only on samples from the data manifold and not noised samples? And if the latter, then how does the gradient ascent procedure prevent producing degenerate samples that maximize $p(y | x_1)$ but have low likelihood under the unconditional data distribution? This was not clear to me and I would suggest including an algorithm for the controlled generation as inference to make this clearer, as this is one of the main contributions of the paper. Also, I still find the absence of empirical results on equivariance to be a significant drawback of the paper, given the attention devoted to it in the theoretical part of the paper. I would be willing to reconsider my score if these points are addressed. --- Reply to Comment 1.1.1: Comment: Thank you again for your thoughtful engagement and helpful questions. We are happy to clarify the remaining concerns. ### **1.
Contribution of Table 2 and mixed modality handling**

The original VFM paper by Eijkelboom et al. indeed evaluated only discrete graph generation, and to the best of our knowledge, continuous data or mixed discrete-continuous settings were not tested anywhere in the literature. As such, the contribution of these experiments is twofold:
1. We provide the **first empirical evaluation of VFM in the continuous setting**.
2. More importantly, we demonstrate that **the same objective can be used across both discrete and continuous modalities**.

This is particularly attractive for molecular generation, which often involves mixed data types (e.g., atom types, positions, and formal charges). Our results show that VFM offers a unified framework that flexibly adapts to these diverse generative tasks without the need to redesign losses or architectures. That is, we believe the utility of VFM was underexplored, and we saw a significant gap, especially in the conditional setting, which we addressed in this work.

### **2. Fixed-point refinement and classifier training**

Yes, the fixed-point refinement (Eq. 15) is applied at every step of the reverse process. The reviewer raises an important point about the use of a classifier trained only on clean (data manifold) samples, rather than noised ones. Our approach is based on the key observation that the posterior can be factorized as: $$p_t(x_1 \mid x, y) \propto p_t(x_1 \mid x) \cdot p(y \mid x_1).$$ This allows us to separate inference into two parts: inference of $x_1$ given $x_t$, handled by VFM, and inference of $y$ from $x_1$, handled by a classifier. This decoupling is powerful because it enables post-hoc conditioning using any classifier trained on clean data—removing the need to retrain on noisy samples as in standard classifier guidance for diffusion models. Furthermore, this opens up possibilities for using task-specific classifiers, such as those leveraging geometric or topological inductive biases [1,2].
In this way, our method introduces a new inference-time interpretation of conditional generation in generative modeling. We will emphasize these points in the final version and add an algorithm block for clarification. Note that if one instead approximates a noise-aware classifier using a 'standard' classifier (a form of CG that is closer to our approach), one still has to take gradients of the score function w.r.t. $x_t$. Our approach **completely avoids this** and only uses the classifier for evaluation. Hence, this work offers a genuinely different way of thinking about generation, which we believe to be fruitful based on our initial experiments.

### **3. Equivariance Ablations**

Thank you for highlighting this point again. We agree that this was missing in our initial submission. We have now rerun the key experiments without equivariance constraints and observed a notable drop in performance. These results will be included as an ablation in the final version of the paper. This reinforces the utility of enforcing equivariance in our generative models, although, as recent work such as Vadgama et al. [3] shows, it is not universally required for good performance. We hope our results help nuance this ongoing discussion in the community.

[1] Zhdanov, M., Ruhe, D., Weiler, M., Lucic, A., Brandstetter, J., & Forré, P. (2024). Clifford-steerable convolutional neural networks. arXiv preprint arXiv:2402.14730.
[2] Liu, C., Ruhe, D., Eijkelboom, F., & Forré, P. (2024). Clifford group equivariant simplicial message passing networks. arXiv preprint arXiv:2402.10011.
[3] Vadgama, S., Islam, M. M., Buracus, D., Shewmake, C., & Bekkers, E. (2025). On the Utility of Equivariance and Symmetry Breaking in Deep Learning Architectures on Point Clouds. arXiv preprint arXiv:2501.01999.
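As an illustration of the fixed-point style of endpoint guidance discussed in this exchange (not the paper's exact Eq. 15, whose precise form is not reproduced here), one can iterate a guided update of the posterior mean using only a clean-data classifier gradient; all names in this sketch are hypothetical:

```python
import numpy as np

def guided_mean(mu_uncond, grad_log_clf, y, n_iters=20, step=0.1):
    # fixed-point iteration: nudge the unconditional endpoint mean toward
    # endpoints the classifier assigns high probability p(y | x1)
    mu = mu_uncond.copy()
    for _ in range(n_iters):
        mu = mu_uncond + step * grad_log_clf(mu, y)
    return mu

# toy Gaussian "classifier": log p(y | x1) = -0.5 * (y - x1)^2
grad = lambda mu, y: y - mu
mu_star = guided_mean(np.array([0.0]), grad, np.array([1.0]))
# analytic fixed point of mu = mu0 + step*(y - mu) is (mu0 + step*y)/(1 + step) = 1/11
print(float(mu_star[0]))
```

Note that the classifier gradient is taken with respect to the endpoint estimate only, never with respect to the noisy state $x_t$, which mirrors the cost argument made above.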
Summary: The paper proposes two novel methods within the variational flow matching (VFM) framework for generative modeling. The first is controlled generation, which enables conditional generation using unconditional generative models without requiring retraining (though it is not necessarily limited to this scenario). The second method introduces equivariant generative models, which are well-suited for molecular generation tasks, where invariance to rotations, translations, and permutations is beneficial. ### **Background on flow matching and variational flow matching** In the standard flow matching (FM) framework, models directly parameterize the velocity field $u_t(x)$, i.e., the expected value of the endpoint-conditional velocity field $u_t(x|x_1)$ over the posterior distribution (i.e., the distribution of the endpoint given the current position). In contrast, the variational flow matching (VFM) framework does not directly model the velocity field. Instead, it models the posterior distribution and takes the expectation later. Thus, the model is trained by minimizing the forward KL divergence between the ground truth and the model posterior. This training process simplifies into matching posterior means when the endpoint-conditional velocity field is linear, such as in conditional optimal transport (condOT). Consequently, an unimodal distribution, such as a Gaussian, can be used for continuous random variables. Moreover, under the linearity assumption on the endpoint-conditional velocity field, computing the velocity field is further simplified—it can be done by parameterizing the mean of a Gaussian, for example. ### **Controlled generation** For a controlled generation, the paper highlights a key observation: the endpoint-conditional velocity field should remain unchanged regardless of additional conditions since the endpoint already contains information about those conditions. 
Leveraging this insight, the authors show that the velocity field of a conditional generative model corresponds to the expected value of the same endpoint-conditional velocity field as in the unconditional model. However, in the conditional case, the expectation is taken with respect to the conditional posterior of the endpoint (rather than the unconditional posterior), given both the additional condition and the current position. The paper then shows that the conditional posterior of the endpoint can be rewritten in terms of the unconditional posterior and a classifier (Equation 13) using Bayes' theorem up to a normalizing constant. This formulation enables reusing the unconditional model without retraining. However, taking an expectation with respect to an unnormalized distribution is typically intractable, making direct computation infeasible. To address this, the authors leverage the linearity assumption on the endpoint-conditional velocity field. Since VFM only requires an unimodal distribution, they propose estimating the mean of the conditional posterior by solving a fixed-point equation (Equation 14) iteratively. This iterative process enables controlled generation without sampling from unnormalized distributions. ### **Equivariant generative models** In addition to the controlled generation, the paper introduces a method for equivariant generative modeling under the VFM framework. A key requirement for this approach is that the expected endpoint-conditional velocity field (computed over the posterior distribution of the endpoint given the current position) must be equivariant under group actions. The paper establishes that this can be achieved if: The prior distribution is group-equivariant. The endpoint-conditional velocity field is bi-equivariant with respect to both the endpoint and the current position. 
The expected value of the model's posterior, which is a function of the current position, is group-equivariant with respect to its input (i.e., the current position). The authors emphasize that when using a conditional optimal transport (condOT) map, the second condition is automatically satisfied if the relevant groups act linearly on the domains of interest, such as SO(n), and this would be the case for molecular generation, which are the main application of interest of the paper. ### **Experimental results** The paper demonstrates the effectiveness of the proposed methods on several benchmark molecular generation datasets, including QM9, ZINC250k, and GEOM. Claims And Evidence: In my understanding, the paper's contributions are clear. The proposed methods--controlled generation and equivariant generative modeling under the variational flow matching (VFM) framework--are both conceptually insightful and practically impactful. These contributions are particularly significant for several reasons. First, the controlled generation approach introduces an efficient mechanism for performing conditional generation without retraining (though it is not necessarily limited to this scenario, as discussed in the paper). Lacking an efficient controlled generation method for the flow matching framework has led many practitioners in various applications to prefer denoising diffusion. The proposed controlled generation is particularly valuable in scenarios where retraining is computationally expensive or infeasible. Second, the introduction of equivariant generative models under the VFM framework is a notable theoretical contribution, as it ensures that the generative process respects fundamental symmetry constraints. This is crucial for applications such as molecular generation, where invariance to rotations, translations, and permutations is essential for generating physically meaningful structures. 
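The bi-equivariance condition for the condOT map described above can be checked numerically; a minimal sketch assuming the standard linear condOT field $u_t(x \mid x_1) = (x_1 - x)/(1 - t)$:

```python
import numpy as np

def cond_ot_velocity(x, x1, t):
    # endpoint-conditional optimal-transport velocity field u_t(x | x1)
    return (x1 - x) / (1.0 - t)

rng = np.random.default_rng(0)
x, x1, t = rng.normal(size=3), rng.normal(size=3), 0.3

# sample a rotation R in SO(3) via QR decomposition of a Gaussian matrix
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1.0  # flip one axis so that det(R) = +1
R = Q

# bi-equivariance: rotating both endpoint and position rotates the velocity
assert np.allclose(cond_ot_velocity(R @ x, R @ x1, t), R @ cond_ot_velocity(x, x1, t))
print("bi-equivariance holds for this rotation")
```

The check succeeds because the condOT field is linear in both arguments, which is exactly the condition under which the paper says the second requirement is automatically satisfied.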
Furthermore, the paper provides sufficient mathematical formulation and justification to support the proposed methods concisely, allowing readers to understand them easily. The experimental results on benchmark molecular datasets further validate the effectiveness of these approaches, demonstrating their potential real-world impact. Overall, I find that the paper not only addresses a well-motivated problem but also offers valuable solutions that could inspire future research directions in both generative modeling and scientific machine learning. Methods And Evaluation Criteria: N/A Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Dear reviewer 6iCA, We sincerely thank the reviewer for their comprehensive assessment of our paper. We appreciate your recognition that these contributions are "conceptually insightful and practically impactful" and that our work addresses an important gap in flow matching frameworks. We are glad to hear the reviewer is on the same page about the interesting avenues to explore regarding integrating 'classical' inference techniques with SOTA generative modeling.
Summary: The paper focuses on extending the recently proposed Variational Flow Matching (NeurIPS 2024) approach for conditional generation and for incorporating inductive biases as symmetries. They derive two different ways of performing controlled generation: the first is similar to conditional diffusion models, with the difference that here they learn a conditional vector field, and the second resembles classifier guidance, where the conditioning is applied post hoc. They then focus on the problem of generating molecules, where the true underlying distribution to be learned presents specific symmetries. They consider three different molecular generation tasks: discrete, continuous, and joint. In the first they focus only on atom types and bond types, in the second on atom positions and sometimes atom type, and in the third on everything plus formal charges. Results show that the variational formulation of flow matching is beneficial. Claims And Evidence: The main claim of the paper is that Variational Flow Matching is beneficial on top of Flow Matching, especially for conditional generation and when the true distribution presents symmetries. They consider mostly the problem of molecular generation, and the results show that the variational formulation is helpful. The authors mention several times that "VFM provides a unified approach that can be applied to any combination of discrete and continuous molecular features"; while I agree that the benefit is that the loss is the same, one still has to choose a different type of variational distribution. I find this a bit unfair toward diffusion models, for example, since in that case one has to choose a specific stochastic process instead of a different variational distribution. But this is a minor detail and the authors might also disagree with me.
Methods And Evaluation Criteria: Yes, the benchmark datasets are the usual ones that people use for evaluating generative models for molecular generation, and the metrics considered are also correct. I have a few comments about the tables that I leave in the experimental analysis section. Theoretical Claims: They have two main propositions in the paper. The first one shows that the vector field conditioned on some observations generates the conditional probability path. This was also derived in [1], but [1] did not provide a full proof. The second theoretical claim shows that the marginal path is invariant under G if we have the following elements: a prior that is invariant under G, a conditional velocity field that is bi-equivariant, and a variational posterior that is equivariant. **References** [1] Zheng, Q., Le, M., Shaul, N., Lipman, Y., Grover, A., & Chen, R. T. (2023). Guided flows for generative modeling and decision making. _arXiv preprint arXiv:2311.13443_. Experimental Designs Or Analyses: The experimental design is valid. I just have a few comments regarding the analysis: - the use of bold numbers is a bit misleading. For example, in Table 1 the authors bold both $100.00$ and $99.99$, but not when the difference is $0.02$ in the unique results column. In Table 2, EDM and EFM perform exactly the same in terms of atom stability and molecule stability, but their results are not bolded. In Table 3, SemlaFlow performs better on QM9 in terms of Mol. stability, but the result of G-VFM is bolded. Also, the EquiFM results on QM9 for JS(E) should be bold, as they perform the same as SemlaFlow. - I think that the results in Table 3 do not present an apples-to-apples comparison, as some of the methods like EDM, GCDM, and EquiFM do not generate bond information, which has been shown to be helpful for getting higher validity. Therefore, I invite the authors to make it clear in the main text what the different models are generating, e.g. 
if they are generating bonds and charges. That's the main reason for the huge difference in terms of metrics on QM9 in Table 3. - I don't really see the point of having Table 1 in the main text, as the results and comparison were already presented in the Variational Flow Matching paper. - It would be nice if the authors presented the full details of model parameters and training in the appendix. Also, the paper would benefit if, for the results in Table 2, the authors specified whether they are modeling atom types or not. Supplementary Material: I went through all the sections of the supplementary material. Relation To Broader Scientific Literature: The paper places itself in the flow matching landscape, building on top of the Variational Flow Matching approach [1]. They propose techniques specific to this approach for conditional generation, similar to what people usually do when they train conditional diffusion models [2] or perform post-hoc conditioning using classifier guidance [for example 3]. However, I feel that the method section does not cite any relevant references for the entirety of Sections 2 and 3, which I invite the authors to add. For example, the main part of Section 3.1 and the proposition were also derived in [4], which the authors do not cite. As the main application is molecular generation, I think that the paper presents methods relevant to people working in that field. Although [5] does not evaluate log-likelihood, it is related work for Table 2. **References** [1] Eijkelboom, F., Bartosh, G., Andersson Naesseth, C., Welling, M., & van de Meent, J. W. (2024). Variational flow matching for graph generation. _Advances in Neural Information Processing Systems_, _37_, 11735-11764. [2] Ho, J., Jain, A., & Abbeel, P. (2020). Denoising diffusion probabilistic models. _Advances in neural information processing systems_, _33_, 6840-6851. [3] Chung, H., Kim, J., Mccann, M. T., Klasky, M. L., & Ye, J. C. (2022). Diffusion posterior sampling for general noisy inverse problems. 
_arXiv preprint arXiv:2209.14687_. [4] Zheng, Q., Le, M., Shaul, N., Lipman, Y., Grover, A., & Chen, R. T. (2023). Guided flows for generative modeling and decision making. _arXiv preprint arXiv:2311.13443_. [5] Cornet, F., Bartosh, G., Schmidt, M., & Andersson Naesseth, C. (2024). Equivariant neural diffusion for molecule generation. Advances in Neural Information Processing Systems, 37, 49429-49460. Essential References Not Discussed: See above Other Strengths And Weaknesses: The paper is nicely written and it tackles an important task in deep generative models, namely conditional generation and application to modeling distributions that present symmetries, which is of interest to the community. I would just like to add one more comment on something that the authors mention in the paper on line 181 (right column): "unlike standard classifier-guidance methods in diffusion, which require a time-dependent classifier $p_t(y |x)$, classification in VFM is performed on data pairs $(x_1,y)$". I think this is not entirely correct, as in diffusion one can also use a classifier trained on the clean samples, but at each denoising step one has to first apply Tweedie to get the approximate clean sample. Other Comments Or Suggestions: - Line 422: Fischer flow instead of Fisher flow Questions For Authors: I have just one question for the authors. If I am not completely wrong, it seems that equation 13 can be written as: $$ p_t(x_1|x,y) = \frac{p_t(x_1|x)p_t(y|x_1)}{p_t(y|x)} \approx p_t(x_1|x)p_t(y|x_1) $$ where the time-dependent classifier appears in the denominator. Is this something that can be used to improve the guidance? It might be a completely useless question, but since I tried to derive Eq. 13 I was curious to ask if that can be used. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer TqUv, Thank you for the detailed and thoughtful review and for engaging deeply with both the theoretical and empirical aspects of our work. Below we respond to the key points and how they will be addressed in the revised manuscript. **1. Unified Objective and Variational Distributions** We fully agree: while VFM provides a unified objective, it still requires variational distributions per modality. However, our contribution is to show that these choices can be **modular and plugged directly into the same objective, avoiding the need to redesign the loss or architecture per data type—a common challenge in standard FM pipelines**. Though not appropriately explored here, this even naturally extends to other distributions (eg, Poisson for neural data) by learning the conditional rate and optimizing the NLL. We clarified this core motivation in the revision. **2. Fairness of Table 3 Comparisons** While our method is flexible enough to generate any subset of molecular features (e.g., just positions or positions plus atom types), in all experiments, we matched the generation scope of each baseline for a fair comparison. For example, if a baseline did not model bond structures or formal charges, we excluded those as well. We now clearly state this in the text and annotate Table 3 to indicate which components each model generates. **3. Simplifying Assumptions and Classifier Guidance** We now explicitly state and discuss simplifying assumptions (eg, classifier log-concavity) in the revised manuscript. This work extends VFM with an inference-based view of conditional generation. We perform variational inference on $p_t(x_1 \mid x, y)$ in a simple proof-of-concept setup that yields meaningful improvements. Due to the linearity of the conditional velocity (as in flows/diffusions), matching only the posterior mean suffices (see VFM), and **a full Bayesian treatment is unnecessary**. 
Our main contribution is showing that this inference view enables scalable post-hoc control by combining classical inference tools with learned approximations. For more detail, we refer to our response to reviewer k6L2. This also addresses the reviewer's question about the normalization constant in Eq. 13—your interpretation is correct. Furthermore, while related to classifier guidance (CG/CFG) in diffusion, our approach differs significantly in cost and flexibility. CG typically requires backpropagating through a denoised prediction at each timestep, which can be expensive and must be trained conditionally. In contrast, **our fixed-point method operates post-hoc at the endpoint, with no joint training**. Finally, even though it is true that a similar effect can be obtained via the Tweedie transform, the approach would arguably be rather noisy and therefore hard to learn. We now clarify this key distinction in the paper. **4. Author questions / Minor comments:** - **Missing citations:** We agree and will cite [4], [5], and other related work on conditional generation and classifier guidance. These citations have been added in Section 3.1. - **Missing Experimental Details (Table 2):** We have included full model architecture and training details in the appendix, and clarified in Table 2 whether atom type modeling is included per experiment. - **Table Formatting and Metric Presentation:** We corrected the inconsistent bolding and formatting in Tables 1–3. Bolding is now applied uniformly (best per column, including ties), and we added global method rankings to aid interpretability across datasets. - **Table 1 Repetition:** We included Table 1 to demonstrate that G-VFM reproduces the results of standard VFM models as a consistency check. That said, we agree that this space could be better used to highlight new contributions, so we have moved Table 1 to the appendix. - **Typo:** We have corrected “Fischer flow” to “Fisher flow” (line 422). 
We again thank the reviewer for their constructive feedback. Your comments helped significantly improve the clarity, fairness, and completeness of the revised manuscript. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for answering my questions. I would like to get these additional points clarified: 1- In Table 3 (QM9 experiment), EDM, GCDM, and EquiFM do not generate bond information while SemlaFlow does. Therefore I am a bit confused by the answer you gave me. Modelling the bond information usually helps in generating more stable molecules, as can be seen from the gap between EDM, GCDM, EquiFM and SemlaFlow itself. By looking at the results of G-VFM it seems that bonds are modelled in that case. Therefore, I really think that this part should be made clear in the text or the table caption. 2- I think I also need some more clarification on this point ``CG typically requires backpropagating through a denoised prediction at each timestep, which can be expensive and must be trained conditionally. In contrast, **our fixed-point method operates post-hoc at the endpoint, with no joint training**.`` I think it is related to what **Reviewer uXdp** is also asking. How many refinement steps $k$ are you doing usually? Also, in diffusion, to perform classifier guidance, you don't need joint training with the diffusion and classifier. In the case of a time-dependent classifier $p(y|x_t)$, one needs the noising schedule of the diffusion model, but one can also just use a pre-trained classifier $p(y|x)$ (where $x$ is the cleaned sample) and then at each reverse step get $x$ by Tweedie, which is an $O(1)$ operation. Maybe a pseudo-algorithm explaining the proposed method might help. 
### **Edit after reading authors' answers to my comments** I really think that by incorporating appropriate citations in the method section, by having a more fair discussion of results, and by making the overall proposed method more clear by having a pseudo-algorithm can make the paper really stronger. I believe that you are going to make these changes in the updated version of the paper. Therefore, I will increase my score. --- Reply to Comment 1.1.1: Comment: Thank you for your follow-up and for the clear articulation of your concerns. ### **1. Clarification on bond modeling in Table 3** Thank you for pointing this out—we agree this distinction was not sufficiently clear in the original submission. We initially followed the evaluation setup from SemlaFlow, where models that do and do not jointly generate all molecular attributes (including bonds) are compared within the same table. We believed that fully modeling all attributes jointly, as we do, constituted the more challenging setting. That said, we now make this distinction explicit and will revise Table 3 and the surrounding discussion accordingly. ### **2. Clarification on classifier guidance vs. our fixed-point method** Indeed, as you and Reviewer uXdp correctly note, classifier guidance in diffusion models does not necessarily require joint training. One can use a classifier $p(y \mid x)$ trained on clean data and approximate the noisy conditional $p(y \mid x_t)$ using e.g. Tweedie denoising. However, this still involves computing gradients through the score function with respect to the inputs $x_t$, since the clean prediction is a function of the noised sample. Our method differs in that it operates entirely at the endpoint level. The classifier is only evaluated directly on $x_1$, and as such removes the need for backpropagation through the learned dynamics entirely. 
This makes the method lightweight and modular—allowing us to plug in any classifier, even ones trained on structured, topological, or symmetry-aware features, without retraining or modifying the generative model. We will include a pseudo-algorithm in the final version and clarify that we typically use 3–10 refinement steps in practice. More broadly, we believe our approach offers a novel perspective on conditioning in generative modeling—reframing controlled generation as a form of inference rather than training. Our initial results suggest that this perspective is not only conceptually valid, but also empirically effective, and we believe this opens the door to new controlled generation techniques.
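As an illustrative note on the exchange above, the promised pseudo-algorithm might look roughly like the following (a minimal sketch under a Gaussian approximation of $p_t(x_1 \mid x)$ around the variational mean; `mu_t`, `grad_log_classifier`, and the step scale `sigma2` are hypothetical stand-ins, and, as the review notes, convergence is not guaranteed for non-log-concave classifiers):

```python
import numpy as np

def fixed_point_guidance(mu_t, grad_log_classifier, x, sigma2=0.5, k=10):
    """Sketch of endpoint-level classifier guidance via fixed-point iteration.

    Iterates x1 <- mu_t(x) + sigma2 * grad log p(y | x1), a fixed point of
    grad log p_t(x1 | x) + grad log p(y | x1) = 0 when p_t(x1 | x) is
    approximated by a Gaussian N(mu_t(x), sigma2 * I). The classifier is
    only ever evaluated at the endpoint x1; no backpropagation through
    the learned dynamics is needed.
    """
    x1 = mu_t(x)  # start from the unconditional posterior mean
    for _ in range(k):  # typically 3-10 refinement steps in practice
        x1 = mu_t(x) + sigma2 * grad_log_classifier(x1)
    return x1
```

With a quadratic (log-concave) log-classifier this iteration contracts to the familiar Gaussian posterior mean; for general deep classifiers it is a heuristic.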
Summary: The core contributions of the paper are twofold. **[1.Inference time control of VFM]** The authors show that a conditional VFM distribution can be factorized into unconditional VFM part and the classifier part. Based on this factorization, the authors propose an iterative approximate solution to $\underset{x_1}{\mathrm{argmax}}\\, p_t(x_1|x,y)$ that can be solved during the inference time. **[2.Achieving equivariance in VFM]** The authors show that equivariance of VFM can be achieved by using an equivariant variational distribution, when the conditional vector field is equivariant. Claims And Evidence: ## 1.Inference time control of VFM For the inference-time control, the authors propose to optimize $\underset{x_1}{\mathrm{argmax}}\\, p_t(x_1|x,y)$ by iteratively solving $x_1$ that satisfies the fixed-point equation $\nabla_{x_1} \log p_t(x_1|x) + \nabla_{x_1} \log p_t(y|x_1)=0$. Since computing the score function $\nabla_{x_1} \log p_t(x_1|x)$ is intractable, the authors propose to approximate the score function with only the first moment $\mu_t$, resulting in equation (15). However, several strong assumptions are not sufficiently justified. Although the proposed approximate guidance appears to offer some empirical benefits, it lacks thorough theoretical or experimental validation beyond incremental metric improvement (e.g., adding the approximate guidance increases metric XX by YY%). For example, the authors assume log-concavity of the classifier $\log p(y|x_1)$. However, most of the deep classifiers are not log-concave. Hence, as mentioned in the paper, the convergence is not guaranteed, which is in contrast to typical classifier guidance for SDE-based models. Another strong assumption is the approximation of $p_t(x_1|x)$ with a Gaussian centered at the mean of the variational distribution. 
Although the update rule derived from this approximation is simply a VFM + classifier gradient, which seems reasonable, it is unclear whether this provides a good Bayesian approximation. ## 2. Achieving equivariance in VFM. The authors show that VFM can be made equivariant by utilizing an equivariant variational distribution instead of directly modeling the equivariance in the marginal vector field network. This is indeed a significant benefit of the VFM approach, as it allows one to flexibly put inductive bias into the distribution instead of the marginal vector field. Methods And Evaluation Criteria: By utilizing the VFM framework, the authors show that it is possible to simplify the implementation of an equivariant flow matching model for mixed discrete-continuous molecular generation tasks. In particular, the proposed method, G-VFM, matches the performance of SemlaFlow with a simpler training pipeline. The authors have also demonstrated the benefit of the proposed inference-time controlled generation. Theoretical Claims: As mentioned earlier, the authors used several strong assumptions to derive Equation 15. This makes its Bayesian correctness questionable. However, the final update rule looks reasonable, regardless of its theoretical accuracy. Other proofs look correct. Experimental Designs Or Analyses: As mentioned before, the paper could be improved with an in-depth qualitative/quantitative analysis of the accuracy of the Bayesian posterior approximation for inference-time controlled generation. Supplementary Material: I have reviewed the proofs in Appendix A. Although I have not verified every claim with full mathematical rigor, they seem valid within engineering-level contexts. Relation To Broader Scientific Literature: The flexibility of the proposed equivariant VFM approach could be beneficial for other areas such as 3D vision or robotics. Essential References Not Discussed: Relevant works are appropriately referenced. 
Other Strengths And Weaknesses: There are several unreferenced variables in the paper. For example, it is unclear where $\mu_t(x)$ comes from. I presume this is the mean of the variational distribution $q_t(x_1|x)$. The presentation of the paper could be improved if the authors explicitly state that $\mu_t(x)$ is approximated with that of $q_t(x_1|x)$. Also, it would be helpful for the readers to understand equation 15 if the authors explain how equation 15 is related to VFM. Other Comments Or Suggestions: There is a typo in Table 3. The Mol Stab of G-VFM (=99.5) is highlighted in bold, although SemlaFlow has a higher score of 99.6. Questions For Authors: At first glance, it is unclear how exactly $q_t(x_1|x)$ is modeled. I presume that $q_t(x_1|x)$ is simply a mean-field Gaussian for continuous variables and a categorical distribution for discrete variables as in the original VFM paper. Is this correct? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the clear summary and thoughtful comments. We appreciate the recognition of our core contributions, as well as the constructive suggestions that helped us clarify and strengthen the presentation. Below, we address the reviewer’s main points. **1. Inference-Time Control: Assumptions and Approximation Accuracy** We agree that our formulation suggests simplifying assumptions (eg, log-concavity of the classifier). These are now stated explicitly and discussed in the revised manuscript. While VFM connected variational inference to unconditional flow matching, our goal is to extend this to conditional generation and introduce an equivariant formulation. We show that one can perform variational inference on $p_t(x_1 \mid x, y)$, and more importantly, **use this perspective to rethink conditioning in generative models**. A simple proof-of-concept demonstrates this idea. Our aim is not a full Bayesian treatment, but to show that minimal inference-time updates—without retraining and with negligible overhead—enable effective post-hoc control. Using only the posterior mean is sufficient: in endpoint-based models (eg, flows, diffusions), the expected conditional velocity depends only on this value (see VFM). This eliminates the need for higher-order terms, keeping the method scalable while still improving empirical performance. As such, issues like log-concavity are less critical for our setting—we propose a simple integration of inference with generative modeling and focus not on recovering the full posterior, but on obtaining a reliable estimate of the posterior mean. Broadly, this work opens a direction within VFM: enabling modular inference-time control by integrating classical inference tools with learned approximations. We clarified this vision in the revision, and hope it inspires further work in this efficient design space. **2. 
Clarification of Notation and Presentation** We thank the reviewer for catching unclear notation and formatting issues. We confirm that $\mu_t(x)$ denotes the mean of $q_t(x_1 \mid x)$ and have made this explicit. We revised the explanation around Eq. 15 to clarify its relation to VFM: the fixed-point update approximates the mean of $p(x_1 \mid x, y) \propto p(x_1 \mid x)p(y \mid x_1)$. We also corrected the bolding in Table 3 and improved formatting throughout. **3. Modeling Details of $q_t(x_1 \mid x)$** Yes, we use a mean-field Gaussian for continuous variables and a categorical distribution for discrete ones, as in the original VFM. We added a brief clarification in the revised text. Once again, we thank the reviewer for the insightful feedback, which helped refine both the clarity and precision of our work. --- Rebuttal Comment 1.1: Comment: Dear authors, thank you for your clarification. My concerns have been addressed and I would like to keep my initial recommendation to accept the paper.
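As an illustrative note, the mean-field factorization confirmed above (diagonal Gaussian for continuous features, categorical for discrete ones) can be sketched as a per-sample negative log-likelihood. The helper below is hypothetical, with illustrative names and shapes, not the paper's code:

```python
import numpy as np

def mean_field_nll(mu, log_var, logits, x1_cont, x1_disc):
    """NLL under a mean-field q_t(x1 | x) = diagonal Gaussian * categorical.

    mu, log_var : per-dimension Gaussian parameters for continuous features
    logits      : (n_discrete, n_classes) unnormalized categorical scores
    x1_cont     : observed continuous features
    x1_disc     : observed class indices for the discrete features
    """
    # Gaussian negative log-likelihood, summed over continuous dimensions
    var = np.exp(log_var)
    nll_cont = 0.5 * np.sum(
        log_var + (x1_cont - mu) ** 2 / var + np.log(2.0 * np.pi)
    )
    # Categorical negative log-likelihood via a log-softmax
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    picked = np.take_along_axis(log_probs, x1_disc[:, None], axis=-1)
    return nll_cont - picked.sum()
```

The appeal noted in the reviews is that both terms plug into the same objective; only the variational family changes per modality.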
Fixing the Double Penalty in Data-Driven Weather Forecasting Through a Modified Spherical Harmonic Loss Function
Accept (poster)
Summary: This paper introduces a simple, parameter-free modification to the loss function, separating decorrelation loss from spectral amplitude errors. Fine-tuning the GraphCast model with this new loss function yields sharper deterministic forecasts, increases effective resolution from 1,250 km to 160 km, improves ensemble spread, and enhances predictions of tropical cyclone strength and surface wind extremes. This advancement addresses a key limitation in data-driven models, significantly improving forecast precision and detail. ## Update after rebuttal While I recognize the authors’ efforts to address my concerns, I believe the current version of the paper still lacks the necessary quality and impact to meet the ICML acceptance standards. Claims And Evidence: The claim that AMSE improves extreme weather predictions is not fully supported. While Figures 5 and 6 show improvements in specific cases (e.g., tropical cyclones), these are individual examples and lack broader statistical validation. Additionally, Figure 4 reveals no significant RMSE improvement over the original GraphCast, raising doubts about the method's overall effectiveness. To strengthen the claim, the authors should provide statistical evaluations across more extreme weather events and clarify the relationship between AMSE's benefits and traditional metrics like RMSE. Methods And Evaluation Criteria: The methods seem to make sense, but the evaluation is not sufficient. Theoretical Claims: Yes, they are correct. Experimental Designs Or Analyses: I have reviewed all sections of the results, and the experimental design and evaluation are sound and valid. The authors' experiments are strictly aligned with the setup used in GraphCast, ensuring a fair comparison. Supplementary Material: Yes, I reviewed the supplementary material, specifically the Supplemental Verification section, which includes: Spectra Verification: Analysis of spectral performance to validate the improvement in effective resolution. 
Ensemble Verification: Evaluation of ensemble spread and reliability, demonstrating improvements in probabilistic forecasting. Relation To Broader Scientific Literature: The paper addresses the smoothing effect in data-driven weather forecasting caused by traditional MSE loss functions, a well-known issue in the field. By introducing the AMSE loss function, which separates decorrelation loss from spectral amplitude errors, the work aligns with efforts to improve fine-scale variability and extreme weather predictions. While it enhances effective resolution and ensemble spread, the authors acknowledge limitations in ensuring physical plausibility for out-of-distribution scenarios. This contribution advances the integration of data-driven models into operational forecasting, bridging gaps between machine learning and traditional physics-based approaches. Essential References Not Discussed: [1] Verma, Y., Heinonen, M., & Garg, V. (2024). ClimODE: Climate and weather forecasting with physics-informed neural ODEs. arXiv preprint arXiv:2404.10024. [2] Vaughan, A., Markou, S., Tebbutt, W., Requeima, J., Bruinsma, W. P., Andersson, T. R., ... & Turner, R. E. (2024). Aardvark weather: end-to-end data-driven weather forecasting. arXiv preprint arXiv:2404.00411. [3] Han, Tao, et al. "Fengwu-ghr: Learning the kilometer-scale medium-range global weather forecasting." arXiv preprint arXiv:2402.00059 (2024). Other Strengths And Weaknesses: ### Strengths: The proposed AMSE loss function significantly improves fine-scale variability and extreme weather predictions, addressing a critical limitation in data-driven forecasting. This has direct relevance for operational weather centers and public safety. The paper is well-written, with clear explanations of the methodology, experiments, and results. ### Weaknesses: Limited Novelty: The work builds on GraphCast by fine-tuning it with a new loss function, which, while effective, does not introduce a fundamentally new approach. 
This limits the perceived originality of the contribution. Non-Standard Ensemble Method: The ensemble forecasting approach averages predictions from consecutive initialization times, which, although used in prior literature, is not a standard ensemble method. This could raise questions about the robustness and generalizability of the ensemble results. Other Comments Or Suggestions: See above Questions For Authors: Addressing these weaknesses, particularly by enhancing novelty and adopting more standard ensemble techniques, would strengthen the paper's overall contribution. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. While we disagree with some of the reviewer's conclusions (see below), the fact that they raise these issues demonstrates that some sections of the paper should be rephrased in a camera-ready version. We also think that we can add further extreme-weather verification in a camera-ready version of the paper. Our specific rebuttal follows: > While Figure 5 and 6 show improvements in specific cases [...] these are individual examples and lack broader statistical validation. Figure 5 is indeed a specific case study (hurricane Ian), but it is included for illustration and qualitative understanding only. Figure 6 is a systematic evaluation of the model performance with respect to all tropical cyclones for June–September 2022, and it is statistically robust. Tropical cyclones are ideal for systematic evaluation such as this because they are rigorously defined and independently observed, and no other observation record comes close for model verification of compact, extreme events. > Figure 4 reveals no significant RMSE improvement over the original GraphCast, raising doubts about the method's overall effectiveness. First, Figure 4 (comparable to figure 2 of Brenowitz 2024) shows RMSE of the (lagged) ensemble mean, not RMSE of individual forecasts. We will clarify the text to make this difference more obvious. Second, we should not expect large differences to the RMSE of the ensemble mean. AMSE-based training improves forecast variability, but that variability largely 'averages out' over a large ensemble. That being said, mean-RMSE is a biased estimator (see Leutbecher 2008, doi 10.1016/j.jcp.2007.02.014) which we nonetheless use to match Brenowitz. If we used the unbiased estimator of Leutbecher, we would expect to see our results show modest improvements in mean-RMSE over the control Graphcast model. We will add a discussion of this to the supplementary material of the revised paper. 
Overall, we consider CRPS to be the preferred metric since it directly incorporates the ensemble spread. By this metric, our results show consistent improvements. > [T]he authors should provide statistical evaluations across more extreme weather events We reiterate that figure 6 is such a systematic evaluation for tropical cyclones, but we would welcome any specific suggestion of neglected evaluations. This is a very challenging problem, particularly for models that do not fully resolve convection (thunderstorms); see for example Marsigli et al. 2021 (doi 10.5194/nhess-21-1297-2021) and Ebert 2008 (doi 10.1002/met.25) for a general overview. We may be able to provide more detailed verification with the data on hand. Would you consider systematic evaluation of min- and max-pooled observations a useful addition? > Essential References Not Discussed: We will certainly add brief discussion of [2] and [3] to the literature review section. [2] demonstrates that a dramatically different training approach (learning directly from observational data) still produces a forecast model that blurs predictions, and [3] demonstrates that very high resolution training is insufficient to solve the problem. We request clarification about why [1] is considered 'essential', since it's a proof-of-concept study at very low resolution (5.625°) that cannot effectively demonstrate blurring or the lack of it. > The work builds on GraphCast by fine-tuning it with a new loss function, which, while effective, does not introduce a fundamentally new approach. While this work uses GraphCast as the basis for implementation, the problem of forecast blurring is a general one, seen in essentially every deterministic ML-based forecast system. We argue that AMSE is broadly applicable, and please see the response to reviewer KNJY about applicability to other fluid dynamics problems. 
> The ensemble forecasting approach averages predictions from consecutive initialization times, which, although used in prior literature, is not a standard ensemble method. First, the ensemble approach does not _average_ predictions from different lead times, it treats the different initializations as distinct ensemble members. It's uncertain if this is just a typo or whether there's a deeper misunderstanding of the approach. Second, we think that the lagged ensemble approach is the best one available for fundamentally deterministic models such as GraphCast. A "true" ensemble of initial conditions involves either non-public data (such as the ensemble data assimilation members from ECMWF/IFS) or an extreme number of free parameters to create a 'balanced' initial perturbation. We agree with Brenowitz 2024 that while the lagged ensemble makes for a poor forecast system, it provides the fairest comparison between different systems because it isolates the properties of the ML model from the circumstances of its initialization.
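As an illustrative note on the lagged-ensemble construction described in this rebuttal, the point is that successive initializations, all valid at the same time, are treated as distinct ensemble members rather than averaged. A toy sketch with a hypothetical data layout:

```python
import numpy as np

def lagged_ensemble(forecasts, valid_time, lags):
    """Collect one member per earlier initialization, all valid at the same
    target time. forecasts[(init_time, lead)] is the field from a run
    started at init_time with `lead` hours of lead time, so the member for
    lag L is the L-hour forecast initialized at valid_time - L."""
    return np.stack([forecasts[(valid_time - lag, lag)] for lag in lags])

# Toy example: runs every 6 h; field values encode (init, lead) for clarity.
forecasts = {(t0, lead): np.full(4, t0 + 0.1 * lead)
             for t0 in range(0, 48, 6) for lead in range(0, 48, 6)}
members = lagged_ensemble(forecasts, valid_time=24, lags=[6, 12, 18])
# members[0] is the 6 h forecast initialized at t=18, members[1] the 12 h
# forecast from t=12, etc. -- distinct members, not an average.
```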
Summary: A spherical loss variation of the MSE is introduced that breaks MSE's tendency to push ML models to converge to the mean via its double penalty. The presented AMSE effectively conserves amplitudes in weather forecasts, as demonstrated with a carefully fine-tuned GraphCast model, leading to sharper forecasts that form the basis for well-calibrated ensembles. Claims And Evidence: Looks good. I only have one major concern, as detailed in point 4 under Experimental Designs Or Analyses. If this can be addressed, I'm happy to raise my score of this work. I like this work and the presented analyses a lot and hope it will be polished further for publication. Methods And Evaluation Criteria: 1. L1 could also help to prevent regression to the mean and would be even simpler. A comparison to L1 as fine tuning metric would be interesting. Theoretical Claims: Not carefully checked. Experimental Designs Or Analyses: 1. Figures 3 and 7 demonstrate that the amplitude is nicely preserved, that's exciting! But the AMSE fine-tuned models seem to overestimate amplitudes at long wavelengths. This is plausibly attributed to noise that seems to be introduced into the forecasts. It would be great to qualitatively see some of those noisy forecasts to somewhat understand what type of noise appears. The AMSE AR1 forecasts seem well suited for such a qualitative analysis and I'd be curious to see it. 2. Regression to the mean is a prominent problem in spatiotemporal forecasting tasks. It would therefore be interesting to see how the amplitude-preserving loss introduced here performs on other datasets, such as [moving MNIST](https://paperswithcode.com/dataset/moving-mnist). Edit: I understand that the method is only applicable for spherical domains. 3. Also, how does the introduced loss perform when used not only for fine tuning, but for an entire model training? I'm wondering whether the model converges in a similar way (or worse/better). 
Such an ablation could also be performed on a small model predicting only a few variables, to reduce the computational burden, e.g., [this study](https://arxiv.org/abs/2407.14129). 4. I am skeptical about the comparison of a fine-tuned vs. not-fine-tuned model. Is the improved sharpness attributable to the new loss, or just a result of careful fine-tuning? I am concerned that the deliberate fine-tuning curriculum might also be a source of improved sharpness. Supplementary Material: Checked it briefly. Looks okay, yet a bit convoluted, as it is integrated into the GraphCast code. A standalone `pip install amse` or similar would be great for applicability. Relation To Broader Scientific Literature: Some older deep learning weather prediction approaches could be cited as well, such as Dale Durran's and Jonathan Weyn's pioneering works with CNNs, e.g., this work: https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2020MS002109 Essential References Not Discussed: None Other Strengths And Weaknesses: _originality_: The introduced loss component is novel and well motivated. It is also well justified with amplitude analyses. _significance_: Fixing the double penalty in spatiotemporal forecasting tasks marks a solid contribution to deep learning weather prediction. _clarity_: The manuscript is well structured and organized. Arguments are clearly motivated and approachable. Other Comments Or Suggestions: - Line 28, right: ERA5 dates back to 1949 by now. - Figure 4: A quick description of the metrics would be good. That is, CRPS and eRMSE are best when small, whereas spread/error supposedly (?) should be at 1.0. Edit: found it in line 378, left. But it might be good to have this in the figure caption too, to prevent the confusion I ran into. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
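As a reference point for the metrics discussed above, here is a minimal NumPy sketch of the empirical CRPS estimator and the spread/error ratio. These are standard textbook forms, not code from the paper, and the function names are our own:

```python
import numpy as np

def crps_empirical(ens, obs):
    """Empirical CRPS for one scalar observation; lower is better.
    (The 'fair' variant divides the pairwise term by n*(n-1) instead.)"""
    ens = np.asarray(ens, dtype=float)
    term1 = np.mean(np.abs(ens - obs))
    term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
    return term1 - term2

def spread_error_ratio(ens_runs, obs_runs):
    """Mean ensemble spread over RMSE of the ensemble mean.

    Well-calibrated large ensembles sit near 1.0; under-dispersion
    shows up as ratios below 1, over-dispersion above 1.
    """
    ens_runs = np.asarray(ens_runs, dtype=float)   # (cases, members)
    obs_runs = np.asarray(obs_runs, dtype=float)   # (cases,)
    spread = np.sqrt(np.mean(np.var(ens_runs, axis=1, ddof=1)))
    rmse = np.sqrt(np.mean((ens_runs.mean(axis=1) - obs_runs) ** 2))
    return spread / rmse

# degenerate ensemble: all members equal the observation -> CRPS = 0
assert crps_empirical([2.0, 2.0, 2.0], 2.0) == 0.0
```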
Rebuttal 1: Rebuttal: First, we thank the reviewer for their comments. They accurately note limitations of this work and suggest extensions that will improve the robustness of these results in a camera-ready version of the paper. Several of the suggested ablation studies should be possible, but they will take long enough that results won't be available during this comment period. We will include them as supplemental material in the full version of the paper. Our more specific responses are as follows: > A comparison to L1 as a fine-tuning metric would be interesting. An L1 study can be included in the camera-ready version of the paper. We don't expect to see strong differences, since L1-optimality predicts the median rather than the mean of the distribution. The Aurora model [Bodnar 2024, arxiv:2025.13062] was trained with L1 error, and it shows similar smoothing (figures I37 and I38). > [T]he AMSE fine-tuned models seem to overestimate amplitudes at long wavelengths. This is plausibly attributed to noise that appears to be introduced into the forecasts. It would be great to qualitatively see some of those noisy forecasts to better understand what type of noise appears. As a minor correction, the amplitude overestimation is at large wavenumbers and short wavelengths. The revised paper will include an auxiliary length scale to ensure clarity with the figures. We will also show examples of this overestimation through high-pass filtered forecast fields. This overestimation seems to be an amplification of grid imprinting [arxiv:2212.12894, section 7.5.3] that already exists in the GraphCast model. > It would therefore be interesting to see how the amplitude-preserving loss introduced here performs on other datasets, such as moving MNIST. Edit: I understand that the method is only applicable to spherical domains. Spherical geometry is helpful, but a similar derivation is possible on the plane. The core idea is that AMSE groups together modes that differ only by grid rotation (eq. 
3 $\to$ 4), and on the sphere this nicely divides modes into equivalence classes based on total wavenumber. On the (hyper)plane, a similar construction is possible that groups together modes with similar-enough total wavenumbers ($\sqrt{k^2 + l^2 + m^2}$). The recently-announced Chakraborty 2025 [arxiv:2502.00472] does this to develop its own modified error measure, but unlike AMSE, Chakraborty only considers the amplitude (energy) component. Overall, we're optimistic that AMSE might apply to a wide variety of fluid/turbulence problems, but we don't want to over-claim in the paper without firm proof. Application of AMSE to moving MNIST or other image/video problems is less straightforward. High-frequency modes in images or video correspond to edges, so it's not obvious that the Fourier modes can be lumped together as equivalent. A similar problem arises in weather with precipitation, where positivity (i.e. no negative precipitation) breaks the independence of spherical harmonic modes; AMSE slightly degrades the GraphCast precipitation forecast (figures 11 and 12). > Also, how does the introduced loss perform when used not only for fine-tuning but for an entire model training? This is a question we've considered, but training at a fine enough scale to show the smoothing is not currently practical. In theory, AMSE should be applicable throughout the training process, but in practice it might result in learning problems. We know that MSE-based training smooths/sharpens during learning (fig. 2), so the smoothing might force the model to learn to predict large scales first, acting as implicit regularization that speeds learning. Discussion of this would be added to the 'limitations' section of a camera-ready paper. Low-resolution studies like Karlbauer aren't great test-beds for AMSE comparisons because the atmosphere is more predictable at large scales. For example, fig. 
3 shows that for short lead times there is barely any smoothing noticeable at wavenumber 32, corresponding to the 5.625° resolution of Karlbauer's examples. > I am skeptical about the comparison of a fine-tuned vs. not-fine-tuned model. Is the improved sharpness attributable to the new loss, or just a result of careful fine-tuning? I am concerned that the deliberate fine-tuning curriculum might also be a source of improved sharpness. We will add a fine-tuned control model for the camera-ready version. The process here was based on [Subich 2024, arxiv:2408.14587], and there the author showed that fine-tuning to 12 steps changed smoothing only slightly (figure 12). Since smoothing is optimal under MSE, we should expect smoothing out of any MSE-trained model and be surprised by any exceptions. > A standalone pip install amse or similar would be great for applicability. Please see our response to 9jxY for discussion of this. > Some older deep learning weather prediction approaches could be cited as well Agreed, also Keisler 2022 as the foundational GNN weather prediction model.
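The planar analogue mentioned in the rebuttal (grouping Fourier modes into equivalence classes by total wavenumber, so that modes differing only by orientation share a bin) can be sketched as follows. The binning rule and normalization are our own illustrative choices; by Parseval's theorem the per-bin terms sum back to the plain MSE:

```python
import numpy as np

def binned_spectral_mse(pred, target):
    """Split the MSE of a doubly periodic square 2-D field into bins of
    total wavenumber |k|, the planar analogue of grouping spherical
    harmonics by degree l."""
    n = pred.shape[0]
    fp, ft = np.fft.fft2(pred), np.fft.fft2(target)
    err2 = np.abs(fp - ft) ** 2 / n**4        # Parseval normalization
    k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers
    ktot = np.rint(np.hypot(*np.meshgrid(k, k, indexing="ij"))).astype(int)
    return np.bincount(ktot.ravel(), weights=err2.ravel())

rng = np.random.default_rng(1)
x, y = rng.normal(size=(16, 16)), rng.normal(size=(16, 16))
per_bin = binned_spectral_mse(x, y)
assert np.isclose(per_bin.sum(), np.mean((x - y) ** 2))  # Parseval check
```

An AMSE-style loss would then split each per-bin error term into an amplitude part and a coherence part before recombining; the sketch above only shows the mode grouping that makes that decomposition possible.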
Summary: This paper addresses a significant issue in state-of-the-art data-driven weather forecasting models: the tendency for forecasts to be overly smooth, particularly at finer scales. This smoothing is attributed to the commonly used mean squared error (MSE) loss function, which penalizes models for misplacing features (the "double penalty" effect) and incentivizes averaging away less predictable scales. The authors propose a novel, parameter-free loss function, the Adjusted Mean Squared Error (AMSE), based on a spherical harmonic decomposition of the MSE. AMSE aims to separate errors due to decorrelation from errors in spectral amplitude, encouraging models to maintain realistic variability even for less predictable scales. Claims And Evidence: The central claim is that data-driven weather models trained with standard Mean Squared Error (MSE) suffer from unrealistic smoothing due to the "double penalty" effect. This is a known problem (even in computer vision). The authors analyze MSE optimization properties and show the smoothing behavior during training. The proposed Adjusted Mean Squared Error (AMSE) loss function, derived by spectrally separating amplitude and coherence terms, mitigates this smoothing. This is shown visually as well as via spectral analysis and quantile plots of wind distributions. The evidence provided is generally clear and convincing for the main claims, particularly regarding improved sharpness, resolution, and tropical cyclone intensity. Ensemble evals are trickier, and the authors acknowledge that. The exception is variables like precipitation, where spherical decomposition is less meaningful. Methods And Evaluation Criteria: Methods and evaluation criteria are relatively standard and make sense for the problem. The evaluation includes a fair mix of qualitative and quantitative methods. 
The derivation of MSE in terms of spectral power density and coherence (Equation 4) relies on Parseval's theorem for spherical harmonics and appears standard. Theoretical Claims: I didn't do a rigorous mathematical check of the derivations or proofs. The core theoretical claim linking MSE optimization to smoothing seems sound and widely accepted in the NWP and ML communities. Experimental Designs Or Analyses: The experiment design and analyses were quite reasonable. Starting from the publicly available GraphCast checkpoint and fine-tuning on the HRES dataset is a valid and efficient design choice to demonstrate the specific impact of the loss function change. The staged increase in forecast steps during fine-tuning is also quite standard. Supplementary Material: Briefly looked at the code. The way it's structured is somewhat complicated, and I would suggest cleaning that up if the authors wish wider adoption. Relation To Broader Scientific Literature: The paper is well connected to prior work. https://arxiv.org/abs/2308.05732 might be a different way to consider this problem. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - While spectral losses aren't entirely new, the specific formulation is simple and easy to apply for fine-tuning. - Addressing the smoothness issue is crucial for the operational viability and downstream use of data-driven models. - The paper is generally well-written and clearly explains the problem, the proposed method, and the results with supporting figures. Weaknesses: - Likely won't work with irregular grids. Other Comments Or Suggestions: "Effective resolution" is tricky to define/understand and claiming those numbers in the abstract is somewhat iffy. 
Questions For Authors: Given the poor performance for precipitation, attributed to the spectral decomposition not suiting non-negative, localized fields, do you think alternative modifications or hybrid approaches within the AMSE framework (or separate loss terms) could specifically improve precipitation forecasts while retaining the benefits for other variables? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. This review brings up some interesting points regarding the limitations, applications, and extensions of AMSE, and our detailed response follows. > Briefly looked at the code. It's somewhat complicated the way it's structured and would suggest cleaning that up if authors wish wider adoption. This was also mentioned by reviewer KNJY. Our intention is to first publish the training code and model checkpoints in the spirit of an open science record, and as a result the provided AMSE code is relatively closely coupled to the GraphCast training code. However, we are looking at testing AMSE with a different (currently unpublished) PyTorch-based model in the coming weeks, and needing a second implementation is sufficient motivation to make the design more modular and accessible. It likely won't be ready in time for a camera-ready version of the paper, but we thank you and KNJY for the encouragement. > The paper is well connected to prior work. https://arxiv.org/abs/2308.05732 might be a different way to consider this problem. We thank you for the reference, and it deserves a place in our literature review. We knew from papers such as GenCast (Price 2025) that diffusion models seemed to provide more realistic spectra, but it's very revealing to see a work such as this that arrives at a diffusion-like approach beginning with a motivation to accurately reproduce the low-amplitude portions of the spatial spectrum. > "Effective resolution" is tricky to define/understand and claiming those numbers in the abstract is somewhat iffy. That's a valid complaint about 'effective resolution.' The implied question is always 'effective at what?' Our work was motivated by downstream problems like that of (Husain 2024), where the ML-based forecast can only be used to guide downstream applications over scales where it has the correct amount of energy, and our smoothing-based analysis aligns with the cutoff scale used there. 
This is not the only definition of effective resolution. For example, Kent 2014 (doi 10.1016/j.jcp.2014.01.043) calculates the dispersion relation of various advection schemes and computes both diffusion-limited (smoothing) effective resolutions and dispersion-limited (wave propagation) resolutions. We will add some discussion of this to the 'limitations' section, to make our assumptions more apparent. We don't know enough detail about how GraphCast-like models propagate atmospheric waves to perform a similar study. However, we do know that the unmodified GraphCast provides reasonably accurate predictions of tropical cyclone locations even as it systematically weakens the storms, so we conjecture that the unmodified model has a diffusion-limited effective resolution that is far coarser than a dispersion-limited effective resolution. Since AMSE-based fine tuning doesn't seem to degrade the per-wavenumber correlations of the forecast fields (fig. 3), we think we improve the smoothing-limited effective resolution without damaging anything else. In recent days, Selz 2025 was published (10.22541/essoar.174139239.94807670/v1) as a preprint. As a testament to the subtlety of this issue, the paper discusses relative differences in effective resolutions of AI-based weather models without quantifying those resolutions with single numbers. > Given the poor performance for precipitation, attributed to the spectral decomposition not suiting non-negative, localized fields, do you think alternative modifications or hybrid approaches within the AMSE framework (or separate loss terms) could specifically improve precipitation forecasts while retaining the benefits for other variables? For precipitation itself, the best option is to probably look at other error measures entirely, but the idea of doing this for our paper seemed like cherry-picking. 
Ultimately, precipitation is a hybrid random variable: it doesn't rain for a finite fraction of the time, so there's a qualitative difference between zero precipitation and 0.1mm of precipitation. That's best suited to a double-variable predictand (chance of precipitation, expected precipitation if present), combining a cross-entropy loss function with something MSE-like, but that is a study that belongs elsewhere. However, other, smoother variables like dewpoint temperature, total column water content, or net evaporation-less-precipitation are correlated with precipitation and might be more suitable for AMSE-based loss calculations. AIFS (Lang 2024) includes the first two of those, but we wanted to maintain strict compatibility with the control GraphCast model.
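The double-variable predictand idea sketched above (probability of any precipitation plus expected amount when present) could look like the following hybrid loss. This is purely our illustration of the rebuttal's suggestion, with invented names, not code from the paper:

```python
import numpy as np

def hybrid_precip_loss(p_rain, amount_pred, amount_true, eps=1e-7):
    """Cross-entropy on rain occurrence plus MSE on amount where it rains.

    p_rain: predicted probability of any precipitation, in (0, 1)
    amount_pred / amount_true: precipitation amounts (0 where dry)
    """
    p_rain = np.clip(p_rain, eps, 1.0 - eps)
    occurred = (amount_true > 0).astype(float)
    bce = -np.mean(occurred * np.log(p_rain)
                   + (1.0 - occurred) * np.log(1.0 - p_rain))
    wet = occurred > 0
    mse = np.mean((amount_pred[wet] - amount_true[wet]) ** 2) if wet.any() else 0.0
    return bce + mse

# perfect amounts and confident occurrence probabilities give a small loss
loss = hybrid_precip_loss(np.array([0.01, 0.99]),
                          np.array([0.0, 3.0]),
                          np.array([0.0, 3.0]))
assert loss < 0.05
```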
Reward Translation via Reward Machine in Semi-Alignable MDPs
Accept (poster)
Summary: This paper considers a setup where we want to transfer the reward from a source domain to a target domain so that we can train RL agents in the target domain, where reward design is tedious or difficult. But this will be difficult for many source-target domain pairs because many domains don't share the same structure. So the paper introduces the concept of semi-alignable MDPs -- designing a method that first generates a reward machine and then transfers that reward machine. Experiments are conducted on some simple 3D visual navigation tasks and OpenAI Gym tasks. ## update after rebuttal My score is updated from 2 to 3 to reflect that the authors provided additional experiments in the rebuttal. I recommend that the authors provide more diverse experiments if the paper is accepted. Claims And Evidence: Experimental results are very weak for supporting the claims made in the paper. In fact, the results are quite noisy and not significant, so it's difficult to see a trend. Methods And Evaluation Criteria: For the OpenAI Gym experiments, a somewhat contrived setup is used where the reward is artificially made sparse. One additional note is that PPO is not 'state-of-the-art' -- this is a method from 2017! -- and it is not clear why the proposed idea is evaluated only on PPO; more experiments based on other backbone RL algorithms could make the results more convincing. Theoretical Claims: I don't have enough background/knowledge to thoroughly go through the proof. Experimental Designs Or Analyses: Again, the setup is a bit contrived and the paper should consider more backbone RL algorithms. Using NChain as the source domain is interesting, but the paper could be made stronger by considering more realistic scenarios with more 'relevant yet different' domains beyond NChain. Supplementary Material: I only checked additional results on OpenAI Gym tasks. 
Relation To Broader Scientific Literature: Reward translation is an interesting idea that can be useful for many scenarios, but the current method is a bit too complex and does not have promising results yet. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Other strengths - The related work section was very useful in understanding relevant works. Other weaknesses - The paper is a bit weak in providing motivation. It would be nice to provide a trivial example where it is easy to transfer rewards and doing so is hugely beneficial. Other Comments Or Suggestions: The authors may want to check how to write double quotes in LaTeX. Questions For Authors: - Would it be possible to provide experimental results on a more 'natural' setup where the reward is not artificially removed from the task? Removing the reward entirely, thus making the PPO baseline fail completely, and then showing that the proposed idea can work may not be an ideal way of showing the promise of reward translation. - It seems to me that PPO policies are all stuck in local minima for OpenAI Gym tasks -- is there a particular reason for this? - Regarding the above point, is there any qualitative analysis that shows which reward is transferred? - Would there be a way to generate reward machines without using LLMs? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review of our paper. We appreciate your feedback and the opportunity to address your concerns regarding our approach and experimental results. We have updated the additional experiment in: https://drive.google.com/file/d/1_U1d13bM4kG1reHUdx2wkLNb9zfn4otS/view?usp=sharing (Due to time constraints, many supplementary experiments could not be run with multiple groups to account for variance effects.) ## 1. Regarding baseline and experiment design. We appreciate this concern about our algorithmic choices. We must emphasize that our paper's primary contribution is not proposing a superior RL algorithm, but rather enabling cross-domain reward reuse - the RL method simply serves to validate our reward transfer effectiveness and should avoid excessive additional techniques. From this perspective, we selected PPO for several reasons: 1. PPO is a widely-used, reliable algorithm that doesn't introduce additional techniques that might confound our results. 2. Using a consistent algorithm across most environments allowed for standardized experimentation. 3. PPO's compatibility with both discrete and continuous action spaces facilitated unified testing methodology. We should clarify that our baselines aren't limited to PPO alone - our Nchain2Nchain experiments utilized DQN. Additionally, to address your concern, we have expanded our experiments to include SAC (Soft Actor-Critic) as an additional baseline for the Mujoco tasks. The result is shown in Figure 4 in the additional experiment. These new results, included in our supplementary materials, demonstrate that our method's benefits are consistent across different RL algorithms, strengthening the generality of our approach. Our experimental design deliberately demonstrates knowledge transfer from simpler source tasks to more complex target domains, showing how fundamental task structures can inform learning in challenging environments. 
We carefully selected tasks to demonstrate effectiveness with both isomorphic and homomorphic reward machines, specifically addressing semi-alignable MDPs - a challenge inadequately addressed in previous research. Beyond MuJoCo, our Text-Sign to 3D-Sign transfer experiments validate our approach in first-person navigation with sparse rewards. To address concerns about reward structure, we ran additional experiments with naive reward shaping (Figure 5 in additional experiment). While simple shaping improves performance, our NRT method still outperforms by capturing logical dependencies between subtasks, providing more principled guidance. ## 2. Regarding reward visualization and transfer Our reward machine diagrams already indicate transferred reward values directly, showing how values map between domains - a more intuitive representation than alternatives like heat maps. We provide a visualization example of the reward machine heatmap from NChain to MuJoCo, as given in the link below, for your reference and comparison. https://drive.google.com/file/d/1wPJJbeZjK6_6ZTib4JEaBnueIoMja6nU/view?usp=sharing ## 3. Regarding performance in Mujoco environments You correctly noted that PPO policies seem to struggle in our modified Gym tasks. This is by design - we modified the standard Mujoco environments to use sparse rewards, where agents only receive feedback upon reaching specific checkpoints (e.g., point F when x-position exceeds 8), rather than the dense rewards in the original environments that provide immediate feedback at every step. We should emphasize that this checkpoint-based task setting for Mujoco environments has precedent in classical reward machine literature, specifically in Icarte et al.'s "Reward Machines: Exploiting Reward Function Structure in Reinforcement Learning." However, while Icarte et al. 
retained the control reward component from the original environments, we implemented a fully sparse reward setting to create a more challenging scenario that better highlights the benefits of our approach. This sparse reward setting intentionally creates a challenging learning scenario that better demonstrates the value of our approach. Standard PPO struggles because most sampled trajectories contain no reward signal. Our NRT method, while still operating in a relatively sparse reward regime, provides more informative guidance through the transferred reward structure, helping overcome this exploration challenge. ## 4. Regarding reward machine generation without LLMs While we used LLMs, alternatives include manual definition and Icarte et al.'s combinatorial optimization approach. We focused on the reward translation framework itself rather than reward machine generation, which could be explored in future work. Thank you again for your valuable feedback, which has helped us strengthen both our presentation and experimental validation. --- Rebuttal Comment 1.1: Comment: Thank you for the response and I like that you added new experiments. I would strongly recommend adding similarly diverse experiments in the camera-ready if the paper is accepted; that would make the paper much stronger and interesting to the community. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your insightful and constructive comments that improved our work. We will add these diverse experiments.
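The fully sparse checkpoint setting described in the rebuttal (no dense control or forward reward; a single bonus, shrinking with step count, once the x-position passes point F at x > 8) might be implemented as a wrapper along these lines. The wrapped-environment interface, constant values, and names are our own illustrative guesses, not the authors' code:

```python
class SparseCheckpointReward:
    """Pay reward only once the x-position passes checkpoint F.

    The wrapped env is assumed to expose reset() and step(action) returning
    an (obs, reward, done, info) tuple with "x_position" in info; this
    convention is illustrative.
    """
    def __init__(self, env, goal_x=8.0, bonus=1000.0):
        self.env, self.goal_x, self.bonus = env, goal_x, bonus
        self.steps = 0

    def reset(self):
        self.steps = 0
        return self.env.reset()

    def step(self, action):
        obs, _dense, done, info = self.env.step(action)  # drop dense reward
        self.steps += 1
        reached = info["x_position"] > self.goal_x
        reward = self.bonus / self.steps if reached else 0.0
        return obs, reward, done or reached, info
```

Under this wrapper, most early trajectories carry zero reward signal, which is exactly the exploration difficulty the rebuttal argues a transferred reward machine helps overcome.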
Summary: This paper proposes a way to derive reward functions for cross-domain transfer learning. This is achieved via reward machines, which are used to obtain a transferable reward in semi-alignable MDPs. Experiments conducted on 3D visual navigation and a few Mujoco tasks demonstrate the benefits when agents are trained with PPO. Claims And Evidence: The claims of this paper are around developing the foundations for reward translation and semi-alignable MDPs, which are defined in Sections 3 and 4. Methods And Evaluation Criteria: The paper evaluates on 3 tasks from Mujoco (HalfCheetah, Ant and Hopper) and on 3D visual navigation based on Miniworld. Theoretical Claims: Condition 2) in Definition 3.1 seems off, as $\phi$ is applied over the transition function $Pr$. Theorem 3.5 looked fine. I did not check Theorem 4.5 closely. Experimental Designs Or Analyses: The experiments on Mujoco tasks are not convincing enough. The source domain is an NChain environment where the agent needs to reach a point F, and it is used to define a reward for the target domain where the Ant / HalfCheetah needs to reach point F. This is a relatively simple task. What if the source task is harder? Given that the paper uses LLMs (as described in Sec 4), will the method work when the source domain is HalfCheetah and the target domain is Ant? Supplementary Material: The proof of Theorem 3.5. Relation To Broader Scientific Literature: The idea of recovering a reward function that can be transferred across domains is challenging. However, the experiments are not convincing enough. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: 1. The example in Fig 1 is not convincing. In the HalfCheetah environment, the task is to run/walk at a certain speed, not to reach a point F as in NChain. How are these two tasks related, and how is point F defined in the HalfCheetah environment? 2. 
The second condition (b-dynamic) in Def 3.1 is not clear to me. How does the function $\phi$ apply to $Pr^B$, given that this is a transition function and should return probabilities? 3. In Definition 3.3, how is the action domain x used in defining the Reward Machine? It is mentioned, but not used anywhere. And why is x an action domain, when in Sec 3.1 X is defined as the source domain? 4. What if, instead of NChain, a cheetah task where the agent has to reach point F is used for transfer, such that the Ant reaches a point F? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review of our paper. We appreciate your careful reading and questions, which help us improve the clarity and rigor of our work. We have updated the additional experiment in: https://drive.google.com/file/d/1_U1d13bM4kG1reHUdx2wkLNb9zfn4otS/view?usp=sharing (Due to time constraints, many supplementary experiments could not be run with multiple groups to account for variance effects.) ## 1. Regarding the HalfCheetah example in Fig 1 We apologize for any confusion in our presentation. To clarify, we have modified the standard Mujoco environments to create goal-oriented tasks that better demonstrate our cross-domain transfer approach. For the HalfCheetah environment, we defined checkpoints along the x-axis, with point F specifically representing when the robot's position exceeds x=8. This task definition follows similar approaches in Icarte et al.'s work Reward Machines: Exploiting Reward Function Structure in Reinforcement Learning (which we have cited). The key difference from the standard HalfCheetah environment is that we use a sparse reward structure where the agent receives a reward only upon task completion (reaching point F), with the reward magnitude depending on the number of steps taken. This contrasts with the original dense reward setting that provides control and forward rewards at each step. This sparse reward formulation makes the task more challenging to learn but better demonstrates the value of our approach. ## 2. Regarding condition (b-dynamic) in Definition 3.1 Thank you for requesting clarification on this point. The essence of this definition relates to our abstraction of environments into subgoals and skills, creating a coarser granularity than the standard RL state-action level. To elaborate: in environments like our Text-Sign task, the agent must collect specific items in sequence to complete the task. 
Each item collection represents a subgoal, and the process of achieving that subgoal constitutes a skill. This abstraction doesn't consider the specific actions or states involved in executing the skill (which would be fine-grained), focusing instead on the higher-level completion of subgoals. The mappings φ and η represent these coarse-grained alignments of subgoals and skills between source domain X and target domain Y. Our "semi-alignment" concept acknowledges that fine-grained alignment between domains is often infeasible, but partial reward reuse through these higher-level mappings remains valuable. In our implementation, this is achieved through reward machine states and propositional symbols mappings. The function Pr in Definition 3.1 is a deterministic function that indicates which new abstract state (equivalently, which reward machine state) the agent will transition to when selecting a particular skill from its current abstract state. ## 3. Regarding Definition 3.3 and domain notation We sincerely apologize for this notational error. You are correct that in Definition 3.3, "x" should be "A" (representing the action space), not the source domain. This is indeed a typographical error that we will correct in the final version. ## 4. Regarding experimental concerns The core contribution of our work is enabling reward translation for cross-domain transfer, with reward machines as the bridging mechanism. Our experiments focus on transferring knowledge from simpler tasks (e.g., NChain) to more complex target domains, demonstrating how fundamental task structures can inform learning in challenging environments. We carefully selected tasks to highlight our approach’s effectiveness in both isomorphic and homomorphic reward machine settings, specifically addressing semi-alignable MDPs—an area insufficiently explored in prior work. 
Beyond Mujoco, we also validated our method in a significantly different domain: transferring from Text-Sign to 3D-Sign, where first-person navigation with sparse rewards poses a unique challenge. Regarding Mujoco transfers (e.g., HalfCheetah to Ant), our method is applicable but unnecessary from a motivation standpoint. These environments share the same reward machines, and existing work (e.g., Raychaudhuri et al. Cross-domain Imitation from Observations) has already demonstrated effective cross-domain transfer for alignable MDPs (They also transfer HalfCheetah to Ant). Since our approach abstracts reward structures rather than leveraging shared state-action similarities, it does not offer additional benefits in such cases. That said, due to their inherent similarity, even with our method, performance would be better than in NChain. We believe these clarifications address your concerns and hope they provide a clearer understanding of our approach and its contributions. We thank you for your valuable feedback, which will help us improve the presentation of our work.
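To make the objects in the rebuttal concrete: a reward machine over abstract states with a state mapping $\phi$ carrying source rewards onto target RM states can be sketched minimally as below. All state names, events, reward values, and the mapping itself are illustrative inventions, not the paper's definitions:

```python
class RewardMachine:
    """Minimal reward machine: abstract states, propositional events,
    a deterministic transition table, and per-transition rewards."""
    def __init__(self, delta_u, delta_r, u0):
        self.delta_u, self.delta_r, self.u = delta_u, delta_r, u0

    def step(self, event):
        key = (self.u, event)
        reward = self.delta_r.get(key, 0.0)
        self.u = self.delta_u.get(key, self.u)  # self-loop on unknown events
        return reward

# illustrative source RM for NChain: A --pass_mid--> B --reach_F--> done
src = RewardMachine(
    delta_u={("A", "pass_mid"): "B", ("B", "reach_F"): "done"},
    delta_r={("A", "pass_mid"): 10.0, ("B", "reach_F"): 100.0},
    u0="A",
)

# a state mapping phi carries source rewards onto target RM states
phi = {"A": "start", "B": "halfway", "done": "goal"}  # illustrative
tgt_delta_r = {(phi[u], e): r for (u, e), r in src.delta_r.items()}
assert tgt_delta_r[("halfway", "reach_F")] == 100.0
```

The point of the abstraction is visible here: nothing in the mapping refers to states or actions of either underlying environment, only to subgoal-level RM states and events.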
Summary: The paper introduces the Neural Reward Translation (NRT) framework, a novel methodology designed to transfer knowledge from completing a task in one environment to quickly learning to solve a (sufficiently similar) task in another environment. For example, NRT can transfer knowledge gained from completing a task in a grid with discrete actions to learning how to solve a similar task in a 3D environment with continuous actions. To achieve this, NRT utilizes the optimal value function of the original task (e.g., the grid world) to shape the reward for the target task (e.g., the 3D environment). Furthermore, the paper formally defines the concept of semi-alignable MDPs. Based on this definition, the authors demonstrate the conditions under which NRT can be used to transfer knowledge from one MDP to another. Claims And Evidence: Yes Methods And Evaluation Criteria: Overall, I believe the experimental evaluation is solid. However, I have two concerns regarding the experiments. 1) I find it somewhat disappointing that NRT was tested only on sequential tasks. It would have been great to see tasks that include disjunctions, conjunctions, or cycles (beyond self-loops). That said, I don’t believe there’s any reason the proposed method wouldn’t perform well in those cases as well. 2) I find it challenging to assess the merits of the proposed method without additional baselines. For example, we could shape the reward function by providing extra rewards whenever the agent makes progress in the task (i.e., moves closer to the terminal state in the RM). I believe that such a baseline could be competitive with the proposed method, and it is straightforward to implement. Theoretical Claims: The definitions and theorems appear correct to me. However, I have a couple of comments, which are discussed below. Experimental Designs Or Analyses: Yes, the experimental design and analysis are sound. Supplementary Material: I reviewed the appendix. 
Relation To Broader Scientific Literature: The paper explains that previous attempts to transfer rewards would not be effective with incompatible MDPs (which are non-pairable and non-time-alignable). However, it does not connect to the literature on automated reward shaping or the literature on intrinsic motivation. I believe that any method altering the reward function could serve as a valid alternative for using NRT. Essential References Not Discussed: I think all the important works are being discussed. Other Strengths And Weaknesses: Overall, I think this is a solid paper. It explores an important issue: how to transfer knowledge from one domain to another. To do so, it formally defines semi-alignable MDPs and then uses RMs to develop a practical algorithm. The results, although limited to sequential tasks, show that this novel method can work well in practice. I only have two minor concerns about this work: 1. The paper discusses the potential use of LLMs to automatically generate an RM. However, I wouldn’t consider this a major contribution of the work because the method itself is not well explained, and it is unclear whether it would be effective in a different domain. 2. I suspect that a simple reward shaping technique could perform just as well as the proposed method for the sequential tasks tested in the paper. As such, the experimental section would be more compelling if it assessed other types of tasks, particularly those involving cycles or disjunctions. Additionally, I believe that incorporating a naive reward shaping method as a baseline would enhance the paper. Even a straightforward approach, such as giving a reward of +100 every time the agent makes progress on the task, would help better evaluate the effectiveness of the proposed method. Other Comments Or Suggestions: I suggest better explaining $\Pr^B$ in Section 3.2. 
Currently, it just states that it “_denotes the transition on goal._” However, if I understand correctly, it represents a probability from $B \times W$ to $\Pr(B)$. If that is the case, please make that explicit. Also, I find it strange that in Definition 3.1 the paper states that $\Pr^B_y(b_y,w_y) = \phi(\Pr^B_x(b_x,w_x))$ because that implies that $\Pr^B_x(b_x,w_x)$ is actually a deterministic function (not a probability distribution) that returns one state $b \in B_x$ (since $\phi$ was defined as a function from $B_x$ to $B_y$). In Sections 3.3 and 4, I am confused by the meanings of $P$, $\mathcal{P}$ and $2^{\mathcal{P}}$. Usually, $\mathcal{P}$ refers to the set of propositional symbols and $2^{\mathcal{P}}$ is a truth value assignment to each of those symbols. So, for instance, in one transition, symbols $a$ and $b$ could hold at the same time. Thus, the definition of $\delta_u$ in an RM goes from $U \times 2^{\mathcal{P}} \mapsto U$. But the paper defines them in terms of $U \times P \mapsto U$. So, I don’t know if that is a typo, or if $P$ is used as shorthand notation for $2^{\mathcal{P}}$. Then in Theorem 3.5, I think $\Gamma_x$ should go from $2^{\mathcal{P}_x}$ to $W_x$ because in every RM transition, multiple propositions might hold at once. Questions For Authors: 1. If we use a naive reward shaping technique, such as always providing an extra reward for making progress on the task—or something similar to the automated reward shaping method proposed by Icarte et al. (2022)—would that work similarly to the proposed method? Or would NRT work better? Why? 2. In what way is the proposed method superior to the automated reward shaping suggested by Icarte et al. (2022)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We deeply appreciate your thoughtful review and constructive feedback on our NRT paper. We have uploaded the additional experiment at: https://drive.google.com/file/d/1_U1d13bM4kG1reHUdx2wkLNb9zfn4otS/view?usp=sharing (Due to time constraints, many supplementary experiments could not be run with multiple groups to account for variance effects.) Your suggestions will certainly strengthen our work, and we address each point below: ## 1. Regarding testing NRT beyond sequential tasks Thank you for this valuable suggestion. While the NChain to Mujoco task does contain a basic disjunction (at point C, the agent can choose to go to point G or point D), we acknowledge this implementation is relatively simple. We have now expanded our evaluation by modifying the Text-Sign task to incorporate more complex disjunctions. In this enhanced environment, we added a "trap" mechanism where the agent receives a -10 reward and terminates the episode if it falls into the trap at any point. From a reward machine perspective, this modification adds a branch from each state (U0, U1, U2, U3) to a terminal failure state, creating multiple disjunctive paths through the task. The environment diagram and transferred reward machine are shown in Figures 1 and 2 in the additional experiment. Our results (shown in Figure 3 in the additional experiment) demonstrate that NRT continues to show significant performance improvements when transferring the reward machine and corresponding rewards from the original Text-Sign task, even with these more complex logical structures. ## 2. Additional baselines (naive reward) We agree that additional baselines strengthen our evaluation. We implemented a naive reward approach on the Sign task (shown in Figure 5 in the additional experiment; each time the agent picks up an item, it receives a reward of +10), where the agent receives supplementary rewards when making progress toward the goal.
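For concreteness, the naive progress-reward baseline just described could be sketched as a thin wrapper around a gym-style environment. This is our own illustration, not the paper's implementation; the class, the `rm_state_fn` hook, and the interface are all hypothetical.

```python
# Hypothetical sketch of the naive reward-shaping baseline: a fixed bonus
# each time the agent's reward-machine (RM) state advances toward the goal.
class ProgressShapingWrapper:
    def __init__(self, env, rm_state_fn, bonus=10.0):
        self.env = env                  # underlying sparse-reward environment
        self.rm_state_fn = rm_state_fn  # maps an observation to an RM state index
        self.bonus = bonus
        self._last_rm_state = None

    def reset(self):
        obs = self.env.reset()
        self._last_rm_state = self.rm_state_fn(obs)
        return obs

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        rm_state = self.rm_state_fn(obs)
        if rm_state > self._last_rm_state:  # made progress on the task
            reward += self.bonus
        self._last_rm_state = rm_state
        return obs, reward, done, info
```

Note that this is plain (non-potential-based) shaping, so unlike a potential-based term of the form gamma * Phi(s') - Phi(s) it is not guaranteed to preserve the optimal policy.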
The results (available in our supplementary materials) show that while naive reward shaping does improve performance compared to sparse rewards, there remains a substantial performance gap when compared to our NRT method with transferred reward machines. This comparison highlights that while simple reward shaping can help, the structured knowledge transfer facilitated by NRT provides more substantial benefits for learning efficiency. The transferred reward structure itself inherently contains this shaping information, but in a more principled way that captures the logical dependencies between subtasks, which explains the superior performance of our approach. ## 3. On automated reward machine generation Thank you for highlighting this concern. We want to clarify that our primary contribution is the reward translation framework (cross-domain reward transfer), with reward machines serving as the bridge for this translation process. Icarte et al.'s work on automated reward machines is indeed excellent, formulating a combinatorial optimization problem for reward machine construction. However, solving this requires substantial data and computational resources. Our LLM-based approach to generating reward machines is admittedly more similar to human specification but offers practical automation benefits. We agree that combining Icarte's automated reward machine learning with our reward translation framework represents a promising direction for end-to-end learning. This integration would need to address challenges like limited trajectory data in target tasks and modeling complexities in continuous action spaces like Mujoco. However, it is beyond the scope of what this paper aims to discuss. ## 4. Clarification on propositional symbols (P) and definition 3.1 We apologize for any confusion regarding our notation. P represents the set of propositional symbols, expressed as textual predicates (e.g., "c" or "!c" indicating whether the agent has or hasn't reached point c).
The notation 2^P refers to the power set of these symbols, representing all possible truth assignments. We should have used 2^|P| for clarity, and we will correct this in the camera-ready version if accepted. The function Pr in Definition 3.1 is a deterministic function that indicates which new abstract state (equivalently, which reward machine state) the agent will transition to when selecting a particular skill from its current abstract state. We sincerely thank you for your thoughtful review and hope our responses address your concerns. --- Rebuttal Comment 1.1: Comment: Thank you for your response and the additional experiments. My primary concerns have been addressed, so I have increased my recommendation score to Accept. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for your insightful and constructive comments that improved our work.
When can in-context learning generalize out of task distribution?
Accept (poster)
Summary: This paper examines the generalization properties of in-context learning (ICL) in transformers. Specifically, it explores the conditions necessary for ICL to emerge and extend beyond the pretraining distribution. To investigate this, the authors conduct a series of experiments across various tasks and summarize their key findings. Claims And Evidence: See strengths and weaknesses Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: no Relation To Broader Scientific Literature: NA Essential References Not Discussed: yes Other Strengths And Weaknesses: ## Strengths - The paper is written clearly and is easy to follow - The paper asks important questions regarding how transformers generalize, a crucial topic for advancing the field’s understanding. ## Weaknesses - The experiments are relatively simple and do not explore the effects of increasing task complexity. For instance, incorporating vision-language tasks post-pretraining could provide deeper insights into ICL. - The experimental design appears highly similar to [1], yet the paper does not adequately clarify this resemblance. - The paper does not clearly answer the key questions posed in the “Contributions” section. While the experimental results section attempts to address them, the discussion lacks clarity and direct conclusions. 1. Raventós, A., Paul, M., Chen, F., & Ganguli, S. (2023). Pretraining task diversity and the emergence of non-bayesian in-context learning for regression. Advances in neural information processing systems, 36, 14228-14246. Other Comments Or Suggestions: none found Questions For Authors: ## Questions to ask - Can the authors explicitly discuss the similarities between this paper and [1] in terms of experimental design? - Can the authors provide clearer conclusions drawn from the experimental results regarding the key questions outlined in the “Contributions” section? 1. Raventós, A., Paul, M., Chen, F., & Ganguli, S.
(2023). Pretraining task diversity and the emergence of non-bayesian in-context learning for regression. Advances in neural information processing systems, 36, 14228-14246. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for highlighting the clarity and importance of our work! We appreciate your comments and suggestions, which help us improve the paper. &nbsp; > The experiments are relatively simple… We agree that our experimental setups are relatively simple. However, this simplicity is precisely what enables controlled experiments that are essential for developing an understanding of complex learning phenomena. We view our work as establishing an important foundation upon which future research can build more sophisticated models of out-of-task-distribution generalization in increasingly complex settings. This minimal model approach has proved crucial in advancing our understanding of nontrivial learning phenomena, such as generalization and learning dynamics of deep networks (eg, Saxe et al, ICLR 2014; Lampinen & Ganguli, ICLR 2019; Ji & Telgarsky, ICLR 2019; Arora et al, ICLR 2019), double descent and benign overfitting (eg, Bartlett et al, PNAS 2020; Wu & Xu, NeurIPS 2020; Hastie et al, Ann Stat 2022; Richards et al, AISTATS 2021; Mel & Ganguli, ICML 2021), and emergence of in-context learning (eg, Chan & Santoro et al, NeurIPS 2022; Singh & Chan et al, NeurIPS 2023; Raventós et al, NeurIPS 2023; Reddy, ICLR 2024; Nguyen & Reddy, ICLR 2025). &nbsp; > …and do not explore the effects of increasing task complexity. Our work considered several settings with varying task complexity, including noisy regression (Fig 2B), increasing task dimensions (Fig 7), and nonlinear regression (Fig 9). We have also recently performed experiments on a classification task with $y = \mathrm{Heaviside}(w^Tx)$ and $w$ drawn from a hyperspherical cap similar to our linear regression setting. We will include the results in our revised manuscript. In all settings considered, we observe a specialization-generalization transition, similar to those observed in Fig 2A.
Our results hint at universal features of out-of-task-distribution generalization in exemplar in-context learning. We will clarify this point in the discussion section in our revised manuscript. &nbsp; > …incorporating vision-language tasks post-pretraining could provide deeper insights into ICL. We would love to be able to do this, but doing so would require developing a more robust notion of task similarity for vision-language tasks, which is outside the scope of our current work. &nbsp; > The experimental design appears highly similar to [Raventós et al], yet the paper does not adequately clarify this resemblance. > Can the authors explicitly discuss the similarities between this paper and [Raventós et al] in terms of experimental design? Thank you for pointing out an opportunity to clarify the differences between our work and Raventós et al. We intentionally chose our experimental setup to begin from Raventós et al. This similarity enables a seamless transition from this earlier work, which considered the effects of the number of training tasks on in-task-distribution generalization, through to our work on out-of-task-distribution generalization. This approach allows for a more direct comparison between previous results and ours, which helps readers build intuitions on the effects of task diversity on the emergence of general purpose in-context learning, without the need to translate between two different experimental settings. We will revise our manuscript to better explain the rationale behind our experimental design, where we describe our setup. &nbsp; > The paper does not clearly answer the key questions posed in the “Contributions” section. While the experimental results section attempts to address them, the discussion lacks clarity and direct conclusions. > Can the authors provide clearer conclusions drawn from the experimental results regarding the key questions outlined in the “Contributions” section? 
We will include a new Conclusions section to the discussion to further clarify how our experiments support our claims in the Contributions section. In particular, we will add the following texts: * We investigate the ability of transformers trained on simple tasks to generalize out-of-*task*-distribution: where “task generalization” is a particular type of covariate shift. * We identify a novel transition in the task generalization performance of transformers trained to do ICL, and show that pretraining tasks need not cover the full task space in order for models to generalize * Our experiments identify specialization-generalization transitions in several different ICL problems, suggesting that these transitions may be a universal phenomenon.
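The pretraining setup discussed throughout this exchange can be made concrete with a minimal sketch (our own, not the authors' released code): each task $w$ is drawn uniformly from a hyperspherical cap of angular radius phi around a fixed axis, and an in-context sequence of pairs with $y = w^T x + \text{noise}$ is generated. Rejection sampling keeps the sketch simple; function names are illustrative.

```python
import numpy as np

def sample_cap_task(d, phi_deg, rng):
    """Uniform draw from the cap {w on the unit sphere : angle(w, e1) <= phi}."""
    cos_phi = np.cos(np.radians(phi_deg))
    while True:
        w = rng.standard_normal(d)
        w /= np.linalg.norm(w)      # uniform on the unit sphere
        if w[0] >= cos_phi:         # within angle phi of the axis e1
            return w

def make_context(w, n_examples, sigma, rng):
    """In-context examples (x_i, y_i) with y = w^T x + Gaussian label noise."""
    x = rng.standard_normal((n_examples, len(w)))
    y = x @ w + sigma * rng.standard_normal(n_examples)
    return x, y

rng = np.random.default_rng(0)
w = sample_cap_task(d=8, phi_deg=60.0, rng=rng)
x, y = make_context(w, n_examples=16, sigma=0.1, rng=rng)
```

Rejection sampling is adequate for moderate cap angles; for narrow caps in high dimension a direct cap sampler would be preferable.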
Summary: The paper explores the effect of pretraining task diversity: instead of the number of pretraining tasks, it considers the diversity of a fixed number of pretraining tasks. One set of $N$ tasks could be more diverse than another set of $N$ tasks. Specifically, the paper draws samples from a subset of a unit hypersphere and tests on the entire hypersphere. The paper measures the new concept of diversity and shows that there is a transition of the trained Transformer from a specialized solution to generalization over the full task space when the new diversity of tasks is increased. There are multiple ablation studies provided to make the experiments complete. Claims And Evidence: Claims: The paper claims there should be another angle of task diversity, i.e., the scale of the pretraining task distribution. Via a special experimental design, the paper illustrates that the claim is true. Supported? Yes. The claim itself is reasonable and straightforward, and supported by the experiments. The paper could be strengthened by diversifying the experimental settings for the tasks. The paper concentrates on pretraining task parameters on the sphere; it could either propose another task, possibly classification rather than regression, or consider a task distribution other than the sphere. Methods And Evaluation Criteria: Yes, the experimental setup makes sense. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Yes, I read the full paper, so I checked the experimental design. The experiments are straightforward to me. Supplementary Material: No. Relation To Broader Scientific Literature: The key contribution could be regarded as a complement to the definition of pretraining task diversity. There are indeed two angles: the number of training tasks and the scale of the pretraining task distribution. The paper highlights the second part. Prior findings are concentrated on the first part.
Essential References Not Discussed: The author could consider discussing the relationship to: Can Transformer Models Generalize Via In-Context Learning Beyond Pretraining Data? Other Strengths And Weaknesses: Strength: (i) Fig 2 illustrates an interesting phenomenon: the pretraining tasks do not need to cover the whole space to achieve good performance over the whole space. There is a sweet spot in the pretraining task distribution, which depends on the level of label noise. Weakness: (i) The paper should consider moving some experiments from the appendix into the main paper, especially when the main paper is not a full 8 pages and the main paper refers to figures in the appendix. (P3 col1 line 142: Fig 10 is in the appendix) The paper is a bit beyond borderline for me, but it cannot reach 4, so I score 3. The experiments are good but not surprising, and the claim is very reasonable and well supported, but not surprising. The contribution is decent but not huge. The writing is also good. If the finding could shed some light on how such generalization happens, the paper would score 4. Other Comments Or Suggestions: There are multiple citation misuses. For instance, P1 col2 line038 "the results of (...)". I think the author used \citep, but here \citet should be used since there should be no "()". P3 col1 line 142 "Fig 10" should be "Fig. 10". P4 col1 line 201-204 "set by $\sigma^2$. with". should be "With"? The Optimal Bayes* solution depends on the label noise as shown in equation (5), where $p(w|C,y)$ depends on the label noise over $y$. Therefore, the author may connect the Optimal Bayes* solution to the case of $\sigma\rightarrow 0$. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your comments and suggestions! We are pleased that you find our work interesting and well supported. &nbsp; > The paper could do better via diversifying the experimental setting on the tasks. > …consider more either proposing another task, possibly on classification rather than regression, or consider a task distribution other than the sphere. Thank you for suggesting an intuitive way to diversify our experimental setting. We have performed experiments on a classification task with binary class labels {0,1}, generated from $y=\mathrm{Heaviside}(w^Tx)$. The tasks $w$ are drawn uniformly from a hypersphere cap similar to our regression setting. We measure model performance on unseen tasks in and out of the training distribution. In this classification setting, we also observe a specialization-generalization transition similar to Fig 2. We update our manuscript to include the new results. These new results will add to existing variations we considered, such as noisy regression (Fig 2B), increasing task dimensions (Fig 7), and nonlinear regression (Fig 9). A transition between specialized and general purpose solutions was evident in all settings investigated, including the new classification experiments described above. These results suggest that this transition, driven by distributional task diversity, is a universal feature of out-of-task-distribution generalization in exemplar in-context learning. &nbsp; > …consider to discuss the relationship to: [Yadlowsky et al (2023)] Thank you for pointing out this interesting and relevant work. Yadlowsky et al (2023) asked whether a transformer can in-context learn a novel task and highlight that the coverage of the training task distribution (ie, *distributional task diversity* in our work) is an important factor in ICL ability. 
While related, our work differs from theirs in several important aspects: * Yadlowsky et al’s setting can best be described as “concept shift” of the task distribution, in the context of standard OOD literature. Our work fits within the framework of “covariate shift” of the task distribution. * We develop an experimental framework that allows us to clearly define task similarity, which enables quantitative investigations of its effects. * We demonstrate that distributional task diversity drives a sharp transition between specialized and general-purpose ICL. We include discussion of Yadlowsky et al and clarify the points above in our revised manuscript. &nbsp; > …experiments are good but not surprising, and the claim is very reasonable, well supported… We emphasize that the existence of a specialization-generalization transition in the out-of-task-distribution performance of transformers is nontrivial. One could expect at least three potential outcomes a priori: 1. No out-of-distribution generalization. This is what one would observe from an optimal Bayesian solution with a prior matching the pretraining distribution. 2. Nontrivial out-of-distribution generalization with performance gradually improving with increasing coverage of training task distribution. 3. Nontrivial out-of-distribution generalization with abrupt performance improvement as the coverage of training task distribution exceeds a threshold (without encompassing the entire task space). In our view, the third option, which we found empirically, is perhaps the most interesting. It implies a sharp change in ICL solution patterns where the model learns a new concept that generalizes beyond both the training tasks and the training task distribution. We discuss these potential options in the introduction of our revised manuscript. &nbsp; > If the finding could shed some light on how such generalization happens, the paper will score 4. 
While we don’t have a full theory of the transition to OOD generalization, we suggest that our “covariate shift” of the task distribution setting is critical and helps explain why Yadlowsky et al did not find such a transition. To clarify this, we will add a new section to the discussion that casts the ICL problem as a special case of supervised learning. In this framework, we show how “task generalization” can be seen as a particularly structured type of *covariate shift* or *domain generalization*. In contrast, we show that Yadlowsky et al instead study a type of *concept shift*. We argue that this distinction in the mode of OOD generalization considered is what allows our models to generalize OOD while the models in Yadlowsky et al do not. Our analysis suggests that it may be the type of OOD problem considered, not only the scale of the model/data, that allows for nontrivial OOD generalization in LLMs. &nbsp; > … should consider arranging some experiments from the appendix into the main paper… We appreciate this helpful suggestion and revise our manuscript accordingly. &nbsp; > There are multiple misuses in the citation… Thank you for spotting these typos and formatting errors. We fix these in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. I have read other reviews and I'll maintain my scores.
Summary: The authors study a new notion of task diversity--task similarity--and investigate the condition on task diversity for out-of-distribution generalization to emerge. The authors find that there is a transition from specialized models to generalizable models with increasing task diversity. They also show that these specialization-generalization transitions also occur in nonlinear regression problems. ## update after rebuttal I do not have further concerns. I will maintain my score and suggest acceptance of the paper. Claims And Evidence: I feel that some of the claims may be a little problematic. Specifically, the authors have not discussed how they determine the transition point. For example, in Fig. 8 and 15, it appears from the plot that the threshold slightly increases with the number of layers. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense. Theoretical Claims: N. A. Experimental Designs Or Analyses: I have checked the soundness and validity of experimental designs and analyses. For the nonlinear regression experiments, concatenating the weight vectors may cause some unexpected behaviors, as the authors have also discussed in Section 3.3. Specifically, there is also a rescaling symmetry in this nonlinear parameterization. Considering some measure in function space may make more sense than in parameter space. Supplementary Material: I have skimmed most of the supplementary materials. Relation To Broader Scientific Literature: Understanding the emergence of in-context learning is a very important question. Previous works have shown that the emergence of in-context learning on out-of-distribution linear regression tasks will only occur if the pretraining tasks are diverse enough. The diversity discussed in the previous literature concerns the number of pretraining tasks. Here, the authors take another perspective and view task similarity as another dimension of task diversity.
The results of the paper contribute to this broader scientific understanding of in-context learning. Essential References Not Discussed: None Other Strengths And Weaknesses: The paper studies the effect of task diversity on the out-of-distribution performance of in-context learning linear regression from a new perspective. The finding of the paper is interesting, and the paper is overall well written. However, there are still a few points that the authors can improve. 1. First, the authors should more clearly write down how they define and find the transition point. 2. In Fig. 4, it seems that the models with $\phi>120$ will deviate away from OLS for in-context examples more than 10. I suggest the authors adding discussions for this observation. Other Comments Or Suggestions: 1. For Fig. 5B, it may be good to plot normalized MSE loss instead of the test MSE. One can normalize the loss by the variance of y so that the effect of R on the variance of y will be factored out for a clearer point. Questions For Authors: 1. In Fig 5A, why does training on larger $\phi$ have a worse generalization for the cases where test radius is larger than 1? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the helpful comments and suggestions! &nbsp; > Specifically, the authors have not discussed how they determine the transition point. For example, in Fig. 8 and 15, it appears from the plot that the threshold slightly increases with number of layers. Thank you for the opportunity to clarify our presentation here. We will introduce a more formal definition of the transition point, which captures our intuition. We say a transition has occurred when the spread of the test error across test angles *δ* = {5°,10°,…} decreases below a specified threshold as the training cone angle *ɸ* increases. We quantify this spread by the relative standard deviation (ie, standard deviation normalized by mean). When plotted against *ɸ*, we generically see a sharp drop in this spread measure as *ɸ* increases. We include plots of this measure and the thresholded phase boundary in the revised appendix. We will update our presentation, including Figs 8 and 15, to use this more precise definition of the transition. &nbsp; > Specifically, there is also rescaling symmetry in this nonlinear parameterization. Considering some measure in the function space may make more sense than in the parameter space. While we agree that a function space measure would indeed be a more principled approach for nonlinear functions, developing such a measure is beyond the scope of this work. For the nonlinear regression experiments, our main aim is to show that, at least qualitatively, specialization-generalization transitions exist for more complex settings than linear regression. We view the fact that our simple parameter-space measure (which ignores many features important to the nonlinear problem) works at all in this setting as a sign that specialization-generalization transitions may be somewhat robust to a choice of similarity measure. We will edit the text to clarify this point. 
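The transition criterion described in this response (the relative standard deviation of test error across test angles dropping below a threshold as the training cone angle grows) might be implemented as follows. This is our own sketch, not the authors' code, and the threshold value is purely illustrative.

```python
import numpy as np

def transition_angle(test_mse, phis, threshold=0.1):
    """First training cone angle phi at which the spread of test error
    across test angles delta falls below `threshold`.

    test_mse: array of shape (len(phis), n_test_angles)
    phis:     array of training cone angles, in increasing order
    """
    # relative standard deviation of test error across test angles, per phi
    spread = test_mse.std(axis=1) / test_mse.mean(axis=1)
    below = np.flatnonzero(spread < threshold)
    return phis[below[0]] if below.size else None
```

A sharp drop in `spread` as phi increases then corresponds to the specialization-generalization transition discussed in the paper.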
&nbsp; > Previous works have shown that the emergence of in-context learning on out-of-distribution linear regression tasks… We would like to emphasize that although previous work has shown the ability of transformers to generalize to linear tasks beyond those they were pretrained on, our work means something different (and, to the best of our knowledge, novel) by “out-of-distribution.” In particular, previous work (eg, Raventos et al) has focused on the ability of transformers to generalize to new tasks *within* the support of the pretraining distribution. Instead, we focus on the ability of transformers to generalize to new tasks which are completely disjoint from those in the pretraining distribution. In particular, we show in our revised manuscript how “task generalization” can be seen as a particularly structured type of *covariate shift* or *domain generalization* by defining the ICL problem as a special case of supervised learning. Our analysis suggests that it may be the type of OOD problem considered, not only the scale of the model/data, that allows for nontrivial OOD generalization in LLMs. &nbsp; > In Fig. 4, it seems that the models with ϕ > 120 will deviate away from OLS for in-context examples more than 10. I suggest the authors adding discussions for this observation. Thank you for the suggestion to clarify our presentation here. Indeed, in our experiments, models with *ɸ* > 120 do not achieve zero test error after 10 examples. (The test error is just small). While the log scale on the plot exacerbates the difference between these curves, it is possible that this discrepancy is part of what allows transformers to generalize nontrivially out-of-distribution while an optimal Bayesian algorithm (with a prior matching the pretraining distribution) does not. We will update our manuscript to comment on this phenomenon. &nbsp; > For Fig. 5B, it may be good to plot normalized MSE loss instead of the test MSE. Thank you for the suggestion. 
If we normalize the MSE accordingly in Fig 5B, the transition remains qualitatively similar. We have added this plot to the appendix (but we leave the current plot in the main text to ease presentation). &nbsp; > In Fig 5A, why does training on larger ϕ have a worse generalization for the cases where test radius is larger than 1? We believe that this effect is due to initialization/optimization noise in the training of the models. We update the plot with a version that averages over more models to clarify this. --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses to my questions. And I understand and acknowledge the novelty of the OOD studied in the paper. I do not have further concerns. I will maintain my score and suggest acceptance of the paper.
Summary: This paper empirically studies the effect of task diversity on generalizability, focusing on transformers trained to learn a linear regression problem. It proposes a new axis of task diversity, namely task similarity, independent of the number of unique tasks seen during pretraining. Claims And Evidence: See experimental section for questions. Methods And Evaluation Criteria: Yes Theoretical Claims: NA Experimental Designs Or Analyses: 1. Could you explain why you mention the "loss plateaus" behavior in section 3? Are the results presented with the runs that "escape" the plateaus? 2. Could you explain why the noise level sigma affects the transition point? How to filter out the effect that, because of the non-zero noise level, the "effective training degree" is larger than the set training degree? 3. In Figure 2, when \delta is small, the test MSE is better for small \phi than large \phi. I assume this has to do with the fact that the test MSE for a shortcut solution trained for small \phi is small, rather than that a transformer trained on small \phi learns a better solver. Is there any way to normalize the test MSE such that we do not observe the confusing behavior where the test MSE increases when \phi increases? Maybe use the way you normalize the loss in section 3.2? 4. How do you plot figure 6C? What is the threshold chosen, and why? How do you validate the IWL regime? 5. Could you properly define the "transition" point? Is it the \phi at which the test MSE collapses across \delta? Or is it the \phi at which the test MSE plateaus when \delta=175? In Figure 2 I assume the definition goes by the first, and in figure 7 the definition goes by the second. Do these two definitions always coincide? 6. In Figure 7, what is the context length? What training algorithm did you use to train, especially for the tasks with large d? Do all the runs have saturated performance (i.e., escape the loss plateaus)? 7. 
Of all the results presented in the paper, I am mostly interested in the results in Figure 7 & 8. Could there be an explanation of why the transition happens independent of the problem dimension and model depth? Could it have to do with model embedding size? Supplementary Material: I didn't review the supplementary part. Relation To Broader Scientific Literature: This paper expands on previous work on the effect of task diversity on the generalizability of transformers. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: I appreciate the paper’s effort in introducing a new perspective on task diversity measurement and providing extensive experimental evidence to empirically analyze model behavior. The work is well-motivated and contributes valuable insights into understanding task generalization. That said, the transition from an exemplar-based solver to a more generalizable solver is a well-studied phenomenon, so some of the initial findings may not be particularly surprising. For example, Figure 2 illustrates the existence of a transition point, while Figures 3 and 4 highlight that transformers learn to solve tasks using a different prior than OLS. Figure 5 further shows generalization beyond the unit sphere, where models trained with small \phi struggle with OOD tasks within the unit sphere. Lastly, Figure 6 provides a helpful visualization of the phase transition in terms of task numbers and similarity. While these results are interesting and support the overall argument, they largely align with existing expectations. Other Comments Or Suggestions: - "radii" in line 272 Questions For Authors: Please see the experimental section. Thanks! Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your helpful comments and suggestions! &nbsp; > why [do we] mention the "loss plateaus"? We mention the loss plateaus because they are characteristic of ICL behavior (Fu et al, ICML 2024; Reddy, ICLR 2023), confirming that our training is consistent with ICL phenomenology. We will move this sentence to a footnote. &nbsp; > Are … [these] runs that "escape" the plateaus? Yes, all our experiments escape the plateau. &nbsp; > How to [understand whether] the "effective training degree" is larger than the set training degree? With non-zero label noise *σ*, the effective training degree does not change. The training degree, *ɸ*, controls the portion of the hypersphere that the tasks, *w*, are drawn from. The tasks do not change when noise is added, as the noise level only affects the label, *y*. Specifically, the noise, *ε* ~ 𝒩(0,*σ*²), is added to *w·x*. &nbsp; > Could you explain why the noise level sigma affects the transition point? We posit that because of the noisy information that the model sees during training, it takes a larger proportion of the hypersphere to learn a general solution to the linear regression task, moving the transition slightly. &nbsp; > In Figure 2, when \delta is small, the test MSE is better for small \phi than large \phi. … [Does the model learn a shortcut solution?] Yes, this “shortcut” solution is the specialized solution the transformer develops when only training on small *ɸ*. The transformer has not seen enough of the hypersphere in training to generalize to large test angles (*δ*) it has never seen before. &nbsp; > Is there any way to normalize the test MSE …? We have made a plot with normalized test MSE to disentangle any effects from unnormalized test MSE and to instead focus on the transition from specialization at small *ɸ* to generalization at large *ɸ*. 
Here, the test MSE starts at the same value and avoids the behavior of the unnormalized test MSE being higher at small test angles *δ* when trained on large *ɸ*. However, this does not change the transition to generalization illustrated in Fig 2. &nbsp; > How do you plot figure 6C? What is the threshold chosen, and why? Currently, this information is only in the figure caption—thank you for reminding us to add this important detail to the main text. The threshold is set to 0.01: losses below this threshold for both the in-distribution (ID) and out-of-distribution (OOD) losses are characterized as out-of-task-distribution generalization (purple), losses below the threshold for the ID losses but above it for the OOD losses are characterized as in-task-distribution generalization (yellow), and losses above the threshold for both ID and OOD losses are characterized as IWL (teal). &nbsp; > How do you validate the IWL regime? We validate the IWL regime in appendix section A.3. We note that this is the same methodology used by Raventos et al. &nbsp; > Could you properly define the "transition" point? We have produced new plots clarifying how we define the transition point, which we add to the text. Here, we define the transition point as the value of *ɸ* at which the standard deviation of the test MSE across *δ* drops sharply, leading to the “collapse” in the MSE. In Figs 7 and 8, we only plot the results for *δ* = 175°. Results for all values of *δ* for Fig 8 are shown in Fig 15 in the appendix, which emphasizes that the transition is the same regardless of the value of *δ* plotted, where we see the collapse of the MSE (similar to the results in Fig 2). &nbsp; > In Figure 7, what is the context length? What training algorithm did you use … We use the same context length (*n*=50 examples) and training algorithm (AdamW) across all runs, regardless of task dimension, *d*. &nbsp; > [Do] all the runs have saturated performance …? The loss converges for all runs. 
The loss increases across task dimensions because the regression problem becomes more difficult for the transformer to solve. However, we still see the same transition point in the training angle *ɸ* where the transformer begins to learn a general solution. &nbsp; > [Explain] why the transition happens independent of the problem dimension and model depth? Could it have to do with model embedding size? The model embedding size remains the same for all experiments. We believe that the transition occurs independently of the linear regression task dimension (as in Fig 7) and the number of layers of the transformer (as in Figs 8 and 15) because of the structure in the data itself. The similarity in the data, independent of the dimension of the hypersphere, is enough for the model to learn a general solution that extends to the entire hypersphere after seeing tasks drawn from only *ɸ*=120°. While previous work, such as Raventos et al (2023), focuses on the number of tasks needed to generalize, our experiments suggest that this task similarity measure is also crucial in moderating the learning of a generalized solution. --- Rebuttal Comment 1.1: Comment: I appreciate the author’s response. I have carefully read it and decided to maintain my score. Reason for not a higher score: While it is interesting to explore a new axis of task diversity, the current setup for determining the degree of transition still feels somewhat hand-wavy. It primarily relies on interpreting the test MSE curve. Providing a more rigorous and formal definition of the transition would strengthen the work. Reason for not a lower score: The paper conducts extensive analyses along this new axis of task diversity and presents several insightful experiments. --- Reply to Comment 1.1.1: Comment: Thank you for your response; we appreciate that you find our experiments insightful! &nbsp; We agree that our approach to determining the transition angle involves heuristics. 
However, we would like to offer the following points of clarification. First, developing a rigorous definition of this specialization-generalization transition would require developing an analytic theory, which is outside the scope of our current experimental work. Sharp phase transitions in statistical physics typically occur only in the idealized thermodynamic limit (i.e., infinite systems). In finite systems, it is nontrivial to unambiguously pinpoint the transition point as it becomes blurred quite generally. Second, our “experiment first” approach parallels the natural progression in this research area. Raventos et al (NeurIPS 2023) empirically identified the in-distribution axis of task diversity without providing a formal definition of the transition point or developing a corresponding theory. Only later did Lu et al (M3L Workshop @ NeurIPS 2024) develop such a theoretical framework with formal definitions. Our extensive experiments provide substantial empirical evidence for a specialization-generalization transition in the out-of-distribution generalization behavior of transformers, which also provides fertile ground for subsequent theory. &nbsp; [1] Allan Raventós, Mansheej Paul, Feng Chen, Surya Ganguli. Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression. NeurIPS 2023. [2] Yue M. Lu, Mary I. Letey, Jacob A. Zavatone-Veth, Anindita Maiti, Cengiz Pehlevan. Asymptotic theory of in-context learning by linear attention. NeurIPS M3L Workshop 2024.
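For readers who want a concrete version of the spread-collapse heuristic for the transition angle discussed in this thread, here is a minimal sketch. It is our own illustration with made-up toy numbers: the function name `transition_angle` and the 10%-of-maximum cutoff are hypothetical choices, not taken from the paper.

```python
import statistics

def transition_angle(phis, mse_by_phi, frac=0.1):
    """Return the first phi at which the spread of test MSE across the
    test angles delta collapses below `frac` of its maximum spread."""
    spreads = [statistics.pstdev(row) for row in mse_by_phi]
    cutoff = frac * max(spreads)
    for phi, spread in zip(phis, spreads):
        if spread < cutoff:
            return phi
    return None

phis = [30, 60, 90, 120, 150, 180]
# Toy test MSE over three test angles delta; the spread collapses at phi = 120.
mse = [[0.1, 2.0, 5.0],
       [0.1, 1.0, 3.0],
       [0.1, 0.5, 1.5],
       [0.1, 0.12, 0.15],
       [0.1, 0.11, 0.12],
       [0.1, 0.10, 0.11]]
print(transition_angle(phis, mse))  # 120
```

As the rebuttal notes, any such finite-size criterion is a heuristic: the cutoff fraction moves the detected angle slightly, but the qualitative collapse is the same.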
Quantifying Memory Utilization with Effective State-Size
Accept (poster)
Summary: The paper proposes to study the memory stored in a wide range of sequential neural network architectures through the notion of effective state-size, which is motivated by minimal realization theory and applicable out of the box to many architectures. The authors empirically validate it as a sound measure by correlating it with performance on memory-intensive tasks, as well as use it to better understand which kinds of models are easier to distill and to derive initialization strategies. **[EDIT 04/02/2025]**: updated score from 2 to 3. Claims And Evidence: The claims are supported through extensive theoretical and empirical verification. Methods And Evaluation Criteria: The focus on memory-intensive tasks is particularly relevant for the proposed approach. The distillation part also makes sense. I have some concerns regarding the setting of Section 5.2 that I detail below. Theoretical Claims: The theoretical claims are mostly rooted in established results from the theory of recurrent realization. I haven't checked the correctness of the claims though, and I am not familiar with this literature. Experimental Designs Or Analyses: No, but I didn't notice anything that raised concerns. Supplementary Material: Only had a look at it, but did not review it more thoroughly. Relation To Broader Scientific Literature: The paper does a great job at linking the ideas it introduces / problems it studies with existing work. Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths**: - the paper studies a very timely problem, namely the in-context memory of sequential models. - the approach is theoretically well motivated; looking at the minimal representation is an interesting angle. - most of the experiments make sense. The memory experiments are a good test bed and the proposed method brings an interesting angle to the distillation case. - the state saturation vs. 
state collapse argument is an interesting lens on studying failures of models on memory-heavy tasks. **Weaknesses**: - the main weakness to me is the presentation of the paper: I find it hard to get an intuitive understanding of the method (see next box for more details). - as a result of the previous point, I have some trouble judging whether the proposed method makes sense. This justifies my conservative score for now (weak reject), and I am happy to revise my score once I am more confident in my understanding of the method. - The authors claim that model performance on language depends on the model's "ability to dynamically modulate its ESS in response to inputs". While I agree with the underlying intuition, I fail to see what ESS is bringing here and would appreciate clarification from the authors. Other Comments Or Suggestions: **Regarding the writing.** The clarity of the paper could be greatly improved in my opinion. I appreciate the authors' effort in summarizing important results in boxes, but more should be done for the paper to be ready for acceptance. To list some improvement points: - I find the introduction overly technical. As a rule of thumb, I would avoid any mathematical formulation and dynamical-systems jargon in the introduction. - Introducing one or two working examples while going through the theoretical section could be useful, e.g. a simple linear SSM (possibly with a rank constraint on the A matrix) and/or an SSM with an input-dependent transition matrix. This would help the reader (me included) to get a better feel for what the proposed measure is capturing. - Moving section 3.1 from the main text to the appendix might be a good idea (keeping a high-level summary in the main text would be enough). I find it overly technical for the main text and it does not really seem to be needed in the following. 
- [Minor] Using the terms "memory capacity" and "memory utilization" would make the text easier to read than the repetitive use of ESS / TSS, which I find slightly confusing. - [Minor] Figure 1 is not really dense in terms of information, and the current version could likely be removed without hindering the understanding of the paper. - [Minor] Suggestion to improve Figure 5: I would find it more interesting to see how the loss evolves as a function of the student TSS. In particular I would like to see how much bigger it needs to be than the ESS of the teacher to get better-than-chance performance. The current visualization makes this difficult to assess. Questions For Authors: - I am confused by the fact that the effective state-size is a function of $i$. I cannot really make sense of it: from my understanding $i$ indexes time, and I would expect the effective state-size to be something independent of $i$, for example the maximum of all the $\text{ESS}_i$ values. Could the authors clarify that point? Having a working example, as I mentioned in the previous section, could be a good way to clarify things. - I cannot really make intuitive sense of why the ESS decreases *before* the end of the sequence in Figure 7. Intuitively, I would expect the separator to remove information about previous states but not the tokens appearing before, at least in a causal system. Why isn't it the case? Are there some boundary effects of the method? - Can the authors compare their method to the following approach: for all mentioned models except softmax attention, look at $ds_{i + j} / ds_j$, compute on average for how many time steps it is greater than some threshold, and compute the capacity of the model as this value times the number of states. Intuitively, this would correspond to the "kv" cache size of the model. I would appreciate it if the authors could compare with this type of metric (theoretically / empirically through a comparison in a simple setting / ...). 
Linked to that point is the last sentence of the last paragraph on page 1: I could not find a detailed criticism of approaches like Wu et al. 2024. Discussing them in more detail would help the reader better understand the contribution of the present work. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed comments! The paper will be revised to better explain each of the following points: ## Why is ESS a function of time? Why does ESS decrease before the EOS token? ESS at each time step $i$ captures the lower bound for the minimal state-size required at that specific step. Intuitively, this bound reflects at least how much information from the past is relevant for future computation. As the sequence progresses, the model may retain more or less information depending on upcoming needs. While we can summarize ESS over time with an aggregate measure (e.g. max or average, as done in Sections 4 and 5.1), examining ESS at each time step can reveal interesting temporal patterns—particularly w.r.t. a model’s ability to store and discard information dynamically. Because our formulation of ESS depends on both past and future relevance, it decreases when the model requires less prior context to process upcoming tokens. Near the EOS, fewer future tokens remain to be influenced by previous tokens, so ESS tends to shrink before the actual EOS token. We have also identified ESS variants that measure a causally determinable minimal state-size (i.e. it depends only on the past). Under this metric, state-size drops sharply at the EOS token, rather than exhibiting a more gradual decrease. In Section D.2.2, we briefly discuss these “causal ESS” metrics. Instead of computing the rank of $H$, one can decompose $H$ into its causal and anti-causal parts ($H = \mathcal{OC}$) and then compute the rank of $\mathcal{C}$ to obtain a causal ESS. However, this metric may fail to capture certain insights, because for models like softmax attention, $\mathcal{C}$ is effectively an identity matrix that grows with sequence length (substitute Eq. D.2.5 into $\mathcal{C}$). As a result, the causal ESS for softmax attention increases linearly with sequence length, failing to reflect how state-size might rise and fall near the EOS token. 
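To make the per-step ESS described above concrete, here is a minimal sketch (our own, not the authors' implementation): for a causal, lower-triangular operator T mapping inputs to outputs, the ESS at step i is taken as a tolerance-based effective rank of the block H_i = T[i:, :i] that carries information from past inputs to future outputs. The running-average operator below is a purely illustrative toy.

```python
import numpy as np

def ess_per_step(T, tol=1e-6):
    """Effective rank of each past-to-future block H_i = T[i:, :i]."""
    L = T.shape[0]
    ess = []
    for i in range(1, L):
        s = np.linalg.svd(T[i:, :i], compute_uv=False)
        ess.append(int(np.sum(s > tol * s.max())) if s.max() > 0 else 0)
    return ess

# Toy example: a causal running-average operator. Every past-to-future
# block is an outer product, so the ESS is 1 at every step.
L = 6
T = np.tril(np.ones((L, L))) / np.arange(1, L + 1)[:, None]
print(ess_per_step(T))  # [1, 1, 1, 1, 1]
```

A memoryless operator such as the identity gives ESS 0 at every step, matching the intuition that no information crosses from past to future; this is also why an ESS defined on these off-diagonal blocks can shrink before the end of a sequence, as the rebuttal explains.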
## Comparing ESS to an Effective Window-Based Metric Counting the number of state derivatives above a certain threshold effectively measures the window of the model’s memory. Intuitively, a smaller window suggests lower memory utilization. This phenomenon is also reflected in the ESS metric: when the operator decays quickly, most rows/columns in $H_i$ are zeroed out, leading to a lower rank and hence a smaller ESS. However, the effective window alone does not capture the complexity of dependencies within the window. For instance, for linear attention, $\frac{ds_{i+j}}{ds_j}$ is always an identity matrix, regardless of $i$ or $j$. Consequently, a window-based metric that simply counts these derivatives would increase indefinitely with sequence length and fail to reflect the model’s true memory capacity or utilization, which are capped. Moreover, because the window-based metric does not account for complexity, it ignores the effects of $B$ and $C$ (input and output projections), which themselves can be degenerate (e.g. zeroed-out rows). For these reasons, ESS provides a more accurate and theoretically grounded measure of memory utilization. Nonetheless, we acknowledge that alternative metrics, such as the one proposed by the reviewer, can still offer useful heuristic insights when analyzing a model’s memory. ## Criticism of Approaches in Wu et al. (2024) and Related Works We are not criticizing Wu et al.; rather, we cited their work as they similarly highlight how some spectral analyses of the attention matrix (i.e. the operator T) overlook the causality inherent in sequence decoders. That said, we do point out drawbacks in approaches such as Min et al. (2024) and Bhojanapalli et al. (2020), which derive their metrics by taking the SVD of the operator T without accounting for causal masking. For instance, Min et al. state: “It should be noted that the training of GPT-2 necessitated the masking of attention matrices to prevent the model from accessing future data. 
However, this mask was not applied during the rank evaluation.” Consequently, these approaches fail to tie together key notions such as memory capacity and memory utilization. ## Further improvements in writing 1. We will reduce technical jargon or add contextual information where needed in the introduction to improve accessibility. 2. We will move Section 3.1 to the appendix to shorten the main text, making space for a toy example of ESS (illustrating how ESS varies with time/input). Figure 1 will be moved to the appendix if needed. 3. Regarding your question on what a model's "ability to dynamically modulate its ESS in response to inputs" captures, our results in Section 5.2 indicate that recall performance depends on a model’s ability to modulate its ESS, not just total memory capacity. Although WLA, GLA, and LA share the same memory capacity, their recall differs, implying differences in memory utilization. Nonetheless, we will clarify in the text that ESS only **suggests** this dependence, rather than conclusively proving it.
Summary: This work introduces the Effective State-Size (ESS) metric to quantify memory utilization in sequence models while previous approaches focus on memory capacity (such as cache size/total memory available). ESS aims to measure how effectively a model uses its available memory. Using this metric the authors analyze 4 kinds of sequence models on a synthetic task and demonstrate how the metric helps to explain their performance, as well as how it predicts performance after distillation for some models. Claims And Evidence: The claims seem to be generally supported: 1. The appendix details their experiments on trying to improve performance through ESS-informed regularization (although the accuracy only reaches 0.3 - would this keep going up with more regularization or has it saturated at this point? This needs to be shown.) 2. The appendix details their experiments on trying to improve performance through ESS-informed initialization for GLA vs S6 models. 3. They show how their metric correlates with the performance of distillation dependent on whether the state utilization of the teacher model was high or low (if it was high then it should be expected that the distillation should not be very successful if I understand correctly). Methods And Evaluation Criteria: 1. The method to study sequence models' usage of their state space through a newly defined metric makes sense. It would be good to get some intuition though of why rank represents utilization of state for recurrent operators since I am not from a signal processing background. 2. The task used throughout the paper is synthetic - MQAR (with additional results in appendix for compression and copying - also synthetic). 3. The paper reports correlations between their metric and quantities of interest but something that is worrying is that the actual correlation values are pretty low (0.5-0.7 range) even if the trends in changes in correlation (with changes in models etc) make sense. 
Theoretical Claims: I was not able to check the proofs thoroughly since I am not very familiar with the literature in this area and the paper's description of notation was confusing (e.g., what C and B are in Section 3 is not defined anywhere and has to be guessed). Experimental Designs Or Analyses: I read the experiments section thoroughly but I'm not very familiar with prior work that has shown results on the same tasks. Supplementary Material: 1. The notation section. 2. The section on regularization and initialization 3. The plots showing how utilization changes across models for English and code sequences when separator tokens are introduced. Relation To Broader Scientific Literature: Many prior studies focus on measuring memory capacity rather than utilization; ESS refines this by measuring actual usage rather than theoretical capacity. Also, prior work mostly used qualitative measures (e.g., attention visualizations, synthetic task accuracy). The introduction of ESS as a precise quantitative metric allows drawing insights in a more automated way. Essential References Not Discussed: - Other Strengths And Weaknesses: I found the visualizations of ESS around EOS tokens pretty interesting. Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback! Below, we respond to the points raised. ## Claims and Evidence 1. Yes, this is a good point, we will extend the regularization experiments to include parameters beyond the range in the plot and update the plot in the paper. We anticipate that at some point, the performance will begin to decline since continuing to increase the regularization strength corresponds to increasingly decaying the model towards linear attention, which, as shown in Section B, performs quite poorly. 2. Yes, your understanding of how the ESS metric correlates with performance of distillation is correct. We will clarify this in the paper. ## Methods and Evaluation Criteria 1. For theoretical justification as to why the rank represents state utilization, we refer the reviewer to Theorem 3.2. For intuition, we offer the following interpretations of rank as it pertains to state utilization: - **Distinct directions of influence**: Through this lens, rank counts how many linearly independent “directions” connect past inputs to future outputs, i.e., how many unique ways inputs can shape the future. - **Minimal internal memory**: Because each independent direction requires its own coordinate in memory, the rank matches the smallest state dimension that can exactly replicate the operator. - **Effective rank**: Of course, in practice we compute effective rank to capture the “dominant” directions of interests where “dominant” is captured by either some threshold (e.g. tolerance ESS) or by the decay rate of the singular values (e.g. entropy ESS). 2. Here, we address the concern posed by the reviewer regarding the correlation values being too “low”. - **Having sufficient ESS/kv results in a non-linear effect**: As shown in Figure 2, once ESS/kv exceeds a certain threshold for a given model, the majority of task-model configurations achieve an accuracy of 1. 
At this point, performance saturates, and further increases in ESS/kv do not translate into higher accuracy. This weakens the correlation, as correlation coefficients do not adequately capture such non-linear relationships. We chose not to omit these data points in our reported correlations, since the non-linear trend is clearly visible in the plot. However, we are happy to also report correlations with these saturated points omitted, should the reviewer find that informative. - **On the interpretation of “low” correlation values**: Our claims about correlation are relative rather than absolute. While it is fair to consider correlations in the range of 0.85–1.0 as “high,” our point is that ESS consistently correlates better with task performance than TSS. In this comparative context, we maintain that ESS is a significantly better proxy. This is relevant since most works consider only the TSS when evaluating model memory. Note that we do not claim that ESS is the best possible performance predictor—there may be stronger alternatives—but evaluating such alternatives is beyond the scope of this work. ## Clarifying notation To answer your question in particular, $C$ and $B$ are analogous to the $Q$ and $K$ matrices in attention. And in the context of recurrent models, they canonically represent the state-to-output and input-to-state matrices, respectively. However, we agree that this is unclear in the paper and we will clarify this upon revision along with analogous points raised by reviewers KvDY and KxgK.
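The tolerance- and entropy-based effective ranks mentioned in this rebuttal can be illustrated with a small sketch on hand-picked singular-value spectra. This is our own illustration, not the paper's code; the function names and the 1e-2 threshold are hypothetical choices.

```python
import math

def tolerance_rank(s, tol=1e-2):
    """Count singular values above tol * max(s)."""
    return sum(1 for v in s if v > tol * max(s))

def entropy_rank(s):
    """exp(Shannon entropy) of the normalized singular-value spectrum."""
    total = sum(s)
    p = [v / total for v in s]
    return math.exp(-sum(q * math.log(q) for q in p))

s_flat = [1.0, 1.0, 1.0, 1.0]      # four equally dominant directions
s_decay = [1.0, 0.1, 0.01, 0.001]  # one clearly dominant direction
print(tolerance_rank(s_flat), round(entropy_rank(s_flat), 2))   # 4 4.0
print(tolerance_rank(s_decay), round(entropy_rank(s_decay), 2))  # 2 1.43
```

Both measures agree on the flat spectrum but compress the fast-decaying one, which is the sense in which "dominant directions" determine the effective state-size rather than the raw matrix dimension.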
Summary: The paper introduces Effective State-Size (ESS) as a novel measure for quantifying memory utilization in causal sequence modeling architectures. ESS provides interpretable and actionable metrics that can enhance initialization strategies, regularizers, and model distillation. The paper develops a unified framework for analyzing systems with input-invariant and input-varying linear operators (LIVs), and demonstrates the correlation between ESS and performance across various tasks and models. Applications of ESS include model-order reduction, predicting model compressibility, and state modulation in large language models. The empirical validation shows ESS's utility in improving performance-efficiency trade-offs and highlights cross-architectural differences in memory utilization. Claims And Evidence: 1. The introduction of Effective State-Size (ESS) as a quantitative measure of memory utilization is a novel and valuable contribution. It provides interpretable and actionable measurements that can enhance initialization strategies, regularizers, and model distillation. 2. The paper explores multiple applications of ESS, such as model-order reduction, predicting model compressibility, and state modulation in large language models. These applications are well-explained and show the practical relevance of ESS. Methods And Evaluation Criteria: The authors provide thorough empirical validation of ESS across various tasks and models. The correlation between ESS and performance is demonstrated, highlighting the utility of ESS in improving performance-efficiency trade-offs. Theoretical Claims: The paper's theoretical sections, particularly the formal definition and computation of ESS, look solid, but may be challenging for readers who are not well-versed in dynamical systems theory. Simplifying these sections or providing additional explanatory material could improve accessibility. 
Also, I understand the implementation details for computing ESS are discussed in the appendix, but I feel that it would be great if pseudocode could be included in the main text. Experimental Designs Or Analyses: The authors showed the correlation between ESS and accuracy on some synthetic tasks, which I understand are specifically created to conduct experiments in a controlled manner. However, most people in the field are more familiar with other real-world benchmarks. Does the same correlation extend to popular real-world benchmarks, e.g., machine translation benchmarks (e.g., WMT), language understanding (e.g., GLUE, MMLU), question answering (e.g., SQuAD, PIQA), image classification which can use Vision Transformers (e.g., ImageNet)? It would be great if the authors could include some of these common benchmarks in their experiments, which would strengthen this paper. Supplementary Material: skimmed through it but didn't check the details. Relation To Broader Scientific Literature: I believe understanding the memory utilization is of interest to many researchers in the field. Essential References Not Discussed: looks good to me Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive review and positive outlook on the work! ## Presentation and Accessibility Thank you for pointing out that some theoretical sections may be inaccessible to many due to technical jargon. We will simplify those parts and add working examples of computing ESS, as Reviewer KxgK suggested. We will also simplify the introduction by stripping some of the more technical motivation in order to make it more accessible. In addition, we will distill Section 3.1 to make room to add the pseudocode for the implementation of ESS, as you have suggested. ## Real-World Benchmarks We agree that applying ESS to popular benchmarks could be valuable. However, we note that many real-world NLP benchmarks often test factual recall (i.e., returning accurate information from the pre-training/post-training data) and not working memory utilization. For example, the Mamba paper (Gu et al., 2024) shows strong performance on standard benchmarks (i.e., against Pythia models (Biderman et al., 2023) of the same scale) but performs poorly on tasks that test raw memory utilization, such as the phone book recall task (Jelassi et al., 2024). For this reason, we focused on tasks that directly test memory recall and chose bigram perplexity (Arora et al., 2024) for evaluating the language models (Section 5.2). Bigram perplexity evaluates the model's perplexity on repeated bigrams in any arbitrary dataset. This allows us to use the diverse and large-scale NLP pre-training datasets (that the model wasn’t trained on), as opposed to a narrow hand-crafted evaluation set, to specifically evaluate the model’s ability to utilize past context. In our case, we extracted 16k randomly selected sequences from the Fineweb dataset (Penedo et al., 2024). Therefore, we maintain that the controlled experiments we have conducted effectively convey the utility of ESS as a diagnostic tool for analyzing memory utilization. - Gu, A., & Dao, T. (2024). 
Mamba: Linear-time sequence modeling with selective state spaces. arXiv. https://arxiv.org/abs/2312.00752 - Jelassi, S., Brandfonbrener, D., Kakade, S. M., & Malach, E. (2024). Repeat after me: Transformers are better than state space models at copying. arXiv. https://arxiv.org/abs/2402.01032 - Biderman, S., Schoelkopf, H., Anthony, Q., Bradley, H., O’Brien, K., Hallahan, E., Khan, M. A., Purohit, S., Prashanth, U. S. S., Raff, E., Skowron, A., Sutawika, L., & van der Wal, O. (2023). Pythia: A suite for analyzing large language models across training and scaling. arXiv. https://arxiv.org/abs/2304.01373 - Arora, S., Eyuboglu, S., Timalsina, A., Johnson, I., Poli, M., Zou, J., Rudra, A., & Ré, C. (2023). Zoology: Measuring and improving recall in efficient language models. arXiv. https://arxiv.org/abs/2312.04927 - Penedo, G., Kydlíček, H., Ben Allal, L., Lozhkov, A., Mitchell, M., Raffel, C., Von Werra, L., & Wolf, T. (2024). The FineWeb datasets: Decanting the web for the finest text data at scale.
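The repeated-bigram evaluation described in this rebuttal can be made concrete with a small sketch; the token ids, the uniform "model", and the function name below are illustrative stand-ins, not the authors' or the Zoology paper's implementation:

```python
import numpy as np

def repeated_bigram_nll(tokens, logprobs):
    """Average negative log-probability only at positions whose preceding
    bigram has already occurred earlier in the sequence (sketch)."""
    seen, picked = set(), []
    for i in range(2, len(tokens)):
        bigram = (tokens[i - 2], tokens[i - 1])
        if bigram in seen:
            picked.append(logprobs[i])
        seen.add(bigram)
    return -float(np.mean(picked)) if picked else float("nan")

# A sequence with repeated bigrams, scored by a uniform 4-token "model".
tokens = [1, 2, 3, 1, 2, 3, 1, 2]
logprobs = np.log(np.full(len(tokens), 1 / 4))
print(repeated_bigram_nll(tokens, logprobs))  # log(4), since 3 positions repeat
```

A model that actually exploits past context would assign higher probability at the repeated positions, lowering this NLL relative to the uniform baseline.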
Summary: This paper claims to propose a new metric, "effective state-size (ESS)", which can not only evaluate the memory utilization of different models, but also provides guidance for the selection of initialization/regularization and distillation strategies. Several empirical results are presented to support the effect of ESS. Claims And Evidence: - Calculating the actual (as opposed to theoretical) information capacity using Singular Value Decomposition (SVD) and the Rank of parameter matrices/hidden states has been a widely adopted metric among researchers and engineers for many years—its origins are difficult to trace. - Additionally, the relationship between singular values and the rank of parameter matrices/hidden states appears trivial and intuitive, raising questions about the necessity of the overly complex modeling. Methods And Evaluation Criteria: - Some aspects hold significance, yet there are technical details that necessitate further verification and elaboration. Theoretical Claims: - However, this paper claims to introduce ESS, an SVD-based metric, for evaluating model memory utilization. This approach appears highly similar to the common practice mentioned above. Simply renaming an established method and presenting it as a novel contribution is unlikely to be encouraged. Experimental Designs Or Analyses: I have reviewed the relevant experimental sections, and the pertinent information can be found under the section labeled "Weakness." Supplementary Material: yes Relation To Broader Scientific Literature: None Essential References Not Discussed: - Wang, Shida, and Qianxiao Li. "Stablessm: Alleviating the curse of memory in state-space models through stable reparameterization." arXiv preprint arXiv:2311.14495 (2023). - Qi, Biqing, et al. "Smr: State memory replay for long sequence modeling." arXiv preprint arXiv:2405.17534 (2024). 
Other Strengths And Weaknesses: ## Strengths: - Within the view of ESS, this paper brings lots of comparisons between models with different architectures, such as attention/recurrent/convolution. These comparisons may bring insight into the design of new architectures in the future. ## Weaknesses ### 1. Over-claimed Contribution (which is also their primary claimed contribution) - Calculating the actual (as opposed to theoretical) information capacity using Singular Value Decomposition (SVD) and the Rank of parameter matrices/hidden states has been a widely adopted metric among researchers and engineers for many years—its origins are difficult to trace. - However, this paper claims to introduce ESS, an SVD-based metric, for evaluating model memory utilization. This approach appears highly similar to the common practice mentioned above. Simply renaming an established method and presenting it as a novel contribution is unlikely to be encouraged. - Additionally, the relationship between singular values and the rank of parameter matrices/hidden states appears trivial and intuitive, raising questions about the necessity of the overly complex modeling. ### 2. Ambiguous Presentation and Inappropriate Typography - Certain metrics, such as ESS/kv and TSS/kv, are used before being properly defined. - The captions and explanations of figures are too simple, making it difficult for readers to follow the paper’s contributions. - The paper claims that ESS is valuable for initialization, regularization, and distillation strategies, highlighting its practical significance. However, no relevant empirical results are provided in the main body to support this claim. Other Comments Or Suggestions: See weakness. Questions For Authors: None Ethical Review Flag: Flag this paper for an ethics review. Ethics Expertise Needed: ['Other expertise'] Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate your detailed feedback. Below, we address the main concerns. ## Mischaracterization of Our Claims Your review states that our work involves measuring the rank of parameter matrices or hidden states using SVD, a practice that has existed for years. However, we would like to clarify that this is a misrepresentation of our contribution, and believe that the subsequent criticisms, that we are simply renaming an established method while proposing an overly complex approach to analyze memory utilization, do not apply to the actual claims made in our paper. We do not simply apply SVD to parameter matrices or “hidden states” (i.e., the input/output activations of a neural network layer); the former captures complexity only across the model’s channels and cannot be used for analyzing memory utilization or capacity across the temporal dimension. Instead, the ESS metric is computed by applying SVD to specific submatrices of the materialized, flattened operator T in causal sequence models. These submatrices capture complexity across both channel and time dimensions, and we show that their rank provably lower bounds the minimal state size of an equivalent linear recurrence, offering a theoretically grounded proxy for memory utilization in causal sequence models. Moreover, prior approaches that apply SVD over the entire attention matrix (i.e., the operator T) typically ignore causal masking, which greatly influences the rank of T and distorts any interpretation related to memory usage. Our metric is explicitly designed to account for this, and we discuss this aspect of our contribution in the introductory section of the paper. 
Importantly, we note that the underlying theoretical framework for computing minimal recurrent realizations stems from classical signal processing and control theory, and to the best of our knowledge, has not been adapted to analyze the memory utilization of modern deep learning sequence models such as input-varying SSMs, linear attention and softmax attention. We summarize the core contributions below: 1. We demonstrate that most modern deep learning causal sequence models (SSMs, convolutions, attention, etc.) can be cast as input-varying linear models (LIVs), which uniformly realize a materialized operator T. In doing so, we are not introducing an overly complex framework; rather, we show that diverse sequence models can be analyzed under a unified lens. 2. Drawing from classical control theory, we prove that the rank of specific submatrices of T provides a lower bound on the minimal state-size of an equivalent SSM realization. Based on this result, we propose ESS as a principled and model-agnostic proxy for memory utilization. 3. We empirically validate ESS using a wide range of modern sequence models and synthetic tasks that are explicitly designed to test memory utilization. 4. Finally, we show how ESS can be used in practice to inform and analyze downstream tasks such as model distillation, initialization, and regularization, as well as provide insight into memory-related phenomena in language models. We hope this clarifies our main claims and the novelty of our contributions. ## Improvements on Presentation We thank the reviewer for pointing out areas where the presentation can be improved. We will more clearly define metrics such as ESS/kv and TSS/kv (i.e., ESS normalized by the number of key-value pairs in a task), and clarify related notation earlier in the manuscript. Regarding the experiments, ESS can be used to support a range of downstream analyses. 
First, we would like to clarify that the distillation results are in the main body of the paper (see Section 5.1). However, due to space constraints, the initialization and regularization experiments are presented in the appendix along with all relevant experimental details. We will improve the referencing of these results in the main text to ensure clarity and visibility. Also, we are happy to rearrange the paper to fit some of these results in the main body if the reviewer sees fit.
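The submatrix-rank computation described in this rebuttal can be sketched in a few lines; the toy operator, tolerance, and function name below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def effective_state_size(T, i, tol=1e-10):
    """Rank of the block of the materialized operator T that maps inputs
    before position i to outputs at or after position i; this rank lower
    bounds the minimal state size of an equivalent linear recurrence."""
    s = np.linalg.svd(T[i:, :i], compute_uv=False)  # past-to-future block
    return int(np.sum(s > tol))

# Toy causally masked, row-normalized "attention" operator.
rng = np.random.default_rng(0)
L = 8
T = np.tril(np.exp(rng.normal(size=(L, L))))
T /= T.sum(axis=1, keepdims=True)

ess = [effective_state_size(T, i) for i in range(1, L)]
print(ess)  # each entry is bounded by min(i, L - i)
```

For a strictly diagonal operator (no mixing of past inputs into future outputs) the same computation returns zero at every split, matching the intuition that no memory is carried across time.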
Exact Upper and Lower Bounds for the Output Distribution of Neural Networks with Random Inputs
Accept (poster)
Summary: This paper provides upper and lower bounds on the cdf of a ReLU neural network which converge to the exact cdf as the granularity increases. Other monotonic piecewise differentiable activation functions can in turn be approximated using ReLU activation functions, extending the results. Experiments validate the theoretical results. Claims And Evidence: The theoretical claims are adequately proved. Methods And Evaluation Criteria: There is a lack of detail provided about the experiments, making it difficult to evaluate the methodology. For instance, how much compute/granularity was provided for the MC, PLT, and U/L methods? Why should numerical estimation inaccuracy affect PLT but not the proposed U/L method? How many points were tested for the OOB calculations (is it the grid size)? Theoretical Claims: I did not carefully check the correctness, but the results are unsurprising in light of prior work. Experimental Designs Or Analyses: See "Methods And Evaluation Criteria". Supplementary Material: N/A Relation To Broader Scientific Literature: This paper is very much related to [Krapf 24]. One of the main theorems, Theorem 3.8, appears to be subsumed by Theorem 2 in Krapf. Krapf, T., Hagn, M., Miethaner, P., Schiller, A., Luttner, L., & Heinrich, B. (2024). Piecewise Linear Transformation – Propagating Aleatoric Uncertainty in Neural Networks. Essential References Not Discussed: Even though [Krapf 24] is cited, it appears that the relationship between Theorem 2 in [Krapf 24] and Theorem 3.8 in this work is not discussed. Other Strengths And Weaknesses: This paper extends prior work on computing cdfs to providing upper and lower bounds, instead of just an approximation, which better quantifies the uncertainty of the estimate. Other Comments Or Suggestions: N/A Questions For Authors: 1. Why use Theorem 3 in [Krapf 24] as a comparison for theoretical and experimental results, when Theorem 2 also applies? 2. 
Is there a major difference between Theorem 3 in [Krapf] and the upper/lower bounds provided? Looking at equation 6, it appears that the upper and lower bounds follow from just upper and lower bounding the integrand. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: *Relation To Broader Scientific Literature and Questions For Authors:* Our approach aims to approximate as accurately as possible the **cdf** of the output in a probabilistic NN. Krapf et al (2024) estimate the **pdf** of a NN output. Our Theorem 3.8 derives the exact cdf (not pdf) of a ReLU NN when the input's pdf is a piecewise polynomial. That is, given a ReLU NN and an input with piecewise polynomial pdf, our tool (implementing Theorem 3.8) computes the cdf values **exactly** (not approximately). Theorem 2 in Krapf requires the input distribution be defined as what they call a piecewise pdf (different pdfs on the partition sets of the input support). The two theorems are *not* directly analogous. Furthermore, Theorem 2 in Krapf et al (2024) gives the formula of the probability density function (pdf) of the output of a feedforward NN subject to random perturbation of the inputs. The formula (Eqn (2) in Krapf et al (2024)) contains an integral that is never explained how to compute. In effect, Krapf et al (2024) do not apply Theorem 2 in their paper but rather its corollary (Theorem 3 in Krapf et al) which is a discrete approximation of the integral in Theorem 2. The latter is the reason we compare our approach to Theorem 3 in Krapf et al, which we could actually apply using the code in the Krapf et al GitHub page. To make things clear, in our Example 3.10, Krapf et al's method cannot compute the exact output pdf in finite time. Our approach computes the exact cdf of the output in less than a second. Regarding question 2, Theorem 3 and our approach yield similar results in the sense that the estimated cdf lies within our bounds (once the pdf is transformed to a cdf) provided the input pdf is very smooth (not wiggly). *Methods And Evaluation Criteria:* For MC, 100,000,000 sample points were used, which far exceeds the sample size in Krapf et al. The triangulation of the domain was upper bounded by 50,000 points, an ad hoc selection. 
The grid size we used was 20 raised to the number of classes of the output values. In the case of continuous outcome, the grid size was 20. For the Iris and Wine datasets, it was 20^3. We applied the provided PLT code from Github. For example, for the Iris dataset, we follow the settings in Krapf et al. in order to compare the performance of our method with theirs and MC. We report here computing details when we use 1, 2, 3 input variables. The emerging pattern is that our method is far more accurate and typically faster than PLT. We would also like to thank the referee for giving our method a concise name, namely U/L. *1 input variable: 25% of the data are imputed* **Computed MC in 1.873 sec** **Computed ours (U/L) in 5.700 sec** **Computed PLT in 0.348 sec** PLT out of bounds count = 2179 / 8000 MC out of bounds count = 378 / 8000 *2 input variables: 25% of the data are imputed* **Computed MC in 2.114 sec** **Computed ours (U/L) in 19.663 sec** **Computed PLT in 46.043 sec** PLT out of bounds count = 0 / 8000 MC out of bounds count = 53 / 8000 *3 input variables: 50% of the data are imputed* **Computed MC in 2.309 sec** **Computed ours (U/L) in 97.876 sec** **Computed PLT in 240.542 sec** PLT out of bounds count = 7 / 8000 MC out of bounds count = 9 / 8000
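As background for the Monte Carlo baseline timed in this rebuttal, a minimal MC estimate of a network's output cdf on a 20-point grid looks like the following; the tiny network, Beta parameters, and sample size are illustrative stand-ins, not the settings used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny ReLU network with fixed random weights (illustrative only).
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def net(x):
    h = np.maximum(W1 @ x.T + b1[:, None], 0.0)
    return (W2 @ h + b2[:, None]).ravel()

# Monte Carlo estimate of the output cdf under Beta(2, 5) random inputs.
n = 200_000
y = net(rng.beta(2.0, 5.0, size=(n, 2)))
grid = np.linspace(y.min(), y.max(), 20)           # 20-point evaluation grid
cdf_mc = np.array([(y <= t).mean() for t in grid])
print(cdf_mc[-1])  # exactly 1.0 at the right edge of the sampled range
```

Unlike exact bounds, such an estimate carries sampling error at every grid point; the out-of-bounds counts reported in the rebuttal measure how often these estimates fall outside the exact U/L envelope.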
Summary: The paper addresses the challenge of uncertainty quantification in neural network (NN) outputs by deriving exact upper and lower bounds for the cumulative distribution function (CDF) of NN outputs under noisy (stochastic) inputs. The method is designed to apply to feedforward NNs and convolutional NNs (CNNs) using continuous monotonic piecewise differentiable activation functions, such as ReLU, tanh, and softmax. A novel aspect is the use of ReLU NNs to bound general NNs, enabling the computation of these bounds, which converge to the true CDF as resolution increases. Claims And Evidence: I find a lot of the proof details missing. For example, the proof of Theorem 3.8 is hand-wavy, and I don't see all the steps. Even more problematic is the proof of Theorem 3.11, where the authors only provide proof sketches. I don't know precisely how all these fall into place without knowing the details. Methods And Evaluation Criteria: yes Theoretical Claims: yes. See my comment above about the proofs. The proofs in this paper are just not explicitly written out, and the quality of the proofs is something that I would not even tolerate in an undergraduate proof-based class, let alone a research paper. While I understand the proofs are mostly analytical and require calculation, the authors must do this to the point where it is clear how the proofs follow. If they use any well-known theorems as black boxes, then they must explicitly cite a formal statement used, or even better include it as a preliminary subsection of the appendix, and invoke it from here. Experimental Designs Or Analyses: No issues. Supplementary Material: Yes. Entire. Relation To Broader Scientific Literature: Not sure. See weaknesses. Essential References Not Discussed: NA. Other Strengths And Weaknesses: See summary. Weaknesses: See my complaint about the paper. Overall, I don't find any details to verify the proofs. Even the statements themselves look trivial to me. 
For example, 3.13 just seems to be a direct corollary of the classical universal approximation theorem for NNs. Theorem 3.11 is perhaps the most interesting of the three theorems, but I don't find details to understand this for myself. Other Comments Or Suggestions: Please provide the proof details to the point where one can verify the correctness of the claims. Questions For Authors: I would want to understand, theorem by theorem, why the statement is non-trivial, why previous works could not solve this, and what the key difficulty addressed by this work is. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: *Claims And Evidence and Theoretical Claims:* The proofs are not detailed as most of the steps are based on well known calculus facts and are easy to obtain. The challenge is that they are numerous and this is why we provided only the sketch of the proof. We appreciate that for the sake of clarity and completeness all the details should be made available and we will include them in the appendix if we are given the opportunity to revise. We will also place them in the arXiv version of the paper (available after the final decision). *Weaknesses:* Theorem 3.8 derives the exact cdf of a ReLU NN when the input's pdf is a piecewise polynomial. Theorem 3.11 follows from the universal approximation theorem, but the proof we provide **constructs** the sequences whereas the UAT simply shows their existence. Theorem 3.13 is our main result as it constructs bounds for the cumulative distribution function of the output of any feed forward NN whose inputs are continuous random variables with compact support. The proof of Theorem 3.13 is provided in detail. We fail to see how it is a direct corollary of the UAT and we would be interested to see the justification of this claim. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications! I missed the last line about constructiveness in Theorem 3.13. The presentation here should be significantly improved. The constructiveness should be an essential part of the presentation throughout. Namely, 1) The real constructiveness comes from Theorem 3.11, and it is completely missing from the statement. This should be included formally. 2) The constructiveness is just described in Section 3.2 as a discussion. There should be a formal algorithm box with a clear algorithmic step-by-step procedure at an appropriate level of detail, and this should be referred to when formalizing the constructiveness of Theorem 3.11. 
3) Similarly, in Theorem 3.13, the constructiveness should not be just written in a way that feels like a side remark. It should be emphasized and formalized by referring to some algorithmic description for this theorem as well. 4) Finally, proofs should be more detailed, as I mentioned. I believe (1),(2),(3) above are important changes that authors must make to reflect the new ideas developed in this paper. I am raising my score to 3 and would be happy to have this paper accepted conditionally on this. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's suggestions and the increase in the score. Should we be given the opportunity to revise, we can easily address all points raised.
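The constructive bounding idea discussed in this thread (sandwiching a smooth activation between ReLU-expressible piecewise-linear functions, with the gap shrinking as the grid is refined) can be illustrated with a minimal sketch; the grid sizes, the uniform-shift envelope, and the function name are simplifying assumptions, not the authors' construction:

```python
import numpy as np

def pwl_envelopes(f, grid, xs):
    """Piecewise-linear (hence ReLU-expressible) lower/upper envelopes of f,
    valid at the evaluation points xs: the chord interpolant of f on `grid`,
    shifted down/up by its maximum error (plus a tiny safety margin)."""
    interp = np.interp(xs, grid, f(grid))
    err = np.max(np.abs(interp - f(xs))) + 1e-9
    return interp - err, interp + err

xs = np.linspace(-3.0, 3.0, 501)
lo, hi = pwl_envelopes(np.tanh, np.linspace(-3.0, 3.0, 33), xs)
assert np.all(lo <= np.tanh(xs)) and np.all(np.tanh(xs) <= hi)
# Refining the interpolation grid tightens the envelopes, mirroring how the
# cdf bounds converge as the ReLU approximation of the activation improves.
```

This only demonstrates the sandwiching principle for a single activation; the paper's theorems compose such bounds through entire networks and push them to the output cdf.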
Summary: This paper proposes a novel method to compute exact upper and lower bounds for the cdf of a neural network’s output, assuming stochastic inputs. Key contributions include: - A method to compute the exact cdf of the output of ReLU networks under inputs with piecewise polynomial pdf over a compact hyperrectangle. - A method to compute tight upper and lower bounds of the cdf of the output of neural networks with piecewise differentiable activations under inputs with a pdf with compact support. Claims And Evidence: Correct me if I misunderstood, but it seems that Table 1 merely shows that MC and PLT disagree with your bounds, but it does not provide evidence to support the correctness of your bounds? In Figure 1, it says that you are assuming beta-distributed inputs, but how can you assume that if the input is the Iris dataset? Are you talking about random perturbations to the Iris dataset? Methods And Evaluation Criteria: see claims and evidence Theoretical Claims: I did not check the correctness of the proofs, only looked at the statements. Theorem 3.13 seems to be a strong result but it is not well presented. For any F(y), of course there always exist upper and lower bounds F_n such that they converge to F(y): the interesting part of the result is that the authors are talking about particular F_n that can be constructed, so the last sentence "The sequence {F_n} can be constructed by bounding the distributions of sequences of ReLU NNs." should be the core of the theorem. The subsequent two paragraphs (l.289-304 right block) where the authors describe how to construct F_n should be part of the statement, expressed in mathematical notation rather than words... Also, from the text in the second paragraph (l.295-304 right block), it seems that polynomial pdfs are never used, so it would perhaps be less confusing if Theorem 3.8 was stated for piecewise constant pdfs. I'm wondering if Theorem 3.11 does not follow directly from universal approximation theorems? 
Experimental Designs Or Analyses: see claims and evidence Supplementary Material: I only read the main paper. Relation To Broader Scientific Literature: This paper contributes to an important area of the literature on estimating uncertainty in neural networks. Essential References Not Discussed: Not that I know of Other Strengths And Weaknesses: Strengths: strong theoretical contribution Weaknesses: see other comments Other Comments Or Suggestions: see other comments Questions For Authors: if you could address my concerns about the experiments, and answer my comments on the theoretical contributions, I would be willing to increase my score. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: *Theoretical Claims:* You are right that Theorem 3.11 follows from the universal approximation theorem. But it also **constructs** the sequences whereas the UAT simply shows their existence. Indeed the significance of Theorem 3.13 is that the bounds are built using ReLU functions, which is our main innovation as it also renders our approach applicable to general feedforward NNs. If we have the opportunity to revise, we will restate the theorem as you suggest. Theorem 3.8 obtains a more general result even though we apply it in the proof of Theorem 3.13 only with piecewise constant pdfs. *Claims And Evidence:* In Table 1 we see that our bounds differ from the competing PLT and MC methods. You are right that it provides no evidence for the correctness of ours. We show formally that our bounds are correct. Table 1 serves to illustrate that the other two methods, PLT and MC, are less accurate. In effect, if we *significantly reduce* the accuracy of our bounds then they include the PLT and MC approximations. In other words, we can recover the PLT and MC estimates by making our method less precise. In the Iris data, the inputs are perturbed by Beta noise since in this case (the Beta pdf is a polynomial) we can compute exact bounds. Of course, we could have used other random perturbations as well. --- Rebuttal Comment 1.1: Comment: I am inclined to accept this paper as the authors have promised to address my concerns in their revision. The paper provides a strong theoretical contribution that results in an efficient computational method to compute bounds of the cdf of the output of neural networks that are tighter than existing methods. I have raised my score. --- Reply to Comment 1.1.1: Comment: We would like to thank the reviewer for his comments on our work and for increasing his score.
Adaptive Flow Matching for Resolving Small-Scale Physics
Accept (poster)
Summary: Applying conditional diffusion models (CDM) and flow matching (FM) to natural images is very effective for super-resolving small-scale details, like image semantic or geometric information. However, CDM and FM have difficulties in the physical sciences, particularly for weather, mainly due to 1) spatially misaligned input-output (super-resolved weather input); 2) misaligned and distinct input-output channels (multi-channel sensor data); 3) stochasticity in channels, resulting in multi-scale issues. To alleviate these challenges for weather application scenarios, this paper proposes first to encode the various inputs to a latent distribution that is closer to the target, reducing input differences. The authors then map the small-scale physics with Flow Matching. Flow Matching thus adds stochastic details while the encoder adds deterministic ones; an adaptive noise scaling mechanism is utilized to dynamically adjust the noise scale at specific iterations to reduce the residual error and maintain generalization. Extensive experiments on benchmarks, like real-world weather data (25 km to 2 km scales in Taiwan) and synthetic Kolmogorov flow datasets, validate the effectiveness of the proposed method, Adaptive Flow Matching. Claims And Evidence: Yes, the authors have supported the necessary claims and evidence with step-by-step proofs. Methods And Evaluation Criteria: Although the main contribution of this paper is to apply conditional diffusion and flow matching to a very specific downstream task in the physical sciences, the authors propose technical methods to resolve the problems raised by the application tasks and obtain competitive performance. Theoretical Claims: I have not checked each proof detail, presented in the Appendix, but the proofs in the main paper sound reasonable. For Eq (9), imposing an interpolation of this form has been widely used in flow-matching-based diffusion methods for natural image tasks. 
Experimental Designs Or Analyses: Regarding Fig 2, I guess SFM is the method proposed in this paper. When we compare the visually generated images between SFM and CorrDiff, it seems CorrDiff is better. Can the authors provide more convincing explanations about the advantages of CorrDiff? I am also curious whether the proposed adaptive flow matching works for natural images, and I would expect to see such validation experiments to make this paper stronger. Supplementary Material: Sec A&C&D&E&F. Relation To Broader Scientific Literature: This paper aims to apply CDM and FM to physical-science weather data and conducts experiments on regional weather data, from 25 km to 2 km, and synthetic data. Though the authors obtain competitive performance and provide theoretical proofs to support their method, it is not clear whether this method can be generalized or applied to other tasks, both in the physical sciences and for natural images. This leaves a doubt about the wide applicability of this paper, and I suggest the authors provide some further demonstrations. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The presentation of this paper is good, and extensive validation experiments on weather scenarios have been conducted. 2. The writing of this paper is also easy to follow. Weaknesses: 1. The applications to weather scenarios are limited, or we need to see large-scale validations to make this paper more convincing. 2. It is unclear whether the proposed adaptive flow matching works for natural image tasks; this needs further experiments. Other Comments Or Suggestions: 1. SFM in the figures should be clearly denoted, since we may think it should be AFM. 2. How do the main contributions of adaptive flow matching, interpolating the latent, differ from common flow-matching-based natural image methods? Questions For Authors: Please see my overall comments and questions above, and I suggest the authors address them well. Code Of Conduct: Affirmed. Overall Recommendation: 3
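For context on the Eq. (9)-style interpolation mentioned under "Theoretical Claims" in this review, a generic flow-matching interpolant can be sketched as follows; x0 and x1 here are illustrative stand-ins, not the paper's encoder latent or its exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic flow-matching interpolation: x_t = (1 - t) * x0 + t * x1,
# with regression target v = x1 - x0 (the constant velocity field).
x0 = rng.normal(size=(4, 3))   # source samples (e.g., noise or a latent)
x1 = rng.normal(size=(4, 3))   # target samples (e.g., fine-scale fields)
t = rng.uniform(size=(4, 1))   # per-sample interpolation times

x_t = (1.0 - t) * x0 + t * x1
v_target = x1 - x0             # what a velocity network is trained to match

# Sanity check: following the target velocity from x_t reaches x1 at t = 1.
assert np.allclose(x_t + (1.0 - t) * v_target, x1)
```

A conditional model is then regressed onto v_target at sampled (x_t, t) pairs; per the summary above, the paper's twist lies in where x0 comes from (an encoder latent close to the target) and how its noise scale is adapted.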
Rebuttal 1: Rebuttal: We thank the reviewer for their remarks and for recognizing the extensive experimental validation of our work. *1. When we compare the visually generated images between SFM and CorrDiff, it seems CorrDiff is better. Can the authors provide more convincing explanations about the advantages of CorrDiff?* **Response.** While a single visual sample (e.g., Fig. 2) may suggest CorrDiff looks better, such examples can be misleading and not representative. A more robust comparison is provided in Fig. 3, where the spectral analysis shows that CorrDiff underperforms AFM at both low and high frequencies. Moreover, quantitative results in Table 2 demonstrate that AFM achieves better reconstruction accuracy and significantly improved uncertainty calibration. In the revised manuscript appendix, we have included another case in each of Figures 9 and 10 to shed more light on this comparison. --- *2a. Though they obtain competitive performances and support theoretical proofs to demonstrate their method, it is not clear whether this method can be generalized or employed to other tasks, both physical science and nature image tasks. So this leaves a doubt about the wide application of this paper and I suggest the author can make some more demonstrations.* *2b. I am also curious whether the proposed adaptive flow matching works for nature images, and I do expect to see such validated experiments to make this paper stronger.* *2c. The applications to weather scenarios are limited, or we need to see large-scale validations to make this paper more convincing.* *2d. How do the main contributions of adaptive flow matching from interpolating the latent from common flow-matching based nature image methods?* **Response.** Thank you for the suggestion. While our method was motivated by challenges in scientific data, it makes no domain-specific assumptions and is broadly applicable. 
To demonstrate generality, we include two diverse testbeds: (1) real-world weather downscaling using Taiwan’s CWA dataset, and (2) a synthetic PDE-based Kolmogorov flow dataset. Extending AFM to natural image tasks is a promising direction, but beyond the scope of this already extensive study. We have added this discussion to the conclusion of the revised manuscript. --- *4a. Regarding Fig 2, I guess the SFM shall be the method proposed in this paper.* *4b. SFM in Figures shall be well denoted, since we may think it shall be AFM.* **Response.** Thank you for noting this. We have fixed the AFM label in the figures of the revised manuscript. We hope our responses and revisions have adequately addressed your remarks. --- Rebuttal Comment 1.1: Comment: Somehow, the authors have addressed most of my concerns, hence I keep my original rating for this paper.
Summary: The paper introduces stochastic flow matching (SFM) for super-resolving small-scale physics in weather data, tackling challenges such as data misalignment, multiscale dynamics, and limited data availability. The approach employs an encoder to project coarse-resolution inputs into a latent space, followed by flow matching to generate fine-resolution outputs. Additionally, adaptive noise scaling is used to balance deterministic and stochastic components. Experimental results show that SFM surpasses existing methods, including conditional diffusion and flow models. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, I checked all the mathematical derivations and proofs, including Appendix B and C for the derivation of denoising objective. Experimental Designs Or Analyses: I reviewed the experimental design and analyses presented in the paper, including the methodology for super-resolving small-scale physics in weather data using stochastic flow matching (SFM). Specifically, I examined how the authors addressed challenges such as data misalignment, multiscale dynamics, and limited data availability. I also assessed the effectiveness of the encoder-latent space mapping, flow matching process, and adaptive noise scaling in balancing deterministic and stochastic components. Supplementary Material: Yes, basically Appendix B and C. Relation To Broader Scientific Literature: The paper's contribution lies in applying the well-established techniques of flow matching and denoising diffusion modeling to enhance the resolution of small-scale details in natural images, particularly in weather science. While similar to CorrDiff, the proposed method introduces joint training of a regression encoder and residual diffusion, effectively mitigating potential overfitting issues that may arise in the two-stage training approach adopted in CorrDiff. Essential References Not Discussed: No. Other Strengths And Weaknesses: Weakness. 1. 
In my opinion, VAE is a more natural choice for stochastic encoding; however, the authors employ a deterministic encoder and subsequently inject noise into the latent variable. I find the intuition behind this approach questionable, as the explanation provided in the paper is not entirely convincing to me. 2. The novelty of the paper is questionable given the well-established connection between flow matching and diffusion modeling. The proposed approach bears a strong resemblance to CorrDiff, although the authors argue that CorrDiff employs a two-stage training process, where the regression encoder is trained first, followed by the diffusion of residual components. In contrast, this paper appears to adopt a joint training strategy, with the final loss formulated as the sum of both components. 3. Lack of detailed explanation of noise scaling, since it is one of the main contributions of the paper, as stated by the authors. Other Comments Or Suggestions: 1. Too many typos occur in the submission, e.g., Conclusions Para. 2. 2. Where is the definition of $\mathcal{D}_{\theta}$? I highly recommend clarifying it in the main text. Questions For Authors: See "Weaknesses". Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments. Below are our responses to your remarks. *1. In my opinion, VAE is a more natural choice ...* (we shorten the questions due to the character limit) **Response.** Thank you for the insightful question. As clarified in our Remark in Sec 4.2, while VAEs are a natural choice for stochastic encoding, we intentionally opt for a maximum-likelihood-based approach with a deterministic encoder and post-hoc noise injection. This choice is motivated by both **simplicity** and **interpretability**. From a modeling standpoint, our approach yields a closed-form expression for the noise variance—simply the RMSE of the regression error—which enables straightforward tuning using validation data to control generalization. In contrast, VAEs require careful balancing of the KL term and reconstruction loss, which is known to be challenging and prone to issues such as posterior collapse and the prior hole. From a physics perspective, particularly in applications like weather downscaling, large-scale structures are largely deterministic. A deterministic encoder is therefore well-suited to capture these coherent features, while the stochasticity in small scales is effectively modeled via calibrated noise. While we agree that VAE-style encoders are a valid alternative, our design provides a principled and practical path that aligns well with physical intuition and avoids some known drawbacks of VAEs. --- *2. The novelty of the paper is questionable...* **Response.** We respectfully disagree. While our method shares surface similarities with CorrDiff, AFM introduces key innovations that address fundamental limitations of CorrDiff and generalize it as a corner case. *Joint Training vs. Two-Stage*: CorrDiff relies on a two-stage pipeline, first training a deterministic encoder, then fitting a diffusion model to the residuals. This separation causes overfitting in the encoder. 
In consequence, the residuals shrink and become uninformative, leaving the generative model with little left to learn. In contrast, AFM performs joint training of the encoder and the generative model, maintaining informative residuals throughout optimization. **Uncertainty Modeling**: CorrDiff lacks an uncertainty-aware encoder. AFM introduces an adaptive noise injection mechanism that dynamically increases the encoder noise when the validation error rises, signaling overfitting. This prevents collapse and encourages the encoder to retain uncertainty. **Channel-wise Adaptivity**: AFM performs per-variable noise scaling—crucial for scientific data where variables (e.g., temperature vs. radar reflectivity) exhibit different stochastic behaviors. CorrDiff applies uniform noise, which fails to capture such heterogeneity. These design choices lead to significantly improved calibration and spread-skill performance (see Tables 2–3, 6, 13; Figures 5–6, 8). --- *3. Lack of detailed explanation of noise scaling...* **Response.** Thank you for the remark. We have expanded our explanation of the noise scaling mechanism in Sec. 4.2, Appendix E, and Algorithm 1 of the revised manuscript. Briefly, we estimate $\sigma_z$ using a maximum-likelihood criterion linked to the encoder's RMSE on validation data (see Eq. 8). As training progresses, if the encoder begins to overfit—reflected by rising validation error—our method increases $\sigma_z$, injecting more uncertainty into the latent space. This prevents the encoder from collapsing into a deterministic solution and ensures that the flow-matching model remains calibrated and robust to out-of-distribution inputs. Unlike fixed or manually tuned noise levels, our approach adaptively adjusts $\sigma_z$ per variable (channel-wise), which is crucial for scientific data where different physical variables exhibit distinct noise scales and predictability (e.g., temperature vs. radar reflectivity, as shown in Fig. 5 and Fig. 8).
In practice, to avoid abrupt changes, we update $\sigma_z$ using an exponential moving average (EMA) every $M=10{,}000$ training steps: $\sigma_z \leftarrow (1{-}\beta)\, \sigma_z + \beta\, \sigma_z^{\mathrm{cur}}$. The final $\sigma_z$ is fixed for inference. An ablation in Sec. 5.3 (Table 4) demonstrates the effectiveness of this adaptive noise scaling in improving ensemble calibration and spread-skill alignment. --- *4. Too many typos...* **Response.** Thank you for noticing the typos. We have corrected them in the revised manuscript. --- *5. Where is definition of $\lambda$?...* **Response.** Thank you for your comment. Currently $\lambda$ is introduced in Sec. 4.3. It balances the deterministic regression and stochastic diffusion terms. In the limit $\lambda \rightarrow \infty$, AFM reduces to a purely deterministic model. We have added this clarification to the main text in the revised manuscript. --- We hope our responses and revisions have adequately addressed your concerns.
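The adaptive $\sigma_z$ update described in the rebuttal above (a per-channel RMSE estimate combined with an EMA of factor $\beta$) can be sketched numerically. This is a minimal illustration, not the authors' code: the RMSE estimate of $\sigma_z^{\mathrm{cur}}$ follows the rebuttal's maximum-likelihood description, while the smoothing factor, array shapes, and toy data are assumptions.

```python
import numpy as np

def update_sigma_z(sigma_z, val_errors, beta=0.1):
    # per-channel RMSE of the encoder's regression error on validation data
    # (the maximum-likelihood estimate described in the rebuttal); axis 1 = channel
    sigma_z_cur = np.sqrt(np.mean(val_errors ** 2, axis=(0, 2, 3)))
    # exponential moving average, applied once per EMA interval (M steps in the paper)
    return (1 - beta) * sigma_z + beta * sigma_z_cur

rng = np.random.default_rng(0)
# toy validation errors: 4 samples, 2 channels (e.g. temperature vs. radar), 8x8 grid
errors = rng.normal(scale=np.array([0.1, 0.5]).reshape(1, 2, 1, 1),
                    size=(4, 2, 8, 8))
sigma_z = np.ones(2)  # arbitrary initial value
for _ in range(100):  # one update per EMA interval
    sigma_z = update_sigma_z(sigma_z, errors)
print(sigma_z)  # the noisier channel converges to the larger sigma_z
```

With a fixed error field, the EMA converges to the per-channel RMSE, mirroring how the final $\sigma_z$ is frozen for inference.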
Summary: The paper focuses on tackling small-scale physical science problems (e.g., weather super-resolution). It proposes a joint encoder and flow-matching training objective over the prior two-stage methodologies to mitigate overfitting. Specifically, this work introduces Adaptive Flow Matching (AFM) with the help of adaptive noise scaling to introduce stochasticity in the process. Finally, the authors compare against various baselines on regional downscaling and multiscale Kolmogorov-Flow tasks. Claims And Evidence: In the abstract itself, it is claimed that AFM achieves SOTA on regional downscaling. However, this is not the case. According to Table 2 and Figure 2, improvements over CFM are arguable. However, on the Kolmogorov-Flow task, performance seems to be much better. But overall, it feels marginally better. Methods And Evaluation Criteria: Methods: Yes, the proposed method makes sense, as spatially misaligned input/output could be challenging for the existing generative model formulations. Evaluations: Due to a lack of reviewer expertise, it is hard to verify the evaluation procedure. However, it seems exhaustive. Theoretical Claims: The reviewer has verified Proposition 1 and the resulting sampling procedure. Both seem to be accurate. Experimental Designs Or Analyses: The reviewer verified all experimental designs and analyses. There are no visible issues. Supplementary Material: The reviewer has looked at supplementary sections B, C, D, E, and I. Relation To Broader Scientific Literature: The proposed approach of AFM might be of interest to the broader community as it attempts to perform joint training of the encoder and the flow model. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Weaknesses: - The paper contains many typos and lacks clarity (see Suggestions). - As flow matching can map any distribution to any other, there should be a baseline that just does that alongside traditional CFM. Basically, a direct $y$ to $x$ mapping.
Similarly, other methods, such as “Diffusion Schrödinger bridge matching,” could also be added. - Because of these prior works (and missing baselines), I am unsure if the claim in Section 4, Lines 149-152, is valid. Other Comments Or Suggestions: - The paper often misrepresents flow matching as a stochastic process (Abstract, Line 25), which is misleading; the stochasticity actually comes from the noise term in Eq. (5). - Typos in Figures 2, 3 & 4, where SFM is written instead of AFM. - “scientific” typo in the conclusion. Questions For Authors: - What is the final $\sigma_z$ value during inference? Is it the one resulting at the end of training (Figure 5)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and for finding our paper interesting. *1. In the abstract itself, it is claimed that AFM achieves SOTA on regional downscaling. However, this is not the case. According to Table 2 and Figure 2, improvements over CFM are arguable. However, on Kolmogorov-Flow, task performance seems to be much better. But overall, it feels marginally better.* **Response.** We thank the reviewer for their remark. Our claim of “SOTA” is based on the consistent gains observed across datasets, channels and metrics, particularly in those with greater stochastic variability. In Table 2, for instance, *radar reflectivity* sees a clear improvement when using AFM over CorrDiff, and AFM’s ensemble calibration (e.g., SSR) is superior across all channels. Similarly, in Table 13, AFM outperforms all other models in most metrics and channels and has better calibration by a large margin. Also, in the Kolmogorov Flow experiments (Table 3), AFM clearly outperforms CFM and CorrDiff, and the gap becomes wider as the misalignment increases ($\tau$: 3 -> 10). While the margins may appear moderate on more deterministic channels (e.g., temperature), these results overall indicate that AFM consistently delivers better calibration and uncertainty modeling, which is crucial in scientific tasks. We clarify in the revised abstract that our method excels especially under higher stochasticity and improves overall ensemble calibration. --- *2. As flow matching maps the any to any distribution, there should be a baseline that just does that along with traditional CFM. Basically, direct $\mathcal{X} \to \mathcal{Y}$ mapping. Similarly, other methods, such as Diffusion Schrödinger bridge matching,' could also be added. Because of these prior works (and missing baselines), I am unsure if the claim for Section 4 Line 149-152 is valid.* **Response.** Thank you for the comment.
In our setting, direct $\mathcal{X} \to \mathcal{Y}$ flow matching is not applicable due to both spatial and channel misalignments—$\mathcal{X}$ and $\mathcal{Y}$ often have different modalities (e.g., $\mathcal{X}$ includes forecast features, while $\mathcal{Y}$ includes precipitation and radar reflectivity). As such, intermediate alignment via an encoder is necessary. The closest viable baseline is Conditional Flow Matching (CFM), which we include. Methods like diffusion Schrödinger bridges (e.g., I2SB) also assume aligned domains and are similarly inapplicable here. --- *3. Paper often misrepresents the Flow matching as a stochastic process (Abstract Line 25), which is misleading. Meanwhile, stochasticity comes from noise term in eq. (5).* **Response.** Thank you for pointing this out. You are correct—flow matching is inherently deterministic (ODE-based), and the stochasticity in our method arises from the noise injected into the encoder output. We have clarified this distinction in the abstract of the revised manuscript. --- *4. Typos in Figures 2,3 & 4 where SFM is written instead of AFM. “scientific” typo in conclusion* **Response.** Thank you for noticing the typos, which are now fixed in the revised manuscript. --- *5. What is the final $\sigma_z$ value during the inference? Is it the one result at the end of training (Figure 5)?* **Response.** Yes, that is correct. During inference, we use the final converged value of $\sigma_z$ obtained at the end of training, as shown in Figure 5. We hope our responses and revisions have adequately addressed the remainder of your concerns. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing the clarifications. Here is my updated response: 1. (Q1) I acknowledge that the proposed method indeed shows improvements over the baseline (as also mentioned in the original review). However, performance again is not significant despite limited novelty (also ack. by other reviewers).
Additionally, due to the reviewer's lack of data domain expertise, I will assume that this is good enough. 2. (Q2) Thanks for the clarification. What if we add the extra random channel to match the dimensionality? To summarize, I will maintain my current score with 3/5 confidence. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the follow-up. In our CWA experiments, the input has 20 channels while the target has only 4, so dimension matching via random auxiliary channels is not feasible. More importantly, matching dimensionality alone is not sufficient—the key issue is that the content of the input and target channels must be compatible. Since flow matching relies on linear interpolation in data space, mixing unrelated modalities (e.g., temperature in the input and radar reflectivity in the target) leads to semantically meaningless interpolants and poor learning behavior. Thus, an encoder is essential to map the input to a representation aligned with the target distribution, making their content mixable and interpolation meaningful. This was the motivation behind our initial point: methods like direct flow matching or Schrödinger bridges assume spatial and semantic alignment between source and target, which does not hold in our setting.
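The alignment argument in the thread above rests on how (conditional) flow matching builds training targets by straight-line interpolation in data space, which is why source and target must share shape and semantics. A minimal sketch, with illustrative names and shapes not taken from the paper:

```python
import numpy as np

def cfm_training_pair(x0, x1, rng):
    """One conditional-flow-matching training example: sample a time t,
    form the straight-line interpolant, and return the constant velocity
    target the network would regress to at (x_t, t)."""
    t = rng.uniform()
    x_t = (1.0 - t) * x0 + t * x1  # linear interpolant in data space
    v_target = x1 - x0             # velocity along the straight path
    return t, x_t, v_target

rng = np.random.default_rng(0)
x0 = rng.normal(size=(4, 8, 8))  # e.g. encoded/upsampled input, 4 channels
x1 = rng.normal(size=(4, 8, 8))  # target sample with matching shape
t, x_t, v = cfm_training_pair(x0, x1, rng)
```

If `x0` and `x1` carried different modalities (say, temperature in the input and radar reflectivity in the target), the interpolant would still be computable but semantically meaningless, which is the rebuttal's point about needing an encoder.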
Summary: The paper addresses image 'super-resolution' in the context of atmospheric physics. The super-resolution aims at generating small stochastic scales into the input data while preserving and aligning large-scale physics. The authors propose to adapt and apply flow matching for this purpose. To this end, the authors show that it is possible to jointly train a time-dependent decoder and the flow. An adaptive noise scaling factor makes it possible to adjust the noise level of the diffusion process to the input data. Experimental results are illustrated on two meteorological datasets. Quantitative results and comparisons with related work are reported. ## update after rebuttal The rebuttal provided by the authors has addressed my few concerns. The paper has probably a limited audience amongst the ML/CV community, but nevertheless illustrates a very relevant practical use case (and associated challenges) beyond the known benchmarks. Claims And Evidence: The authors claim that small physics scales can be accounted for with a denoising diffusion model while the large-scale discrepancy between output/input can be tackled with a deterministic encoder. The experimental setting and comparison with standard approaches demonstrate promising results. Methods And Evaluation Criteria: The motivation of the approach is clearly exposed and the mathematical derivation of the problem clearly presented. The steps of the approach and reasoning are well justified and discussed. Performance is evaluated in terms of RMSE, Continuous Ranked Probability Score (CRPS), Mean Absolute Error (MAE), and Spread Skill Ratio (SSR). Ablation studies are performed. Thorough comparisons are reported. Theoretical Claims: The mathematical derivations (main paper and supplementary material) are correct, to my knowledge. Experimental Designs Or Analyses: Experimental results are limited but convincing. One real dataset and one synthetic dataset are used for training and evaluation. Details of the datasets are reported in the appendix.
Thorough and detailed visual and quantitative results are reported. It is interesting in particular to observe the discrepancy of quantitative results for stochastic channels (i.e., radar and wind) vs. deterministic (i.e., temperature) channels. Supplementary Material: Very thorough supplementary material, which comprises a proof of the proposition, additional quantitative results, and descriptions of the datasets, the metrics, and the network architecture. Relation To Broader Scientific Literature: This paper is at the frontier between applied atmospheric physics and machine learning. It introduces the most relevant material, without being exhaustive, as required. Essential References Not Discussed: no missing reference, to my knowledge. Other Strengths And Weaknesses: A well-presented paper, with a clear 'flow', a well-justified approach, and limited but convincing results. Other Comments Or Suggestions: - In Figures 2 and 4: there is confusion between the acronyms: SFM -> AFM - The authors could be more specific when describing the "mis-alignment" between the input / output. How could this mis-alignment be characterized: is it global or local, at subpixel or few-pixel scale? - I understand that the input low-resolution data is first rescaled by bilinear interpolation to the size of the output. Is that correct? Questions For Authors: Could you envisage different fields of application? Could this approach be useful for other use cases? Ethical Review Concerns: no ethical concern Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their positive and constructive feedback, and for recognizing our clear motivation and rigorous methodology. *1. In figure 2, 4: there is a confusion between the acronyms: SFM -> AFM* **Response.** Thank you for noticing the typo. In the revised manuscript, we have used AFM consistently. *2. The authors could be more specific when describing the 'misalignment' between the input / output. How this mis-alignment could be characterized: is it global or local, subpixel or few pixels scale?* **Response.** Thank you for the suggestion. We have clarified the notion of misalignment in the revised manuscript. It arises in two ways: (1) channel-level mismatch, where input and output contain different modalities (e.g., forecasts vs. radar reflectivity); and (2) spatial misalignment due to differences in the underlying PDEs between coarse and fine models. Spatially, the misalignment can be both global and local. For instance, in Fig. 7, storm centers in wind fields are displaced by several grid points (large-scale shift), while in Fig. 8, smaller PDE discrepancies ($\tau{=}3$) lead to subpixel-to-few-pixel level local misalignment. In the revised manuscript we have added Section G.3 discussing this in further detail and also adapted Section 5 to more clearly define misalignment. *3. I understand that the input low resolution data is first rescaled by bilinear interpolation to the size of the output. Is it correct?* **Response.** Thank you for the question. Indeed, we apply bilinear upsampling to align dimensions, then let the encoder refine and align further. This is explained in Section 5.1 of the manuscript (and in more detail in G.1 of the Appendix). In the revised version we added further clarifications regarding this in the main text (Sec. 5.1). *4. Could you envisage different fields of application? Could this approach be useful for other use cases?* **Response.** Certainly.
Our AFM framework applies wherever partial or misaligned data requires super-resolution or channel synthesis, e.g., MRI subsampling in medical imaging. We highlight broader applicability in the Conclusion of the revised manuscript. We hope our responses and revisions have adequately addressed your questions.
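The bilinear upsampling step discussed in the thread above (rescaling coarse inputs to the output grid before the encoder refines them) can be sketched as follows. This is a generic, self-contained illustration under an align-corners convention; the actual pipeline presumably uses a standard framework operation, so all names and shapes here are assumptions:

```python
import numpy as np

def bilinear_upsample(x, factor):
    """Upsample a (C, H, W) field by an integer factor with bilinear
    interpolation (align-corners convention), so a coarse input matches
    the fine target grid before being passed to the encoder."""
    c, h, w = x.shape
    H, W = h * factor, w * factor
    ys = np.linspace(0, h - 1, H)          # fractional row coordinates
    xs = np.linspace(0, w - 1, W)          # fractional column coordinates
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = (ys - y0)[None, :, None]          # row interpolation weights
    wx = (xs - x0)[None, None, :]          # column interpolation weights
    tl = x[:, y0][:, :, x0]                # four neighboring grid values
    tr = x[:, y0][:, :, x0 + 1]
    bl = x[:, y0 + 1][:, :, x0]
    br = x[:, y0 + 1][:, :, x0 + 1]
    top = tl * (1 - wx) + tr * wx
    bot = bl * (1 - wx) + br * wx
    return top * (1 - wy) + bot * wy

coarse = np.arange(16, dtype=float).reshape(1, 4, 4)
fine = bilinear_upsample(coarse, 2)
print(fine.shape)  # (1, 8, 8)
```

Corners of the coarse field are preserved exactly, and all interpolated values stay within the input's range, since each output is a convex combination of four neighbors.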
LlavaGuard: An Open VLM-based Framework for Safeguarding Vision Datasets and Models
Accept (poster)
Summary: This paper introduces LlavaGuard, a suite of vision safeguards. They describe a systematic framework including safety taxonomy, data preprocessing, augmentation, and training setup. Then they build a multimodal safety dataset and train LlavaGuard models on this. Through extensive experiments, they demonstrate that LlavaGuard outperforms previous SOTA methods and is applicable to several real-world problems. ## update after rebuttal I'm satisfied with the rebuttal. With the additional experiments, the paper is further strengthened. Claims And Evidence: 1. `LlavaGuard provides consistent assessments across these examples and continues to demonstrate strong policy-following capabilities, providing well-grounded reasoning using the risk guidelines of the relevant safety category.` This claim is supported by Figures 2 and 7. 2. `LlavaGuard is the only model that ... as evidenced by its recall performance along with high accuracy.` This claim is not that rigorous since the experiments are conducted on the held-out test set. Typically, a model trained on the training dataset might perform better on the held-out test set. However, since the performance gap between LlavaGuard and baseline methods is large, the claim is still acceptable. 3. `LlavaGuard handles even edge cases effectively.` This claim is supported by Table 4. Methods And Evaluation Criteria: 1. In Table 2, the authors use accuracy, recall, and precision to evaluate the model performance. These are very common and widely used metrics for evaluation. 2. In Table 2, the authors also use a metric called PES. The definition of PES is reasonable, but there are two questions: (1) Why not show PER results in Table 2? (2) Why use the **harmonic** mean of PER and balanced accuracy? 3. In Table 3, the authors conduct a user study. In Appendix J, they describe the overall pipeline, which seems to be reasonable. Theoretical Claims: The paper has no theoretical claims. Experimental Designs Or Analyses: 1.
The authors compare LlavaGuard with Llava-OV and GPT-4o. However, VLMs like Qwen are also strong and should be compared. Also, can the authors provide performance of visual reasoning models like OpenAI o1 and QvQ for reference? (It's totally OK if LlavaGuard cannot outperform reasoning models, and it would be amazing if LlavaGuard can.) 2. The authors only use one test dataset for the main experiment. It would be better if there were more. Supplementary Material: Yes, I've checked Appendices A, B, D, I, J carefully and quickly browsed through other sections. Relation To Broader Scientific Literature: 1. The paper studies how to build a vision safeguard model, which is important. 2. LlavaGuard goes beyond rigid classifications and provides assessments that include violated categories and detailed rationales. CoT and reasoning have been proven effective, and the application of these methods to VLM safety is good. Essential References Not Discussed: LlavaGuard provides assessments that include violated categories and detailed rationales. Therefore, it should add related work about (visual) CoT reasoning. For example, one should at least cite `Chain-of-thought prompting elicits reasoning in large language models`. Other Strengths And Weaknesses: 1. The idea of providing a rating, category, and rationale is novel in VLM safety. It is relatively well studied in general LLM/VLM. 2. Demonstrating applied use cases is useful. Other Comments Or Suggestions: 1. Line 63, `detailed` is a typo and should be changed to `details`. 2. In Figure 10, the authors don't need to mention the files are `all_data.json` and `test.json`. The `v24` tag is also confusing. Questions For Authors: 1. Baselines: VLMs like Qwen are also strong and should be compared. Also, can the authors provide performance of visual reasoning models like OpenAI o1 and QvQ for reference? (It's totally OK if LlavaGuard cannot outperform reasoning models, and it would be amazing if LlavaGuard can.) 2.
Benchmarks: The experiments are conducted on the held-out test set. Typically, a model trained on the training dataset might perform better on the held-out test set. It would be better if there were more benchmarks. 3. The authors should add related work about (visual) CoT reasoning since adding rationales has been well studied in general VLMs. 4. The authors should fix minor typos in the `Other Comments Or Suggestions` section. I'm happy to raise my score if these questions are well resolved. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your detailed response and the constructive feedback! Below we address your concerns. --- ### **1. Generalization Beyond the Held-out Test Set** While achieving strong performance on the held-out test set is not a small step, we agree that exploring additional datasets provides valuable insights. For this reason, we evaluated LlavaGuard on several real-world (e.g., ImageNet) and synthetic datasets (e.g., Stylebreeder), with users verifying LlavaGuard assessments (see overall score Tab. 3). Below, we break down those scores across datasets. These user evaluations demonstrate that our approach generalizes well beyond the original training and test sets. While individual dataset scores should be interpreted with caution due to low support (e.g., some have only 40 examples annotated), the overall generalization of LlavaGuard remains well-supported.

| | Rating | Category | Rationale |
|-|-|-|-|
| Stylebreeder | 90.5 | 85.7 | 76.2 |
| COCO | 80 | 80 | 68 |
| ImageNet | 84.6 | 84.6 | 80.8 |
| CC12M | 100 | 100 | 100 |
| GenAI | 89.5 | 89.5 | 89.5 |

We have not identified any suitable benchmarks with safety annotations, as most existing ones primarily focus on (conversational) model responses rather than analyzing image content directly (e.g., [MSTS](https://arxiv.org/abs/2501.10057)). ### **2. Clarifications on PER/PES Metric** As explained in Sections 4 (lines 182 ff) and 5 (lines 246-252), we use a combination of PER and balanced accuracy to ensure metric robustness to data imbalance. In a dataset with a dominant class, e.g. "unsafe" samples, there will be more policy exceptions labeled as "safe". A safety classifier that always predicts "safe" will naturally classify many policy exceptions correctly, inflating its PER. However, this can be misleading, as it may give the illusion of greater flexibility than the model actually possesses. To account for this imbalance, we introduce PES, which combines PER with balanced accuracy.
PES can be seen as a more reliable measure for policy modifications. That said, we recognize the standalone value of PER (see table below, Response 3) and included it in the paper. ### **3. Comparison with other VLMs** Recently, there has been a surge in strong VLMs, and we have now evaluated several models: Qwen2.5, InternVL2.5, and the reasoning model QvQ. All models initially showed lower performance, with Qwen models scoring 67-71%. Moreover, we developed QwenGuard using our framework, once again outperforming their baseline models by a significant margin (bal. acc. of 88-90% and PES of 85%, see Table below), on par with LlavaGuard. We also investigated building SiglipGuard (based on Siglip2-large) as a lightweight alternative to LlavaGuard. Notably, the Siglip model does not natively support safety classification, meaning there is no direct baseline. Instead, to adapt it for safety evaluation, we train a classification head on top. While SiglipGuard outperforms all VLM baselines, it still lags behind LlavaGuard and other VLM-Guards by more than 16%. A further central limitation is that CLIP-like models cannot inherently process (varying) safety policies. The safety policy provided as input to the VLMs offers a strong and beneficial inductive bias, which CLIP-like methods are missing. When examining reasoning models, QvQ reaches only 60% balanced accuracy. In analyzing QvQ's failure cases, we observed that it often got lost in recursive reasoning traces, highlighting the complexities of safety evaluation, a promising area for future research. For example, our generated rationales could serve as training data to develop safeguards with safety reasoning traces.
|Model|Acc|Rec|Prec|PER|PES|
|-|-|-|-|-|-|
|InternVL2.5-1B|50.6|88.1|44.0|6.7|11.8|
|InternVL2.5-8B|61.3|31.3|73.7|70.0|65.3|
|InternVL2.5-78B|66.9|46.1|74.4|66.7|66.8|
|Qwen2.5-VL-3B|68.1|79.7|58.7|20.0|30.9|
|Qwen2.5-VL-7B|67.6|49.2|73.1|60.0|63.6|
|Qwen2.5-VL-72B|70.8|60.0|71.8|52.2|60.1|
|QVQ-72B-Preview|62.0|25.5|**93.4**|**94.2**|74.8|
|GPT-4o|72.9|56.0|81.1|82.2|77.3|
|**Fine-tuned Models**||||||
|SiglipGuard (ours)|73.7|75.6|67.5|24.4|36.7|
|QwenGuard-3B (ours)|88.7|87.8|86.8|81.1|84.7|
|QwenGuard-7B (ours)|89.7|88.9|87.9|80.0|84.6|
|LlavaGuard-0.5B (ours)|88.7|86.7|87.9|85.6|87.1|
|LlavaGuard-7B (ours)|**90.8**|**91.4**|88.0|82.2|**89.9**|

### **4. Related Work on CoT and Visual Reasoning** We agree, and this citation ("Chain-of-Thought Prompting Elicits [...]") was actually already included in our bibliography, but we accidentally omitted it from the main text. We will revise our related-work section about visual CoT and reasoning accordingly and add further relevant references, such as "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models", focusing on consistency in visual reasoning. ### **5. Typographical Errors and Figure Clarifications** Thanks, fixed it! --- Thanks for considering our responses. Please also see our responses for the other reviewers. Given our responses, we would appreciate it if the reviewer reconsiders their score. --- Rebuttal Comment 1.1: Comment: I appreciate the additional experiments. I will adjust my score accordingly. --- Reply to Comment 1.1.1: Comment: We want to thank you for the valuable feedback and your prompt response. We are pleased to hear that our additional experiments are well received and agree that they have improved our paper.
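The PES metric debated in this thread can be sketched under the assumption, stated in the paper and rebuttal, that PES is the harmonic mean of PER and balanced accuracy. This is an illustration, not the authors' code, and it further assumes the Acc column serves as balanced accuracy for the numeric check:

```python
def harmonic_mean(a, b):
    # the harmonic mean is low whenever either input is low, so neither
    # PER nor balanced accuracy alone can inflate the combined score
    return 2 * a * b / (a + b) if (a + b) > 0 else 0.0

def pes(per, balanced_acc):
    """Policy Exception Score: harmonic mean of the policy-exception
    rate (PER) and balanced accuracy."""
    return harmonic_mean(per, balanced_acc)

# sanity check against the GPT-4o row of the table above (PER 82.2, Acc 72.9)
print(round(pes(82.2, 72.9), 1))  # -> 77.3, matching the reported PES
```

A degenerate classifier with perfect PER but chance-level balanced accuracy (or vice versa) is penalized much harder than under an arithmetic mean, which matches the rebuttal's motivation for the metric.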
Summary: The paper introduces a vision-language-based framework specifically designed for safety compliance verification in visual content. It first establishes a context-aware assessment covering nine safety taxonomies and uses it to curate a human-labeled dataset. This dataset includes ground truth safety ratings, violated categories, and human rationales, making it well-suited for content moderation scenarios. The authors fine-tune the Llava model and demonstrate its performance, showing that it outperforms other state-of-the-art VLMs in specific safety tasks. Additionally, they show that LlavaGuard can detect unsafe content in existing natural image datasets such as ImageNet and AI-generated image datasets such as I2P.

Claims And Evidence: Claims are supported by clear evidence.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem.

Theoretical Claims: No theoretical claims.

Experimental Designs Or Analyses: See the strengths and weaknesses section.

Supplementary Material: I have reviewed all the supplementary material.

Relation To Broader Scientific Literature: The first VLM-based work on safety compliance.

Essential References Not Discussed: See the strengths and weaknesses section.

Other Strengths And Weaknesses:

Strengths:
1. This is the first work leveraging vision-language models (VLMs) for safety compliance verification, marking a significant step forward in the field.
2. The paper introduces a well-structured safety taxonomy covering a broad range of commonly encountered safety rules, with clear definitions provided for each category.
3. A major contribution is the release of a curated dataset with human-labeled safety ratings, violated categories, and rationales, which will be valuable for future research in this domain.
4. The authors fine-tuned an open-source VLM on this dataset, demonstrating its effectiveness in safety compliance tasks compared to native VLMs.
Additionally, the release of this fine-tuned model further enhances its impact on the research community.
5. The study includes a thorough analysis of model predictions and extends beyond the curated dataset by evaluating its ability to detect unsafe content in publicly available datasets such as ImageNet and I2P, further demonstrating its applicability and contribution to the field.

Weaknesses:
1. The study focuses solely on fine-tuning Llava, without comparisons to other prominent VLMs such as Qwen or MiniGPT. A broader evaluation, including fine-tuned VLMs and native image-based deep learning models (e.g., ResNet), would strengthen the findings.
2. The proposed taxonomy is fixed and lacks support for dynamic adaptation. While data augmentation is applied to adjust category relevance during training, LlavaGuard does not inherently allow for flexible taxonomy updates. In real-world applications, compliance rules evolve over time, requiring costly and time-consuming retraining for customization.
3. The taxonomy is relatively concise, whereas real-world compliance rules can be much more complex and lengthy. It would be beneficial to evaluate how LlavaGuard performs when handling long-form compliance regulations.
4. Some rule definitions in the taxonomy are vague and subjective. For instance, under O3: Sexual Content, the guideline states that content should not "contain sexually explicit (i.e., erotic) content," which is inherently subjective, even for human annotators. The study lacks an in-depth analysis of how the model handles such subjective definitions and potential biases that may affect safety assessments.

Other Comments Or Suggestions: No.

Questions For Authors:
1. How do you address the challenge of adapting LlavaGuard to customized real-world safety compliance rules, as highlighted in the weaknesses? Given that compliance regulations evolve over time and vary across domains, what strategies do you propose to enable dynamic taxonomy updates without requiring extensive retraining?
2. What is your vision for extending this work to video compliance scenarios?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:
Rebuttal: Thanks for your detailed response and the constructive feedback! Below we address your concerns.

---

### **W1 Additional VLMs**

According to your suggestion, we included additional prominent VLMs (e.g., Qwen-VL) and baseline image-centric models (SigLip2); please refer to Response3 for LJnY.

### **W2, W3, Q1 Future strategies to adapt LlavaGuard to real-world safety rules**

While LlavaGuard already exhibits preliminary adaptability (e.g., through our policy-driven inference), we acknowledge that customizing it to evolving, real-world compliance regulations across diverse domains presents important future challenges. To effectively enable more complex policy changes without requiring resource-intensive retraining, we envision the following strategies:

1. **Rule-Based Classification.** Modular, rule-based classifiers that operate on top of the VLM's intermediate outputs--such as structured rationales or even semantic symbols extracted from rationales. These symbols can serve as inputs to a logic-based reasoning layer (e.g., a logic program), which can be updated based on evolving safety taxonomies.
2. **Inference-Time Reasoning.** Novel scaling methodologies beyond model parameters can offer significant benefits for adaptability. For instance, scaling inference-time compute to incorporate step-by-step reasoning traces can enhance nuanced safety understanding and, in turn, allow more nuanced adaptations of safety specifications.

### **W4 Subjective Rule Definitions**

This is indeed a crucial concern. Subjectivity of (safety) taxonomies is a well-known challenge, not only in recent AI safety taxonomies but also in other real-world applications and guidelines, such as [PEGI](https://pegi.info) or similar frameworks, where classification rules require subjective interpretation.
For example, PEGI's treatment of simulated gambling recently led to a controversial age rating that was later successfully appealed (see [IGN article](https://ign.com/articles/balatro-dev-successfully-appeals-pegi-18-rating-over-simulated-gambling)), highlighting the challenges of subjectivity in the realization of such taxonomies. A key motivation for leveraging VLMs like LlavaGuard is their capability to handle both precisely defined and inherently subjective or vague safety guidelines through their acquired commonsense understanding. However, we recognize that subjectivity remains a challenge--LlavaGuard likely reflects some averaged subjectivity embedded in its training data rather than an objective standard. Though we did not directly analyze LlavaGuard's handling of subjective guidelines, results from our user study demonstrate strong human-model agreement--including subjective and ambiguous cases--indicating that LlavaGuard's assessments generally align with human interpretations. However, we acknowledge that biases embedded within the model's acquired commonsense understanding might influence specific subjective decisions. We agree that systematically investigating, understanding, and mitigating these biases is an essential direction for future research. We will add this to our discussion.

### **Q2 Extension to Video**

Thank you for highlighting this important direction. Technically, LlavaGuard could be adapted to video scenarios using a sliding-window approach, processing videos frame-by-frame or in short segments and aggregating frame-level safety assessments into an overall video compliance score. However, videos inherently combine multiple modalities, most notably audio alongside visual content. Therefore, a more robust approach would involve extending our current vision-language models (VLMs) to multimodal architectures capable of jointly modeling visual, textual, and auditory signals.
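The sliding-window aggregation described above can be sketched in a few lines. The window size and threshold here are illustrative assumptions, not values proposed in the rebuttal:

```python
# Toy sketch: aggregate per-frame (or per-segment) unsafe scores into a
# video-level compliance decision via a sliding-window mean.
def aggregate_video(frame_scores, window=5, threshold=0.7):
    """Flag the video if any window's mean unsafe score crosses the threshold.

    A windowed mean is more robust than a single-frame maximum, since one
    noisy frame-level prediction should not fail an entire video.
    """
    if len(frame_scores) < window:
        window = max(1, len(frame_scores))
    worst = max(
        sum(frame_scores[i:i + window]) / window
        for i in range(len(frame_scores) - window + 1)
    )
    return {"unsafe": worst >= threshold, "peak_window_score": round(worst, 3)}

# One spurious spike is tolerated; a sustained unsafe segment is not.
print(aggregate_video([0.1, 0.95, 0.1, 0.1, 0.1, 0.1]))  # not flagged
print(aggregate_video([0.1, 0.9, 0.9, 0.9, 0.9, 0.8]))   # flagged
```

A real system would obtain `frame_scores` from per-frame safeguard predictions and would likely also weight audio and transcript signals, as noted above.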
A particularly relevant and impactful application could involve compliance verification for video games, films, and streaming content, which currently rely on established rating frameworks such as PEGI or ESRB. Our vision includes developing multimodal safeguards capable of automating and augmenting traditional human-based rating processes, ensuring consistent and transparent safety assessments at scale. Additionally, with recent advancements in generative AI, we anticipate the emergence of interactive applications where generative models dynamically create video clips in real time, based on user prompts or interactions. In such applications, real-time safety checks become crucial, as the dynamic nature of generated content could rapidly introduce risks that traditional compliance approaches might not detect. LlavaGuard, due to its robust multimodal understanding and flexible policy adherence, and follow-up advancements, could play an essential role in monitoring, assessing, and ensuring the safety of these generative interactive media environments.

---

Thanks for considering our responses. Please also see our responses for the other reviewers. Given our responses, we would appreciate it if the reviewer reconsiders their score.

---

Rebuttal Comment 1.1:
Comment: Thanks for addressing all my comments. I have no further concerns. I would raise my score to Accept. A good paper.

---

Reply to Comment 1.1.1:
Comment: We would like to thank you for the constructive feedback and the prompt response. We are pleased to hear that we have addressed all your comments and that there are no further concerns. We believe your feedback has further enhanced our paper.
Summary:
- Key contribution: This paper presents a safety guard suite, LLavaGuard, with a dataset consisting of ~5K images annotated with safety labels and rationales, and two models trained using the dataset.
- Motivation: The key motivation behind this is that safeguard models and datasets are rare in the visual domain despite some previous limited attempts, such as LAION-nsfw classifiers. The authors claim that we need more comprehensive coverage for better moderating images.
- Dataset Curation: The taxonomy is built according to AI regulations, including nine safety categories. Images are sourced from SMID and supplemented via web-scraped images. Human annotators are employed to label the images according to the safety risk taxonomy, and models are prompted to generate rationales explaining why the images are classified as such. Data augmentation is introduced to cast the classification problem into One vs All (non-violating) formats for better adaptation.
- Empirical findings: Two models based on LLaVA-OV-0.5B / 7B are trained using the proposed dataset, where the LLaVAGuard7B model shows better classification accuracy compared with previous moderator models such as OpenAI-omni-mod. Further analysis applying LLaVAGuard to ImageNet reveals wrongly labeled images in the original dataset. LLaVAGuard can also safeguard generative models such as StableDiffusion, achieving a high agreement (> 80%) with human users.

Claims And Evidence: The claims in the paper are generally well-supported.

Methods And Evaluation Criteria:
- The dataset curation section lacks essential information about human annotators, including demographics, total number, compensation, and procedures for handling diverse perspectives on subjective safety guidelines. The authors should address how conflicting decisions and ambiguous cases were resolved.
Additionally, questions remain about copyright compliance for web-scraped training images and the predominantly Western-centric regulatory framework that fails to account for cultural nuances.
- Table 2 reveals that scaling the guard model from 0.5B to 7B yields minimal performance improvements. This raises the question of whether simpler approaches, such as fine-tuning CLIP/SigLIP 2, might achieve comparable results despite lacking rationale generation capabilities--a comparison that would strengthen the baseline evaluation.
- The methodology for rationale generation requires clarification regarding which models were employed and what quality assurance measures were implemented to ensure accurate and helpful explanations.
- Finally, the evaluation would benefit from including results across a more diverse set of models, particularly examining how LLama-Vision 11B or Qwen-VL series models might perform when trained on the curated dataset.

Theoretical Claims: N/A

Experimental Designs Or Analyses:
- The experiments on filtering ImageNet and safeguarding generative models are generally sound.
- I am curious about the failure cases of the current guard models. Are there any categories in which they fail more frequently? Providing some detected false-positive samples would also be helpful.

Supplementary Material:
- Detailed category descriptions
- Guided rationale generation details

Relation To Broader Scientific Literature: The paper extends LLaMAGuard and LAION-NSFW, using a curated dataset to train a LLaVAGuard for image safety moderation.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
Cons (minor): Although the effort of curating the dataset is well-appreciated and the significance of safety moderation is well-recognized, this paper does not bring a technical contribution to the community.

Other Comments Or Suggestions: N/A

Questions For Authors:
- What is the downstream task performance of models trained using the filtered dataset?
e.g., would the accuracy of an image classifier trained on the filtered ImageNet become lower than that of one trained on the original dataset?

====

The additional annotation details and updated comparison with recent models provided during rebuttal effectively address my concerns; therefore, I am increasing my score from 2 to 3.

Ethics Expertise Needed: ['Discrimination / Bias / Fairness Concerns', 'Legal Compliance (e.g., GDPR, copyright, terms of use)']

Ethical Review Concerns:
- There are no details about the human annotators, which might lead to biased annotation results;
- The web-scraped images might have copyright issues.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Thanks for your detailed response and the constructive feedback! Below we address your concerns.

---

### **1. Annotators and Dataset Information**

We reaffirm our commitment to ethical and regulatory standards. To ensure annotators' well-being, dataset annotation was directly performed by the authors, and our user studies included researchers with expertise in handling safety-related content, following annotator protection guidelines by [Vidgen et al. (2019)](https://aclanthology.org/W19-3509). For more details, please refer to App. J. Regarding the dataset, all annotators are male, White, and between 20-40 years old. We adopted a prescriptive annotation approach, collaboratively developing a taxonomy that defines a detailed categorization. For edge cases, all annotators discussed potential contradictions to achieve consensus. For the user study, participants were aged 20-30, 30% male and 70% female, with 70% identifying as White (from the US and Europe) and 30% as Asian-American. Despite the demographic differences between dataset annotators and user study participants, we observed a high level of agreement.

Regarding copyright compliance, we adhere to Fair Use principles. Efforts have been made to ensure that images are in the public domain or under open licensing. Most of our data is based on SMID (Creative Commons license). For the remaining data we collected, neither the images are distributed, nor does the model distribute data. Consequently, training our model on this data does not infringe copyright, as it serves a non-commercial research purpose. We will incorporate these clarifications to enhance transparency.

### **2. Classifier Baselines**

We evaluated simpler approaches like CLIP-based classifiers (Q16, App. Tab. 5), which achieve significantly lower performance (only 69% acc.). We have also finetuned SigLIP2-large, which reached 99.6% train and 73.7% test acc. (more details in Response3 to LJnY).
While its performance is higher than Q16's, it still lags behind LlavaGuard's accuracy by more than 16%. The main reason is that CLIP-like models cannot handle (varying) policies. Testing SigLIP under fixed-policy conditions yielded improved acc. (79%), still far below LlavaGuard.

### **3. Rationale Quality**

We acknowledge that a quality validation of guided rationales strengthens our findings. Therefore, we present a more comprehensive evaluation using GPT-4o to compare guided vs. non-guided rationales across our entire dataset. The GPT-4o judge scored rationales based on comprehensiveness, accuracy, and guideline adherence:

|Llava-34B|Guided Rationales|Base Rationales|
|-|-|-|
|Mean Quality Score (1–10)|**9.1**|3.8|
|Median Quality Score (1–10)|**9.0**|3.0|
|Win Rate (%)|**99.9**|0.1|

The results demonstrate that our guided generation approach produces substantially better rationales, which is supported by qualitative examples in App. Fig. 7. Additionally, we benchmarked rationale quality across Llava model scales:

|Model|Win Rate (%)|Mean Score (1-10)|Median Score (1-10)|
|-|-|-|-|
|Llava-34B|**83.9**|**9.0**|**8.4**|
|Llava-13B|9.6|7.0|6.7|
|Llava-7B|6.6|7.0|6.8|

We used Llava-34B in our final setup to generate rationales for LlavaGuard training, and the table shows it largely outperforms the other base models. These clarifications have been added to the paper.

### **4. Additional VLMs**

We fully agree about the need for a broader evaluation range, particularly examining strong recent models (e.g., the Qwen-VL series). Please see our detailed Response3 to LJnY.

---

### **5. Failure Case Analysis**

Upon examination of false positives, we noted common failure cases of LlavaGuard:
- Images near decision boundaries proved inherently challenging.
- The model occasionally outlined challenges interpreting text embedded in images. For instance, it misclassified signage explicitly prohibiting sexual harassment of nurses as unsafe content.
We uploaded an [overview](https://anonymous.4open.science/r/LlavaGuard-FE7F/figs/false_positives/FP_overview.md) (please note that some samples might be disturbing). We will add these insights to our paper.

### **6. Downstream Performance on Filtered Dataset**

We trained ResNet50 (50 epochs, LR=0.001, AdamW optimizer) from scratch on the original vs. LlavaGuard-filtered ImageNet (~20K images removed, ~1% overall).

|overall|Top-1 Acc.|Top-5 Acc.|
|-|-|-|
|ImageNet|67.2±0.5|87.2±0.6|
|ImageNet-filtered|67.4±0.4|87.1±0.6|

Despite removing up to 55% of samples in some classes (e.g., "assault rifle," "army tank," "missile," "syringe"), overall downstream performance remains unchanged. Inspecting only the most filtered classes, we observe a gap.

|top-10 most filtered classes|Top-1 Acc.|Top-5 Acc.|
|-|-|-|
|ImageNet|53.6±3.5|82.4±3.1|
|ImageNet-filtered|46.9±4.8|77.3±3.6|

This shows that careful safety filtering does not have to come with compromises in overall downstream performance.

---

Thanks for considering our responses. Given our response, we would appreciate it if the reviewer reconsiders their score.

---

Rebuttal Comment 1.1:
Comment: Thank you for your response, which effectively addresses my concerns. I will increase my scores accordingly :)

---

Reply to Comment 1.1.1:
Comment: Thank you for the constructive feedback and the quick response. We are pleased to have effectively addressed your concerns. Your insights have been valuable and have helped us further refine our paper.
RobustLight: Improving Robustness via Diffusion Reinforcement Learning for Traffic Signal Control
Accept (poster)
Summary: The paper introduces RobustLight, a novel framework designed to enhance the robustness of Traffic Signal Control (TSC) systems against adversarial attacks and missing data. The authors propose a plug-and-play diffusion model that integrates with existing TSC platforms to recover from noise attacks and restore missing data in real time. The framework includes two key algorithms, denoise and repaint, which leverage a Dynamic State Infilling (DSI) algorithm to train an improved diffusion model online. The authors conduct extensive experiments on real-world datasets, demonstrating that RobustLight significantly improves the performance of TSC systems under various adversarial attacks and missing-data scenarios, with up to 50.43% improvement in average travel time compared to systems without RobustLight.

Claims And Evidence: The claims made in the paper are well-supported by empirical evidence. The authors provide extensive experimental results on real-world datasets, including JiNan, HangZhou, and New York, to validate the effectiveness of RobustLight. The results show consistent improvements in TSC performance under various adversarial attacks (Gaussian noise, U-rand, MAD, MinQ) and sensor damage scenarios. The authors also compare RobustLight with traditional and RL-based TSC methods, demonstrating its superior robustness and recovery capabilities. The evidence is clear, and the results are statistically significant, with detailed metrics such as Average Travel Time (ATT) and state recovery performance provided.

Methods And Evaluation Criteria: The methods are well-designed and innovative, leveraging the strengths of diffusion models and reinforcement learning. The Dynamic State Infilling (DSI) algorithm is a key contribution, enabling real-time recovery of TSC data. The denoise and repaint algorithms are effectively used to handle adversarial attacks and missing data, respectively. The evaluation criteria are appropriate, focusing on ATT as the primary metric to measure the efficiency of TSC systems. The authors also use t-SNE plots and violin plots to visualize the state recovery performance, providing additional insights into the robustness of the proposed framework.

Theoretical Claims: The theoretical claims are solid and well-grounded. The paper builds on the foundations of diffusion models and reinforcement learning, providing a clear theoretical framework for the proposed methods. The authors discuss the forward and reverse processes of diffusion models and how they can be adapted for TSC tasks. The theoretical basis for the denoise and repaint algorithms is also well-explained, with references to existing literature on diffusion models and adversarial attacks. The theoretical claims are supported by the experimental results, demonstrating the practical applicability of the proposed methods.

Experimental Designs Or Analyses: The experimental design is rigorous and comprehensive. The authors use real-world datasets from three different cities (JiNan, HangZhou, and New York) to evaluate the performance of RobustLight under various adversarial attacks and sensor damage scenarios. The experiments are well-structured, with clear comparisons between RobustLight and traditional/RL-based TSC methods. The authors also conduct ablation studies to analyze the impact of different components of RobustLight, providing insights into the contribution of each component to the overall performance. The analysis is thorough, with detailed discussions on the results and their implications for real-world TSC systems.

Supplementary Material: N/A

Relation To Broader Scientific Literature: The key contributions of the paper are highly relevant to the broader scientific literature. The integration of diffusion models with reinforcement learning for TSC tasks is a novel approach that addresses the limitations of existing methods, which often fail to handle both adversarial attacks and missing data simultaneously. The proposed framework builds on recent advancements in diffusion models and RL, extending their applications to the domain of traffic signal control. The paper also contributes to the growing body of research on robust RL and adversarial defense in real-world systems, providing a practical solution for improving the security and reliability of TSC systems.

Essential References Not Discussed: There are no essential references not discussed in the paper.

Other Strengths And Weaknesses: All strengths and weaknesses are mentioned above.

Other Comments Or Suggestions: No.

Questions For Authors:
- Scalability: The paper mentions that the computational cost increases with the number of intersections. Could the authors elaborate on potential strategies to scale RobustLight for large-scale urban networks with hundreds of intersections?
- Real-World Deployment: While the experiments are conducted on real-world datasets, the paper does not discuss the challenges of deploying RobustLight in real-world TSC systems. What are the practical considerations and potential barriers to real-world implementation?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal: First and foremost, we sincerely thank you for pointing out the issues, as your suggestions are invaluable in enhancing the quality of this paper.

1. Scalability: The paper mentions that the computational cost increases with the number of intersections. Could the authors elaborate on potential strategies to scale RobustLight for large-scale urban networks with hundreds of intersections?

Here, we need to clarify that our method can be deployed in either centralized or decentralized scenarios. In the case of centralized deployment, the model inference speed may slow down as the number of intersections increases.

**Distributed Deployment**: Specifically, if a distributed deployment is chosen, our diffusion model can be trained on a central server to fully leverage data from different intersections, while model parameter updates are performed on edge computing devices, similar to a federated learning architecture.

**Centralized Deployment**: If a centralized deployment is preferred, scalability and real-time requirements for hundreds of intersections must be addressed. On the hardware side, we can procure high-performance servers or employ inference acceleration techniques, such as data partitioning and parallelized inference, to enhance the diffusion model's inference performance. On the algorithmic side, we can utilize techniques like DDIM [1] to accelerate the diffusion model's inference process.

In summary, by carefully selecting the deployment strategy and leveraging advanced hardware and algorithmic optimizations, we can ensure the robustness, scalability, and real-time performance of our Robust TSC system while maintaining its security against potential attacks.

[1] Denoising Diffusion Implicit Models.

2. Real-World Deployment: While the experiments are conducted on real-world datasets, the paper does not discuss the challenges of deploying RobustLight in real-world TSC systems.
What are the practical considerations and potential barriers to real-world implementation?

Our framework could leverage DDIM and high-performance GPUs for accelerated inference. Denoising supports decentralized deployment with centralized training, while data-missing scenarios use centralized inference. DDIM acceleration times on one A100:

| Deployment | Scenario | JN1 | HZ1 | NY1 |
|---|---|---|---|---|
| Distributed | U-rand noise | 0.098s | 0.097s | 0.096s |
| Distributed | Data missing | 0.129s | 0.115s | 0.131s |
| Centralized | U-rand noise | 0.364s | 0.431s | 2.43s |
| Centralized | Data missing | 0.123s | 0.164s | 0.81s |

As shown, the total processing time is less than 1 second, meeting real-time control requirements. However, it is important to note that in decentralized deployment scenarios, the repaint algorithm will not be able to address the **kriging missing (full-intersection failure)** issue within data missing. Distributed deployment significantly improves algorithm performance due to the reduction in the batch size of inference data.

**We recommend deploying the centralized solution to address both kriging (full-intersection failure) and random missing (sensor-specific-direction-single-intersection-failure) data issues using the data-missing algorithm, while employing data-noise algorithms on the edge side using low-cost hardware to achieve accelerated denoising.**
Summary: This paper points out current challenges in TSC systems, including significant performance degradation, the limitations of existing defense methods, and the lack of online capability. To address these issues, the authors propose RobustLight, a framework to enhance the robustness of online TSC systems, consisting of a TSC agent and a dynamic state infilling (DSI) agent. This framework contains two algorithms, denoise and repaint, to defend against missing data and adversarial attacks. Extensive experiments show the effectiveness of their framework.

Claims And Evidence: The effectiveness of RobustLight is shown clearly in the experimental results across different datasets and different attacks.

Methods And Evaluation Criteria: The benchmarks used in their experiment section make sense.

Theoretical Claims: Although this paper focuses on the application side, the theoretical explanation is clear, and Lemma 3.1 and the algorithm boxes are clear.

Experimental Designs Or Analyses: For the experimental part, my concern is whether this method is still effective under potential adaptive attacks, such as attacks effective against diffusion models.

Supplementary Material: Overall, the supplementary material focuses on the detailed experimental results.

Relation To Broader Scientific Literature: I think the problem studied by this paper has wide application in real-world traffic management.

Essential References Not Discussed: NA

Other Strengths And Weaknesses: Another concern about this method, which is also stated in the paper, is its latency due to the usage of diffusion models, and whether it can handle traffic problems in real time.

Other Comments Or Suggestions: NA

Questions For Authors: My questions are the two concerns mentioned in the previous sections.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal: First and foremost, we sincerely thank you for pointing out the issues, as your suggestions are invaluable in enhancing the quality of this paper.

1. For the experiment part, my concern is whether this method is still effective under some potential adaptive attacks, such as attacks effective on Diffusion Models.

Diffusion models are generally robust due to their iterative denoising process, which inherently introduces noise and reduces the impact of adversarial perturbations. However, like other deep learning models, they are not immune to adaptive attacks specifically designed to exploit their weaknesses. This situation resembles a Russian nesting doll, but based on our experience in TSC deployments, most TSC control systems are deployed within internal networks. While outdoor sensing devices are prone to noise interference and sensor damage, the TSC control module and our diffusion module can be deployed in either a distributed manner (with control devices at each intersection connected via an internal network) or a centralized manner (deployed within the internal network). This network isolation effectively protects the diffusion module from potential attacks.

**Distributed Deployment**: Specifically, if a distributed deployment is chosen, our diffusion model can be trained on a central server to fully leverage data from different intersections, while model parameter updates are performed on edge computing devices, similar to a federated learning architecture.

**Centralized Deployment**: If a centralized deployment is preferred, scalability and real-time requirements for hundreds of intersections must be addressed. On the hardware side, we can procure high-performance servers or employ inference acceleration techniques, such as data partitioning and parallelized inference, to enhance the diffusion model's inference performance.
On the algorithmic side, we can utilize techniques like DDIM [1] to accelerate the diffusion model's inference process.

In summary, by carefully selecting the deployment strategy and leveraging advanced hardware and algorithmic optimizations, we can ensure the robustness, scalability, and real-time performance of the Robust TSC system while maintaining its security against potential attacks.

[1] Denoising Diffusion Implicit Models.

2. The latency issue of this method because of the usage of diffusion models, and whether it can handle traffic problems in a real-time way.

Our framework could leverage DDIM and high-performance GPUs for accelerated inference. Denoising supports decentralized deployment with centralized training, while data-missing scenarios use centralized inference. DDIM acceleration times on one A100:

| Deployment | Scenario | JN1 | HZ1 | NY1 |
|---|---|---|---|---|
| Distributed | U-rand noise | 0.098s | 0.097s | 0.096s |
| Distributed | Data missing | 0.129s | 0.115s | 0.131s |
| Centralized | U-rand noise | 0.364s | 0.431s | 2.43s |
| Centralized | Data missing | 0.123s | 0.164s | 0.81s |

As shown, the total processing time is less than 1 second, meeting real-time control requirements. However, it is important to note that in decentralized deployment scenarios, the repaint algorithm will not be able to address the **kriging missing (full-intersection failure)** issue within data missing. Distributed deployment significantly improves algorithm performance due to the reduction in the batch size of inference data.

**We recommend deploying the centralized solution to address both kriging (full-intersection failure) and random missing (sensor-specific-direction-single-intersection-failure) data issues using the data-missing algorithm, while employing data-noise algorithms on the edge side using low-cost hardware to achieve accelerated denoising.**
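The DDIM speed-up discussed in this rebuttal comes from taking a deterministic reverse pass over a sub-sampled set of timesteps instead of all training steps. A toy numpy sketch under simplifying assumptions (dummy noise predictor in place of the trained network, illustrative linear beta schedule; not RobustLight's actual model or state encoding):

```python
import numpy as np

# Toy DDIM sampler (eta = 0, deterministic): only a handful of the 1000
# training timesteps are visited, which is where the inference speed-up
# comes from. eps_model is a dummy stand-in for the trained noise net.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def eps_model(x, t):
    # placeholder noise predictor; a real model would be a neural network
    return 0.1 * x

def ddim_sample(shape, n_steps=10, rng=np.random.default_rng(0)):
    ts = np.linspace(T - 1, 0, n_steps, dtype=int)  # sub-sampled schedule
    x = rng.normal(size=shape)                      # start from pure noise
    for t, t_prev in zip(ts[:-1], ts[1:]):
        eps = eps_model(x, t)
        # predict x0 from the current noisy state, then jump to t_prev
        x0 = (x - np.sqrt(1 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])
        x = np.sqrt(alpha_bar[t_prev]) * x0 + np.sqrt(1 - alpha_bar[t_prev]) * eps
    return x

state = ddim_sample((4, 12))  # e.g. 4 intersections x 12 lane features
print(state.shape)
```

With `n_steps=10` the reverse process needs 10 network evaluations instead of 1000, which is the kind of reduction that makes the sub-second inference times in the table above plausible.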
Summary: The paper introduces RobustLight, a novel framework designed to enhance the robustness of Traffic Signal Control (TSC) systems against adversarial attacks and missing data. The key contribution of RobustLight is the integration of an improved diffusion model into TSC, which enables real-time recovery of noisy or missing traffic data without altering the existing TSC algorithms. The framework consists of two main components: the Dynamic State Infilling (DSI) algorithm, which trains the diffusion model online, and two auxiliary algorithms, Denoise and Repaint, which leverage the trained diffusion model to address adversarial attacks and missing data, respectively.

## update after rebuttal

Rebuttal acknowledged.

Claims And Evidence: The claims made in the submission are generally supported by clear and convincing evidence, particularly through extensive experimental results and visualizations. However, the paper mentions online training and real-time implementation but does not provide sufficient theoretical or experimental results to validate these claims. While the paper demonstrates the effectiveness of RobustLight in recovering noisy or missing data and improving traffic signal control performance, it does not explicitly show the complexity of the framework or how the framework performs in a real-time, online setting.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper make sense for the problem of enhancing the robustness of Traffic Signal Control (TSC) systems against adversarial attacks and missing data.

Theoretical Claims: This paper focuses on experimental results and algorithmic contributions rather than theoretical proofs. It provides a detailed description of the proposed methods, including the Dynamic State Infilling (DSI) algorithm, the Denoise algorithm, and the Repaint algorithm, but does not include formal theoretical claims or proofs.
Instead, the paper relies on experimental results to demonstrate the effectiveness of the proposed RobustLight framework.

Experimental Designs Or Analyses: The experiments are well-designed and provide convincing evidence to support the claims made in the paper.

Supplementary Material: The anonymous GitHub URL mentioned in the paper (https://anonymous.4open.science/r/RobustLight72B2/README.md) is missing.

Relation To Broader Scientific Literature: The key contributions of the paper are related to and build upon several areas of the broader scientific literature, including Traffic Signal Control (TSC), Reinforcement Learning (RL), Robust RL, and Diffusion Models.

Essential References Not Discussed: This paper lacks comparison with other recent methods. The latest method compared in the paper was proposed in 2022. Other TSC methods (e.g., GNN-based or Transformer-based approaches) or robust RL methods (e.g., adversarial training, self-supervised learning) could be compared and discussed (e.g., Explainable Deep Adversarial Reinforcement Learning Approach for Robust Autonomous Driving).

Other Strengths And Weaknesses: Strengths:
- This paper proposes a novel framework, "RobustLight." It introduces the diffusion model into the field of traffic signal control (TSC) to deal with data noise and missing-value problems. The diffusion model has been widely used in the field of image generation, but applying it to a TSC system to enhance robustness, especially in real-time online processing, is an innovative attempt.
- RobustLight is not only able to handle a single type of attack (such as Gaussian noise) but also multiple complex attack types (such as MAD and MinQ attacks), and is able to deal with the problem of missing data caused by sensor corruption. This integrated defense capability is rare in current TSC systems, demonstrating the broad applicability and robustness of the framework.
- The experiments are extensive.
They are conducted on multiple real-world datasets to verify the effectiveness of RobustLight. Also, visualization methods such as t-SNE plots and violin diagrams are used to demonstrate the effectiveness of state recovery, which enhances the reliability of the experimental results.

Weaknesses:
- The code is not available.
- Though the authors discuss the detection method, the performance in a normal situation without an adversarial attack is not presented. The authors discussed the detection of outliers in the paper. However, if the noise attack changes the queue length from 3 to 5, how can this be detected with outlier detection?

Other Comments Or Suggestions:
- It is suggested that related works be added back to the main body of the paper to provide background knowledge; currently, they appear in the Appendix.

Questions For Authors: 1. Where is the correct code link?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: First and foremost, we sincerely thank you for pointing out the issues, as your suggestions are invaluable in enhancing the quality of this paper.

W1: https://anonymous.4open.science/r/RobustLight-72B2/README.md

W2: Traffic movements and TSP are defined in Figure 1 and Section 2.1, with cyan coloring used for aesthetic design. D() represents the norm distance.

W3&W6: We conducted experiments for DiffLight [1] and MissingLight [2]. [1] uses offline RL and diffusion models for two missing-data cases: **random missing** (single-intersection-sensor failure) and **kriging missing** (full-intersection failure), requiring separate treatments. To validate our approach, our setup involved randomly masking data with Kriging Missing (12.5%) and Random Missing (12.5%).

### Data Missing, Ours Based on Advanced-CoLight

| Method | JN1 | HZ1 |
|---|---|---|
| [2] | 354.73 | 348.68 |
| [1] | 353.45 | 346.05 |
| **Ours** | **320.31** | **296.69** |

### Data Noise, Scale 3.5

| Method | JN1 | HZ1 |
|---|---|---|
| [1]-MinQ | 321.98 | 426.14 |
| **Ours** | **283.13** | **397.32** |
| [1]-MAD | 384.71 | 366.14 |
| **Ours** | **323.25** | **309.24** |

As can be seen, our method not only effectively handles both data-missing scenarios (**kriging missing and random missing**) but also performs well under data noise.

### W6: Without Data Missing and Data Noise

| Dataset | DiffLight | Advanced-CoLight |
|---|---|---|
| JN1 | 268.43 | **245.73** |
| HZ1 | 283.92 | **270.45** |

Under clean data, RobustLight preserves the optimal performance of the TSC algorithm without unnecessary denoising, which simplifies integration and enhances system stability and reliability.
Compared to [3], which uses adversarial pre-training to improve the robustness of RL but cannot repair corrupted inputs and risks overfitting to specific attacks, our diffusion model handles diverse disturbances without attack-specific training, directly denoising states for robust RL inputs as a plug-and-play module.

[1] DiffLight: A Partial Rewards Conditioned Diffusion Model for Traffic Signal Control with Missing Data. NIPS 2025
[2] Reinforcement Learning Approaches for Traffic Signal Control under Missing Data. IJCAI 2023
[3] Explainable Deep Adversarial Reinforcement Learning Approach for Robust Autonomous Driving. TIV 2024

W4: Our framework could leverage DDIM and high-performance GPUs for accelerated inference. Denoising supports decentralized deployment with centralized training, while data-missing scenarios use centralized inference. DDIM acceleration times on one A100:

| Deployment | Scenario | JN1 | NY1 |
|---|---|---|---|
| Decentralized | U-rand noise | 0.098s | 0.096s |
| Centralized | Data missing | 0.123s | 0.81s |

As shown, the total processing time is less than 1 second, meeting real-time control requirements. Further deployment discussion can be found in our response to reviewer Jqz812.

W5: Our RobustLight addresses challenges in TSC deployment, where TSC RL inputs often suffer from noise or missing values, impacting control. To handle the dynamic nature of traffic flow, we propose an online training framework using diffusion models as the upper-level RL policy during clean-data periods. When noise or missing data is detected, our trained DSI agent employs denoising and repainting algorithms to restore data quality, improving lower-level TSC RL performance.

W6&W7: [1] introduced TP-FDS, which identifies anomalies by comparing new data distributions with historical data from the same period, achieving an AUC of 96% and an F1 score of 76%. Minor changes like 3 to 5 have minimal impact on system efficiency.
Anomalies can also be detected by cross-referencing data from multiple sensors, such as cameras and radar. Rule-based methods are another option; for example, a queue increase from 3 to 5 during a north-south green light would signal an anomaly. To simulate real-world scenarios, we conducted experiments with detection rates of 80% and 60%:

### MAD, Scale 3.5

| Detection Rate | Dataset | Base ATT | Ours ATT | Base Throughput | Ours Throughput |
|---|---|---|---|---|---|
| 80% | JN1 | 487 | **297** | 5812 | **6154** |
| 80% | HZ1 | 463 | **326** | 2888 | **2938** |
| 60% | JN1 | 487 | **328** | 5812 | **6131** |
| 60% | HZ1 | 463 | **331** | 2888 | **2930** |

The method sustains high performance across different detection rates. Please refer to our responses to reviewers Jqz8 and ACd7 for our theoretical explanation based on Lemma 3.1.

[1] Traffic Anomaly Detection: Exploiting Temporal Positioning of Flow-Density Samples
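The rule-based check described above (a queue that grows while its movement has a green light contradicts expected discharge) can be sketched as follows; the function name and tolerance parameter are illustrative, not from the paper:

```python
def queue_anomaly(prev_queue, curr_queue, phase_is_green, tol=0):
    """Flag a reading whose queue grows while its movement has green.

    A queue that increases by more than `tol` vehicles during its own
    green phase contradicts expected discharge, so we flag it for repair.
    """
    return phase_is_green and (curr_queue - prev_queue) > tol

# Example from the rebuttal: queue jumps from 3 to 5 during a green phase.
print(queue_anomaly(3, 5, phase_is_green=True))   # True: flagged as anomaly
print(queue_anomaly(3, 2, phase_is_green=True))   # False: queue discharged
```

Such a rule would complement the distribution-based TP-FDS detector, catching small perturbations (like 3 to 5) that are individually within the normal value range.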
Summary: This paper focuses on a very interesting problem. For the data-missing problem faced in traffic signal control, the authors use the diffusion model to complete and clean the data. The experimental results show that this method effectively improves the control performance of the reinforcement learning model in data-missing or contaminated environments.

Claims And Evidence: i) Experimental results on some datasets are not reported. ii) Lack of baseline comparison for data generation methods.

Methods And Evaluation Criteria: The traffic evaluation uses common standards in the field. There is a lack of quantitative standards for data denoising and completion.

Theoretical Claims: This paper does not provide any theoretical proof.

Experimental Designs Or Analyses: The comparison method is not perfect and some experimental results are not reported.

Supplementary Material: The access link has expired.

Relation To Broader Scientific Literature: Provides a new perspective for studying traffic signal control issues.

Essential References Not Discussed: The authors put the related work section in the appendix and did not discuss similar data generation methods.

Other Strengths And Weaknesses: Strengths: i) The authors have addressed a new issue. ii) They use the new diffusion model to complete missing data and denoise.

Weaknesses: i) The authors do not fully introduce other related data generation methods. ii) The experimental part lacks experimental results on some datasets. iii) There is no quantitative analysis of the quality of the generated data, nor is there any comparison with baseline methods.

Other Comments Or Suggestions: i) Other related data generation methods should be introduced to reflect the innovation of the proposed data generation method.
ii) Give complete experimental results. iii) Update the link to the code repository. iv) Provide quantitative analysis results of data generation methods and comparative experimental results of generation quality.

Questions For Authors: i) What is the difference between the proposed diffusion model-based data generation method and existing diffusion models?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: First and foremost, we sincerely thank you for pointing out the issues. Your suggestions are invaluable in enhancing the quality of this paper. Below are our answers to your questions.

1. Experimental results on some datasets are not reported.

Due to space constraints, data-noise results for JN2 and HZ2 were omitted from the main text. The partial snapshot below shows RobustLight's superior performance. Additional dataset results are included in the appendix and the code link.

### Advanced-CoLight, Noise Scale 3.5

| Dataset | Noise Type | Base | RobustLight |
|---|---|---|---|
| JN1 | MAD | 325.87 | **271.58** |
| JN1 | MinQ | 323.07 | **300.31** |
| JN2 | MAD | 405.37 | **359.03** |
| JN2 | MinQ | 399.87 | **372.84** |
| NY1 | Mask 25% | 1246.85 | **1086.77** |
| NY2 | Mask 25% | 1430.49 | **1290.79** |

2. Other related data generation methods are introduced to reflect the innovation of the proposed data generation method.

HINT [1] proposed using a Transformer combined with a GAN for data reconstruction, while CaPaint [2] employs diffusion for data generation. However, these methods require extensive offline data for pre-training and only address data missing, not data noise. DiffLight [3] tackles random missing and kriging missing but uses separate algorithms for each and underperforms SOTA on clean data. In contrast, our method employs a single model to simultaneously resolve data missing and noise, handling both random missing and kriging missing with a unified algorithm. Moreover, our approach acts as a plug-in, optimizing SOTA algorithms without altering their core structure.

[1] HINT: High-quality INpainting Transformer with Mask-Aware Encoding and Enhanced Attention
[2] Causal Deciphering and Inpainting in Spatio-Temporal Dynamics via Diffusion Model
[3] DiffLight: A Partial Rewards Conditioned Diffusion Model for Traffic Signal Control with Missing Data

3.
Missing baseline and quantitative analysis.

We fine-tuned the HINT model on an offline dataset and evaluated it on the TSC task, quantifying performance using PSNR (higher is better) and MAE (lower is better). Our algorithm surpasses HINT in data generation capability.

### Data Generation Performance Comparison

| Dataset | Method | PSNR | MAE | ATT |
|---|---|---|---|---|
| JN1 | HINT | 9.78 | 1.11 | 396.24 |
| JN1 | **RobustLight** | **10.46** | **0.66** | **298.35** |
| JN2 | HINT | 18.88 | 1.36 | 283.98 |
| JN2 | **RobustLight** | **24.07** | **1.20** | **259.56** |
| JN3 | HINT | 10.24 | 1.33 | 395.62 |
| JN3 | **RobustLight** | **11.71** | **1.15** | **301.63** |
| HZ1 | HINT | 11.90 | 0.88 | 371.74 |
| HZ1 | **RobustLight** | **13.90** | **1.01** | **328.26** |
| HZ2 | HINT | 5.35 | 0.76 | 384.67 |
| HZ2 | **RobustLight** | **6.53** | **0.85** | **375.64** |
| NY1 | HINT | 16.31 | 1.67 | 1189.56 |
| NY1 | **RobustLight** | **17.25** | **1.51** | **1086.77** |
| NY2 | HINT | 13.39 | 1.53 | 1394.96 |
| NY2 | **RobustLight** | **15.65** | **1.29** | **1290.79** |

To validate our approach, our setup involved randomly masking data with **Kriging Missing (12.5%)** and **Random Missing (12.5%)**. AMPR (Advanced-MaxPressure-RobustLight), ACR (Advanced-CoLight-RobustLight).

### Performance Compared with DiffLight

| Dataset | Method | Noise/Mask | PSNR | MAE | ATT |
|---|---|---|---|---|---|
| JN1 | DiffLight | U-rand | 6.71 | 6.26 | 310.92 |
| JN1 | **AMPR** | U-rand | **7.20** | **5.42** | **304.34** |
| HZ1 | DiffLight | U-rand | 6.85 | 7.93 | 361.31 |
| HZ1 | **AMPR** | U-rand | **7.71** | **5.12** | **297.34** |
| JN1 | DiffLight | 25% | 7.66 | 1.05 | 366.05 |
| JN1 | **ACR** | 25% | **9.34** | **0.89** | **304.13** |
| HZ1 | DiffLight | 25% | 18.05 | 1.84 | 372.53 |
| HZ1 | **ACR** | 25% | **22.96** | **1.17** | **306.56** |

4. The difference with other diffusion models.
Key modifications: (1) we use a new beta schedule; (2) we optimize with a new loss function; (3) we employ a new action-gradient method to improve handling of data noise and missing data.

5. New link: https://anonymous.4open.science/r/RobustLight-72B2/README.md

6. For more baseline results, refer to our response to reviewer qV9c and the code link.

7. Please refer to our responses to reviewers Jqz8 and ACd7 for our theoretical explanation based on Lemma 3.1.
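The PSNR and MAE metrics used in the generation-quality comparison above are standard; a minimal sketch of how they can be computed for a recovered traffic state (the choice of peak value for PSNR is an assumption, here taken as the maximum of the reference signal):

```python
import numpy as np

def mae(x, x_hat):
    """Mean absolute error between reference x and reconstruction x_hat."""
    return np.mean(np.abs(x - x_hat))

def psnr(x, x_hat, peak=None):
    """PSNR in dB; `peak` defaults to the max magnitude of the reference."""
    peak = np.max(np.abs(x)) if peak is None else peak
    mse = np.mean((x - x_hat) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: queue-length vector before and after reconstruction.
truth = np.array([3.0, 5.0, 2.0, 8.0])
recov = np.array([3.2, 4.7, 2.1, 7.8])
print(round(mae(truth, recov), 3))   # 0.2
print(round(psnr(truth, recov), 2))
```

Higher PSNR and lower MAE both indicate a reconstruction closer to the uncorrupted state, matching the "higher/lower is better" convention in the tables.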
Robust and Conjugate Spatio-Temporal Gaussian Processes
Accept (poster)
Summary: The authors combine ideas from recent work on robust Gaussian processes with filtering ideas used in (spatio)temporal Gaussian process regression. They use the temporal structure of the problem to set parameters in the robust Gaussian process framework proposed by Altamirano et al. 2024 (RCGP) in a sensible and automatic manner. They show that a computational speedup is possible using the filtering approach, allowing RCGPs to be scaled to spatio-temporal Gaussian processes with many time steps.

Claims And Evidence:

*Claim: The computational cost of the proposed method scales linearly in $N$ (made in paragraph "state-space formulation")*

This claim is partially supported. Scaling is shown to be linear in the number of time steps (but not in the number of spatial locations). This claim is later stated correctly, and shown in Proposition 3.1. The speed-up is also demonstrated in experiments. The authors should avoid making the claim that the method scales linearly in the total number of observations, but the broader claim that the method improves scalability as compared to robust alternatives is well-supported.

*Claim: Selection of the prior mean is important in RCGP. The proposed method improves selection of the prior mean using temporal structure in the problem.*

Both parts of the claim, that RCGP is sensitive to selection of the prior mean, and that the proposed method, ST-RCGP, reduces this sensitivity, seem well-supported by the figure and argument in the text. Figure 3 supports the first part of this claim. I found the second part of the claim slightly less well-supported. It would be useful, though I do not think essential, if the authors ran an experiment in which the centering function was adaptive, but all other parameters were kept as in RCGP to show that this does in fact reduce sensitivity. I didn't see such an experiment in the supplement.
If it is already there I would appreciate a pointer to it, and encourage the authors to reference it when discussing issue #1 in Section 2.

*Claim: ST-RCGP improves upon (frequentist) coverage issues with RCGP*

This claim is supported by the coverage plot in Figure 4. I think this claim has the least support. Consider including coverage plots for at least one other dataset, for example the temperature dataset considered. If this is not possible, could you explain why and consider including an additional simulation with spatial dimensions illustrating coverage properties of both methods?

Methods And Evaluation Criteria: The evaluation criteria and baselines make sense for the problem considered. Specific questions below:

*Figure 1 (description in Appendix C.4), "For both the STGP and ST-RCGP, we fit the data with the optimisation objective φ and use de-contaminated data (original data without outliers) for the objective."*

Why is decontaminated data used for selection of hyperparameters? Wouldn't it be more realistic to use the contaminated data?

*Timing in Figure 5, details in Appendix C.10: "The execution time is computed post-optimisation of each method, since we wish to capture execution time at inference. Also, to avoid caching and establish a fair comparison, each model has a second instance specifically for inference-making that hasn't observed data yet but has the optimised hyperparameters."*

I'm not convinced this is a meaningful timing comparison. It seems to me as though either optimization time should be included, as we are interested in the total cost of a procedure, or caching should be used, because we only care about prediction at some new points after an algorithm has been trained. I would suggest including optimization time, or presenting both. Or could you clarify why the current approach might be meaningful for someone interested in using these methods in practice?
*Appendix C.10*: Are the parameters presented the parameters selected after optimization, or the initialized values of the parameters?

Theoretical Claims: I looked over the proofs presented in Appendix B. Although I did not go through them in sufficient detail to be confident of correctness, I did not see glaring issues, and the results seem very reasonable.

Experimental Designs Or Analyses: I looked through the experimental design. Specific concerns were raised in a previous box on evaluation methods.

Supplementary Material: I reviewed Appendices A-C in varying amounts of detail.

Comment on Appendix A:
- In the notation section, the use of $v_i$ as both an element of $\mathbb{R}$ (when defining the vector $\mathbf{v}$) and a function from $\mathbb{R}^d \to \mathbb{R}$ is confusing. Consider at least stating that this is an abuse of notation. Or perhaps $\mathbf{v}$ is meant to always be a function from $\mathbb{R}^d \to \mathbb{R}^{N}$ (so that the expression involving gradients makes sense), in which case this should be stated. These aren't significant notational issues, but do place additional burden on the reader to understand what each object is.

Comment on Appendix C: Typo, backwards quotes in line 1197: ”close"

Relation To Broader Scientific Literature: Generally, discussion of prior work was good. There should be a clearer description of the contribution relative to the prior work Duran-Martin et al., 2024. In particular, what is new? It seems the methodology in your paper is based on combining the Kalman filtering approach used in spatial GPs with the robust GP method proposed in Altamirano et al. 2024. Initially I thought this was the main contribution. How does this compare to what was already done in Duran-Martin et al.? Should I really understand the main contribution of the paper to be the selection of better weighting and centering functions? I think either is a reasonable contribution for a conference paper, given the high quality of presentation.
But I'd like the scope of what is new, and how the paper builds on current work, to be a bit clearer.

Essential References Not Discussed: I am not aware of essential references that were missed.

Other Strengths And Weaknesses: Presentation in the paper is generally very good. Problems are well-motivated.

Other Comments Or Suggestions: I don't understand the caption of Figure 2. What is meant by "We emphasize..." and where can I see this in the figure?

Questions For Authors: How does the point in time at which outliers occur affect the sensitivity of your method? In particular, it seems like at the first time step your centering function doesn't improve on prior work, since it relies on the filtering estimate. Does this mean the proposed method struggles more with outliers at early time steps than at later time steps?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their careful consideration of our paper and helpful feedback. We address below the valuable suggestions made to further improve our work:

**Scaling is shown to be linear in the number of time steps (but not in the number of spatial locations).**

We agree with this concern. In the camera-ready version, we will replace the claim with "An alternative approach which has linear-in-time cost is to use a state-space representation."

**It would be useful (...) if the authors ran an experiment in which the centering function was adaptive, but all other parameters were kept as in RCGP to show that this does in fact reduce sensitivity.**

We agree that isolating the effect of the adaptive centering function would strengthen our claim regarding reduced sensitivity. We will add an experiment comparing ST-RCGP and RCGP where all parameters are kept as in RCGP apart from the centering function. We further develop it by also altering the shrinking function. This analysis can be found here: https://anonymous.4open.science/r/ST-RCGP-21DD/tests/RCGP-vs-ST-RCGP-centering-function-sensitivity-analysis.ipynb. We welcome any further feedback.

**Consider including coverage plots for at least one other dataset, for example the temperature dataset considered.**

We agree that having more datasets for which we demonstrate the (frequentist) coverage of ST-RCGP would help further strengthen our claim. We will include in the paper a coverage analysis of the ST-RCGP for the temperature dataset. This analysis can be found at https://anonymous.4open.science/r/ST-RCGP-21DD/experiments/weather-forecasting/model-fitting.ipynb (at the end of the notebook).

**Why is decontaminated data used for selection of hyperparameters? Wouldn't it be more realistic to use the contaminated data?**

Fitting the RCGP on contaminated data leads to poor estimates due to overfitting outliers during hyperparameter optimisation. Using decontaminated data improves RCGP results.
This choice thus strengthens our claim: even with this advantage, RCGP underperforms compared to ST-RCGP.

**I'm not convinced this is a meaningful timing comparison.**

We don't include optimisation time because it can be easily skewed by tampering with the optimisation procedure, especially for STGPs, which rely on gradient-based optimisation without clear stopping criteria, number of optimisation steps, learning rate, or even choice of optimiser. Either way, the two optimisation objectives (for STGP and ST-RCGP) are extremely similar, and their computational costs are nearly identical. Focusing on inference time is meaningful in online settings where one-step inference matters. For example, real-time applications like stock price estimation may preclude methods that iterate at each time step (e.g. variational STGP) due to time constraints, favoring closed-form solutions such as the Kalman filter, the robust filter from Durán-Martín et al. (2024), or the ST-RCGP.

**Appendix C.10: are the parameters presented the parameters selected after optimization, or the initialized values of parameters?**

The parameters presented are the ones post-optimization.

**There should be a more clear description of the contribution relative to the prior work Duran-Martin et al., 2024. In particular, what is new?**

The key differences are: 1) We deal with STGP problems such as hyperparameter optimisation and smoothing. 2) We carefully specify the weight function (see answer to vhF8 on IMQ) and its parameters to improve robustness (see the Weight Function paragraph on line 237), whereas Duran-Martin et al. (2024) provide few justifications for the centering and shrinking functions and learning rate, which are parameters intrinsically connected to robustness and crucial to improving performance. 3) Duran-Martin et al. (2024) use a weighted log-likelihood, while we use the weighted score-matching loss from Altamirano et al. (2024), leading to a different update step.
We will include in the paper, at line 226, "However, these methods do not deal with STGP problems such as hyperparameter optimisation and smoothing and do not investigate downweighting optimally."

**"I don't understand the caption of figure 2." / Comment on Appendix A about abuse of notation.**

We thank the reviewer for bringing this to our attention. We will remove this sentence from the caption in the camera-ready version, and make clear that we are abusing notation.

**How does the point in time at which outliers occur affect the sensitivity of your method?**

An empirical analysis we conducted, available at https://anonymous.4open.science/r/ST-RCGP-21DD/tests/outliers-early-vs-late.ipynb, demonstrates that the method is not affected by outlier position, which we expect to be because of smoothing.

Finally, we want to thank the reviewer again for the careful review and consideration given to our paper. We hope this rebuttal addresses any remaining questions and concerns.

---

Rebuttal Comment 1.1: Comment: The authors addressed my questions regarding the experiments. The additional notebooks provide useful illustrations of different parts of the model. I am maintaining my score.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for the thoughtful feedback and for mentioning the usefulness of the additional notebooks. We appreciate the decision to maintain the score.
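The closed-form, one-pass filtering that the rebuttal contrasts with iterative variational inference is the classical Kalman recursion. A minimal sketch of one predict-and-update step (the matrices and the 1-D random-walk example are illustrative, not the paper's model):

```python
import numpy as np

def kf_step(m, P, y, A, Q, H, R):
    """One Kalman filter predict + update step (closed form, no iteration)."""
    # Predict: propagate the state mean and covariance through the dynamics.
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Update: correct the prediction with the new observation y.
    S = H @ P_pred @ H.T + R          # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = P_pred - K @ S @ K.T
    return m_new, P_new

# 1-D random-walk example, filtering three noisy observations.
A = np.eye(1); Q = 0.1 * np.eye(1); H = np.eye(1); R = 0.5 * np.eye(1)
m, P = np.zeros(1), np.eye(1)
for y in [np.array([1.0]), np.array([1.2]), np.array([0.9])]:
    m, P = kf_step(m, P, y, A, Q, H, R)
print(m, P)
```

Each time step costs a fixed amount of linear algebra, which is why such recursions give the linear-in-time inference discussed above; a robust filter (e.g. ST-RCGP's) modifies the update step but keeps this structure.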
Summary: This paper introduces Spatio-Temporal Robust and Conjugate Gaussian Processes (ST-RCGPs), which are an extension of robust and conjugate Gaussian processes (RCGPs) [1]. ST-RCGPs leverage the state-space formulation of spatio-temporal GPs (STGPs) to achieve computational efficiency while maintaining the robustness properties of RCGPs. The paper also addresses three key limitations of RCGPs: sensitivity to prior mean misspecification, poor uncertainty quantification, and reliance on manual hyperparameter tuning. Robustness to outliers is established by showing that the posterior influence function is bounded. Experiments on synthetic, financial, and weather data compare ST-RCGP to STGP, RCGP, and other baselines in settings with outliers.

[1] Altamirano, M., Briol, F., Knoblauch, J. "Robust and Conjugate Gaussian Process Regression". ICML 2024.

---

## Update after rebuttal

The rebuttal has addressed my concerns and answered my questions. I have raised my score from 3 to 4.

Claims And Evidence: The paper claims that
- ST-RCGP is an "outlier-robust spatio-temporal GP with a computational cost comparable to classical spatio-temporal GPs"
- The robustness to outliers is demonstrated empirically (see e.g. Figures 4, 5, and 6, and Table 3) and theoretically (see Proposition 3.2). The improved computational efficiency is also demonstrated empirically (see e.g. Figure 5)
- ST-RCGP can "overcome the three main drawbacks of RCGPs: their unreliable performance when the prior mean is chosen poorly, their lack of reliable uncertainty quantification, and the need to carefully select a hyperparameter by hand."
- This claim is addressed in Section 3 by adapting the weight function throughout filtering steps, whereas RCGPs require it to be fixed.

Methods And Evaluation Criteria: In terms of methods, the main idea is to combine the existing RCGP framework with state-space GPs for improved computational efficiency in spatio-temporal domains, which seems sensible to me.
The fact that this allows for an adaptive weight function with additional benefits is great and also makes sense. In terms of evaluation, the experiments seem to be somewhat contrived but reasonable to test the model. For example, the outliers introduced in the temperature forecasting experiment seem to be kind of unrealistic. The quantitative evaluation uses RMSE, NLPD, and compute time as metrics, which can be considered standard in the literature.

Theoretical Claims:
- Proposition 3.1 derives the state-space formulation of RCGPs. The resulting equations seem reasonable and the proof is provided in Appendix B.1, although I did not check the latter carefully.
- Proposition 3.2 shows that the posterior influence function is bounded (in its first argument) to demonstrate theoretical robustness. The assumptions seem to be mild, but the result is also not very strong (just because a quantity is not infinite, it could still be really, or even arbitrarily, large...). The proof is provided in Appendix B.2, but I did not verify its correctness.

Experimental Designs Or Analyses: The paper presents four different experiments:
1. "Fixing vanilla RCGP" - This experiment compares ST-RCGP to RCGP on 1D synthetic data to demonstrate that ST-RCGP addresses the three issues outlined in Section 2 (sensitivity to prior mean, poor uncertainty quantification, selection of shrinking function necessary). The visualization (see Figure 4) looks convincing, although it is only a single example which could be selected on purpose.
2. "ST-RCGP in Well-Specified Settings" - This experiment compares ST-RCGPs to regular STGPs and RCGPs on synthetic data, demonstrating that ST-RCGP performs better than both other methods in the presence of outliers and matches performance otherwise. Since the data is synthetic, I have some concerns about how meaningful the results really are.
3. "Robustness During Financial Crashes" - This experiment is taken from [1] and [2].
It considers historical financial data and demonstrates that ST-RCGP is robust to a sudden crash which produces outliers in the time series. The experiment also demonstrates the superior computational efficiency of ST-RCGP compared to RCGP, which is obtained due to the state-space formulation. In a later part of this experiment, a larger version of the data with a synthetically induced crash is considered.

4. "Forecasting Temperature Across the UK" - This experiment focuses on temperature predictions and demonstrates that ST-RCGP performs well in the presence of outliers, whereas the forecasting of the regular STGP is strongly influenced by the outliers, which results in incorrect predictions. Again, the outliers are synthetically introduced.

While the presented results look quite strongly in favor of ST-RCGP, I am somewhat concerned about the amount of synthetic data used. Even the experiments involving real-world data have synthetically introduced outliers, which might not represent realistic scenarios.

[1] Altamirano, M., Briol, F., Knoblauch, J. "Robust and Conjugate Gaussian Process Regression". ICML 2024.
[2] Ament, S., Santorella, E., Eriksson, D., Letham, B., Balandat, M., Bakshy, E. "Robust Gaussian Processes via Relevance Pursuit". NeurIPS 2024.

Supplementary Material: Except for the appendix in the submitted manuscript, the submission does not contain any supplementary material. I have briefly reviewed some of the derivations and additional figures in the appendix. I also looked for sufficient information to reproduce the empirical results.

Relation To Broader Scientific Literature: The paper primarily combines [1] with a state-space formulation of GPs. The resulting update equations are somewhat related to the Kalman filter.
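The RMSE and NLPD metrics mentioned in the evaluation are standard for GP regression; a minimal sketch of both for Gaussian predictive marginals (the toy arrays are illustrative):

```python
import numpy as np

def rmse(y, mu):
    """Root mean squared error of predictive means mu against targets y."""
    return np.sqrt(np.mean((y - mu) ** 2))

def nlpd_gaussian(y, mu, var):
    """Mean negative log predictive density under N(mu, var) marginals."""
    return np.mean(0.5 * np.log(2 * np.pi * var) + (y - mu) ** 2 / (2 * var))

# Toy predictions: three test targets with Gaussian predictive marginals.
y   = np.array([0.1, -0.3, 0.5])
mu  = np.array([0.0,  0.0, 0.4])
var = np.array([0.2,  0.2, 0.2])
print(round(rmse(y, mu), 4))
print(round(nlpd_gaussian(y, mu, var), 4))
```

Unlike RMSE, NLPD penalises over-confident predictive variances, which is why it is the more informative metric when comparing the uncertainty quantification of robust and non-robust GPs.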
An important component of ST-RCGP (and also the original RCGP) is the inverse multi-quadratic kernel (IMQ) which is used to set the weights for each data point, such that outliers can (ideally) be assigned negligible contribution. In terms of the general goal of designing a robust GP, [2] is a recently published method with the same goal. [1] Altamirano, M., Briol, F., Knoblauch, J. "Robust and Conjugate Gaussian Process Regression". ICML 2024. [2] Ament, S., Santorella, E., Eriksson, D., Letham, B., Balandat, M., Bakshy, E. "Robust Gaussian Processes via Relevance Pursuit". NeurIPS 2024. Essential References Not Discussed: I am not aware of any essential references which were not discussed. Other Strengths And Weaknesses: Strengths: - Novel combination of robustness (RCGP) and efficiency (state-space formulation) - Provides theoretical argument for robustness (Proposition 3.2) - Thorough empirical evaluation on four different experiments Weaknesses: - The statement of Proposition 3.2 does not seem to be very strong - Experiments make heavy use of synthetic data Other Comments Or Suggestions: N/A Questions For Authors: 1. Do you have any additional experiments / empirical evidence which does not rely on synthetic data? If I understand correctly, only Figure 5 considers outliers from a real-world scenario. 2. Can you comment on any alternatives to the IMQ kernel for assigning the weights? This seems like a really important component for the method, yet the choice of IMQ seems somewhat arbitrary to me. Code Of Conduct: Affirmed. Overall Recommendation: 4
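To make the IMQ weighting discussed in this review concrete, here is a minimal sketch (my own illustration with an arbitrary centre `m` and scale `c`, not the authors' implementation): weights stay near 1 for points close to a reference value and decay towards 0 for gross outliers.

```python
def imq_weight(y, m, c):
    """Inverse multi-quadratic (IMQ) weight with centre m and scale c:
    approximately 1 for residuals much smaller than c, and decaying
    like c / |y - m| for large residuals, so outliers contribute little."""
    return (1.0 + ((y - m) / c) ** 2) ** -0.5

# An inlier near the centre keeps almost full weight,
# while a gross outlier is strongly downweighted.
print(imq_weight(0.1, 0.0, 1.0))   # ~0.995
print(imq_weight(50.0, 0.0, 1.0))  # ~0.02
```

The decay property (weights vanishing as $|y| \to \infty$) is exactly what bounds the influence of any single observation.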
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review and positive remarks on our empirical evaluation and the ST-RCGP’s novel combination of robustness and efficiency. We comment on the feedback below: **Since the data is synthetic, I have some concerns about how meaningful the results really are.** Robustness studies often inject outliers due to the lack of datasets with labelled outliers, and to allow controlled testing across varied outlier types (as was done in our work). However, we will add an experiment in the paper with the well-log dataset from [8, 11], which consists of 4,050 nuclear magnetic resonance measurements recorded while drilling a well. Outliers were identified scientifically and correspond to short-term events in geological history. The results can be found here: https://anonymous.4open.science/r/ST-RCGP-21DD/experiments/well-log-data/NMR-data-fit.pdf with the code in the same folder. **The statement of Proposition 3.2 does not seem to be very strong.** Whilst we empathise with this comment, the study of the influence function (IF) dates from the seminal work of Hampel in 1968 [1] and has been the gold standard in robust statistics ever since [2,3,4]. The typical approach to show robustness is to bound the influence function. While one could explicitly work out the bounds, there is no clear gain in doing so as the values of the KL divergence are not very interpretable. To prove the robustness of an algorithm, bounding the IF is then sufficient and remains one of the strongest (and in many cases the only) available criteria within the robust Bayesian literature [see e.g. 5,6,7]. **Can you comment on any alternatives to the IMQ kernel for assigning the weights?** We choose the IMQ for several reasons. To elaborate on those, in the final version, we will replace lines 237-243 with: “The statements of Propositions 3.1 and 3.2 impose some constraints on the choice of weight function.
In particular, it should be strictly positive and differentiable over its domain to ensure quantities in Proposition 3.1 are well-defined, and it must have a bounded supremum to satisfy Proposition 3.2. Moreover, to avoid attributing weight to arbitrarily large $y\in \mathbb{R}$, we require $\lim_{|y| \rightarrow \infty}w(\mathbf{x}, y) = 0$ (decaying property). These requirements make the IMQ an appropriate choice of weight function, in addition to the fact that it has been well-studied and recommended in prior literature [6, 9, 10, 11]. We further justify our choice on the basis that the IMQ hyper-parameters $\gamma$, $\beta$ and $c$ are related to concepts of robustness—relations we exploit to specify the IMQ. In particular, to select $\gamma$, $\beta$ and $c$, we follow four guiding principles; we want to: …” We acknowledge the need for a sensitivity analysis to understand how the shape of the weight function affects ST-RCGP's posterior estimates. In the camera-ready version, we will examine how varying the IMQ exponent—currently $\alpha:=-1/2$—impacts results. We conduct this analysis because Proposition 3.2 shows robustness requires weights to decay faster than $1/\sqrt{|y|}$, i.e., $\alpha<-1/4$; but overly fast decay can reduce statistical efficiency (overly robust)—highlighting a tradeoff worth exploring. The result of this analysis can be found at https://anonymous.4open.science/r/ST-RCGP-21DD/tests/IMQ-sensitivity/IMQ-exponent-testing.pdf, with the code in the same folder. Thank you again for the thorough review. We hope this rebuttal addresses any remaining concerns and contributes to a stronger evaluation of our paper. [1] Hampel, Frank Rudolf. Contributions to the theory of robust estimation. 1968. [2] Huber, Peter J., and Elvezio M. Ronchetti. Robust statistics. 2011. [3] Maronna, Ricardo A., et al. Robust statistics: theory and methods (with R). 2019. [4] Hampel, Frank et al. Robust statistics: The approach based on influence functions. 1987. [5] Ghosh, A. and Basu, A.
Robust Bayes estimation using the density power divergence. Annals of the Institute of Statistical Mathematics, 68(2), 2016. [6] Matsubara, T., Knoblauch, J., Briol, F.-X., and Oates, C. J. Robust generalised Bayesian inference for intractable likelihoods. Journal of the Royal Statistical Society: Series B (Statistical Methodology). 2022. [7] Duran-Martin, Gerardo, et al. "Outlier-robust Kalman filtering through generalised Bayes." 2024. [8] Altamirano, Matias, François-Xavier Briol, and Jeremias Knoblauch. "Robust and scalable Bayesian online changepoint detection." International Conference on Machine Learning. PMLR, 2023. [9] Chen, Wilson Ye, et al. "Stein point markov chain monte carlo." International Conference on Machine Learning. PMLR, 2019. [10] Riabiz, Marina, et al. "Optimal thinning of MCMC output." Journal of the Royal Statistical Society Series B: Statistical Methodology 84.4 (2022). [11] Ruanaidh, J. J. O. and Fitzgerald, W. J. Numerical Bayesian methods applied to signal processing. Springer Science & Business Media, 1996. --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my questions, and for providing additional explanations and empirical evidence. I have raised my score from 3 to 4. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the thoughtful feedback and for acknowledging the additional explanations and empirical evidence, as well as for raising the score from 3 to 4.
Summary: This paper introduces a methodology for spatio-temporal Gaussian Processes based on a state-space model and generalized Bayesian inference. Building on the robust and conjugate Gaussian processes (RCGPs) framework, it addresses and overcomes its key limitations, enhancing both robustness and computational efficiency. Claims And Evidence: The claims in this submission are well-supported by rigorous mathematical proofs and empirical experiments. Methods And Evaluation Criteria: The proposed methods are evaluated against both standard Gaussian Processes and the state-of-the-art RCGP framework, using multiple datasets and diverse evaluation metrics. Theoretical Claims: Overall looks good to me. Experimental Designs Or Analyses: The experimental design looks good to me. Supplementary Material: Yes. The proof part. Relation To Broader Scientific Literature: This paper extends the setting of previous work to spatio-temporal modeling by incorporating a state-space formulation, as discussed in prior literature. This formulation addresses three key limitations of the earlier approach. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: - Provides strong theoretical results. - Evaluates the proposed ST-RCGP method across multiple experiments using various metrics. Weaknesses: - The baseline models used for comparison vary across experiments due to data characteristics. It would be beneficial for the authors to include an additional state-of-the-art model beyond RCGP to better assess the overall effectiveness of the proposed method. Other Comments Or Suggestions: None. Questions For Authors: See above comments. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and are glad that our theoretical results and the experimentation on our proposed method were appreciated. It has been mentioned that: **“The baseline models used for comparison vary across experiments due to data characteristics. It would be beneficial for the authors to include an additional state-of-the-art model beyond RCGP to better assess the overall effectiveness of the proposed method.”** We agree that it would be beneficial to illustrate the fit of additional methods apart from RCGP. For this reason, we will include in the paper the following plot: https://anonymous.4open.science/r/ST-RCGP-21DD/experiments/financial-applications/high-frequeny-data/HFT-data-induced-crash-fit.pdf, which was produced by code available here: https://anonymous.4open.science/r/ST-RCGP-21DD/experiments/financial-applications/high-frequeny-data/HFT-index-futures-speed-comp.ipynb. This plot compares ST-RCGP against some of the state-of-the-art methods offered by the “BayesNewton” package and is an extension of Table 2. Again, we are grateful for the consideration given to our paper and hope our answer satisfies the concern raised.
Summary: This paper expands the robust and conjugate GP framework to what it refers to as spatiotemporal GPs (sometimes referred to in other places as Markovian GPs, linear time GPs, or state space GPs). This is achieved by a generalized Bayes filtering solution, somewhat similar to other recent works on sequential generalized Bayes. Additionally, by considering the sequential nature of generalized Bayesian filtering, the authors are able to address some difficulties in "vanilla" robust and conjugate GPs that arise from finding appropriate weighting functions. The proposed method is tested on a number of synthetic and real-world datasets to show its effectiveness, both in spatiotemporal and 1-dimensional settings. ## Update after rebuttal The authors have clearly answered my questions, softening their claims and pointing to related work where appropriate. In addition, I find the additional notebooks provided to other reviewers in the rebuttal process nice. I thus raised my score from a 3 to a 4 post-rebuttal. Claims And Evidence: The article makes a few main claims. The first is that the proposed generalized Bayes filtering solution is indeed a realization of the robust and conjugate GP framework. In particular, based on the chosen hyperparameters, it is claimed that the solution can be made robust (in the Huber sense), and that the vanilla GP can be recovered. A secondary set of claims states that several issues with vanilla RCGPs are ameliorated with some proposed changes to the weighting function. I find both of these sets of claims to be well-evidenced. The only claim I find problematic is that the method "provides inferences that are comparable to state-of-the-art non-Gaussian STGPs in the presence of outliers, but at a fraction of the cost": Hamelijnck et al. show that their "parallel" formulation provides GPU implementations of MVGPs that are potentially orders of magnitude faster than the sequential formulation on CPU (cf.
Figure 4 in the Supplementary Material of [1]). To my knowledge, this formulation would not be possible with RC STGPs; the reviews indicate that all experiments are on CPU, in which case I find this claim a bit too general. I also have some minor technical comments (see the points in "Other Comments or Suggestions" below). Methods And Evaluation Criteria: I believe the proposed methods and evaluation criteria make sense for the problem at hand. My only technical comment is regarding the EWR: it is stated that "since the STGP is not robust to outliers, the closer EWR is to one, the less robust our posterior is to outliers, and vice-versa." I don't think this statement is strictly accurate. For example, if one downweights in the "wrong" locations, robustness actually decreases along with the EWR. Perhaps a more correct statement is just that EWRs that are near one are not necessarily optimal, since an EWR of 1 necessarily implies a solution which is not robust. I understand the sentiment of this statement, however, and find it an intuitive and appropriate measure for experiments. Theoretical Claims: I have checked Proposition 3.1 carefully. My only concern is that the proof depends on a particular choice of $\mathcal{L}$, which is only briefly mentioned in the background section. I think it could avoid possible confusion to more clearly state that Proposition 3.1 assumes the modified Fisher divergence. I went through Proposition 3.2 more briefly, but did not spot any issues. It's not entirely clear to me how smoothing works (asked below). Experimental Designs Or Analyses: I believe the experimental design and analysis are sound, besides the potential omission of the "parallel" variational STGPs as mentioned above. Supplementary Material: I reviewed the appendices. They are generally well-written, and provide valuable intuition on the benefits of ST-RCGPs over RCGPs.
The code appears readable and correct (though only implemented for Matérn processes, as far as I can tell). Relation To Broader Scientific Literature: Robust and scalable solutions to GPs are increasingly important. The submission does a good job of highlighting its relation to two main subsets of the GP literature: robust solutions -- particularly through generalized Bayes (such as RCGP), and scalable spatiotemporal solutions through state-space representations. Besides approximate or non-conjugate GPs, there is a small body of work using $3\sigma$-rejection-esque filters for Markovian GPs that are not cited [2, 3]; these can be seen as creating a more naive (and degenerate) weighting function. Importantly, both of these works are "adaptive" like the ST-RCGP, in the sense that the current posterior predictive moments are used sequentially to detect anomalies/outliers. Essential References Not Discussed: Besides approximate or non-conjugate GPs, there is a small body of work using $3\sigma$-rejection-esque filters for Markovian GPs that are not cited [2, 3]; these can be seen as creating a more naive (and degenerate) weighting function. Importantly, both of these works are "adaptive" like the ST-RCGP, in the sense that the current posterior predictive moments are used sequentially to detect anomalies/outliers. Other Strengths And Weaknesses: ### Strengths - The paper is generally well-written and easy to follow. ### Weaknesses - The paper might be seen as a straightforward combination of two existing ideas: STGPs and generalized Bayes filters. However, the benefits of adaptive weight centering and the thoroughness of experiments outweigh this weakness, in my opinion. Other Comments Or Suggestions: I have a few minor editorial comments: 1. (Line 082, Right Column) It is stated that "If standard GPs were used here, the cost would be $\mathcal{O}(n_t^3 n_s^3)$, which would be impractical," which I find unnecessarily strong.
I suggest that this "may become impractical," instead. 2. (Line 224, Right Column) I think this should read "accomplishing the purpose of [principle] 3". 3. (Section 4) I understand that space constraints are an issue, but I think it would improve the exposition to motivate the EWR in a sentence or two, rather than just pointing the reader to a technical definition in the appendix. And a small technical comment: 1. (Line 087, Right Column) It is written "assuming a stationary and separable kernel [...]" but stationarity is not sufficient for representation as an LTI SDE (e.g., the squared exponential kernel). Notably, however, any stationary kernel can be approximated arbitrarily well (cf. [4]). Questions For Authors: 1. At several points, the computation-aware RCGP is mentioned. [5] also takes a similar step in proposing computation-aware STGPs. Are there obvious technical barriers in also combining these works, i.e., having computation-aware ST-RCGPs? 2. How does the smoothing work? I see that there are smoothing solutions in the code, but it is not explained in the text. Does one obtain the "correct" smoothing solution by simply using the stored GB filtering statistics and applying RTS smoothing naively? ### References [1] Hamelijnck, Oliver, et al. "Spatio-temporal variational Gaussian processes." Advances in Neural Information Processing Systems 34 (2021): 23621-23633. [2] Bock, Christian, et al. "Online time series anomaly detection with state space Gaussian processes." arXiv preprint arXiv:2201.06763 (2022). [3] Waxman, Daniel, and Petar M. Djurić. "A Gaussian Process-based Streaming Algorithm for Prediction of Time Series With Regimes and Outliers." 2024 27th International Conference on Information Fusion (FUSION). IEEE, 2024. [4] Loper, Jackson, et al. "A general linear-time inference method for Gaussian Processes on one dimension." Journal of Machine Learning Research 22.234 (2021): 1-36. [5] Pförtner, Marvin, et al.
"Computation-aware Kalman filtering and smoothing." _arXiv preprint arXiv:2405.08971_ (2024). Code Of Conduct: Affirmed. Overall Recommendation: 4
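To illustrate the kind of weighted filtering update this review discusses, here is a rough sketch of the general idea for a 1D random-walk model (my own illustration, not the paper's exact ST-RCGP update): downweighting an observation can be realized by inflating its effective noise variance, so a weight near zero behaves like a soft version of the $3\sigma$ rejection used in [2, 3].

```python
def weighted_kalman_filter(ys, weights, q=0.1, r=0.5, m0=0.0, p0=1.0):
    """1D random-walk Kalman filter in which each observation's noise
    variance r is inflated by 1 / w**2; a weight near zero effectively
    skips the update, a soft analogue of 3-sigma rejection."""
    m, p = m0, p0
    means = []
    for y, w in zip(ys, weights):
        p = p + q                        # predict step
        r_eff = r / max(w, 1e-12) ** 2   # downweight via inflated noise
        k = p / (p + r_eff)              # Kalman gain
        m = m + k * (y - m)              # update step
        p = (1.0 - k) * p
        means.append(m)
    return means

ys = [0.0, 0.0, 100.0, 0.0, 0.0]          # spike at t = 2 is an outlier
robust = weighted_kalman_filter(ys, [1, 1, 0.001, 1, 1])
naive = weighted_kalman_filter(ys, [1, 1, 1, 1, 1])
print(robust[2], naive[2])  # the weighted filter barely moves; the naive one jumps
```

The adaptive-weighting question in the review then amounts to how such weights are chosen online from the current predictive moments.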
Rebuttal 1: Rebuttal: We appreciate the thoughtful feedback and the expertise on the matter. We are glad you found the paper clearly written and easy to follow. We comment on the feedback below: **1) “provides inferences that are comparable to state-of-the-art non-Gaussian STGPs in the presence of outliers, but at a fraction of the cost" (...) I find this claim a bit too general.** We acknowledge that our claim is too broad and will adjust it as follows: “The ST-RCGP provides inferences comparable to state-of-the-art non-Gaussian STGPs with a computational cost similar to classical STGPs.” Additionally, we believe further accelerating the ST-RCGP is an exciting direction for future work, so we will replace lines 422-426 (right side) with: “Our method builds on classical STGPs, which remain vulnerable to scaling in the spatial dimension and are not optimal if parallel computing is available. These issues were addressed by Hamelijnck et al. (2021) through variational approximations and parallel-scan algorithms (see also [1]), and could be adapted to ST-RCGPs, providing the same benefit.” **2) Regarding the EWR: it is stated that "since the STGP is not robust to outliers, the closer EWR is to one, the less robust our posterior is to outliers, and vice-versa." I don't think this statement is strictly accurate** We agree and will replace the highlighted sentence with: “Therefore, since the STGP is not robust to outliers, EWRs that are near one are not necessarily optimal, since an EWR of 1 implies a solution—the vanilla STGP—which is not robust." **3) I think it could avoid possible confusion to more clearly state that Proposition 3.1 assumes the modified Fisher divergence.** Thank you for raising this point. 
We will ensure to make this clearer in the camera-ready version by replacing the current text in Proposition 3.1 with: “Then, we obtain a (generalised) posterior (…) with the score-matching loss $\mathcal{L}$ given by X for Y” where X will be replaced by the loss as provided in lines 674-675, and Y the mathematical objects used in the loss, which are defined on line 677. **4) There is a small body of work using 3σ-rejection-esque filters for Markovian GPs that are not cited.** ​​Thank you for bringing this to our attention. We will include the following statement after the last sentence of the paragraph on line 066: “We also highlight a small body of work that uses outlier-rejection Kalman filters with STGPs to improve robustness in inference tasks such as prediction and anomaly detection. While these methods offer robustness at lower computational cost compared to non-conjugate STGPs, they are generally considered less expressive and not as strongly supported by theoretical foundations.” **5) minor editorial comments** We agree with these editorial comments and will make the necessary changes in the final version. **6) It is written "assuming a stationary and separable kernel (...)" but stationarity is not sufficient for representation as an LTI SDE (e.g., the squared exponential kernel).** Thank you for highlighting this. We will update the final version accordingly. **7) Are there obvious technical barriers in also combining these works, i.e., having computation-aware ST-RCGPs?** We believe there should be no fundamental technical barriers to combining these approaches because the filtering update in ST-RCGP closely mirrors that of the standard STGP. Given this structural similarity, the main arguments presented by Pförtner, Marvin, et al (2024) [2] should remain applicable. 
**8) How does the smoothing work?** The generalised Bayes approach used on the filtering distribution produces a Gaussian posterior, and thus, the resulting smoothing distribution remains Gaussian. Therefore, as for classical STGPs, we can apply RTS smoothing using the GB filtering posterior. To clarify ambiguities about smoothing, the above explanation will be added in the paragraph on lines 224-235 (Left side, Methodology section). Finally, we want to thank the reviewer again for the careful consideration given to our paper. We hope we have responded to any remaining questions or concerns in this rebuttal. **References** [1] Särkkä, Simo, and Ángel F. García-Fernández. "Temporal parallelisation of Bayesian smoothers." IEEE Transactions on Automatic Control 66.1 (2020): 299-306. [2] Pförtner, Marvin, et al. "Computation-aware Kalman filtering and smoothing." arXiv preprint arXiv:2405.08971 (2024). [3] Bock, Christian, et al. "Online time series anomaly detection with state space Gaussian processes." arXiv preprint arXiv:2201.06763 (2022). [4] Waxman, Daniel, and Petar M. Djurić. "A Gaussian Process-based Streaming Algorithm for Prediction of Time Series With Regimes and Outliers." 2024 27th International Conference on Information Fusion (FUSION). IEEE, 2024. --- Rebuttal Comment 1.1: Comment: Thanks for the helpful reply. I have two remaining points before raising my score: > We acknowledge that our claim is too broad and will adjust it as follows: [...] Additionally, we believe further accelerating the ST-RCGP is an exciting direction for future work, so we will replace lines 422-426 (right side) with: “[...] and parallel-scan algorithms (see also [1]), and could be adapted to ST-RCGPs, providing the same benefit.” Thanks, I think the new claim is more appropriate. 
However, it is not obvious to me that parallel scans could be applied, at least not with the proposed weighting function -- parallel scans require associativity, and their application in Bayesian smoothing requires some quantities to be precomputed. Does the proposed (sequential) weighting not disrupt associativity/the ability to precompute all relevant quantities? I more or less agree that this would be possible using the original RCGP weighting function, but a core contribution of the current work is that this weighting function is problematic. > The generalised Bayes approach used on the filtering distribution produces a Gaussian posterior, and thus, the resulting smoothing distribution remains Gaussian. Therefore, as for classical STGPs, we can apply RTS smoothing using the GB filtering posterior. To clarify ambiguities about smoothing, the above explanation will be added in the paragraph on lines 224-235 (Left side, Methodology section). Thank you for the clarification. I understand that the filtering distribution is Gaussian and therefore RTS smoothing can be applied -- my question was moreso about the result. In particular, the paper claims that the smoothing solution would match the RCGP batch solution, but this isn't a priori obvious to me. Notably, I think this is different than the (still useful) claim that the filtering solution is correct and that the corresponding smoothing solution via RTS smoothing is robust/effective, which I do agree is well supported theoretically and experimentally. --- Reply to Comment 1.1.1: Comment: **1. Parallel-scan algorithms** Thank you very much for raising this point, which gave us the opportunity to dig deeper into this paper. It is not currently obvious to us that the sequential weights would disrupt associativity as used in this paper, although our limited expertise in this area means we have not been able to check this formally within the very small period of time available for this rebuttal. 
Since we are only mentioning parallel-scan algorithms as an interesting area of future work, we will instead temper our claim further in the conclusion as follows: “Our method builds on classical STGPs, which remain vulnerable to scaling in the spatial dimension and could be addressed similarly to Hamelijnck et al. (2021) through variational approximations. Furthermore, our method may be computationally sub-optimal if parallel computing is available, in which case parallel-scan algorithms (see also [1]) could be interesting to adapt to ST-RCGPs if possible”. **2. Smoothing Solution** Thank you, this is once again a really interesting point that we will clarify further in the final version of the paper. The batch implementation (as in standard RCGPs) and the sequential implementation (through Kalman filtering/smoothing as done in this paper) do lead to the same filtering and smoothing distributions assuming we use fixed (rather than adaptive) weights. We will add a proposition formally proving this in the camera-ready version of the paper. The gist of the argument is that if you take the smoothing distribution as expressed in Theorem 8.1 of [1] (In the “Bayesian smoothing equations” section of the 2013 version of the book), you can show inductively that each quantity in the expression is the same for the ST-RCGP and the RCGP, so long as a few conditions are met, such as the prior distribution and weight function being specified identically. Notably, this also requires the filtering distribution to match, which we will show in the added proposition. This being said, the reviewer is right that the adaptive weighting function proposed in this paper will result in two different distributions. We will clarify this point in the camera-ready version. To conclude, we would like to once again thank the reviewer for an engaging and in-depth discussion of our work that has further improved the manuscript. **References** [1] Simo Särkkä (2013). 
Bayesian Filtering and Smoothing. Cambridge University Press.
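As a pointer for readers following the smoothing discussion above, the RTS backward pass can be sketched for a 1D random-walk model as follows (my simplified illustration, assuming the filtering means and variances have already been computed; not the authors' code). Because the generalised-Bayes filtering posterior is Gaussian, the classical backward recursion applies unchanged:

```python
def rts_smoother(filt_means, filt_vars, q=0.1):
    """Rauch-Tung-Striebel backward pass for a 1D random-walk model
    (identity transition, process-noise variance q), run on top of a
    Gaussian filtering posterior such as the GB filtering output."""
    sm, sv = list(filt_means), list(filt_vars)
    for t in range(len(sm) - 2, -1, -1):
        p_pred = filt_vars[t] + q      # one-step-ahead predicted variance
        c = filt_vars[t] / p_pred      # smoother gain
        sm[t] = filt_means[t] + c * (sm[t + 1] - filt_means[t])
        sv[t] = filt_vars[t] + c * c * (sv[t + 1] - p_pred)
    return sm, sv

means, variances = rts_smoother([0.0, 1.0, 2.0], [1.0, 1.0, 1.0])
print(means, variances)  # smoothing tightens the variances of earlier steps
```

With adaptive weights, as clarified in the rebuttal, the sequential and batch solutions can differ even though this recursion itself is unchanged.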
GaussMark: A Practical Approach for Structural Watermarking of Language Models
Accept (poster)
Summary: This paper proposes a watermarking scheme for large language models. The watermark injection process has almost negligible overhead and applies to LLMs of any architecture. The authors observe that a small perturbation to the parameters of an LLM does not significantly affect the model's performance. Based on this, they inject the watermark into the model's weights. Specifically, in the injection phase, a watermark is sampled from a Gaussian distribution and added to the parameters of the model (similar to model merging). In the detection phase, based on hypothesis testing, the watermark is detected by computing a statistic involving the gradients with respect to the watermarked parameters. The paper provides substantial theoretical analysis, and experiments show that this watermark has advantages in detection success rate, impact on text generation, and robustness to attacks. ## Update after rebuttal The concern that "the effect of increased noise on model performance is not provable" has not been resolved, so I maintain the original score. Claims And Evidence: The core innovation of the paper and the analysis of experimental conclusions are clear. Methods And Evaluation Criteria: The paper uses multiple datasets to empirically demonstrate the negligible influence on the original model. However, this evaluation method is ultimately limited; for example, the effect of the added parameter noise on the safety alignment of the model is not considered. Theoretical Claims: The reviewer has checked the correctness of the core theoretical claims of the paper. Experimental Designs Or Analyses: The reviewer has checked the soundness and validity of the experimental designs. The experimental setup and analyses are reasonable. Supplementary Material: The reviewer has reviewed the robustness evaluation in the appendix.
Relation To Broader Scientific Literature: The paper introduces the related work in detail, and the proposed method is novel. Essential References Not Discussed: The discussion on the robustness of the LM watermark is necessary, and diverse adversarial attacks against the watermark can be found in [1]. - [1] Liu, et al. On Evaluating The Performance of Watermarked Machine-Generated Texts Under Adversarial Attacks. Other Strengths And Weaknesses: Strengths: - The proposed watermark algorithm is simple to implement. Watermark injection and detection do not require much memory or computational overhead. - Rich theory proving the correctness of the detection statistics. - The redundancy of weight information is used to add the watermark directly to the weights without forcing token selection, which maximizes the retention of linguistic characteristics. Experiments also confirm this. Weaknesses: - The reviewer's biggest concern is that the effect of the added noise on model performance is not provable, so there is a possibility, though with a very low probability, of damaging model performance. - The choice of watermark parameters and the location of the watermark injection rely on empirical experience, so extensive experiments are needed to demonstrate that this experience generalizes, including more models and more types of datasets, such as translation and code. - More types of attacks are needed to test the robustness of the watermark, such as emoji attacks or document-level attacks that copy the text into large amounts of human-written text. Other Comments Or Suggestions: - Detailed discussion and analysis are needed to show that small perturbations cannot destroy the safety alignment of the model. - In the watermark detection phase, it is necessary to obtain the user's prompt, which is almost impossible in practice due to privacy and timeliness constraints. Although the experiments consider the case of prompt corruption, this is still under the condition of knowing the real prompt.
- The watermark detection algorithm and the examples are all based on single-round dialogue scenarios. However, in actual situations, users often gradually guide the model over multiple rounds of dialogue. In this case, how can the watermark be detected? For example: - user: please introduce LLM safety. - assistant: LLM safety includes ... (1000 tokens) - user: thank you, please summarize in one paragraph. - assistant: ok, ... (400 tokens) - Typos: - The title of Figure 13-b does not match its horizontal axis. - Should the horizontal and vertical axes in Figure 5 be TPR and FPR? - The colors of low p-values in Figures 9 and 10 should be consistent. Questions For Authors: Please see the comments and weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 2
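The hypothesis-testing idea described in the summary above can be illustrated with a toy simulation (my own sketch under simplifying assumptions; GaussMark's actual statistic is computed from the model's log-likelihood gradient, and the names here are hypothetical). Under the null hypothesis, i.e., text generated independently of the watermark, rotational invariance of the Gaussian makes the normalized correlation between the watermark and any fixed gradient vector exactly standard normal:

```python
import math
import random

def detection_stat(w, g, sigma):
    """Normalized inner product <w, g> / (sigma * ||g||). If the watermark
    w ~ N(0, sigma^2 I) is drawn independently of g, rotational invariance
    of the Gaussian makes this statistic exactly N(0, 1)."""
    dot = sum(wi * gi for wi, gi in zip(w, g))
    norm_g = math.sqrt(sum(gi * gi for gi in g))
    return dot / (sigma * norm_g)

random.seed(0)
sigma, d = 0.05, 256
g = [random.gauss(0.0, 1.0) for _ in range(d)]        # a fixed "gradient" vector
stats = [detection_stat([random.gauss(0.0, sigma) for _ in range(d)], g, sigma)
         for _ in range(2000)]
mean = sum(stats) / len(stats)
var = sum((s - mean) ** 2 for s in stats) / len(stats)
print(round(mean, 2), round(var, 2))  # close to 0 and 1 under the null
```

Detecting watermarked text then reduces to checking whether this statistic is improbably large under N(0, 1), which is what gives the scheme its p-value guarantees.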
Rebuttal 1: Rebuttal: We thank the reviewer for their careful attention to our work. ## Model Quality: Further benchmarks and safety We agree that more benchmarks, such as translation or code, would be better and we acknowledge in lines 407-408 2nd column that such checks are necessary before deployment. We also agree that safety benchmarks would be beneficial to consider. On the other hand, note that the suite of evaluations we did perform approximately matches or exceeds much of the empirical watermarking literature and the realities of resource constraints surrounding compute limit our ability to conduct a wider sweep. We agree that considering the effect of GaussMark on safety is an interesting direction for future research. ## Further corruption attacks and detection of multi-round conversations Regarding further corruption attacks, we agree that more sophisticated attacks would be interesting to study, but again note that the set of attacks we did investigate is fairly large relative to many of the other works proposing watermarking schemes. In particular, emoji attacks amount to structured token-level insertions and are likely less able to damage GaussMark than the random insertions we did investigate. While certainly more sophisticated document-level attacks are relevant to the investigation, note that we simulate a weaker version of this attack by adding random text to the beginning of the examined corpus. Further experiments are, of course, always better, but are subject to the same resource constraints mentioned above. On the subject of multi-turn dialogue, we believe that this is partially addressed by our discussion in lines 376-381 2nd column, where we mention that ignorance of the prompt does not substantially affect our ability to detect the watermark. For more detailed discussion of our robustness experiments, please see Appendix F. ## Typos Thank you for catching these; we will fix them in the revision.
Summary: This paper introduces GaussMark, a novel watermarking scheme for language models. The approach involves adding small Gaussian perturbations to a single MLP layer during generation and using statistical tests based on Gaussian independence to detect watermarked text. The watermarking scheme comes with formal statistical guarantees and is efficient to implement in practice. The authors test GaussMark on Llama3.1-8B, Mistral-7B, and Phi3.5-Mini, showing it can effectively watermark text while maintaining performance on the SuperGLUE, GSM-8K, and AlpacaEval-2.0 benchmarks. Claims And Evidence: The paper's claims are generally well-supported by both theoretical analysis and empirical evidence. - Statistical validity: The authors provide formal proofs about the power of the GaussMark test under an assumption that the language model can be approximated as a linear softmax model. - Minimal impact on generation quality: The empirical evaluations on SuperGLUE, GSM-8K, and AlpacaEval-2.0 convincingly demonstrate that GaussMark has minimal impact on model performance. - Efficient implementation: The timing measurements clearly show that GaussMark has negligible impact on generation latency and reasonable detection time, outperforming alternative approaches like KGW by orders of magnitude in generation time. - Robustness to corruptions: The paper presents evidence that GaussMark is somewhat robust to token-level corruptions and roundtrip translation, though it is less robust than some alternative approaches (e.g., KGW-2). Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate and very comprehensive. - The evaluation across multiple models (Llama3.1-8B, Mistral-7B, Phi3.5-Mini) provides good coverage of different model sizes and architectures. - The benchmarks (SuperGLUE, GSM-8K, AlpacaEval-2.0) appropriately assess different aspects of model quality. 
- The corruption experiments (token insertion/deletion/substitution, roundtrip translation) are comprehensive and test realistic scenarios. Theoretical Claims: I checked the high-level proof ideas for the main theoretical claims in the paper, Propositions 3.1 and 3.3. The theoretical claims in the paper appear sound as far as I can tell and use standard statistical techniques. - Proposition 3.1 (statistical validity) uses the rotational invariance of Gaussian distributions to show that the test statistic follows a standard normal distribution under the null hypothesis. - Proposition 3.3 (power bound) derives bounds on the statistical power under the linear softmax model assumption. The authors are careful to state the details of their assumptions and Lemma I.6 justifies why this assumption may be reasonable in practice. Experimental Designs Or Analyses: The experimental designs are sound and comprehensive. - The authors use standard benchmarks and datasets (C4 for generation, SuperGLUE, GSM-8K, AlpacaEval-2.0 for evaluation). - Ablation studies thoroughly explore the effects of different hyperparameters. - Error bounds and multiple seeds are used to account for stochasticity. - Corruption experiments (token insertion/deletion/substitution, roundtrip translation) are comprehensive and systematically test different types and levels of text modification. One minor limitation is that the generation evaluation could benefit from more diverse text domains beyond just C4. Supplementary Material: I reviewed the whole appendix and paid specific attention to the mathematical proofs in Appendix I and the additional experiments and ablation studies in Appendix D-H. The theory and experimental sections are very comprehensive. Relation To Broader Scientific Literature: The paper is well-situated within the watermarking literature for language models. 
The authors situate their work clearly within the landscape of prior work and clearly identify their novel contributions: - Unlike token-level approaches like Kirchenbauer et al. (2023) and Kuditipudi et al. (2023), the proposed method makes edits to (few) model parameters. - The authors mention drawing inspiration from work on model weight manipulation, including model soups (Li et al., 2023; Sun et al., 2023; Fernandez et al., 2024; Sharma et al., 2024). Essential References Not Discussed: I am not aware of other key references that are not discussed. Other Strengths And Weaknesses: Other strengths: - The paper is comprehensive and clearly written. - The method is practical, with little computational overhead, and easily deployable, requiring no modifications to inference pipelines. Weaknesses: - The method is not as robust to more sophisticated attacks (e.g. roundtrip translation). Other attacks, e.g. targeted or non-consecutive paraphrasing could be more thoroughly explored. - The paper could benefit from more detailed discussion about the effect of the choice of model parameters to be perturbed, see questions. - The method may not be as applicable to smaller models, where weight perturbations may have larger effects. Other Comments Or Suggestions: I think the paper is generally very well written. One minor suggestion is that in my opinion the main text could benefit from a slightly more thorough description of the method hyperparameters $\sigma$ and $\theta$, see questions. Questions For Authors: 1. How valid is the linear softmax assumption for models that differ from the standard Transformer architecture, e.g. convolutional long-context models or models that combine global and local operations (e.g. sliding window attention)? This may be crucial for determining whether the method may lead to greater performance degradation on different models. 2. 
Do the authors have a general method for choosing $\sigma$, the noise variance, and $\theta$, the model parameters on which to apply the watermarking procedure, beyond simply testing the different options empirically? 3. Have the authors investigated how well GaussMark performs on models that have undergone quantization or other post-training modifications? This may be relevant for deployment scenarios where models are often compressed for efficiency. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their careful attention to our work. ## Choice in Perturbation Parameters We agree with the reviewer that a more thorough understanding of which parameters to perturb would be beneficial and included Figures 7-12 as some empirical guide as to the effect of layer, parameter, and variance on both detectability and model quality. At present, we do not have a strong theoretical intuition for which parameters are optimal to watermark beyond the notion that higher-dimensional parameters with variance neither too high nor too low work the best. We suspect, as we allude to in Line 438 2nd column, that the optimal choice is to perturb multiple weights at once, and it is an interesting direction for future research to better understand and develop a more general intuition for this question. ## Validity of linear softmax assumption beyond transformers This is a good empirical question and we are not sure of the answer, as we only experimented with transformer architectures. We suspect that recurrent networks likely have complicated nonlinear effects that may reduce the power of Gaussmark, but that convolutional operations, being linear, may still allow for the linear softmax assumption to apply, at least in later layers; but this is entirely guesswork and an interesting direction for future work. Note that this approximation is only important to theoretically bound the power of the test, and statistical validity holds unconditionally. ## Quantized models The question of precision is very good and we did not experiment with this, so we are not sure of the implications. We suspect that our method would still work (although possibly with reduced power) as several weights can handle larger variance without reducing text quality (cf. Fig 11-12). Optimizing Gaussmark for quantized models is an excellent direction for future work.
Summary: This paper proposes a watermarking scheme for large language model (LLM) output using gradient-based test statistics. The properties of the proposed method, GaussMark, are analyzed theoretically to derive significance levels and power of the test. The method and its properties (efficiency, quality-preservation, and detectability) are empirically verified using three language models. Claims And Evidence: There are four main claimed properties of the method, GaussMark * It is statistically provable. I found no substantial flaws in the theory part, but the derivation relies on coarse assumptions that might not hold for real-world models. For instance it uses a linear approximation. * It does not deteriorate the quality of the outputs. I think the references and the empirical results support this point well. However, I think this should rather depend on the layer or the model where GaussMark is applied. But generally I am convinced that the method can be applied to models without deterioration in quality * It is robust to simple post-processing techniques. Empirically verified through experimental evaluation, but no theoretical results concerning this point. * It is very efficient. I think this claim is established well (it is conceptually straightforward to see) and experimentally demonstrated in Figure 3. However, I profoundly disagree with the claim made in Section 5 (l. 434) that detection has the same memory requirements as inference. For detection, a backward pass is required, which usually consumes substantially more memory. However, as the authors acknowledge, there are already a lot of watermarking approaches in the literature. I think this necessitates further competitive comparison concerning the key criteria: detectability, performance degradation, and robustness to modifications. There is no direct comparison to any method in the main paper.
This is an essential weakness to me as I think this contribution needs to be judged in relation to existing methods. Methods And Evaluation Criteria: I found no major issues with the experimental setup besides the lack of baselines. Theoretical Claims: The paper contains a potentially concerning theoretical inconsistency regarding the noise parameter σ: In Section 3.2 (before Proposition 3.3), the authors justify the linear model assumption by arguing it holds when σ is "sufficiently small". Later, when discussing results, they analyze the case where σ → ∞, and state that larger sigmas are required for the method to work properly. This contradiction undermines the theoretical foundation of the work. Which condition is actually required for the method to function properly? The paper needs to resolve this fundamental inconsistency. The entire method seems to rely on the assumption in eqn. (3), which seems relatively coarse and drastic. Details of the approximation are deferred to the Appendix, but I think this part is crucial to understand the statistical grounding and reliability of the method. I have the impression that a lot of theoretical effort was invested in a method that is based on this uncertain approximation. Therefore, I am not sure how valuable the theoretical results are in practice. Experimental Designs Or Analyses: There seems to be one issue regarding hyperparameter selection: The hyperparameter values (sigma, and selection of layer) are not independent of the benchmarks, as it seems the same data and metrics were used to select them, as far as I understand. It would be more sound to use a validation subset of the benchmark for selecting sigma/the layer according to the performance. Supplementary Material: Checked the supplementary briefly, but due to its extensive format, I did not have time to check everything and it might be that I am missing something. Please point me to the corresponding sections in this case.
Relation To Broader Scientific Literature: The methodology and the test statistic appear to be directly adapted from Pawelczyk et al. (2025), "Machine Unlearning Fails to Remove Data Poisoning Attacks" (ICLR 2025; preprint available since June 2024), which established a nearly identical test statistic approach using Gaussian samples and input gradients. The current work essentially applies this existing methodology to LLM watermarking without proper acknowledgment. Even the key insight regarding improved performance with higher input dimensionality was previously established in the ICLR 2025 paper. The absence of proper citation and comparative discussion substantially diminishes the novelty claim. Essential References Not Discussed: See above; otherwise I just think more methods should be studied empirically. Other Strengths And Weaknesses: Writing and Clarity: The (main) paper is very well-written, aside from a few too many references to the appendix, especially in the theoretical statements. Some parts, especially at the end of Section 3, just read like a concatenation of references to the appendix, which is not a nice read. In terms of the contents, I think the paper could also fit a journal format well. There is a lot of theory in the appendix. I don’t think the theoretical contribution fits the conference review format as it would be very time-consuming to completely review. Scope: A fundamental limitation of the proposed method is its requirement for access to model gradients. Most watermarking schemes derive their utility from working with black-box API access – for instance, when detecting if students have used ChatGPT or Claude for assignments. Since this method requires gradient access, it cannot be applied to the predominant use case of commercial API-accessible models, severely limiting its practical impact. On the positive side, I appreciate that the proposed watermarking method is straightforward to implement and has no overhead in generation time.
Further, the topic of watermarking is timely and relevant. **Summary** While the approach shows some interesting theoretical properties and can be easily implemented, there are concerns regarding a performance comparison to other methods, novelty in contrast to earlier works using similar statistics, practicality/scope, and theoretical consistency that significantly impact its contribution. While the paper's execution and writing is generally good and the topic is timely, I currently cannot provide an accept score because of the many issues raised. Other Comments Or Suggestions: ## Rebuttal update increased score from 2 -> 3 based on rebuttal (see below) Questions For Authors: * Captions in Figure 5: Why are the x axes labeled “Size of test” in Figure 5? I think these are TPR/FPR curves, right? * To preserve utility, the variance of GaussMark has to be chosen very small (e.g., 1e-5). For efficient generation, quantization of weights is common. However, when using quantized models, e.g., with 16-bit floats, the variance used is far lower than the representable precision (machine epsilon for fp16 is around 1e-3). Do you think full-precision computations are necessary for GaussMark’s performance? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their careful attention to our work. ## Comparison to other watermarking schemes Due to space, we deferred an extensive comparison with Kirchenbauer et al to appendix H, alluded to in the paragraph beginning in line 406. Please see that for results and explanation of why we chose this scheme as a benchmark. We are happy to move some of those comparisons to the main body subject to space constraints. ## Linear approximation We emphasize that while our motivation and power bounds rely on simplifying assumptions, in Proposition 3.1, *the statistical validity of the test holds without assumption*, in particular not requiring the linear softmax approximation to hold. The power bound is a theoretical result to guide our understanding of the effect of problem parameters, but is not directly relevant to practice; we verify empirically that Gaussmark has good power in detecting watermarked text. The discussion following Proposition 3.3, wherein we take $\sigma \to \infty$, is meant to serve as a guide for intuition and we give a finite bound on how large $\sigma$ must be in Corollary I.5, to which we allude in line 298. Furthermore, this intuition is reflected empirically in Figure 7, where we see that when variance is high for early layers we see a *decrease in power, manifesting through increased p-values when the variance increases*, suggesting that the lack of linear approximation hurts the test’s power. In the revision we will include additional discussion clarifying this point. ## Memory Requirements We agree that a full backward pass requires more memory than a forward pass, but we only keep track of gradients with respect to a single parameter matrix, usually in a late layer. We are happy in the revision to clarify that choosing a parameter at a much earlier layer results in additional memory as well as provide empirical results on the memory cost of detection relative to inference. 
## Choice of parameters depending on benchmarks We believe there may be some confusion as to when a validation set is important. Our goal is to find watermarking parameters that do not adversely affect model quality (as measured by the chosen benchmarks) while still ensuring detectability, not to claim generalization. As such, including the entire benchmark set in evaluation is *more stringent* as it reduces the standard error of the estimate of model quality, requiring the watermarked model to perform better relative to the benchmark in order to qualify as a model that does not reduce quality. ## Relation to Pawelczyk et al 2025 Thank you for pointing this out; we agree the (very nice) work of Pawelczyk et al 2025 is relevant. We will discuss it in our revision, summarized here. First, we emphasize that while GUS perturbs *data* and takes gradients wrt inputs, GaussMark perturbs *parameters* and takes gradients wrt the same, as is appropriate for these very different settings. Indeed, that work considers unlearning and thus aggregates the inner product test statistic over multiple data points on a model that is passed through some noisy channel induced by the unlearning process, while ours focuses on a fixed data point and evaluating the likelihood that this datum is generated by a watermarked model. We make several novel theoretical claims: (1) We prove statistical validity under virtually no assumptions in the watermarking setting; (2) We prove power bounds in toy models that somewhat approximate LM’s and the intuition we glean from these broadly aligns with our empirical findings. In addition to theory, we extensively evaluate our proposed approach empirically in a very different setting from the mentioned paper. 
Furthermore, while that work does mention the blessing of dimensionality, they do this only in the Gaussian setting; this is subsumed by our strong log-concavity analysis but does not approximate the LM setting as well as the linear softmax model we consider. Second, Section 3.1 discusses how our test is, heuristically, a computationally efficient approximation of the minimax optimal test, giving theoretical grounding to our approach. In particular, GaussMark naturally emerges as the ‘right test’ for structural watermarking rather than being directly inspired by GUS. In fact, the resemblance of the two suggests that GUS may be theoretically grounded as well, although this direction appears out of scope for that paper. TL;DR: we agree this work is relevant and will add a discussion and proper reference to Pawelczyk et al 2025 in the revision, but we disagree that the existence of the aforementioned work diminishes the novelty claim. ## Necessity of knowing model gradients This is a good point and is acknowledged as a limitation in line 415, 2nd column. Please see that paragraph for why we believe the white-box setting is reasonable. ## Questions Thank you for catching the typo in the caption. Please see our response to XB7Z regarding quantization. --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal. However, some points are still not quite clear to me: a) The linear softmax assumption vs. the assumption in Eqn. (3) vs. the linear softmax model in Definition 3.2: I was referring to the assumption in eqn. (3) specifically; are they the same? Also, can the authors explain in simple terms why the validity/p-value of the test does not depend on either? I unfortunately don't quite understand the response regarding hyperparameter selection on the benchmarks. It seems that the authors argue that they don't want to show generalization (i.e., to other benchmarks).
This would be unfortunate because I think that we would like to have a hyperparameter setting that works for general-purpose settings (e.g., an LLM behind an API). Regarding related work, I am aware that the unlearning setting has some differences. I nevertheless wanted to underline the technical similarities, which I think make this work an important reference, and that the contribution (basically transforming it to another setting + extending some theoretical properties) should be appropriately framed as such. I still remain a bit sceptical about the writing, which is very dense (this also becomes clear in the reply, where the authors claim they "allude" to certain points -- apparently this was not sufficient for readers like myself, familiar with LLMs, watermarking, unlearning etc. in general, but not a statistics expert, to understand these points). I am still willing to increase my score if the authors can provide convincing and intuitive responses for my first two points. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their engagement and further questions. Please find our response below: > Linear Softmax Assumption vs (3) To clarify, the heuristic approximation in (3) is not the same as the linear softmax assumption. In particular, (3) is intended to heuristically motivate the proposed test as a computationally efficient, linearized approximation to the optimal test in this setting. This approximation is reasonable whenever the variance of the noise $\xi$ is small relative to the Frobenius norm of the Hessian of the log-likelihood at parameter theta. Note that even granting the linear softmax assumption, equation (3) remains an approximation due to the lack of linear dependence of the normalizing factor on the parameter theta.
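[Editorial note: spelled out, the second-order expansion behind the Hessian condition above (a sketch reconstructed from the surrounding discussion, with $H_\theta = \nabla^2_\theta \log p_\theta(y \mid x)$ denoting the Hessian) reads:]

```latex
\log p_{\theta+\xi}(y \mid x)
  = \log p_\theta(y \mid x)
  + \langle \xi,\, \nabla_\theta \log p_\theta(y \mid x) \rangle
  + \tfrac{1}{2}\, \xi^\top H_\theta\, \xi
  + O(\lVert \xi \rVert^3),
```

so the linearization in (3) amounts to dropping the quadratic remainder, which for $\xi \sim N(0, \sigma^2 I)$ is negligible precisely when $\sigma^2$ is small relative to the scale of $H_\theta$, matching the Frobenius-norm condition stated above.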
To be clear: (3) is introduced only to motivate our test as a linear approximation to the optimal Neyman-Pearson likelihood ratio test with arbitrary differentiable dependence on theta and this approximation is not required to hold for any of our formal results, especially not the proof of statistical validity that holds unconditionally, i.e., our p-values are correct even if these assumptions do not hold (see the proof of Proposition 3.1). The linear softmax assumption in Section 3.2 is made to give a theoretical bound on the power of the proposed test in an analytically tractable setting that sufficiently approximates transformers so as to give some intuition as to when we expect Gaussmark to be powerful in detecting watermarked text. We again emphasize that this linear softmax assumption is only used in the power computation and is not related either to the motivation for the test (which relies on a linear approximation of the log-likelihood itself, which is a stronger condition than linear softmax due to the aforementioned normalizing factor) or the statistical validity of Gaussmark in any way. The reason the statistical validity holds without either (3) or the linear softmax assumption is fundamentally due to the rotational invariance of the Gaussian. Under the null, $y$ and $\xi$ are independent, so, conditioning on $y$, it holds that $\langle \xi, \nabla \log p_\theta(y) \rangle \sim N(0, \sigma^2 \| \nabla \log p_\theta(y) \|^2)$; thus, dividing the inner product by the norm of the gradient and $\sigma$, we get that our test statistic, under the null hypothesis, is distributed according to $N(0, 1)$. This holds regardless of the distribution of the $y$ and thus, for any distribution of $y$, as long as the $y$ are independent of $\xi$, we can condition on the value of $y$ and the preceding calculation holds. 
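[Editorial note: the conditional-Gaussian calculation above is easy to check numerically. The sketch below is an illustration, not code from the submission; the "gradient" is just a random stand-in vector, drawn here from a deliberately non-Gaussian distribution to emphasize that the null distribution of the statistic does not depend on it.]

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, d, trials = 1e-3, 2_000, 5_000

# Under the null hypothesis, the watermark key xi and the text (hence the
# gradient g of the log-likelihood) are independent.
xi = rng.normal(0.0, sigma, size=(trials, d))   # secret Gaussian key
g = rng.standard_exponential(size=(trials, d))  # arbitrary non-Gaussian "gradient"

# Conditioned on g, <xi, g> ~ N(0, sigma^2 ||g||^2) by rotational invariance,
# so normalizing by sigma * ||g|| yields an exactly standard-normal statistic.
t = (xi * g).sum(axis=1) / (sigma * np.linalg.norm(g, axis=1))
print(t.mean(), t.std())  # both close to 0 and 1, respectively
```

Swapping the exponential draw for any other distribution of `g` leaves the null distribution of `t` unchanged, which is exactly the content of the validity claim.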
The fact that the distribution of the statistic under the null is independent of the precise distribution within the null is precisely what allows us to guarantee statistical validity. > Parameter Selection and Generalization to other benchmarks We apologize for the confusion and agree that the purpose of including the benchmarks is to suggest that we expect the comparison between watermarked and base models to generalize to other notions of model quality. To this end, we are happy to include in the revision additional results demonstrating the correlation in model performance across these benchmarks for base and watermarked models with different chosen parameters. Such correlation can already be observed to some extent in the disaggregated SuperGLUE scores included in the supplement. Moreover, to clarify, we picked our watermarked models based on SuperGLUE/GSM8K/p-value constraints and measured their performance on AlpacaEval, which already provides some evidence of this generalization. We will better clarify this point in the revision. > Discussion of related work We completely agree that Pawelczyk et. al. is a relevant reference and will definitely add proper citations and discussion in the final version of our paper. From our response, we simply wish to convey that the existence of this work does not diminish the technical novelty of the current paper due to the difference in setting, additional theoretical results, and substantial empirical evaluation. > Clarity of presentation of statistical concepts We apologize if some of the technical writing is overly dense; due to space constraints we did defer many of the proofs to the appendix. In the revision we will endeavor to give a more comprehensive introduction to the statistical concepts and proof techniques used, subject to space constraints.
Summary: The paper proposes GaussMark, a structural watermarking method for LLMs that perturbs model weights with Gaussian noise during generation. The authors claim this approach addresses limitations of token-level watermarking by embedding watermarks directly into model parameters. Detection leverages hypothesis testing on gradients of perturbed weights, with some statistical arguments. Claims And Evidence: The authors claim that the new method GaussMark is designed to fix multiple problems of token-level watermarking, like "generation latency, detection time, degradation in text quality, or robustness". However, there are many misinterpretations of prior token-level watermarking methods. See "Relation To Broader Scientific Literature" for details. The theoretical claims are not rigorously proved. See "Theoretical Claims". Methods And Evaluation Criteria: It makes sense. Theoretical Claims: The theoretical justification for GaussMark relies on a problematic approximation: $\log\bigl(p_{\theta+\xi}(y\mid x)/\mathbb{E}_{\xi'\sim\nu}\bigl[p_{\theta+\xi'}(y\mid x)\bigr]\bigr)\approx\langle\xi,\nabla_\theta\log p_\theta(y\mid x)\rangle$. The authors claim that this approximation is valid only under the assumption that $\sigma \ll 1$ and that the higher-order terms, particularly $\frac{\sigma^2\lVert\nabla_\theta\log p_\theta(y\mid x)\rVert^2}{2}$, are negligible. However, the norm of the gradient, $\lVert\nabla_\theta\log p_\theta(y\mid x)\rVert$, is substantial in realistic scenarios with long generated texts and complex, high-dimensional language models. The paper does not adequately address the conditions under which this approximation remains valid for practical LLMs and text generation lengths.
The claim of $\sigma \ll 1$ being sufficient is likely too simplistic, and a more stringent condition, possibly dependent on text length and model complexity, like $\sigma\ll\frac{1}{L\sqrt{n}}$, might be necessary for the approximation to hold, a consideration absent in both the theory and the experimental analysis, as all following bounds build upon this approximation. Experimental Designs Or Analyses: No issues identified. Supplementary Material: I.1. Missing Details from Section 3; C. Implementation Details. Relation To Broader Scientific Literature: In the abstract and introduction, the authors argue that previous methods suffer from issues such as generation latency, detection time, text quality degradation, and lack of consideration for text structure. However, some of these claims regarding prior works are exaggerated or inaccurate. For example, the paper criticizes methods like Kuditipudi et al. (2023) for "extremely slow generation and detection," which is a mischaracterization. Kuditipudi's approach, based on manipulating logits and using efficient hash-based detection, is actually quite fast. Furthermore, the claim of text quality degradation in token-level watermarking is also misleading; methods such as "Unbiased Watermark for Large Language Models" have been developed to avoid performance drops. The paper also claims that prior token-level watermarking methods ignore the "inherent structure of text" and that GaussMark addresses this by embedding watermarks into model weights, which is vague. It's not evident how this directly leverages or incorporates the inherent structure of language itself, as opposed to, for instance, semantic watermarking. On the other hand, semantic-based watermarks, as seen in "Watermarking Conditional Text Generation for AI Detection: Unveiling Challenges and a Semantic-Aware Watermark Remedy", explicitly handle language structure.
Even though this paper cited some semantic watermark works, it only discusses their relationship with paraphrasing attacks, and does not compare the two approaches in terms of language structure. Therefore, the paper's initial framing of the problem and the necessity for a fundamentally different approach like GaussMark is built upon a somewhat flawed understanding of the current state of the art. Essential References Not Discussed: See the above points for misinterpretations of prior works. Other Strengths And Weaknesses: One notable strength of GaussMark is its novelty. While previous watermarking techniques often manipulate the token sampling process at the logits level, GaussMark takes a different approach by directly perturbing the model's parameters. This departure from logits-level manipulation, and the investigation of adding noise to model parameters, is indeed novel. Despite the novelty, GaussMark exhibits several weaknesses concerning its problem framing and theoretical foundation. Other Comments Or Suggestions: No other comments. Questions For Authors: Why does TPR saturate in Figure 2a? Is it possibly because the fundamental approximation eq. (3), underlying both the theory and the method, becomes invalid for long sequences? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their careful attention to our work. We wish to clarify a few points. ## The linear approximation We wish to emphasize that while the motivation and power bounds do rely on some simplifying assumptions, as we demonstrate in Proposition 3.1, *the statistical validity of the test holds under virtually no assumptions*, in particular not requiring the linear softmax approximation to hold. The power bound is an important theoretical result to guide us in understanding the effect different problem parameters have on the efficacy of our test, but is not directly relevant to practice; indeed, we verify empirically that Gaussmark has relatively good power in detecting watermarked text. The discussion following Proposition 3.3, wherein we take $\sigma \to \infty$, is meant to serve as a guide for intuition and we give a finite bound on how large $\sigma$ must be in Corollary I.5, to which we allude in line 298. Furthermore, we believe that this intuition is reflected empirically e.g. in Figure 7, which reports p-values of detection for different choices of variance and watermarked parameter; in this figure, when the variance is high for early layers (where we expect the linear approximation to be weaker) we see a *decrease in power, manifesting through increased p-values when the variance increases*, suggesting that the lack of linear approximation is indeed reducing the power of the test. In the revision we will include additional discussion in the main body clarifying this point. With respect to the motivation, note that the linear approximation allows us to find a computationally efficient test statistic that (if the approximation is sound) will perform approximately as well as the minimax optimal test, which is likely computationally inefficient. 
We agree (and will clarify in the revision) that the quality of the linear approximation depends on the operator norm of the Hessian, which implicitly grows with sequence length, but it is intended only as a non-rigorous motivation for a statistic we find works quite well empirically. To summarize: in the event that the linear approximation does not hold, there is no effect on statistical validity, and while there may be an effect on empirical power, we demonstrate conclusively that one may choose parameters that do not suffer from this too much. ## The importance of fast generation and detection Regarding the speed of generation of watermarked text, it is important to differentiate between theoretical and empirical latency. We agree that the approach of Kuditipudi et al is theoretically fast with respect to generation, but they do not report generation times themselves. One key advantage of GaussMark (cf. lines 173-175, 2nd column) is that it can be immediately integrated into pre-existing generation pipelines, such as vLLM, that allow for substantially faster generation than more naive approaches. The approach of Kirchenbauer et al is at least as fast in terms of generation as that of Kuditipudi et al, yet, as we show in Figure 27, the former is orders of magnitude slower in generation empirically (cf. Line 1406). Furthermore, as noted in Kuditipudi et al, detection time grows quadratically in text length, and their reported detection times are significantly slower than ours on much shorter texts. ## Semantic Watermarking We are happy to include additional discussion of other semantic watermarking schemes. The paper mentioned by the reviewers is certainly relevant, but note that their results do not in any way dominate ours.
Indeed, as an example, their approach (as summarized by Table 1 in that paper) leads to substantial declines in text quality, despite improving on pre-existing work, in contradistinction to GaussMark, which we demonstrate does not lead to reduced text quality on a wide variety of benchmarks. ## Why the TPR saturates in Figure 2a This is a good question. Note that the answer is likely *not* because the linear approximation becomes invalid for long sequences, but rather because the power calculation predicts that our test is only consistent with high dimension, not with increasing sequence length. Were the linear approximation breaking down the cause of this phenomenon, we would expect the power of the test to potentially decrease (as we discuss above in reference to Figure 7), which we do not observe. We suspect that the power could be improved by adding noise to multiple parameters at once, as we suggest in Line 438, 2nd column; this is an interesting direction for future research.
Summary: The paper introduces GaussMark, a watermarking scheme that embeds a subtle signal into a language model by additively perturbing its weights with a small Gaussian noise. Instead of operating at the token level, GaussMark leverages the inherent structure of text by modifying a single MLP layer within the model. The method formulates watermark detection as a statistical hypothesis test, where a test statistic based on the inner product between the noise vector and the gradient of the log-likelihood is computed. The authors provide rigorous statistical guarantees (e.g., via Propositions 3.1 and 3.3) and validate the approach with extensive experiments on several modern language models using standard benchmarks (SuperGLUE, GSM–8K, and AlpacaEval–2.0). Their empirical evaluation shows that the scheme maintains high text quality and very low latency in both generation and detection. ## update after rebuttal I have read the other reviews and the authors' responses. I think this paper meets the bar of ICML, so I recommend acceptance. Claims And Evidence: Claims: - GaussMark is practical and efficient, with essentially no impact on generation latency. - It provides formal statistical guarantees for detection, ensuring control over false positive rates. - The method is robust to common token- and sequence-level corruptions and does not degrade model quality. Evidence: - Theoretical results (e.g., Proposition 3.1 establishes the test’s validity and Proposition 3.3 provides bounds on detection power) back up the claims. - Experimental results demonstrate that watermarking does not harm downstream performance and that detection remains fast and reliable even under various corruptions. Methods And Evaluation Criteria: Methodology: - The watermark is embedded by perturbing a chosen model weight with Gaussian noise. Detection uses a normalized statistic that is compared against a threshold derived from the Gaussian CDF. 
- This approach sidesteps the latency issues seen in token-level watermarking methods by integrating directly into the inference pipeline. Evaluation: - The evaluation is comprehensive, employing standard benchmarks (SuperGLUE, GSM–8K, AlpacaEval–2.0) and covering various aspects such as quality, speed, and robustness. - Experiments also include ablation studies (e.g., on the effects of hyperparameter choices and rank-reduction variants) to illustrate the trade-offs between watermark strength and model performance. Theoretical Claims: The theoretical claims appear sound within the stated assumptions. However, the dependence on linear approximations may limit applicability in regimes where non-linear effects are significant. Experimental Designs Or Analyses: - Experiments are conducted on multiple language models (e.g., Llama3.1–8B, Mistral–7B, Phi3.5–Mini) with clear comparisons between watermarked and unwatermarked variants. - The design includes measuring generation latency, detection latency, and performance on language understanding and reasoning benchmarks. Supplementary Material: The supplementary material includes extensive ablation studies (Appendices D and E), robustness evaluations (Appendix F), rank-reduction experiments (Appendix G), and comparisons with other watermarking schemes (Appendix H). Relation To Broader Scientific Literature: This paper leverages established ideas from statistical hypothesis testing and recent empirical findings on weight perturbations and model merging. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - Simplicity & Efficiency: The method is conceptually straightforward and easily integrated into existing pipelines without slowing down generation. - Theoretical Rigor: Provides clear statistical guarantees and theoretical analyses that enhance credibility. - Practical Impact: The approach is designed with deployment in mind, including fast detection and minimal impact on model quality. 
Other Comments Or Suggestions: See listed above. Questions For Authors: See listed above. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for their careful attention to our work. One point on the theoretical claims that we wish to clarify is that the statistical validity of our test does not depend on the linear approximations being sound and so the test’s statistical significance holds under virtually no assumptions. It is only the power of the test for which we require linear approximations (or local strong log-concavity, cf. the paragraph beginning in line 300) to justify theoretically. Note that the bound on the power of the test is not necessary for practical deployment as long as the true positive rate of detection is relatively high at a fixed false positive rate (as we verify empirically).
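For intuition, the detection test described in this thread can be sketched in a few lines of pure Python. This is only an illustrative sketch under simplifying assumptions, not the authors' implementation: the gradient vectors and dimensions below are toy stand-ins, and the statistic is the normalized inner product between the watermark noise and the log-likelihood gradient, which is approximately standard normal when the text is independent of the noise.

```python
import math
import random

def detection_stat(noise, grad, sigma):
    # Normalized inner product <noise, grad> / (sigma * ||grad||).
    # If grad is independent of noise ~ N(0, sigma^2 I), this is ~ N(0, 1).
    inner = sum(n * g for n, g in zip(noise, grad))
    grad_norm = math.sqrt(sum(g * g for g in grad))
    return inner / (sigma * grad_norm)

def p_value(stat):
    # One-sided p-value from the standard Gaussian CDF.
    return 1.0 - 0.5 * (1.0 + math.erf(stat / math.sqrt(2.0)))

random.seed(0)
d, sigma = 10_000, 0.01  # toy parameter dimension and noise scale
noise = [random.gauss(0.0, sigma) for _ in range(d)]

# Null hypothesis: the text was not generated by the watermarked model, so
# the gradient is independent of the noise and the p-value is roughly uniform.
grad_null = [random.gauss(0.0, 1.0) for _ in range(d)]
p_null = p_value(detection_stat(noise, grad_null, sigma))

# Crude proxy for watermarked text: a gradient correlated with the noise
# drives the statistic up and the p-value toward zero.
grad_wm = [g + 50.0 * n / sigma for g, n in zip(grad_null, noise)]
p_wm = p_value(detection_stat(noise, grad_wm, sigma))
```

Thresholding the p-value at a fixed level then controls the false positive rate regardless of whether the linear approximation holds, which is the validity point emphasized in the rebuttal.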
Theoretical Performance Guarantees for Partial Domain Adaptation via Partial Optimal Transport
Accept (poster)
Summary: The paper studies the problem of Partial Domain Adaptation (PDA), where the target label space is a subset of the source label space. The authors propose a theoretically grounded approach based on Partial Optimal Transport (POT) to tackle PDA, deriving generalization bounds that justify the use of the Partial Wasserstein Distance (PWD) as a domain alignment term. These bounds also provide explicit weight formulations for the empirical source loss, distinguishing their method from prior heuristic weighting strategies. The authors introduce WARMPOT, an algorithm that leverages the derived bounds to optimize domain adaptation performance. They validate WARMPOT through extensive numerical experiments, demonstrating competitive results compared to state-of-the-art (SOTA) methods. The paper claims that their weighting strategy improves upon existing approaches and provides better theoretical justification for PDA methods. Claims And Evidence: * WARMPOT is theoretically justified – the paper derives generalization bounds that incorporate PWD as a domain alignment term and explicitly define source loss weights. Evidence: Theoretical analysis and derivations in Section 3. * WARMPOT outperforms existing PDA methods – The proposed algorithm achieves better accuracy compared to methods like MPOT, PWAN, and ARPM. Evidence: Empirical results in Section 5.4, showing improved performance on the Office-Home dataset. * The weighting strategy is more effective than existing ones – WARMPOT's weights successfully reduce the influence of outlier classes, improving adaptation. Evidence: Table 1 shows that the weighting strategy leads to better classification accuracy and mitigates negative transfer. * Most claims are well-supported by theoretical arguments and experimental validation. However, additional sensitivity analyses on weight selection would further strengthen the claims.
Methods And Evaluation Criteria: The authors adopt the Office-Home dataset as the main benchmark, comparing WARMPOT against existing PDA algorithms. Evaluation focuses on classification accuracy across different domain shifts. The Partial Wasserstein Distance is used to measure alignment between source and target distributions. The chosen dataset and metrics are well-aligned with the problem, and comparisons with recent PDA approaches (e.g., MPOT, ARPM) provide a meaningful assessment. Theoretical Claims: The paper presents theoretical results regarding generalization bounds for PDA, particularly: * Feature-based bound (Theorem 3.2) – Establishes a bound on the target loss using PWD between empirical feature distributions. * Joint distribution-based bound (Theorem 3.3) – Extends the previous bound to account for target labels. Experimental Designs Or Analyses: The experimental design includes comparisons on a standard PDA dataset, tuning of key hyperparameters $\alpha,\beta$, and ablation studies on weighting strategies. A notable strength is the comparison against both heuristic and theoretically motivated weighting methods, providing insight into the benefits of WARMPOT's approach. However, additional statistical significance tests or error bars would improve confidence in the reported results. Supplementary Material: The supplementary material provides useful clarifications, especially regarding implementation details and theoretical derivations. Relation To Broader Scientific Literature: The paper builds upon prior work in domain adaptation, optimal transport, and partial optimal transport (POT). The paper proposes a theoretically grounded approach based on Partial Optimal Transport (POT) to tackle PDA, deriving generalization bounds that justify the use of the Partial Wasserstein Distance (PWD) as a domain alignment term.
Essential References Not Discussed: No Other Strengths And Weaknesses: Some weaknesses: * Limited exploration of additional datasets beyond Office-Home. * Lack of error bars to support experimental results. Other Comments Or Suggestions: No. Questions For Authors: Given that solving the partial optimal transport problem is computationally expensive, how does WARMPOT compare in training time to previous approaches? Code Of Conduct: Affirmed. Overall Recommendation: 3
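To make the weighting claim assessed in this review concrete, here is a schematic pure-Python sketch. It is not the authors' code: the partial transport plan below is hand-constructed to be feasible (not solved to optimality), and the per-sample losses are invented toy numbers; the point is only that the row marginals of a partial plan give source weights under which an outlier sample drops out of the weighted empirical source loss.

```python
# 4 uniform-mass source samples (index 3 from an outlier class) and 2
# uniform-mass target samples; marginal caps a_i = 1/4 and b_j = 1/2,
# total transported mass alpha.
alpha = 0.75
plan = [
    [0.25, 0.00],
    [0.25, 0.00],
    [0.00, 0.25],
    [0.00, 0.00],  # outlier: receives no transported mass
]
assert all(sum(row) <= 0.25 for row in plan)        # row caps a_i respected
assert all(sum(col) <= 0.50 for col in zip(*plan))  # column caps b_j respected

# Source weights are the row marginals p_i of the plan; they sum to alpha.
p = [sum(row) for row in plan]
assert abs(sum(p) - alpha) < 1e-12

# Weighted empirical source loss: samples with p_i = 0 contribute nothing,
# unlike the naive unweighted average, which the outlier would dominate.
losses = [0.2, 0.4, 0.3, 5.0]  # toy per-sample losses; the outlier's is large
weighted_loss = sum(p_i / alpha * l for p_i, l in zip(p, losses))
naive_loss = sum(losses) / len(losses)
```

Here `weighted_loss` comes out to about 0.3 while the unweighted average is 1.475, illustrating how weights read off a partial transport plan can mute outlier classes.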
Rebuttal 1: Rebuttal: We thank the reviewer for their comprehensive evaluation and helpful suggestions. > _Questions for authors_: **Given that solving the partial optimal transport problem is computationally expensive, how does WARMPOT compare in training time to previous approaches?** Indeed, solving the optimal transport problem is generally computationally expensive. One common way to circumvent this problem is via mini-batch approaches. We follow this approach. Specifically, in the numerical experiments reported in the paper, we use the mini-batch partial optimal transport framework proposed in _Improving mini-batch optimal transport via partial transportation_ by Nguyen et al. (2022). The most well-performing algorithms in the literature, including PWAN and ARPM, also require one to solve optimal transport problems as part of their algorithms. We would also highlight that different weighting strategies come with different computational costs. In WARMPOT, the weights, which need to be computed per mini-batch, are obtained directly from the solution of the partial optimal transport problem without any additional overhead. On the contrary, a weight update in both the BA3US and the ARPM weighting strategy involves the entire dataset. Additionally, we see from the tables inserted below (see final comment for further details) that there is a trade-off between performance and computational cost for these weighting strategies. Such a trade-off is not present in WARMPOT. > _Claims and Evidence_: **Additional sensitivity analyses on weight selection would further strengthen the claims.** We have added sensitivity analyses for the $\alpha_{max}$ and $\beta$ parameters on ImageNet $\rightarrow$ Caltech (see response to QCqK). > _Weakness 1_: **Limited exploration of additional datasets beyond Office-Home.** We have added results for ImageNet $\rightarrow$ Caltech (see response to 9cQ1 for details). 
> _Weakness 2_: **Lack of error bars to support experimental results.** In order to address this point, we have re-run our experiments using additional random seeds for OfficeHome (6 seeds) and computed averages and standard deviations. We have also conducted a weighting-scheme comparison on ImageNet $\rightarrow$ Caltech using 3 random seeds. In the tables shown below, we considered for BA$^3$US and ARPM the weight update interval (in epochs) indicated in parentheses. We see from the table that the performance of BA$^3$US depends on the update interval (the smaller the interval, the better the performance). Gu et al., 2024 recommend using 500 and 2000 as update intervals for ARPM on OfficeHome and ImageNet $\rightarrow$ Caltech, respectively. Adopting the same update interval for BA$^3$US, to maintain a fair comparison, we see that, as claimed in the paper, WARMPOT results in better performance than MPOT and ARPM(500), and yields performance comparable to BA$^3$US(500) (overlapping confidence intervals) on both datasets. Also, the confidence intervals of the best performing algorithms in Table 2 reported in the paper (ARPM and ARPM+our weights) overlap.

|Weighting scheme|BA$^3$US(100) weights|BA$^3$US(500) weights|BA$^3$US(750) weights|WARMPOT (ours)|MPOT weights|ARPM(500) weights|
|----|-----|-------|-|--|--|--|
|Avg. test acc. on OfficeHome|78.1 (0.4)|77.6 (0.4)|77.4 (0.4)|77.6 (0.7)|76.0 (0.4)|72.9 (0.3)|

|Weighting scheme|BA$^3$US(750) weights|BA$^3$US(1500) weights|BA$^3$US(2000) weights|BA$^3$US(5000) weights|WARMPOT (ours)|MPOT weights|ARPM(2000) weights|
|----|-----|-------|-|--|--|--|-|
|Test acc. on ImageNet $\rightarrow$ Caltech|86.1 (0.6)|85.0 (0.2)|84.7 (0.7)|83.1 (1.4)|84.8 (0.1)|78.6 (1.2)|79.2 (1.4)|

--- Rebuttal Comment 1.1: Comment: I thank the authors for providing the responses. Thus, I would keep my initial score due to my lack of expertise.
Summary: This paper deals with Partial Domain Adaptation (PDA), a setting where source and target domain distributions differ, and where the target domain label space is a subset of the source domain label space. The authors propose to tackle this important problem through Optimal Transport (OT), an established field of mathematics that has previously contributed to domain adaptation in general. The authors use this framework not only to propose new algorithms, but also to provide theoretical generalization bounds. --- __Post-rebuttal.__ The authors did a good job on their rebuttal; especially, they provided new experiments on a large-scale adaptation task. Overall, this is a good paper with strong theoretical contributions and convincing experiments. Hence, my final score is __4. Accept__ Claims And Evidence: Here is a list of high-level claims made in the introduction: 1. "we provide theoretically motivated algorithms for PDA" 2. "we derive generalization bounds on the target population loss and devise training strategies that minimize them" 3. "our bounds give rise to weights that, when combined with the ARPM algorithm of Gu et al. (2024) lead to SOTA results for the Office-Home data set." Claims 1 and 2 are well supported by the theoretical parts of the paper. While claim 3 is also true, it is fairly limited in scope; see my comments in the next section. Methods And Evaluation Criteria: On this point, the paper falls short of the acceptance criteria. While the authors provide a comprehensive comparison with the state-of-the-art, they do so on a single benchmark, i.e., the Office-Home benchmark. For instance, other papers such as (Gu et al., 2024) and (Cao et al., 2018) have considered the following (on top of Office-Home): - Office 31 - ImageNet -> Caltech - VisDA2017 (Real -> Synthetic and Synthetic -> Real) which provide more thorough comparisons.
In my view the authors should complete their experiments with at least one other benchmark (preferably ImageNet -> Caltech or VisDA2017, which are more complex and large scale than Office 31). Theoretical Claims: The authors provide a series of 2 theorems, alongside 1 lemma and 2 corollaries. Overall, they provide generalization bounds in terms of the partial Wasserstein distance between the distributions of extracted features. These generalization bounds are novel, and in line with previous research on domain adaptation theory. I checked the appendix provided by the authors, and, as far as my knowledge goes, the proofs of Theorems 3.2 and 3.3 are correct. I did not check the proof of Lemma 3.4. Experimental Designs Or Analyses: The experimental design and analysis are in line with current domain adaptation practice and are, in this regard, correct. The authors could have explored more benchmarks to validate their method, as I highlighted in `Methods and Evaluation Criteria`. Supplementary Material: I reviewed most of the appendices, which are good. The proofs are clear and easy to follow. Relation To Broader Scientific Literature: The current paper goes in a similar direction to previous papers (Fatras et al., 2021; Khai et al., 2022) that propose Optimal Transport techniques for partial domain adaptation. An important feature of this paper is that the authors provide generalization bounds in terms of the partial Wasserstein distance. (Fatras et al., 2021) Fatras, Kilian, et al. "Unbalanced minibatch optimal transport; applications to domain adaptation." International Conference on Machine Learning. PMLR, 2021. (Khai et al., 2022) Nguyen, Khai, et al. "Improving mini-batch optimal transport via partial transportation." International Conference on Machine Learning. PMLR, 2022. Essential References Not Discussed: The authors do a good job summarizing partial Optimal Transport. However, this **is not** the only way of tackling partial DA.
Especially, the authors do not discuss, nor compare with, the use of unbalanced Optimal Transport (Fatras et al., 2021) for partial domain adaptation. (Fatras et al., 2021) Fatras, Kilian, et al. "Unbalanced minibatch optimal transport; applications to domain adaptation." International Conference on Machine Learning. PMLR, 2021. Other Strengths And Weaknesses: Here I give a summary of strengths and weaknesses. Please use this list when writing your rebuttal. If the authors provide a rebuttal that answers the following weaknesses, I will raise my score accordingly. __Strengths__ 1. Sound theoretical analysis with novel results for Partial DA 2. New algorithm with promising results on the Office-Home benchmark __Weaknesses__ 1. Most importantly, the empirical evaluation of this paper is very limited. The authors should complete their empirical validation with other benchmarks in Partial DA. Other Comments Or Suggestions: __Comment 1.__ The term $L_{f}$ looks like $\lambda$ in (Redko et al., 2017) and other theoretical DA works. Could the authors provide additional discussion on the potential similarities? __Comment 2.__ The cost in (10) looks a lot like the joint cost proposed in (Courty et al., 2017; reference in the main paper). I think the authors could add some discussion about the similarities as well. This discussion could also make links with the feature importance factor $\xi \gamma$ weighting the features in the ground cost. __Comment 3.__ Given the practical applications of their work, I think the authors could comment on the restrictiveness of their hypotheses. For instance, they assume 1. The encoder $g$ is $\gamma$-Lipschitz 2. The label loss function is a distance, and $\xi$-Lipschitz In common DA practice, neither of these two hypotheses is met. For instance, since WGANs, we know that enforcing $\gamma$-Lipschitzness on neural nets is tricky.
Furthermore, neural nets are often trained with the cross-entropy loss, which is not a metric on $\mathcal{Y}$. (Redko et al., 2017) Redko, Ievgen, Amaury Habrard, and Marc Sebban. "Theoretical analysis of domain adaptation with optimal transport." Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2017, Skopje, Macedonia, September 18–22, 2017, Proceedings, Part II 10. Springer International Publishing, 2017. Questions For Authors: N/A See previous section Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading and helpful comments. > _Methods_ and _Weaknesses_: **While the authors provide a comprehensive comparison with the state-of-the-art, they do so in a single benchmark, i.e., the Office-Home benchmark. In my view the authors should complete their experiments with at least one other benchmark (preferably ImageNet $\rightarrow$ Caltech or VisDA2017, which are more complex and large scale than Office 31).** We have now benchmarked our algorithm on ImageNet $\rightarrow$ Caltech. The test accuracy of WARMPOT for this data set is 84.8%, while ARPM+our weights achieves 85.1%. For reference, the test accuracy of ARPM is 84.1%. Hence, this additional experiment provides further indication that WARMPOT, and specifically its weights, provides a robust approach for PDA tasks.

|Algorithm|ImageNet $\rightarrow$ Caltech|
|-|-|
|ResNet-50|69.7|
|DAN|71.3|
|DANN|70.8|
|IWAN|78.1|
|PADA|75.0|
|ETN|83.2|
|DRCN|75.3|
|BA3US|84.0|
|ISRA+BA3US|85.3|
|SLM|82.3|
|SAN++|83.3|
|AR|85.4 (0.2)|
|ARPM|84.1 (1.4)|
|PWAN|86.0 (0.5)|
|WARMPOT (ours)|84.8 (0.1)|
|ARPM+our-weights|85.1 (0.9)|

> _Essential References_: **The authors do a good job summarizing partial Optimal Transport. However, this _is not_ the only way of tackling partial DA. Especially, the authors do not discuss, nor compare the use of unbalanced Optimal Transport (Fatras et al., 2021) for partial domain adaptation** In Table 4 of Nguyen et al. (2022), a comparison is made between their MPOT approach and a mini-batch unbalanced OT (UOT) approach. The results indicate that the POT-based approach works better. We have revised the paper to extend the discussion of UOT approaches and include numerical comparisons. > _Comment 1_: **The term $L_f$ looks like $\lambda$ in (Redko et al., 2017) and other theoretical DA works. Could the authors provide additional discussion on the potential similarities?** $L_f$ indeed plays a similar role as, e.g., $\lambda$ in Redko et al.
(2017) and admits the same interpretation, in the sense that it is related to the difficulty of the domain adaptation problem. However, while $\lambda$ is the smallest achievable sum of population losses, $L_f$ is the smallest achievable loss maximized over the empirical source and target set. Meanwhile, $\tilde L_f$ relates to the minimal achievable target loss, and $\Xi$ measures the performance gap between considering source and target tasks jointly or separately. > _Comment 2_: **The cost in (10) looks a lot like the joint cost proposed in (Courty et al., 2017; reference in the main paper). I think the authors could add some discussion about the similarities as well. This discussion could also make links with the feature importance factor $\zeta\gamma$ weighting the features in the ground cost.** The cost in (10) is indeed the same as the one proposed in Courty et al., 2017. We will make this point clearer in the revised version of the manuscript. From a theoretical perspective, the factor $\zeta\gamma$ (which appears both in our cost and the one of Courty et al., 2017) corresponds to Lipschitz parameters that are generally not available (see below). Hence, from a practical point of view, we treat it as a hyperparameter in our algorithm. > _Comment 3_: **Given the practical applications of their work, I think the authors could comment on the restrictiveness of their hypotheses. For instance, they assume 1) the encoder $g$ is $\gamma$-Lipschitz, 2) the label loss function is a distance, and $\zeta$-Lipschitz. In common DA practice, neither of these two hypotheses is met. For instance, since WGANs, we know that enforcing $\gamma$-Lipschitzness on neural nets is tricky. Furthermore, neural nets are often trained with the cross-entropy loss, which is not a metric on $\mathcal Y$.** Indeed, the Lipschitz assumptions required for our theoretical results are often not satisfied for loss functions used to train neural nets in practice.
Such Lipschitz assumptions are commonly invoked in the literature to obtain generalization bounds that depend on Wasserstein distances. Mathematically, the Lipschitz assumption allows one to relate the loss function to the cost appearing in the Wasserstein metric (see Lemma A.4). Note that, in our numerical experiments, we consider loss functions that do not necessarily conform to the theoretical assumptions. The fact that we still observe good performance indicates that these assumptions are not critical for the general insights obtained from our theoretical results to hold. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. I consider that my questions have been correctly addressed, and the new experiments are convincing. I will raise my score accordingly, i.e., I will raise it from __3. Weak Accept__ to __4. Accept__ Other than that, I have one _small_ remark. The authors should be careful in their claims, especially when saying > The fact that we still observe good performance indicates that these assumptions are not critical for the general insights obtained from our theoretical results to hold. While I do agree that the regularity assumptions for the loss function may not be necessary for a good empirical performance, good performance does not count as evidence that the theoretical results hold, especially as there may be other reasons to why the method work in practice. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their thoughtful feedback and for considering our responses convincing. We also appreciate the updated evaluation. Note, though, that the score has not been updated in the system yet. We agree with the reviewer’s remark regarding the following statement in our response: > The fact that we still observe good performance indicates that these assumptions are not critical for the general insights obtained from our theoretical results to hold. We will make no claims of this sort in the revised version of our paper.
Summary: This submission studies the generalization bound and an empirical model for the partial domain adaptation problem, where the label spaces across domains are different. The key idea of this paper is to use the weight deduced from the partial transportation mass, which is claimed to be able to filter the outliers (that are redundant for the target domain). A theoretical bound is provided to show that the target error can be upper-bounded by a weighted source risk, a partial Wasserstein discrepancy, an estimation bias on the target distribution, and an (intractable) worst-case error. The proposed method is compared with other SOTA PDA and OT methods on a PDA dataset. Claims And Evidence: There are several concerns regarding the claims: C1. The merits seem to be overclaimed, e.g., the generalization bound for PDA, partial alignment. Note that several works have also derived bounds for PDA with OT or an even more general framework, where the idea of partial alignment and weighted risk estimation is also presented. More details are discussed in the *Essential References Not Discussed* part. C2. In line 171, there is a claim that “when $\alpha=1$, we have that $q_j = 1/n_t$”. This claim seems to be problematic: if $\beta$ is not small enough (i.e., $1/ (\beta n_s) $ is not big enough) or $n_t$ is small (i.e., $1/n_t$ is large), then this claim indeed cannot be satisfied due to the inequality constraint of POT. C3. In line 174, assuming that $q_j = 1/n_t$ (even though it seems this cannot be guaranteed), why does this equality ensure that the outliers are ignored? Considering the same case as in C2, where the mass of the source outliers is large (i.e., the mass sum of the shared samples in $P_s / \beta$ is still smaller than the total mass requirement $\alpha$), there always seems to be transported mass from the outliers. Methods And Evaluation Criteria: The methodology and evaluation criteria are generally appropriate. Theoretical Claims: The theoretical results and proofs look correct.
Experimental Designs Or Analyses: The comparison experiment is only conducted on a single dataset, while consistent improvement over different datasets is necessary for demonstrating the empirical performance of the proposed method. Supplementary Material: The technical parts w.r.t. the proofs were roughly checked. Relation To Broader Scientific Literature: The key idea is related to recent progress on sample-level weight estimation for (label) shift and on generalization bounds for PDA (with optimal transport), where the main difference is that this work considers the partial optimal transport framework. Essential References Not Discussed: The missing references (which are closely related to this submission) can be summarized from the following aspects: 1. Generalization bounds. In fact, from the view of distribution shift (particularly label shift), PDA, Open-Set DA (OSDA), and universal DA (UniDA) can all be considered as label shift (LS) or generalized label shift (GLS), where these extreme shift scenarios imply that the supports of the label distributions are different. Therefore, there are actually works that provide the same innovation, i.e., 1) GLS bounds: upper bounds with a weighted source risk, a shift on the (conditional) representation distribution, and a shift on the label distribution [r1,r2]; 2) GLS bounds with optimal transport as the discrepancy measure [r3]. 2. The idea of employing the marginals of the transport plan (of POT) as a tool for automatically addressing the extreme shift has been studied in Unified OT [r4], which solves a more general setting (i.e., PDA, OSDA, and UniDA) and considers a hard threshold for the weights (whereas this work considers soft weights $p_i$). **References** [r1] Tachet des Combes, Remi, et al. "Domain adaptation with conditional distribution matching and generalized label shift." Advances in Neural Information Processing Systems 33 (2020): 19276-19289. [r2] Luo, You-Wei, and Chuan-Xian Ren.
"When Invariant Representation Learning Meets Label Shift: Insufficiency and Theoretical Insights." IEEE Transactions on Pattern Analysis and Machine Intelligence (2024). [r3] Kirchmeyer, Matthieu, et al. "Mapping conditional distributions for domain adaptation under generalized target shift." International Conference on Learning Representations. 2022. [r4] Chang, Wanxing, et al. "Unified optimal transport framework for universal domain adaptation." Advances in Neural Information Processing Systems 35 (2022): 29512-29524. Other Strengths And Weaknesses: **Pros:** 1. The organization is clear and easy to follow. 2. Theoretical analysis is provided for the proposed OT framework. **Cons:** 1. The interpretations of the main technique are unrigorous and insufficient. 2. The related works are not properly discussed, which makes it hard to assess the essential merits. 3. The empirical improvement is limited and the experiments are insufficient. Other Comments Or Suggestions: 1. The definition of $p_i, q_j$ could be confusing due to the incomplete definition of $\Pi^*$. It is necessary to clarify which problem $\Pi^*$ corresponds to (i.e., $\mathbb{PW}_\alpha (\cdot,\cdot)$). Though they are defined in the proof part of the appendix, this should be clarified in the main body. Questions For Authors: I would like the authors to address my concerns on the claims and missing references, which are justified in detail in the *Claims And Evidence* part and the *Essential References Not Discussed* part. Besides, here are additional questions: Q1. It seems that the joint distribution-based bound in Thm. 3.3 induces larger error than Thm. 3.2. Though Eq. (5) has a factor of 2 for $\mathbb{PW}$, note that the discrepancy in Eq. (9) is induced by the joint cost function (which naturally combines the two costs, i.e., the feature cost and the label cost). Thus, it is generally the same for the partial Wasserstein term, while Eq. (9) induces more complex terms that are intractable, i.e., the term in Eq. (11).
Some justifications are highly appreciated. Q2. The main difference between this submission and the existing GLS correction method [r4] is that this work adopts POT as a discrepancy metric. However, as discussed in the *Claims And Evidence* part, the $\mathbb{PW}$ is not necessarily guaranteed to exclude the outliers. Thus, how should one understand the essential advantages of the proposed method? Q3. The improvement over ARPM is only 0.2\% on the Office-Home dataset, while no other empirical results are provided. This result seems to imply that the essential function of the proposed method is already largely achieved by the existing ARPM. Justifications or additional experiments are highly appreciated. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their careful reading of the paper, relevant references, and constructive comments. >_C1 and Essential References_: We thank the reviewer for pointing out these relevant references. While all of them are relevant, consider similar settings and techniques, and deserve to be reviewed in the introduction, there are a few key differences between our work and the mentioned papers. First, to the extent that the mentioned papers present bounds in terms of a weighted source loss, they all rely on classwise weights defined in terms of unknown data distributions. These are then estimated using a method based on _Detecting and correcting for label shift with black box predictors_ by Lipton et al. (2018). However, these estimates are only guaranteed to be accurate if GLS holds exactly, i.e., if the feature representation $Z=g(X)$ of the input $X$ is such that $P(Z|Y=y)=Q(Z|Y=y)$ for source $P$ and target $Q$. Our results require no such assumptions, and yield explicitly computable weights from only the observed data. Additional differences are detailed below. 1. The results in [r1] that contain the weighted source loss are bounds on distribution discrepancies, and not target loss. Furthermore, the weights therein are class-level weights that depend on the unknown underlying distributions, and the domain discrepancy is measured in terms of Jensen-Shannon divergence. In [r2], the same kind of weights are used, and domain discrepancy is measured in terms of metrics like the total variation. Finally, the risk bound in [r3] does not include source loss weights, and depends on the 1-Wasserstein distance rather than its partial counterpart considered in our paper. 2. While [r4] considers a more general setting, aiming for, e.g., private class discovery, the proposed algorithm is not accompanied by any theoretical analysis. 
Furthermore, as pointed out by the reviewer, they use unbalanced rather than partial optimal transport, and consider binary rather than soft weights. In summary: the listed references are very relevant, and we have updated the discussion of related work to include them. However, our theoretical results provide test risk bounds in terms of empirically computable weights and apply without any assumptions on the specific form of distribution shift. We have updated our stated contributions to clarify these points and avoid overclaiming the merits of our work relative to prior art. >_C2, C3, and Q2_: First, note that we assume $\beta\in(0,1]$. Hence, the mass of the scaled source distribution is always at least $1$. When $\alpha=1$ and all entries of $Q_{\tilde X}$ equal $1/n_t$, the condition below Eq. (4), $\mathbf{1}^T_{n_s}\Pi\mathbf{1}_{n_t}=1$, requires the $n_t$ column sums of $\Pi$ to add up to $1$, whereas the condition $\Pi^T \mathbf 1_{n_s} \leq Q_{\tilde X}$ limits each column sum to be $1/n_t$ at most. Combined, they imply that $q_j=1/n_t$. Regarding guaranteeing that outliers are ignored: the parameter $\beta$ needs to be chosen small enough to enable the algorithm to ignore outliers. In particular, if $\alpha=1$, $\beta$ can at most equal the outlier proportion. However, this still does not _guarantee_ that outliers are ignored. In the same way, while the approach of [r4] is designed to avoid outliers, there is also no guarantee that it does so. Empirically, as shown in Fig. 1, WARMPOT does significantly downweight outlier samples in practice. The essential advantages of WARMPOT for PDA compared to the approach of [r4] are: _(a):_ WARMPOT is endowed with theoretical guarantees, _(b):_ the transport plan for POT is more interpretable than the one for unbalanced OT, and _(c):_ our soft weights enable different samples of the same class to have different influence on the final hypothesis. >_Q1_: As the reviewer notes, Thm. 
3.2 includes only a feature cost whereas Thm. 3.3 additionally incorporates a label cost. Intuitively, one would expect the feature-only approach to suffice when the distribution shift is restricted to covariate shift and a setting where the supports of the input distributions mostly overlap. However, in cases of label shift, incorporating the label cost may be helpful. The factor 2 in Thm. 3.2 essentially compensates for the absence of the label loss, and stems from the use of Lemma A.4 (where we need to use the triangle inequality twice). A similar point applies to the uncomputable terms that capture the difficulty of the specific task ($L_f$ and $\tilde L_f$). Whether or not one of the bounds leads to better performance is largely an empirical question, and our experiments indicate that, for the tasks under consideration, it is generally beneficial to include a label cost. >_Q3_: We have added results for ImageNet $\rightarrow$ Caltech (see response to 9cQ1 for details). >_Other:_ We now use $\tilde p_i$ and $\tilde q_j$ for the weights in Thm. 3.3 to avoid confusion, and clarify the difference compared to $p_i$ and $q_j$. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing detailed responses, where most of the concerns are addressed. Thus, I would raise the score accordingly.
Summary: The paper presents (PAC) bounds on the (expected) empirical loss using the partial Wasserstein distance in either the marginal (features only) or the joint (features and labels) space. The first two terms of the bound are the source loss weighted by the marginal of the partial transport plan, and the partial Wasserstein distance itself. Minimizing these leads to an algorithm that is performant on a benchmark dataset and, when the weights are used in a more complicated heuristic, achieves state of the art. ## update after rebuttal Further experiments suggested by the reviewer confirm that the approach achieves the claimed performance. A limitation due to bias caused by batch sampling, which has been observed in the continuous conditional flow matching case, could be noted. Claims And Evidence: The theory is developed rigorously and the experimental results (one dataset but many source/target pairs) show consistent performance. Methods And Evaluation Criteria: For the ground distance in the label space the loss should be a metric, but this is not specified. Theoretical Claims: I did not check them rigorously but they seem logical. Experimental Designs Or Analyses: One concern regards how hyperparameter search can be performed on a dataset that is unsupervised... Supplementary Material: I reviewed the appendix. Relation To Broader Scientific Literature: Domain transfer and partial domain alignment is a very practical problem that is well established. The contributions are meaningful and will have impact if reproduced. Essential References Not Discussed: Not that I noticed. Other Strengths And Weaknesses: The paper is clear and well written. It should have significance because it is an important problem due to its widespread nature. Other Comments Or Suggestions: - Questions For Authors: Is a metric actually required for $\ell$? For instance, cross-entropy/log loss/KL divergence doesn't satisfy the requirements for a metric. 
Line 820 "Through a parameter search," on what dataset and what performance metric? Especially in a dataset that is claimed to be unsupervised, this could be 'information leakage' from the test set performance. Generally, the percentage of outliers is unknown. Although empirical measures are used, the results would appear to be for the whole sample, not the mini-batch. The implications of the reduced sample size from the mini-batch aren't clear in the bounds. Is there an effect? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful and constructive comments. > Q1. **For the ground distance in the label space the loss should be a metric, but this is not specified. Is a metric actually required for $\ell$? For instance, cross-entropy/log loss/KL divergence doesn't satisfy the requirements for a metric.** The theoretical derivations require the loss function to be symmetric in the model prediction and true label, and that the triangle inequality holds. Note, though, that in our numerical experiments we consider loss functions that do not necessarily conform to the theoretical assumptions. The fact that we still observe good performance indicates that these assumptions are not critical for the general insights obtained from our theoretical results to hold. > Q2. **Line 820 "Through a parameter search," on what dataset and what performance metric? Especially in a dataset that is claimed to be unsupervised, this could be 'information leakage' from the test set performance. Generally, the percent of outliers is unknown.** In order to assess the impact of this hyperparameter selection, we have conducted a sensitivity analysis for the $\alpha_{max}$ and $\beta$ parameters on the ImageNet $\rightarrow$ Caltech dataset. We set the following values for hyperparameters: $\eta_1 = 0.92$, $\eta_2 = 5.47$, $\varepsilon = 5.59$. In the experiment on $\alpha_{max}$, we set $\beta = 0.72$ and in the experiment on $\beta$, we set $\alpha_{max} = 0.08$ (see Appendix E in our paper for the notation used). As seen in [Figure 1](https://gofile.io/d/PvEK7c), the impact of varying $\beta$ is not large (about 2%) over the entire range of $0<\beta\le 1$. Similarly, in [Figure 2](https://gofile.io/d/ZsHkUK), when $0<\alpha_{max}\le 0.1$, the performance difference is not large (about 3%). However, for $\alpha_{max}>0.1$ there is a significant drop, which is due to the large number of outliers in the source sample. 
Using a larger value of $\alpha_{max}$ results in a positive weight over outlier instances, thus degrading the performance. The results indicate that the specific choice of these parameters has a minor impact over a range of reasonable values. Furthermore, note that for the _ARPM+our-weights_ algorithm, we use the same parameters as the original ARPM algorithm. > Q3. **Although empirical measures are used, the results would appear to be for the whole sample not the mini-batch. The implications of the reduced sample size from the mini-batch isn't clear in the bounds. Is there an effect?** As the reviewer notes, we solve the mini-batch partial optimal transport problem during training rather than the full-data transport problem due to computational considerations. To study the effect that this has on the resulting weights, we compared: (i) the average weights applied to each sample during training that arise from the mini-batch partial optimal transport problem; and (ii) the weights arising from the full-data transport problem at the end of training. The results indicate that the two approaches lead to very similar weight distributions, and in particular, the weight proportion assigned to shared samples is 86.37% with mini-batch weights and 90.54% with full-sample weights. This aligns with discussions in _Improving mini-batch optimal transport via partial transportation_ by Nguyen et al. (2022) regarding the possibility of using mini-batches to approximate full-data transport problems.
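As a side note, the marginal argument in the C2/C3 response above (with $\alpha=1$ and uniform target weights, the total-mass constraint plus the per-column cap force $q_j = 1/n_t$) can be checked numerically by writing the partial OT problem as a generic linear program. A minimal sketch with hypothetical sizes and a random cost matrix ($n_s=4$, $n_t=3$, $\beta=0.8$); this is the plain LP form of the problem, not the authors' solver:

```python
import numpy as np
from scipy.optimize import linprog

# Toy partial-OT instance (hypothetical sizes and costs), alpha = 1 case.
n_s, n_t, beta = 4, 3, 0.8
rng = np.random.default_rng(0)
C = rng.random((n_s, n_t))                     # arbitrary transport costs

# Vectorize the plan row-major: x[i * n_t + j] = Pi[i, j], with x >= 0.
A_eq = np.ones((1, n_s * n_t))                 # total transported mass = 1
b_eq = np.array([1.0])
col = np.tile(np.eye(n_t), (1, n_s))           # column sums  Pi^T 1_{n_s}
row = np.kron(np.eye(n_s), np.ones((1, n_t)))  # row sums     Pi 1_{n_t}
A_ub = np.vstack([col, row])
b_ub = np.concatenate([np.full(n_t, 1 / n_t),            # target cap 1/n_t
                       np.full(n_s, 1 / (beta * n_s))])  # scaled source mass

res = linprog(C.ravel(), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None))
q = col @ res.x   # target marginal of the optimal plan
p = row @ res.x   # source marginal: the per-sample soft weights
print(np.allclose(q, 1 / n_t, atol=1e-6))  # True: caps + total mass force q_j = 1/n_t
```

Note that the source marginal `p` is not forced to be uniform; different source samples can receive different mass, which is the soft-weighting behavior discussed in the rebuttal.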
msf-CNN: Multi-Stage Fusion with Convolutional Neural Networks for TinyML
Reject
Summary: The paper presents a multi-stage fusion technique for optimizing CNN inference on memory-constrained microcontrollers (MCUs), called msf-CNN. The main objective of this work is to efficiently execute deep neural networks on resource-limited IoT devices by reducing RAM usage through layer fusion while balancing inference latency. The proposed method formulates the problem as a shortest path search in a directed acyclic graph (DAG) to identify optimal fusion configurations. The proposed method is validated on various MCU platforms (ARM Cortex-M, RISC-V, ESP32), showing significant reductions in RAM usage, with up to 50% less RAM than existing methods like MCUNetV2 and StreamNet. Claims And Evidence: Overall, the paper backs up most of its claims with solid evidence, but a few could be clearer. The claim that msf-CNN reduces RAM usage by up to 50% is mostly supported, but the actual savings vary by model, sometimes being much lower. The idea that msf-CNN introduces a new trade-off between memory and compute overhead is valid, but the impact on real-time performance isn't fully explored, especially since inference latency can increase in extreme cases. Methods And Evaluation Criteria: Yes, for the most part. The main objective of the evaluation in this work is to explore different trade-offs between RAM usage and compute overhead through a thorough analysis across multiple MCU architectures (ARM Cortex-M, RISC-V, ESP32). The comparison against baselines like MCUNetV2 and StreamNet is well-structured, and the use of RAM and compute latency measurements is appropriate for this type of application. However, the efficiency of the proposed method could be clearer if the trade-off between RAM and compute latency were visualized as a curve, ideally with a Pareto-optimal front to better illustrate the best achievable balance between memory and performance. 
This would provide a more intuitive understanding of how msf-CNN performs under different constraints and would make it easier to compare against prior works. Theoretical Claims: No, I didn't thoroughly check the correctness of the theoretical analysis in this paper. Experimental Designs Or Analyses: Yes, they all make sense. Supplementary Material: Yes, I checked the analysis of the number of MAC operations provided in the Appendix. Relation To Broader Scientific Literature: This paper builds on prior works including MCUNetV2 and StreamNet by enabling deeper fusion blocks and using a graph-based shortest-path approach instead of brute-force searches. It formulates fusion as a DAG-based optimization problem, improving RAM efficiency while balancing compute overhead. Unlike Neural Architecture Search, which redesigns models, msf-CNN optimizes memory allocation for existing CNNs. It also introduces iterative pooling and dense layers, reducing memory use similarly to streaming CNN architectures. While msf-CNN improves memory efficiency, energy consumption analysis and support for non-CNN models could further connect it to real-world TinyML applications. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: -- The use of graph-based optimization to find efficient fusion settings is novel. -- There is a significant decrease in peak RAM usage (65-87%), making CNN inference feasible on extremely memory-constrained MCUs. -- The paper provides mathematical formulations for peak RAM and compute cost. Weaknesses: -- The current formulation only applies to CNNs and does not extend to other architectures like transformers, RNNs, or hybrid models. -- The trade-off between latency and memory could have been better presented as a curve rather than a table. -- While the shortest path problem is well-explained, the complexity analysis of the search algorithm is not fully detailed. -- Energy consumption is another important factor for edge devices. 
An analysis of energy consumption under different fusion configurations would strengthen the practical implications of the work. Other Comments Or Suggestions: N/A Questions For Authors: See my points listed as weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for the insightful feedback on our manuscript. Below, we address each of your comments and questions in detail. **[C1. The current formulation only applies to CNNs and does not extend to other architectures like transformers, RNNs, or hybrid models.]** Thanks for the suggestion. We acknowledge that our current work focuses exclusively on CNNs. As stated in Section 9, we are actively working to extend the methodology to other architectures (e.g., transformers, RNNs). **[C2. The trade-off between latency and memory could have been better presented as a curve rather than a table.]** Thanks for the suggestion. Here we visualized the RAM-latency trade-off in [anonymous link](https://anonymous.4open.science/r/msf-CNN-3BE5/RAM-latency%20trade-off.pdf). **[C3. While the shortest path problem is well-explained, the complexity analysis of the search algorithm is not fully detailed.]** Thanks for the remark. We will add a more detailed analysis on computational complexity in the appendix. We provide below a quick preliminary analysis of the *worst-case scenario*. We also highlight that these shortest path computations do not take place on the microcontroller at runtime, but offline on a PC (which expands the realm of what can be assessed as bearable computation). First, we consider the *lower bound* of the search algorithm. As shown in Section 6, both problems P1 and P2 without constraints can be transformed into a multiple single-source-single-target shortest path problem, which can be solved by Dijkstra's algorithm with a Fibonacci heap [1] with complexity: $$ O(E + V \log V), $$ where $E$ and $V$ denote the numbers of edges (possible fusion blocks) and vertices (layers) of the DAG. In the worst case $E=\sum_{n=1}^V (n-1)$, leading to an *overall lower bound of $O(V^2)$*. 
Concerning Problem P1 with constraints: if we don't prune the search space (iteratively), we need to brute-force all possible fusion settings of the DAG to form a subspace that fulfills the latency constraints. This leads to enumerating all simple paths from the input layer to the output layer. In the worst case, the $i$-th layer of an $N$-layered CNN has $2^{i-2}$ paths from the input layer pointing to it. Thus, starting from the input layer, we obtain $2^{N-2}$ fusion combinations, which becomes unbearable for a deep neural network. Hence, we apply a pruning strategy (see Eqs. 11-13, lines 177-183) to reduce the complexity from $O(2^{N-2})$ to $O(N^2)$. The idea: we erase the edges with maximal RAM usage in each iteration. In the worst case, only one edge is erased per iteration, starting from a complete DAG with $\frac{N(N-1)}{2}$ edges. Thus, the worst-case complexity of our search algorithm for the constrained Problem P1 is $O(N^2)$. **[C4. An analysis of energy consumption under different fusion configurations would strengthen the practical implications of the work.]** While energy efficiency is important as pointed out, our work primarily targets RAM budget constraints rather than energy budgets. Fitting model operation within the limits of a tiny maximal memory budget is the critical first hurdle faced by deep edge AI developers. Our approach is particularly relevant for small devices where RAM is the first-order bottleneck (e.g., MEMS and tiny sensors). We will tackle energy consumption measurements in future work, as suggested. We acknowledge that energy consumption is highly hardware- and use-case-dependent, as well as influenced by computation latency and the amount of RAM used. For completeness, we will add a short discussion in the appendix on this aspect, without blurring the main purpose of our work. Thank you again for your valuable feedback. Please let us know if you have further questions. **Reference** [1] Fredman, Michael L., and Robert Endre Tarjan. 
"Fibonacci heaps and their uses in improved network optimization algorithms." Journal of the ACM (JACM) 34.3 (1987): 596-615.
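To make the shortest-path reduction above concrete: for a peak-RAM objective the path cost is the *maximum* edge weight along the path rather than the sum (a minimax path), which a small Dijkstra variant handles directly. A minimal sketch with a hypothetical 4-layer DAG and made-up per-block RAM costs in kB (an illustration of the idea, not the msf-CNN solver itself):

```python
import heapq

def minimax_path(graph, src, dst):
    """Dijkstra variant minimizing the *maximum* edge weight on a path.

    graph: {node: [(neighbor, weight), ...]}, where each edge is a candidate
    fusion block and its weight is that block's peak RAM usage.
    Returns (bottleneck, path).
    """
    best = {src: 0}
    heap = [(0, src, [src])]
    while heap:
        bottleneck, node, path = heapq.heappop(heap)
        if node == dst:
            return bottleneck, path
        if bottleneck > best.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, w in graph.get(node, []):
            cand = max(bottleneck, w)  # max instead of the usual sum
            if cand < best.get(nxt, float("inf")):
                best[nxt] = cand
                heapq.heappush(heap, (cand, nxt, path + [nxt]))
    return float("inf"), []

# Toy 4-layer CNN: nodes are layer boundaries, edges are candidate fusion
# blocks annotated with hypothetical peak-RAM costs (kB).
dag = {
    0: [(1, 64), (2, 40)],   # run layer 1 alone, or fuse layers 1-2
    1: [(2, 96), (3, 48)],
    2: [(3, 80), (4, 32)],
    3: [(4, 72)],
}
print(minimax_path(dag, 0, 4))  # -> (40, [0, 2, 4]): fuse 1-2 then 3-4
```

The same skeleton with `cand = bottleneck + w` gives the minimal total-MAC path, matching the sum-cost objective the rebuttal describes for problem P2.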
Summary: In this paper, the authors introduce msf-CNN, a multi-stage fusion (msf) approach that identifies optimal fusion settings for CNNs by navigating the fusion solution space represented as a directed acyclic graph (DAG). The goal is to reduce RAM usage without introducing significant computational overhead. The msf-CNN was evaluated on various MCU platforms, including ARM Cortex-M, RISC-V, and ESP32, and demonstrated up to a 50% reduction in RAM usage compared to previous approaches such as MCUNetV2 and StreamNet. Claims And Evidence: Many claims made in the paper are supported by clear and convincing evidence, except: 1. It claims that msf-CNN can generalize beyond CNNs, but no experiments on non-CNN architectures are provided.
 2. It fails to clarify the trade-offs between RAM reduction and inference latency. For instance, Table 4 shows a consistent 2-5x increase in inference latency, which contradicts the central claim that msf-CNN simultaneously achieves both low memory and low latency. Methods And Evaluation Criteria: Convincing: 1. Under the DAG representation, the fusion block optimization problem is formulated as the shortest-path problem. In addition, the pruning strategy reduces the search complexity from O(2^N) to O(N^2). These techniques are both novel and practical for TinyML deployment.
 2. Their focus on peak RAM usage (rather than average RAM usage) is meaningful for avoiding memory overflow on MCUs, which is often the bottleneck in edge devices.

 Not convincing: 
 1. The paper only evaluates on MobileNetV2 and MCUNet, which are popular models, but there is no evaluation on standard TinyML benchmark datasets. 2. Significant reduction in RAM usage is undoubtedly important, but without showing accuracy changes at the same time, their study is incomplete. Theoretical Claims: It makes sense for the authors to formulate the fusion block optimization problem as the shortest-path problem under the DAG representation. It is also true that the pruning strategy reduces the search complexity from O(2^N) to O(N^2). While this may not be highly problematic, the pruning strategy is not guaranteed to find the globally optimal fusion setting.

 Their analyses of RAM usage in “Iterative Computation of Global Pooling” and “Iterative Computation of Dense Layer” are valid. But the analysis of the corresponding computational cost is missing and thereby unconvincing. Experimental Designs Or Analyses: Their main experimental designs or analyses align well with the goal of optimizing CNNs for TinyML deployment. But let’s reiterate the insufficiencies:
 1. The paper claims that msf-CNN can generalize beyond CNNs, but no experiments on non-CNN architectures are provided.
 2. The paper fails to clarify the trade-offs between RAM reduction and inference latency. For instance, Table 4 shows a consistent 2-5x increase in inference latency, which contradicts the central claim that msf-CNN simultaneously achieves both low memory and low latency. 3. The paper only evaluates on MobileNetV2 and MCUNet, which are popular models, but there is no evaluation on standard TinyML benchmark datasets. 4. Significant reduction in RAM usage is undoubtedly important, but without showing accuracy changes at the same time, their study is incomplete. Supplementary Material: There seems to be no supplementary material. Relation To Broader Scientific Literature: Prior work, such as MCUNetV2 (NeurIPS 2021) and StreamNet (NeurIPS 2024), introduced basic layer fusion for reducing RAM usage on MCUs. 

 Where msf-CNN excels is that it extends previous efforts by performing multi-stage fusion and formulating the fusion process as a shortest-path problem on a DAG. This approach makes the whole problem more efficient and scalable, leading to 50% more RAM reduction.

 Please mention this paper and give it appropriate credit. Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: Why is model accuracy not reported in the experimental results? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for the insightful feedback on our manuscript. Below, we address each of your comments and questions in detail. **[Q1. Why is model accuracy not reported in the experimental results?]** We appreciate the reviewer raising this, as it allows us to clarify a crucial aspect of our method. msf-CNN is a **computation scheduling and memory optimization technique that does _not_ alter the model's architecture, parameters, or the mathematical operations performed.** It only changes _when_ and _how_ intermediate results are computed and stored to minimize peak memory. Therefore, the final output, and consequently the model's accuracy, remains **identical** to the original, unfused model. Hence, standard ML performance benchmarks focusing on accuracy are irrelevant here. Accordingly, we will add disambiguating text in the paper, explicitly mentioning the above. **[C1. Missing Reference to μNAS]** We thank the reviewers for pointing out this paper. We agree that μNAS is a seminal work in representing neural networks as directed acyclic graphs (DAGs) and has inspired many subsequent studies, including our own. We will add a citation to μNAS in the related work section and acknowledge its contributions. Although both works focus on reducing peak memory usage and utilize a DAG to model the problem, μNAS addresses it by altering the neural network architecture and reordering the execution of operators, while msf-CNN develops a graph-based solver to find fusion settings with an optimal RAM-latency trade-off under specific constraints. We believe both methods are orthogonal and can be applied simultaneously. **[C2. The paper claims that msf-CNN can generalize beyond CNNs, but no experiments on non-CNN architectures are provided.]** We would like to clarify that this work currently focuses exclusively on convolutional neural network architectures (CNNs). 
Nevertheless, as stated in the future work we described (in Section 9), we are currently working on generalizing our framework to other architectures. **[C3. The paper fails to clarify the trade-offs between RAM reduction and inference latency...which contradicts the central claim that msf-CNN simultaneously achieves both low memory and low latency.]** We noticed the ambiguity in our initial phrasing. We acknowledge that multi-stage fusion inherently involves a trade-off: reducing peak RAM often requires recomputation, which increases inference latency. Our central claim is _not_ that msf-CNN simultaneously achieves both low memory and low latency compared to all baselines in all scenarios. Instead, msf-CNN provides a **systematic framework (the DAG-based search) to explore the RAM-Latency Pareto frontier** more effectively than prior methods. This allows practitioners to find _optimal trade-offs_ that meet specific constraints. More details can be found in Table 5. **[C4. The paper only evaluates on MobileNetV2 and MCUNet, which are popular models, but there is no evaluation on standard TinyML benchmark datasets.]** Please check the above response to Q1. Accuracy is systematically unchanged, hence standard accuracy-focused benchmark suites are irrelevant for our purpose. **[C5. Significant reduction in RAM usage is undoubtedly important, but without showing accuracy changes at the same time, their study is incomplete.]** Please check the above response to Q1. Accuracy remains strictly unchanged, hence additional accuracy measurements are not needed. **[C6. Missing Computational Cost Analysis for Iterative Computations]** Figures 3 and 4 illustrate our technique for global pooling and dense layers, wherein computation is partitioned into several iterations synchronized with partial outputs from the preceding fusion block. 
By processing input vectors element-by-element, this approach calculates the output iteratively while preserving the exact set of arithmetic operations. Therefore, the total number of MACs is identical to the baseline, resulting in zero computational overhead. Thank you again for your valuable feedback. Please let us know if you have further questions. --- Rebuttal Comment 1.1: Comment: The authors have clarified most of my concerns to some extent, and I would now recommend a "weak accept". --- Reply to Comment 1.1.1: Comment: Thank you very much for your feedback and for acknowledging our clarifications in the rebuttal. We sincerely appreciate your updated "weak accept" recommendation. However, we notice the system still displays the original "weak reject" score. Could there be a technical issue preventing the update, or do we need to take any specific action? Please forgive our potentially naive question on this.
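The zero-overhead claim in response C6 above can be illustrated with a small sketch (a toy numpy example with made-up dimensions, not the authors' implementation): computing global average pooling row by row keeps only one feature-map row resident at a time, merely reorders the additions, and reproduces the baseline result:

```python
import numpy as np

# Toy feature map: H x W x C, emitted one row at a time as a fusion block
# would produce partial outputs (values and sizes are arbitrary).
H, W, C = 5, 4, 3
rng = np.random.default_rng(1)
fmap = rng.random((H, W, C))

# Baseline: global average pooling over the full feature map at once.
full = fmap.mean(axis=(0, 1))

# Iterative version: only one W x C row is resident at a time; the
# accumulator holds C running sums, so peak RAM drops from H*W*C to W*C + C.
acc = np.zeros(C)
for row in fmap:              # row arrives as the previous block finishes it
    acc += row.sum(axis=0)    # same arithmetic as the baseline, reordered
iterative = acc / (H * W)

assert np.allclose(full, iterative)  # identical output, zero extra MACs
```

A dense layer admits the same treatment: each arriving input element contributes one column of the weight matrix to a running output vector, so the MAC count is unchanged.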
Summary: msf-CNN is a framework for reducing CNN memory usage on very small devices (e.g., MCUs) through multi-stage layer fusion. The authors represent CNN layers as edges in a directed acyclic graph, then systematically search for fusion “blocks” using graph-based algorithms to minimize peak RAM or computation cost. They also introduce iterative global pooling and dense layers to further reduce memory. Experiments on multiple MCU architectures (Cortex-M, RISC-V, and ESP32) show that msf-CNN cuts RAM usage by 50% or more compared to leading approaches, at the price of potentially higher latency. The system lets designers choose whether to favor minimal memory or minimal computational overhead, broadening CNN deployment possibilities for highly constrained hardware. Update after rebuttal: Thanks to the authors for the replies, which resolve my concerns; my original recommendation remains the same. Claims And Evidence: There are 3 major claims (6 in total mentioned in the paper). Claim 1: msf-CNN yields substantial RAM savings over existing solutions. Evidence for this appears in Section 8 (Experiments on Microcontrollers), where the authors compare msf-CNN to MCUNetV2 and StreamNet. For example, Table 3 shows that msf-CNN can achieve as little as 8.56 kB of RAM usage for a MobileNetV2 variant that un-fused baselines (and even state-of-the-art fusion approaches) cannot shrink below 60–65 kB. These are measured results on real boards (esp32s3, STM32, RISC-V boards). Claim 2: msf-CNN allows flexible trade-offs between memory and computation cost. Evidence is given in the tables of Section 6.3 and expanded in Section 8, where setting different constraints for peak RAM usage or latency leads to different—but valid—fusion strategies. Under one scenario, msf-CNN can reduce memory usage at the cost of 2× to 5× extra computations; in another, it can optimize for minimal overhead by allowing a higher memory budget. 
Experimental data confirm that even with a strict memory limit, msf-CNN still finds feasible solutions. Claim 3: msf-CNN is portable to a broad range of microcontrollers. Section 8 illustrates results on MCUs from various families (ARM Cortex-M7, Cortex-M4, ESP32, RISC-V), showing that the same approach can be applied without major changes in code generation. The authors also note in Section 7 (Implementation Details) that they rely on the microTVM code generator and then custom-rewrite the fusion blocks. Overall, the paper’s core claims are well supported by experimental evidence and fairly thorough memory and compute measurements. Methods And Evaluation Criteria: The authors base their methodology on re-creating an inverted dataflow graph of the CNN (Section 5), encoding possible single-layer and multi-layer fused operations as edges. They then use a “shortest path” perspective to find the minimal peak memory route or the minimal total computation route through the graph. The authors’ evaluation criteria focus primarily on: 1. Peak RAM usage (with direct measurement of actual memory consumption at runtime). 2. Computation overhead (expressed in MAC counts as well as real-world latency). 3. Portability (testing on different MCUs with different memory and CPU constraints). This set of evaluation criteria is quite suitable for the problem at hand: microcontroller-based deployment demands a careful balance of memory versus latency. The authors also clearly state that they are not primarily targeting top classification accuracy or novel neural architectures—rather, they concentrate on making standard CNNs feasible on extremely constrained hardware. Theoretical Claims: The theoretical foundation rests upon: 1. Graph interpretation of CNN layers (Section 5): The authors describe each layer or fused block as an edge with associated “peak RAM usage” and “MAC cost.” 2. 
Shortest-path or minimax path approach (Sections 5 and 6): For instance, minimizing peak memory is transformed into a minimax path problem, solvable with minor modifications to standard algorithms like Dijkstra’s or BFS/DFS-based searches. These approaches are correct at a conceptual level, and the paper does not contain advanced new proofs. Experimental Designs Or Analyses: The paper’s experiments (Section 8) involve: 1. Deploying three distinct CNN backbones (MobileNetV2 variants and MCUNetV2) with different image sizes and channel scaling. 2. Measuring memory usage and latency on real MCU hardware across ARM, RISC-V, and Xtensa. 3. Comparing msf-CNN with two prior methods: MCUNetV2 and StreamNet. These experiments convincingly show the memory savings as well as the performance overhead. The authors also explore different constraints (peak memory limit or overhead limit) to demonstrate msf-CNN’s flexibility. The discussion of observed latency variations across architectures and clock speeds shows thoroughness. The design is sound, covering multiple MCU families and providing repeated references to measured rather than purely simulated results, which is a strong point. A minor improvement in the analysis might be to include details about standard deviation of runtime or memory measurements. But overall, the design is robust, and the analyses in Section 8 reasonably validate the authors’ claims. Supplementary Material: Yes, it is at https://anonymous.4open.science/r/msf-CNN-3BE5/ I did not run the code and check the result, but logically it looks right to me. And artifacts are provided. Relation To Broader Scientific Literature: The paper builds on a line of work dealing with memory minimization and scheduling for neural networks on FPGAs and GPUs, referencing Alwani et al. (2016) for fused-layer approaches and subsequent expansions, as well as past MCU-specific optimization frameworks like MCUNetV2 and StreamNet. 
It also integrates with the general domain of “TinyML compilers” such as microTVM and IREE, stating clearly that existing compilers lacked multi-block fusion optimization. Compared to prior studies, msf-CNN adds: 1. A more comprehensive search space (rather than a single big fuse block or a forced partial fusion). 2. A graph-based solver for user-driven constraints (peak memory or time). 3. Extensions to iterative global pooling and dense layers to reduce memory usage further. Essential References Not Discussed: One relevant area for further contextualization is dynamic scheduling approaches (e.g., memory re-use optimization beyond standard layer-by-layer tiling) in compilers like Apache TVM’s “AutoScheduler” or advanced memory planning from specialized frameworks. Other Strengths And Weaknesses: NA, already pretty much been covered by previous questions. Other Comments Or Suggestions: 1. Clarify memory measurement procedures: It would be helpful to describe how precisely the peak memory usage is measured at runtime—some extra detail on whether instrumentation or static analysis was used could increase reproducibility. 2. Highlight impact of flash memory reads: The authors note (Section 8.3) that re-fetching weights from flash can degrade performance. A small dedicated subsection about practical flash-read overhead on MCUs would be valuable, especially for new TinyML practitioners. Questions For Authors: 1. Has msf-CNN been tested on advanced caching paradigms, beyond H-cache? If so, do you anticipate further memory savings or latency improvements by partial caching along both dimensions (height and width) at different layers? Potential Effect on Evaluation: This would show how future expansions might yield intermediate solutions with moderate memory usage and moderate overhead. 2. Could msf-CNN fuse multiple non-convolutional layers? Many CNN-based networks now incorporate activation layers or attention blocks. 
Is there a straightforward extension of your DAG approach to these operators? Potential Effect on Evaluation: It may confirm that the approach is robust beyond typical CNN layer sequences. Code Of Conduct: Affirmed. Overall Recommendation: 4
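The “shortest path” perspective described in this review, where minimizing peak memory becomes a minimax path problem, can be sketched with a Dijkstra variant that relaxes on the maximum edge weight along a path instead of the sum. The graph and weights below are hypothetical toy values for illustration, not the paper's actual fusion graph:

```python
import heapq

def minimax_path(graph, src, dst):
    """Dijkstra variant: minimize the MAXIMUM edge weight along a path.

    Edge weights here stand in for per-fusion-block peak RAM, so the
    returned value is the smallest achievable peak RAM over all
    src->dst routes. graph: {node: [(neighbor, weight), ...]}.
    """
    best = {src: 0}
    heap = [(0, src)]  # (bottleneck so far, node)
    while heap:
        bottleneck, node = heapq.heappop(heap)
        if node == dst:
            return bottleneck
        if bottleneck > best.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph.get(node, []):
            cand = max(bottleneck, w)  # path cost = max edge, not sum
            if cand < best.get(nbr, float("inf")):
                best[nbr] = cand
                heapq.heappush(heap, (cand, nbr))
    return None  # dst unreachable

# Toy fusion graph: nodes are layer boundaries, weights are peak-RAM costs.
g = {"in": [("a", 60), ("b", 20)], "a": [("out", 10)], "b": [("out", 30)]}
print(minimax_path(g, "in", "out"))  # 30: the route via b caps peak RAM at 30
```

Replacing `max` with `+` in the relaxation step recovers the ordinary shortest-path search one would use for the minimal total computation route.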
Rebuttal 1: Rebuttal: Thank you very much for the insightful comments and constructive suggestions on our work. Below, we address each of your comments and questions in detail. **[C1. Essential References Not Discussed]** Sorry for omitting the mention of TVM's "AutoScheduler". We will definitely mention it in the paper's final version. **[C2. Clarify memory measurement procedures]** We relied on TVM's [Ahead-of-Time (AoT)](https://discuss.tvm.apache.org/t/implementing-aot-in-tvm/9206) compilation for model code generation, and enabled [Unified Static Memory Planning (USMP)](https://discuss.tvm.apache.org/t/rfc-unified-static-memory-planning/10099) for static memory allocation. So we can fetch the runtime RAM usage directly from that memory planner. We will add these details to the updated manuscript. **[C3. Highlight impact of flash memory reads]** Thanks for pointing this out. Currently we are also conducting experiments to more precisely measure this effect. We will add a short discussion on this in our manuscript. **[Q1. Has msf-CNN been tested on advanced caching paradigms, beyond H-cache? ]** Yes. Our solver is designed to easily extend to alternative caching schemes and varying input patch sizes. Furthermore, we are actively exploring a **mixed-mode caching strategy** within individual fusion blocks (e.g., employing distinct caching schemes across sequential layers). So the search space is enlarged to yield a better RAM-Latency trade-off. We will definitely add corresponding experimental results in the appendix of our manuscript, if initial findings are validated before the deadline. **[Q2. Could msf-CNN fuse multiple non-convolutional layers?]** Yes. For activation layers, our method is orthogonal to kernel fusion and they can be applied concurrently.
For example, a convolutional layer followed by Batch Normalization and ReLU will first be merged into one operator (kernel fusion, Conv+BN+ReLU), and msf-CNN then tries to fuse it with other convolutional layers. We just need to add latency and RAM usage estimators for the activation layers to our DAG-based solver. However, we acknowledge that currently our method only discusses the fusion of CNN-based layers. We will further explore the fusion potential of non-CNN architectures like RNNs, attention blocks, etc. Thank you again for your valuable feedback and suggestions. Please let us know if you have further questions.
Summary: This work proposes a multi-stage fusion method, called MSF-CNN, to optimize RAM usage for tiny CNNs on microcontrollers. By modeling fusion as a graph optimization problem, it minimizes memory and computation costs. Experiments on various MCUs demonstrate that MSF-CNN significantly reduces peak RAM usage, providing flexible trade-offs for real-world TinyML applications. Claims And Evidence: The claims are clear and supported by theoretical/empirical analysis. Methods And Evaluation Criteria: Yes, the evaluation criteria generally make sense. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes. This work analyzes the peak memory usage and latency of different fusion strategies. Supplementary Material: Yes, I checked the provided anonymous repository, which includes the code, but the README is missing. Relation To Broader Scientific Literature: This work can serve as a tool for deploying tiny CNNs, helping to determine the optimal trade-off between memory and latency when using fusion. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: **Strengths:** 1. The proposed framework is generally interesting and sound, with the potential to serve as a tool for determining the optimal trade-off between memory and latency when using fusion. **Weaknesses:** 1. The major concern is whether the proposed method can practically improve latency under a given peak memory constraint compared to previous heuristic solutions. For example, MCUNet identifies that peak memory usage occurs in the first layers, a common pattern in most CNNs, and employs a simple fusion strategy to reduce it. I suspect such a heuristic solution could be applicable to most CNNs, making the proposed framework less practically useful. The authors should demonstrate whether the search space is large and whether a heuristic or simple strategy performs sufficiently well, which is not addressed in the current paper. 2.
This work lacks sufficient benchmarking: no comparison has been provided regarding latency relative to previous methods. For example, only peak memory usage is compared with MCUNet, whereas ultimately, overall latency under the peak memory constraint is what truly matters. 3. The proposed solution is generally intuitive, and this paper would be strengthened if the authors leveraged the framework to analyze the fusion search space. Other Comments Or Suggestions: It would be helpful to provide background information on the implementation of fusion. Questions For Authors: My questions have been included in the weakness section. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you very much for the insightful remarks on our manuscript. We address below your questions/comments: **[C1. Can msf-CNN practically improve latency under a given peak memory constraint compared to previous heuristic solutions? ]** We agree that this is a key consideration. Compared to prior work using simpler heuristic strategies (e.g. MCUNet), we wish to highlight the following: - **Experimental Evidence (Table 5):** our measurements show that msf-CNN **can find solutions with much lower peak memory usage** compared to MCUNet (though sometimes potentially at the cost of higher computational latency, illustrating a different trade-off point). This indicates that even for seemingly simple fusion problems, a systematic search can yield better memory optimization than a simplified heuristic, justifying the utility of our approach. - **Enlarged Search:** For an N-layer network (excluding input/output layers for fusion), there exist $2^{N-2}$ potential fusion configurations. When facing **extremely tight resource constraints** (e.g., MCUs with RAM < 50 kB), heuristics considering only a small subset of configurations, e.g. the initial few layers as in MCUNet, are often **insufficient** to meet the memory budget. In such stringent scenarios, exploring more complex, multi-stage fusion strategies becomes essential, which is precisely where msf-CNN excels by providing a structured method to navigate this larger space. - **New Tradeoffs:** In some cases, a latency slump may be tolerable if memory is significantly reduced, within some critical thresholds. A concrete example: the audio signal analysis use-case described in [1]. In such use-cases, provided inference still completes within "real-time" execution bounds, a trade-off could be considered a good deal on a small microcontroller if the reduced memory footprint now fits within the total available RAM budget -- whereas previously it did not -- even if inference is slower. **[C2.
Benchmarks lack latency measurements comparing to prior work.]** **Table 5** provides measurement of latency with msf-CNN compared to prior work (MCUNetV2) under various constraint settings. **This table explicitly lists both peak memory usage and the corresponding inference latency for different configurations**. More specifically, the results presented in Table 5 (**marked in bold**) clearly show how, under certain constraints, the fusion configurations identified by msf-CNN achieve both lower latency and RAM usage compared to MCUNetV2. **[C3. The proposed solution is generally intuitive, and this paper would be strengthened if the authors leveraged the framework to analyze the fusion search space.]** Thank you for the suggestion, which can indeed be the subject for future work. In this paper, we have focused on the primary challenge, i.e., systematically determining the _optimal_ multi-stage fusion strategy, especially when considering the trade-off between peak memory, computational latency, and specific hardware characteristics. We also note your complementary remark on the code's main README which we will of course provide in the final version of the artifact. As suggested we will also add further implementation details on the fusion in the appendix. Thank you again for your valuable feedback. Please let us know if you have further questions. **Reference** [1] Z. Huang et al. TinyChirp: Bird Song Recognition Using TinyML Models on Low-power Wireless Acoustic Sensors, in Proceedings of the IEEE International Symposium on the Internet of Sounds. Erlangen, Germany, September 2024.
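The $2^{N-2}$ search-space size cited in this rebuttal can be made concrete with a toy enumeration, assuming (as a simplification, not the paper's exact encoding) that each of the N-2 inner layers carries an independent binary fuse/no-fuse flag:

```python
from itertools import product

def fusion_configs(n_layers):
    """Enumerate binary fuse/no-fuse flags for the N-2 inner layers.

    A toy model of the search-space size from the rebuttal: each inner
    layer is independently marked fused (1) or unfused (0), giving
    2**(N-2) configurations for an N-layer network.
    """
    inner = n_layers - 2  # input/output layers excluded from fusion
    return list(product([0, 1], repeat=inner))

configs = fusion_configs(6)
print(len(configs))  # 16 = 2**4 configurations for a 6-layer network
```

Even at a modest 30 fusible layers this exceeds 2^28 configurations, which is why an exhaustive search is infeasible on-device and a structured solver is needed.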
Summary: The paper presents msf-CNN, a novel approach with open-source code to optimize convolutional neural network (CNN) inference on microcontrollers (MCUs) by employing multi-stage fusion techniques. Motivated by TinyML’s stringent memory and computational constraints, it reformulates the fusion configuration search as a graph-based shortest-path problem with constraint optimization, aiming to minimize peak RAM usage or computational cost. Key findings include substantial memory savings (up to 87% reduction compared to prior methods like MCUNetV2 and StreamNet), achieved through a directed acyclic graph (DAG) representation, a pruning strategy reducing complexity from O(2^(N-2)) to O(N^2), and iterative optimizations for global pooling and dense layers. Experimental results validate the approach across ARM Cortex-M, RISC-V, and ESP32 MCUs, showcasing flexibility for diverse IoT scenarios. Claims And Evidence: The claims—significant RAM reduction, flexible trade-offs, and efficient optimization—are robustly supported by evidence. Analytical results (Table 1) and experiments (Tables 3-5) show up to 87% RAM savings and consistent performance under constraints, with clear comparisons to MCUNetV2 and StreamNet. No unsupported claims were identified; the evidence is convincing and well-presented. Methods And Evaluation Criteria: The methods—graph-based fusion optimization, iterative layer computation, and H-cache usage—are well-suited for TinyML’s resource-limited context. The evaluation criteria, measuring peak RAM (kB) and latency (ms) on diverse MCU boards (Table 2), are practical and relevant. Benchmarks like MobileNetV2 and MCUNet variants are apt choices, reflecting real-world AIoT needs. Theoretical Claims: I verified the optimization problems (P1, P2) and MAC analysis (Appendix A). The conversion to a shortest-path problem using inverted dataflow graphs is theoretically sound, and equations (e.g., Eq. 
5, 16, 17) for cache size and MAC counts are correctly formulated. No errors were found. Experimental Designs Or Analyses: I reviewed the experiments in Section 8 (Tables 3-5), testing minimal RAM usage, RAM budget, and compute cost limits. The design—spanning multiple MCU architectures and comparing to MCUNetV2 and StreamNet—is solid, with results corroborating analytical predictions (Section 6.3). The validation is thorough, and no issues arose. Supplementary Material: The code is open source and looks solid Relation To Broader Scientific Literature: The paper extends prior fusion work (e.g., Alwani et al., 2016; Lin et al., 2021) by introducing multi-stage fusion and graph-based optimization, aligning with TinyML advancements (e.g., Lin et al., 2020). It builds on dataflow graph concepts (TensorFlow, PyTorch) and complements memory-efficient techniques like TinyEngine (Lin et al., 2021), enhancing their MCU applicability. Essential References Not Discussed: No critical omissions were noted. The paper cites key works (e.g., MCUNetV2, StreamNet) and foundational fusion studies, providing sufficient context. No recent, essential breakthroughs appear overlooked. Other Strengths And Weaknesses: Strengths: The conversion of fusion optimization into a shortest-path problem using inverted dataflow graphs is innovative, leveraging efficient graph algorithms for practical results. The detailed microTVM implementation and validation across MCU architectures (up to 87% RAM reduction) highlight its real-world utility. The flexibility to tune fusion for varying IoT resource profiles is a significant advantage. Clarity and originality shine through, combining existing ideas creatively for TinyML. Weaknesses & Suggestions: Increased Computation Latency: The 2× to 5× latency increase for minimal RAM settings (Table 4) may limit real-time use. Exploring latency optimization could broaden applicability. 
Hardware-Specific Considerations: Performance varies across architectures (e.g., Xtensa vs. RISC-V, Table 4); more insights into hardware-specific tuning would enhance portability. Parameter and Architecture Exploration: The fixed output elements and H-cache focus limit the search space. Expanding to dynamic parameters or other architectures (e.g., transformers, RNNs) could amplify impact. Other Comments Or Suggestions: None Questions For Authors: None. The paper is clear, and limitations are acknowledged as future work, not requiring responses to shift my evaluation. Code Of Conduct: Affirmed. Overall Recommendation: 4
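The pruning from O(2^(N-2)) to O(N^2) noted in this review's summary is consistent with a dynamic program over contiguous fusion blocks; the sketch below uses a hypothetical per-block cost function `block_cost`, not the paper's actual RAM/MAC model:

```python
def min_cost_fusion(n_layers, block_cost):
    """O(N^2) DP over contiguous fusion blocks.

    block_cost(i, j) is a hypothetical cost of fusing layers i..j into
    one stage; the DP picks the cheapest partition of all layers into
    contiguous blocks, avoiding brute-force enumeration of all
    2**(N-2) fuse/no-fuse assignments.
    """
    INF = float("inf")
    best = [0.0] + [INF] * n_layers  # best[j] = min cost of layers 0..j-1
    for j in range(1, n_layers + 1):
        for i in range(j):  # last block covers layers i..j-1
            best[j] = min(best[j], best[i] + block_cost(i, j - 1))
    return best[n_layers]

# Toy cost: fusing a longer block trades memory for recomputation.
cost = lambda i, j: 1 + (j - i) ** 2
print(min_cost_fusion(4, cost))  # 4.0: two blocks of two layers each
```

The quadratic loop mirrors the shortest-path view: `best[j]` plays the role of the shortest distance to node j in the layer-boundary DAG.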
Rebuttal 1: Rebuttal: Thank you so much for the supportive comments and for your valuable suggestions for future work in various directions. We are incidentally working on exploring hardware-specific tuning, leveraging more advanced caching techniques and covering architectures other than CNN.
Summary: This paper presents msf-CNN; by leveraging traditional DAG and kernel fusion techniques, msf-CNN achieves 50% less peak memory usage while achieving similar inference performance. The paper's logic flow is clear, but it lacks novelty, and some experimental baselines are not strong. ## update after rebuttal After careful review of the authors' comments, I still want to maintain my review and score. Thanks Claims And Evidence: No. For example, the most important claim that the paper repeatedly highlights as a performance gain is not a clear claim, e.g., "msf-CNN can achieve inference with less than 50% the peak RAM usage state-of-the-art". What is meant by state-of-the-art, and in which dimension: latency? throughput? or accuracy? Methods And Evaluation Criteria: Methods: the paper uses very standard graph optimization and kernel fusion techniques at the ML compiler level. For example, using a DAG to describe computation flow, and leveraging kernel fusion to reduce peak memory I/O or reduce recomputation. Optimizing for either low memory usage or less compute makes sense. However, the baseline that this paper compared with is too weak. It compared against pure vanilla execution of sequential kernels without any basic kernel fusion techniques. Theoretical Claims: yes Experimental Designs Or Analyses: baseline too weak. Supplementary Material: N/A Relation To Broader Scientific Literature: This contribution can be helpful for reducing peak memory usage or reducing recomputation cost in CNN inference on edge devices. Essential References Not Discussed: TorchDynamo, TorchCompile, ExecuTorch (https://pytorch.org/executorch/stable/index.html) Other Strengths And Weaknesses: Strengths: 1. logic flow is very clear and easy to follow 2. theoretical analysis is easy to follow Weaknesses: 1.
methods used are very basic and standard practice in mainstream AI inference optimization, such as describing the compute flow as a DAG (same as torch.fx), kernel fusion (torch.compile, TorchDynamo), and deployment to edge devices (ExecuTorch). 2. In the experimental design, the vanilla baseline is too weak for comparison purposes. Any inference system nowadays includes some level of automatic kernel fusion techniques and kernel dimension fine-tuning. Therefore the vanilla baseline of sequential execution of every kernel is too weak to compare with. Other Comments Or Suggestions: In related work section 10, ". However, none of the above tools provide CNN fusion optimization mechanisms, in contrast to msf-CNN". I don't see that this is true. For example, in the TVM paper (https://homes.cs.washington.edu/~arvind/papers/tvm.pdf), section 3, operator fusion part, it clearly says conv2d can be fused with other element-wise kernels. Questions For Authors: Overall, the paper is trying to optimize for lower memory usage or less computation cost on edge devices, which is a promising direction. I have several questions: 1. how does this work compare with mainstream kernel fusion techniques such as torch.compile, TorchDynamo, torch fx.graph, ExecuTorch? 2. For inference, reducing peak memory usage or computation cost: how does this work differ from unsloth (https://github.com/unslothai/unsloth), liger (https://github.com/linkedin/Liger-Kernel) and similar work? 3. kernel fusion based on computation graph optimization is a well-defined area. As mentioned in this paper, TVM or microTVM already fully supports such a feature (also with an automatic fusion schema). Any novel contribution made in this paper? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you very much for your comments pointing out that a reader might potentially confuse **multi-stage fusion** (msf-CNN, our approach) on the one hand, and on the other hand traditional **kernel fusion** techniques. These two approaches are orthogonal and can be applied concurrently for maximum benefit. While kernel fusion optimizes computation overhead, msf-CNN targets instead memory efficiency, the latter being the critical first hurdle on small edge devices. As such, we could disambiguate this further in the related work section: - **Kernel fusion** [1-3] focuses primarily on reducing redundant data movements between GPU and RAM by combining multiple primitive operators (e.g. Batch Normalization, ReLU, Softmax, etc.) with a primary, memory-bound operator (e.g. conv, pooling) into a single kernel. While kernel fusion improves compute latency and data throughput, it does not address the fundamental memory usage issue that arises when processing multiple primary operators sequentially. - **multi-stage fusion (msf-CNN, our approach)** extends the idea of *patch-based fusion* [4, 5]. More specifically, msf-CNN: - Fuses multiple layers (i.e. primary operators like convolution and pooling) into a single computational stage. - Implements _patch-based partial computation_, which drastically reduces peak memory usage by processing input data in smaller patches while maintaining accuracy. - Introduces a compute-memory trade-off mechanism that allows users to prioritize either memory consumption or computational efficiency based on their deployment constraints. This makes msf-CNN fundamentally different from traditional kernel fusion techniques. To the best of our knowledge, the closest related works to msf-CNN are StreamNet and MCUNetv2, which have been the state-of-the-art for patch-based fusion on microcontrollers so far. 
Compared to this state of the art, based on our measurements, msf-CNN achieves up to 50% reduction of peak RAM usage in model inference, for the same inference accuracy, which thus enables more models to fit on smaller devices. We provide below our answers to the other questions you raised. **[Q1. how does this work compared with main-stream kernel fusion techniques such as torch.compile, torchDynamo, torch fx.graph, ExecuTorch?]** While mainstream tools like TorchDynamo and ExecuTorch implement kernel fusion to optimize compute latency, msf-CNN introduces a novel layer-level fusion strategy that specifically targets memory efficiency. Our approach can be applied in conjunction with existing kernel fusion techniques to achieve both latency and memory optimizations. Without loss of generality, we leave this for future development. **[Q2. how does this work different from unsloth, liger and similar work?]** unsloth and Liger-kernel focus on optimizing large language models (LLMs) for fine-tuning and (post-)training scenarios, primarily targeting GPU-based systems. Their optimizations, such as LoRA and FP8/4 quantization, are designed for LLM-specific workloads and do not target general-purpose CNN inference on edge devices. In contrast, msf-CNN is designed for general-purpose CNN inference on tiny edge devices with limited resources. **[Q3. kernel fusion based on computation graph optimization is well defined area...Any novel contribution made in this paper?]** The relation between kernel fusion and our method is explained above. Moreover, the key novel contributions of our work include: - A compute-memory trade-off framework with an efficient optimizer that allows users to optimize for specific resource constraints. - A patch-based fusion mechanism that enables layer-level fusion and significant memory savings. - A general-purpose solution for CNN inference on edge devices, which is distinct from existing tools focused on LLM optimization.
- An open source framework enabling ultra-low RAM footprint of neural network inference. Thank you again for your valuable feedback. Please let us know if you have further questions. **Reference** [1] Wang, et al. "Kernel fusion: An effective method for better power efficiency on multithreaded GPU." 2010 IEEE/ACM Int'l Conference on Green Computing and Communications & Int'l Conference on Cyber, Physical and Social Computing. IEEE, 2010. [2] Niu, Wei, et al. "Dnnfusion: accelerating deep neural networks execution with advanced operator fusion." Proceedings of the 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation. 2021. [3] Zhao, Jie, et al. "Apollo: Automatic partition-based operator fusion through layer by layer optimization." Proceedings of Machine Learning and Systems 4 (2022): 1-19. [4] Alwani, Manoj, et al. "Fused-layer CNN accelerators." 2016 49th Annual IEEE/ACM MICRO. IEEE, 2016. [5] Mei, Linyan, et al. "Defines: Enabling fast exploration of the depth-first scheduling space for dnn accelerators through analytical modeling." 2023 IEEE HPCA. IEEE, 2023.
Solving Satisfiability Modulo Counting Exactly with Probabilistic Circuits
Accept (poster)
Summary: This paper presents a new exact satisfiability-modulo-counting solver: `Koco-SMC`, and demonstrates its performance on benchmarks from a UAI benchmark set, compared to existing state-of-the-art exact and approximate SMC solvers. The paper attempts to address a weakness in existing exact solvers: the need for them to alternate between a SAT solver and a probabilistic inference solver. It does so by incorporating an Upper Lower Watch (ULW) algorithm that keeps track of upper and lower bounds on the probabilistic inference parts of the problem, learning conflict clauses from those bounds as it goes. The `Koco-SMC` pipeline relies on knowledge compilation to create a decision diagram that is converted into an arithmetic circuit, which is used for the inference. It incorporates branching heuristics, propagation and conflict-driven clause-learning techniques from the literature. In the experimental evaluation, `Koco-SMC` seems especially strong for problems where the probabilistic constraints are very hard to satisfy, or even result in unsatisfiable problems. `Koco-SMC` seems to refute unsatisfiable instances much faster than existing methods, due to its ability to detect conflicts instead of having to search a large search space. Claims And Evidence: I find that largely the claims about the experimental evaluation seem to be backed up by the data. I do have some issues with the presentation of some of the data and the apparent conclusions that are drawn from it. I also feel that the paper may be missing some relevant literature, and therefore may also be missing some SOTA in its evaluation. See comments below. Methods And Evaluation Criteria: Based on the main text, I do not understand how the benchmark instances were selected for the experiment. I cannot find a URL to download the instances, or a reference to which instances were used, exactly. I have no idea what "50 models over binary variables are kept." means in this context?
As I understand it, the benchmarks are created by somehow Frankensteining probabilistic constraints and SAT formulae together, but it is not clear to me how. I do like that the description of the used solvers is finally made quite explicit in Section 5.1. Theoretical Claims: There is a lemma, with a sketch of the proof. I'm not sure that a lemma is needed here. I find that it is phrased somewhat awkwardly ("equality *can* be achieved"? so sometimes it isn't?), and I feel like it does not add much compared to the existing (cited) literature on computing probabilities with arithmetic circuits. Hence, in my opinion this lemma does not add much. I would rather want to see the space it takes up now used to discuss more of the existing literature, or to clarify certain parts of the implementation or empirical evaluation (see also my notes elsewhere in this review on those topics). Experimental Designs Or Analyses: I find some of the claims insufficiently supported, in part because I find the way in which they are presented somewhat misleading. An example is Figure 3. Its caption claims that "Koco-SMC solves 80% of SMC problems in 20 ... ". However: as I understand the text, the figure shows the result for one problem ("specific combination (3-color-5x5.cnf with smokers_10,uai)", the meaning of which is also unclear to me) for different thresholds $q$. This implies to me that the problem structure (and thus the compiled probabilistic circuit) remains the same. As I understand it, the experiments show running time including the compilation time. Given that compilation time can certainly vary quite a lot from instance to instance (and are independent of threshold $q$), I find that presenting those times for just one problem instance (for varying $q$) a bit reductive. 
Furthermore, the claim that "our method requires significantly less time across most instances" does not seem to be backed up by any statistical tests that would demonstrate statistical significance, and it is not clear to me what this sentence even means (less time than what? what does "across most instances" mean?). I do think that reading Appendix C in detail would inspire trust in the quality of the empirical evaluation. However, my above comments on how I find the claims related to Figure 3 somewhat misleading still stand. Supplementary Material: I skimmed through Appendices A and B. Found them helpful. Briefly glanced through Appendix C. Looks like the contents are sufficient to inspire trust in the empirical results. Relation To Broader Scientific Literature: As I understand it, the main contribution improves on the reported state of the art by learning conflicts directly from the probabilistic constraints, instead of alternating between a SAT solver and a solver for probabilistic inference. In my opinion, the paper misses some of the relevant literature, see remarks below. Essential References Not Discussed: The reference for `CNFgen` is missing in the main text of the paper (and it only has a footnote in the appendix). In my opinion, this paper should mention prior work on reasoning over constraints on the success probability of a literal-weighted CNF formula, and on the computation of upper and lower bounds on the weighted model count of such a formula. The following two works come from a line of work that presents algorithms for optimisation problems that require solving a constraint $C$ of the form $C := \text{Pr}(F(X,Y)) \bowtie \theta$, where $F(X,Y)$ represents a CNF on decision variables $X$ and random variables $Y$, where $0 < \theta \leq 1$ is a threshold on the success probability, and $\bowtie \in \{\leq, \geq\}$.
They leverage existing MILP and CP technology to aid in branching, propagation and conflict learning, as well as creating a propagation algorithm specifically for probabilistic constraints on arithmetic circuits, allowing for the stochastic constraint to be combined with other constraints. Since they allow for multiple constraints of the above form to be added, and are similar in terms of approach (knowledge compilation, CNF, etc), I feel like they are close enough to the contents of this paper to merit at least a mention in the related work section, but probably a more detailed discussion or comparison: A. L. D. Latour, B. Babaki, A. Dries, A. Kimmig, G. Van den Broeck, and S. Nijssen, ‘Combining stochastic constraint optimization and probabilistic programming — from knowledge compilation to constraint solving’, in _Proceedings of the 23rd international conference on principles and practice of constraint programming (CP 2017)_, in Lecture notes in computer science, vol. 10416. Springer, 2017, pp. 495–511. A. L. D. Latour, B. Babaki, and S. Nijssen, ‘Stochastic Constraint Propagation for Mining Probabilistic Networks’, in _Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence_, Macao, China: International Joint Conferences on Artificial Intelligence Organization, Aug. 2019, pp. 1137–1145. doi: [10.24963/ijcai.2019/159](https://doi.org/10.24963/ijcai.2019/159). The following work studies the use of knowledge compilation to derive lower and upper bounds on the success probability of a literal-weighted input Horn formula. Despite the title mentioning approximation guarantees, I believe the methods provided in this work also provide true weighted model counts. Given that the proposed work puts a lot of emphasis on the bounds computation, I feel that this literature should probably be mentioned: A. Dubray, P. Schaus, and S. 
Nijssen, ‘Anytime Weighted Model Counting with Approximation Guarantees for Probabilistic Inference’, presented at the International Conference on Principles and Practice of Constraint Programming (CP), 2024. doi: [10.4230/LIPIcs.CP.2024.10](https://doi.org/10.4230/LIPIcs.CP.2024.10). In the same spirit I think this work could merit a mention, since it concerns satisfiability modulo counting the number of solutions of a set of mixed-integer constraints, again keeping track of lower and upper bounds: C. Ge and A. Biere, ‘Improved Bounds of Integer Solution Counts via Volume and Extending to Mixed-Integer Linear Constraints’, presented at the International Conference on Principles and Practice of Constraint Programming (CP), 2024. doi: [10.4230/LIPIcs.CP.2024.13](https://doi.org/10.4230/LIPIcs.CP.2024.13). Finally, the following work also focuses on knowledge compilation for lower and upper bounds on the success probability of a literal-weighted input CNF, certifying the correctness of those bounds. This bit of literature is maybe a bit further from the contents of the proposed work, but I believe that it might still merit a mention: C. Cheng, Y.-R. Luo, and J.-H. R. Jiang, ‘Knowledge Compilation for Incremental and Checkable Stochastic Boolean Satisfiability’, presented at the Thirty-Third International Joint Conference on Artificial Intelligence, Aug. 2024, pp. 1862–1872. doi: [10.24963/ijcai.2024/206](https://doi.org/10.24963/ijcai.2024/206). I also find it strange that [Choi et al., 2022] is used as the reference for Smooth & Decomposability. Why not cite A. Darwiche, ‘On the Tractable Counting of Theory Models and its Application to Truth Maintenance and Belief Revision’, Journal of Applied Non-Classical Logics, vol. 11, no. 1–2, pp. 11–34, Jan. 2001, doi: 10.3166/jancl.11.11-34. for smoothness and A. Darwiche, ‘Decomposable negation normal form’, _J. ACM_, vol. 48, no. 4, pp. 608–647, Jul. 
2001, doi: [10.1145/502090.502091](https://doi.org/10.1145/502090.502091). A. Darwiche, ‘Compiling knowledge into decomposable negation normal form’, in _Proceedings of the sixteenth international joint conference on artificial intelligence, IJCAI 99, Stockholm, Sweden, july 31 - august 6, 1999. 2 volumes, 1450 pages_, T. Dean, Ed., Morgan Kaufmann, 1999, pp. 284–289. [Online]. Available: [http://ijcai.org/Proceedings/99-1/Papers/042.pdf](http://ijcai.org/Proceedings/99-1/Papers/042.pdf) for decomposability? Or maybe the classic A. Darwiche and P. Marquis, ‘A Knowledge Compilation Map’, _Journal of Artificial Intelligence Research_, vol. 17, pp. 229–264, Sep. 2002, doi: [10.1613/jair.989](https://doi.org/10.1613/jair.989) ? If the paper must cite Choi et al. 2022, then please fix the currently incorrect representation of Guy Van den Broeck's name? It's rendered correctly in [Kisa et al., 2014]. I find in general that the bibliography is very sloppy. [Darwiche 1999] and [Darwiche 2002] have "Citeseer" in their bibliography entries. The reference for [Shet et al. 2023] is a preprint, even though that work also seems to have a published version. There is some weirdness going on in the pdf where there are a lot of instances of empty space hyperlinking to the bibliography entry for [Li et al., 2024], which itself does not list page numbers. A lot of titles are rendered incorrectly, with letters that should be in uppercase being rendered as lower-case, instead. Furthermore, some items use abbreviated venue titles, while others write them out fully. Other Strengths And Weaknesses: I really like the detailed case studies that are described in the paper. I also want to compliment the authors on their efforts to make the line graphs b+w printer-friendly. The paper contains few typos, and I find that it reads well in general. 
Other Comments Or Suggestions: lines 47-48: I'm a bit surprised to find only references to workshops here, not to actual literature, except for Sheth et al 2023, which is a reference to a preprint even though a published version of that same work is also available. - Figure 2: poor formatting. Subfigures are not actually formatted as such, which results in a mismatch of fonts and an inability to search the figure for text. - line 80: I would really like to see references here so I know which "Current exact SMC solvers" the paper is referring to. - line 90: why is there a hyperlink to the [Li et al., 2024] entry in the bibliography at the start of this paragraph? - lines 102-103: what is meant with "the largest dataset based on the UAI Competition benchmark"? Could this be made more specific? As I understand it, each edition of UAI has a different benchmark set? It would be helpful if the paper was more transparent about which benchmarks were used exactly, and if there was a reference here. - lines 105-106: It would help my understanding if the paper explained the notation of $\{f_i\}^{K}_{i=1}$. What is $K$? What is $i$? Why does $i=1$ need to be there? - Lines 108-110: Is it essential that $|\mathbf{x}| = |\mathbf{y}_i| = |\mathbf{z}_i|$? Why or why not? Are there any constraints on the value of $L$? - Line 143: "Formally, PC" missing indefinite article. - Line 157: "The root node $r$ in the graph has no parent node." This confuses me. Seems obvious, right? Why does it need specifying? - The use of the word "model" is somewhat overloaded in the paper. It seems that in line line 137 it means "a satisfying assignment to a Boolean satisfiability formula", in line 158 it means "a representation of a probability distribution", and in line 315 it means "problem instance"? Do I understand this correctly? Could the writing be improved by being more explicit about this overloading? - Line 151: why is there no part (d)? - Line 155 "An existing exact SMC" solver: Which one? 
Can I have a reference? Or is the intended meaning "any existing exact SMC solver that we are aware of"? - Line 185: "SMC solver" missing "s". - Line 350: I find "facing SMC problems" a very awkward description of what is going on. - Line 352: Should this and the following $Q$s be $q$s, instead? - Line 355-356: I have no idea what "specific combination (3-color-5x5.cnf with smokers_10,uai)" means. - I will die on this hill: an x-axis is only an x-axis if it is labelled with "x". Otherwise, it is likely a horizontal axis, or a time axis, or in this case maybe a $q$-axis, or whatever is appropriate. Similar hill for y-axis. - There are spaces missing here and there after Figure numbers. Questions For Authors: Q1. Can the authors please reflect on how their work relates to the work presented in [Latour et al., 2017], [Latour et al., 2019], and [Dubray et al., 2024] (see above)? As I currently understand the paper, I feel like these are relevant enough to be mentioned, or maybe even compared against. For now, this is a reason for me to recommend rejection. Q2. Can the authors please elaborate on their claim that "Our method typically requires significantly less time across most instances."? As argued above, I find this claim, especially in the context of Fig 3, somewhat misleading. As I explained above, I do not know what it means, which for now also makes me less eager to recommend accepting the work. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you very much for the detailed review. We will address your concerns and questions in the following reply. --- ### **Q1: Related works** We deeply appreciate your effort in pointing out the lines of work we missed. We have carefully reviewed the related literature and summarized the main changes below: 1. Relevance to upper and lower bound computation: [Link](https://anonymous.4open.science/r/anonym_koco_smc-61FD/plot/related-bounds.png) 2. Relevance to Stochastic Constraint Optimization Problems: [Link](https://anonymous.4open.science/r/anonym_koco_smc-61FD/plot/related-optimization.png) 3. Modified citations for PCs: [Link](https://anonymous.4open.science/r/anonym_koco_smc-61FD/plot/related-pc.png) --- ### **Q2: SMC problem definition** 1. What is {$f_i$}$_{i=1}^K$? It means a set of functions {$f_1,\ldots, f_K$}. $K$ is the total number of probabilistic constraints. 2. Is it essential that $|x|=|y_i|=|z_i|$? No, it is not, because $x$ represents the decision variables, and $y$, $z$ are marginalized-out latent variables. Their lengths can be arbitrary. We revised the notation in our problem formulation as follows and hope this answers your question. [Link to the revised version](https://anonymous.4open.science/r/anonym_koco_smc-61FD/plot/problem-formulation.png) --- ### **Q3: Lemma 1 seems redundant** Our Lemma 1 holds, or equivalently, Equation 3 holds, when the PC satisfies the smoothness and decomposability properties. Otherwise, the computed upper and lower bounds (UB and LB) through our procedure may not be valid; that is, Equation 3 may be violated for some value assignments to the remaining variables. Thus, it is necessary to place Lemma 1 in the main text. --- ### **Q4: Experiments** 1. Dataset details Each SMC instance consists of a CNF file for the Boolean SAT constraint, a UAI file for the probabilistic constraint, and a threshold value. 
The original CNF files (9 files generated by CNFGen) and UAI files (50 files selected from UAI competitions held from 2010 to 2022, see details in Appendix C.4) are in the anonymous repository: `https://anonymous.4open.science/r/anonym_koco_smc-61FD/data/` The threshold values are available in the file: `anonymous.4open.science/r/anonym_koco_smc-61FD/data/benchmark_ins/data/inst_[cnf name]_[uai name]/thresholds.txt` For the compiled probabilistic circuit files, most of them are excessively large (>1 GB), so we didn’t include them in the repository for now. 2. Detailed settings for Figure 3 In Figure 3, we conduct an experiment on a single instance, where the CNF uses the 3-color-5x5.cnf file (representing a 3-coloring problem on a 5×5 grid map), and the probabilistic distribution uses the smokers-10.uai file (from the UAI 2012 Competition), with various thresholds $q$. The running time of our approach includes the knowledge compilation time for a fair comparison. We conduct extended experiments in Figures 14, 15, and 16. 3. Regarding 'our method requires significantly less time across most instances' According to the results in Figures 3, 14, 15, and 16, we find our method is more efficient than baselines in most cases. In some cases, like Figure 14(c), our method shows inferior running time performance due to huge compilation overhead, which agrees with your statement. --- ### **Q5: Clarification on “Current exact SMC solvers (line 80)” and "An existing exact SMC solver (line 155)"** We realize that these phrases are misleading as they suggest the existence of an actual tool. To clarify, we have revised the text accordingly. ``` Since there is no general exact SMC solver, solving SMC problems exactly requires combining tools from SAT solving and probabilistic inference. ``` --- ### **Q6: Other issues** 1. On line 90, we didn’t see the hyperlink in the PDF. We would appreciate it if you could provide more context. 2. Bibliography issue. 
We've carefully revised the bibliography. [Link to new Bibliography](https://anonymous.4open.science/r/anonym_koco_smc-61FD/plot/references.png) 3. Language and writing issues: We have corrected the language errors according to your suggestions. 4. Line 157: "The root node in the graph has no parent node." It is changed to: “The root node is the final output of the probabilistic circuit. It represents the overall probability distribution over the variables.” 5. The use of the word "model" is somewhat overloaded in lines 137, 158, and 315. We have cleaned up the usage of the word “model” in these lines for clarity. 6. The use of the x-axis and y-axis for Figure 3 caption. We have cleaned up the related text. It should be: [Link to the revised Figure 3](https://anonymous.4open.science/r/anonym_koco_smc-61FD/plot/figure3.png) --- Thank you again for your detailed review—your comments have been incredibly helpful in improving the paper. We hope this addresses your concerns, and we’d be happy to discuss further if you have any additional feedback. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for your detailed rebuttal. Here are some follow-up questions regarding your answers: Q2, related works point 2: Can the authors elaborate on what they mean by "Our KOCO-SMC is more specialized for this kind of problem"? Specialised how? What exactly are the authors referring to with "this kind of problem"? Q6, point 1: the point is that the hyperlink is invisible until you "mouse-over" it. It happens when I mouse-over the left side of the "W" that starts line 90. Q6, point 2: bibliography still messy. Still faulty rendering of Guy Van den Broeck's name in IJCAI 2020 workshop reference. Should be "Van den Broeck, Guy". Didn't check the rest. Kind regards --- Reply to Comment 1.1.1: Comment: Dear reviewer, We deeply appreciate your comments. 
--- ### **Q2 related works** **Connection between our method and existing works:** Stochastic Constraint Optimization Problems (SCOPs) have an intrinsic connection with SMC problems. This connection enables the transformation of SMC problems into SCOPs, which can then be reformulated as Mixed-Integer Linear Programs (MILP) or Constraint Programming (CP) problems. Such reformulations allow us to take advantage of powerful, off-the-shelf solvers that utilize techniques like branch-and-bound to solve these problems efficiently. **Our novelty:** KOCO-SMC is an exact solver tailored for general SMC problems, combining SAT solvers with probabilistic inference techniques. Our experiments specifically demonstrate the effectiveness of the ULW algorithm compared with vanilla baselines. In future work, we plan to compare our approach with MILP and CP solvers to identify the strengths and weaknesses in solving SMC problems. --- ### **Q6 point 1** Thank you for the detailed explanation. We have identified the issue—it was caused by a template problem on line 74, where the reference (Li et al., 2024) was split across two pages. We've fixed it accordingly. --- ### **Q6 point 2** Thank you for confirming the name "Van den Broeck, G." We identified that the issue originated from the DBLP bibliography entry with its auto-formatting style. We have corrected the name using the proper format and have reviewed all other information as well. --- We sincerely thank you for your response. We hope this addresses all of your concerns. Please don’t hesitate to let us know if there's anything further you'd like to discuss—we would greatly appreciate the opportunity to continue the conversation. Best regards
Summary: This paper investigates the Satisfiability Model Counting problem, an extension of SAT that incorporates constraints involving probabilistic inference. The authors propose, KOCO-SMC, an efficient exact SMC solver that leverages probabilistic circuits through knowledge compilation to accelerate repeated probability estimation. To enhance efficiency, they introduce the ULW algorithm, which monitors lower and upper bounds of probabilistic inference for early conflict detection. Empirically, KOCO-SMC significantly outperformed existing exact and approximate SMC solvers on real-world problems. ## update after rebuttal I thank the authors for their response and maintain my acceptance recommendation. Claims And Evidence: The properties of the ULW algorithm are supported by theoretical analysis, while empirical evaluations confirm the superior runtime performance of KOCO-SMC compared to existing methods. Methods And Evaluation Criteria: The proposed methods behind KOCO-SMC, along with its empirical evaluation, are well-suited for SMC problems. Theoretical Claims: I have verified the claim in Lemma 3.1, and it is correct. Experimental Designs Or Analyses: I reviewed the experimental designs for the UAI dataset, the two real-world problems, and the ablation study for ULW. The designs and analyses appear sound to me. Supplementary Material: No Relation To Broader Scientific Literature: The SMC problem can encode both symbolic and probabilistic constraints and has found applications in real-world scenarios. This work introduces an efficient exact SMC solver, advancing the practical applicability of SMC problems. Previous works have primarily focused on approximate methods or relied on a straightforward combination of a SAT solver and a probabilistic inference solver, often resulting in poor efficiency. Essential References Not Discussed: Citations are missing in the Probabilistic Inference and Model Counting section of the related works. 
Other Strengths And Weaknesses: N.A. Other Comments Or Suggestions: Please add a citation to CaDiCaL. At line 200, "into" is missing between “probabilistic constraints” and “probabilistic circuits”. Questions For Authors: Beyond conflict detection, is KOCO-SMC able to derive propagations from probabilistic inference constraints? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your review and suggestions. --- ### **Q1: Beyond conflict detection, is KOCO-SMC able to derive propagations from probabilistic inference constraints?** This is a very good question. In our context, propagation specifically refers to unit propagation, which derives new variable assignments based on current assignments. Currently, we only perform unit propagation on the Boolean constraints and not on the probabilistic constraints. The reason is as follows: consider the constraint $\sum_y f(x_1, x_2, x_3, y) > q$. Suppose $x_1$ and $x_2$ have already been assigned, and $x_3$ remains unassigned. Since $f(\cdot)$ is represented as a complex probabilistic circuit (PC), we cannot directly derive the assignment for $x_3$. Instead, we need to evaluate both cases—assigning $x_3$ as True and False—because this assessment can yield three possible outcomes: (1) Both assignments (True and False) satisfy the constraint. (2) Only one assignment satisfies the constraint. (3) Neither assignment satisfies the constraint. --- ### **Q2: Add Citations** We have added a citation to CaDiCaL and included additional references in the "Probabilistic Inference and Model Counting" section: [1] Chavira, Mark, and Adnan Darwiche. "On probabilistic inference by weighted model counting." *Artificial Intelligence* 172.6-7 (2008): 772-799. [2] Cheng, Qiang, et al. "Approximating the sum operation for marginal-MAP inference." *Proceedings of the AAAI Conference on Artificial Intelligence*. Vol. 26, No. 1, 2012. [3] Gomes, Carla P., Ashish Sabharwal, and Bart Selman. "Model counting: A new strategy for obtaining good bounds." *AAAI*. Vol. 6, 2006. [4] Achlioptas, Dimitris, and Panos Theodoropoulos. "Probabilistic model counting with short XORs." *International Conference on Theory and Applications of Satisfiability Testing*. Springer International Publishing, Cham, 2017. --- ### **Q3: Writing typos** We have added the missing word "into" at line 200. 
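---

The three-outcome case analysis from Q1 can be illustrated with a minimal brute-force sketch. The factor `f`, the variable names, and the threshold below are toy assumptions for illustration only, not the compiled circuits or benchmarks from the paper (which evaluate marginals on a probabilistic circuit rather than by enumeration):

```python
from itertools import product

def marginal(f, assignment, latent_vars):
    """Sum f over all assignments to the latent variables y."""
    total = 0.0
    for bits in product([False, True], repeat=len(latent_vars)):
        full = dict(assignment, **dict(zip(latent_vars, bits)))
        total += f(full)
    return total

def probe_unassigned(f, partial, var, latent_vars, q):
    """Check the constraint sum_y f(x, y) > q for both values of an
    unassigned decision variable, yielding one of three outcomes."""
    sat_values = [v for v in (False, True)
                  if marginal(f, dict(partial, **{var: v}), latent_vars) > q]
    if len(sat_values) == 2:
        return "both satisfy"
    if len(sat_values) == 1:
        return f"only {var}={sat_values[0]} satisfies"  # forced assignment
    return "neither satisfies"  # conflict: backtrack

# Toy weighted factor (hypothetical, not from the paper).
def f(a):
    return 0.6 if (a["x1"] or a["y1"]) and a["x3"] else 0.1

# x1, x2 already assigned; x3 unassigned; y1 is latent.
print(probe_unassigned(f, {"x1": True, "x2": False}, "x3", ["y1"], 0.5))
# prints: only x3=True satisfies
```

Evaluating the constraint under both values of the unassigned variable either leaves both branches open, forces an assignment, or detects a conflict, matching the three possible outcomes listed above.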
--- ### **Q4: Ethical Review Flag** We noticed that you flagged this paper for an ethics review. We wanted to confirm whether this was done by accident. We would greatly appreciate further details to properly address any concerns. --- We greatly appreciate your insightful comments. If you have any additional suggestions or questions, please feel free to let us know—we are happy to address them. --- Rebuttal Comment 1.1: Comment: The ethics flag was applied by mistake. I have removed it.
Summary: This paper aims to provide an efficient solution to the satisfiability modulo counting problem. It proposes to use probabilistic circuits to encode the propositional formula which is further combined with a conflict-driven clause learning framework to compute bounds for the marginal distributions. Empirical evaluations on the resulting algorithm are further presented. Claims And Evidence: At the end of Section 3.1, the paper claims that the current solvers suffer from the fact that they need to go back and forth invocation between SAT solver and the probabilistic inference process. Still, in the proposed algorithm solver Koco-SMC, it uses the CDCL framework and there's still the back-and-forth since when an unsat assignment comes up, it augments the PC with the unsat clauses just like what the baseline does. Further, since the PC is required to be both smooth and decomposable in this framework, the process of adding learned clauses might introduce non-negligible computational cost. It is unclear to me why the proposed framework is more efficient. Methods And Evaluation Criteria: I don't fully understand the problem formulation of SMC: 1) What are the differences between variables x, y and z in Equation (1) and (2)? 2) It is unclear what the given probabilities are but only the constraints on the marginal distributions are defined as in Equation (1) and (2). Are some probability tables also defined in the formulation of SMC as to the conditional probability tables in Bayesian networks? (3) I don't understand the initialization as described at the end of Section 3 due to the previous two confusions. (4) How is the 0.1 at Line 165 obtained? Theoretical Claims: Not applicable. Experimental Designs Or Analyses: The experimental results seem sound and the proposed algorithm outperforms the baseline methods. Supplementary Material: I mainly read the figures to understand the main paper. 
Relation To Broader Scientific Literature: It contributes to the field of statistical machine learning. Essential References Not Discussed: This work is closely related to the field of weighted model counting where some of the WMC algorithms are also CDCL [1] based but no reference is presented. [1] Möhle, Sibylle, and Armin Biere. "Combining Conflict-Driven Clause Learning and Chronological Backtracking for Propositional Model Counting." GCAI. 2019. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: See Methods And Evaluation Criteria. - What is the time complexity required to add the learned clauses to PCs? Would it blow up the size of PCs? - It seems only when the upper bound is lower than q or the lower bound is greater than q, there would be definite outcomes for satisfiability. What if it's neither case? Would it result in approximate solutions instead of being an exact solver? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your careful review and for raising valuable questions. --- ### **Q1: Why is Koco-SMC More Efficient?** In general, KOCO-SMC saves time by detecting conflicts early using partial variable assignments, whereas baseline solvers require full variable assignments. - **Baseline Solvers:** These solvers require multiple back-and-forth interactions between SAT-solving and probabilistic inference. Typically, they first assign **all decision variables** using the SAT solver before checking probabilistic constraints. When a conflict occurs, baseline solvers can only learn to avoid repeating that specific complete assignment. - **Koco-SMC:** In contrast, Koco-SMC performs back-and-forth checks after assigning **just a single variable**. It immediately updates the upper and lower probability bounds using the efficient ULW algorithm to detect conflicts. This approach allows conflicts to be discovered much earlier—after only partial assignments—thus avoiding unnecessary evaluations of remaining variables. **Extreme Example:** When probabilistic constraints in an SMC problem are inherently unsatisfiable, Koco-SMC can confirm unsatisfiability by assigning just a single decision variable. Baseline solvers, however, must enumerate and test all potential assignments generated by the SAT solver. --- ### **Q2: Clarification of the Problem Formulation** **(1) What are the differences between variables x, y, and z in Equations (1) and (2)?** The variables **x** denote the decision variables, while **y** and **z** are latent variables that are marginalized out. **(2) It is unclear what the given probabilities are in Equations (1) and (2).** The given probability distributions can be any weighted functions mapping Boolean variables to real numbers—for example, probabilistic circuits, Markov random fields, or Bayesian networks. The SMC problem formulation imposes no specific constraints on these probability distributions. 
For KOCO-SMC, we specifically compile probability distributions into probabilistic circuits because PCs have structural properties that significantly benefit our ULW algorithm. **(3) I don't understand the initialization as described at the end of Section 3 due to the previous two confusions.** The initialization mentioned in line 226 refers to the computation of the initial upper and lower probability bounds within the PC. **(4) How is the value 0.1 at line 165 obtained?** This value is provided as an illustrative example demonstrating why our method is more efficient. Detailed steps for this calculation are included in Appendix Figure 7. Specifically, line 165 illustrates the calculation using the example distribution represented by the probabilistic circuit in Figure 2(c). By setting **x₁** and **x₂** to **True**, we can compute the marginal probability over **x₃** and **x₄** as **0.1**. --- ### **Q3: Answers to Specific Questions** - **What is the time complexity required to add the learned clauses to PCs?** The time complexity for adding learned clauses is constant. Our approach adds learned clauses only to the Boolean formula to prevent repeated conflicts. The structure of the PCs remains unchanged. - **What if neither bound condition is met? Would it result in approximate solutions rather than an exact solver?** No, it would not lead to approximate solutions. Instead, it implies that additional variables must be assigned to narrow down the upper and lower bounds. Consider a constraint such as $\sum_y f(x,y) > q$. If the lower bound of $\sum_y f(x,y)$ is greater than $q$ (or the upper bound is less than $q$), we can conclude SAT (or UNSAT). Otherwise, we continue assigning variables. Once all variables are assigned, $\sum_y f(x,y)$ will yield an exact numeric value without ambiguity. Thus, Koco-SMC remains an exact solver. --- ### **Q4: Other Issue** Thank you for suggesting the WMC algorithms based on CDCL. 
We have now incorporated these references and other relevant related work into the paper. --- Thank you again for your constructive comments. If there are any additional points you'd like us to clarify or discuss, please feel free to let us know.
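---

As a concrete illustration of the bound-based pruning discussed in Q1 and Q3 above, the following is a minimal brute-force sketch. The toy factor `weight`, the variable names, and the threshold are illustrative assumptions only; the actual solver computes such bounds on a compiled probabilistic circuit, not by enumerating completions:

```python
from itertools import product

def marginal_bounds(f, partial, free_dec, latent):
    """Brute-force lower/upper bounds of sum_y f(x, y) over all
    completions of the unassigned decision variables."""
    vals = []
    for bits in product([False, True], repeat=len(free_dec)):
        x = dict(partial, **dict(zip(free_dec, bits)))
        vals.append(sum(f(dict(x, **dict(zip(latent, ybits))))
                        for ybits in product([False, True], repeat=len(latent))))
    return min(vals), max(vals)

def check(f, partial, free_dec, latent, q):
    """Early decision for the constraint sum_y f(x, y) > q."""
    lb, ub = marginal_bounds(f, partial, free_dec, latent)
    if lb > q:
        return "SAT under every completion"             # prune: constraint holds
    if ub <= q:
        return "conflict: no completion can satisfy"    # early backtrack
    return "undecided: keep assigning"

# Toy weighted factor (hypothetical, not from the paper).
def weight(a):
    return 0.3 if (a["x1"] and a["y1"]) else 0.05

# Assigning just x1 already decides the constraint for q = 0.2.
print(check(weight, {"x1": False}, ["x2", "x3"], ["y1"], 0.2))
# prints: conflict: no completion can satisfy
```

This mirrors the point made in the rebuttal: after assigning a single decision variable, the bounds can already certify a conflict, without enumerating the remaining assignments.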
Summary: Satisfiability modulo counting (SMC) is a generalisation of SAT that consists of a propositional formula phi(x,b) and a collection of statements of the forms 1) the marginalisation of a discrete probability function f(x,y) (marginalising the variables in y) is at least some constant q, 2) the marginalisation of a discrete probability function f(x,y) (marginalising the variables in y) is at least the marginalisation of a discrete probability function g(x,z) (marginalising the variables in z). The truth values of statements of the forms 1) and 2) determine the values of the boolean variables in b. That is, an SMC instance asks whether values for x can be given such that phi(x,b) holds, where the variables in b are interpreted as the truth values of statements of the form 1) or 2). The authors introduce an exact solver for SMC, which they call Koco-SMC. The authors claim that previous approaches to integrate SAT solvers into this task boil down to having a SAT solver enumerate all possible solutions to phi(x,b), and then using a probabilistic inference solver to check whether the constraints 1) and 2) hold, as dictated by the selection of the variables in b. The approach that the authors take in Koco-SMC is to use standard tools for CSPs (e.g., propagation, conflict clause learning, backtracking) to pick partial assignments to variables in phi(x,b) and then to check whether those partial assignments for variables yield a contradiction for the probability statements. The authors construct probabilistic circuits for computing the value of the functions mentioned in 1) and 2). The use of such circuits allows the authors to efficiently compute upper and lower bounds for the values of the functions under partial assignments. These lower and upper bounds can then be used to infer contradictions regarding the probability statements. Finally, the authors experimentally compare their solver to other approximate and exact solvers for solving SMCs. 
Their findings show that their approach is more efficient than others in the literature. ## update after rebuttal I maintain my evaluation. Claims And Evidence: The paper is well written and presented. The claims given are supported. Methods And Evaluation Criteria: The authors give a comprehensive comparison of their approach to others in the literature. The methods and evaluation criteria seem suitable for the task. Theoretical Claims: The description of their approach in the main part of the paper seems plausible and correct. I did not check the correctness of all the claims from the appendix. Experimental Designs Or Analyses: I did not check details related to the experiments. Supplementary Material: I did not review supplementary material outside of the appendix. I did not check all the details from the appendix. Relation To Broader Scientific Literature: The authors do a good job in setting the scene of the paper and in positioning their results with respect to literature. Essential References Not Discussed: I am not aware of related works that should be cited. Other Strengths And Weaknesses: I think the main abstract contribution of the paper is to use probabilistic circuits as an efficient method to compute upper and lower bounds for the probabilistic functions, which can then be used to yield contradictions. This approach also requires the use of partial assignments, but there the use of CSP techniques is quite standard. Otherwise, from the theoretical point of view, the paper is not that deep. However, if this approach is indeed novel in this context, it is a valuable contribution. Also, the experimental results show that their approach works better in practice than the previous approaches. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your review and your positive assessment of our paper. We appreciate your concise summary of our contributions and your acknowledgment of the practical value demonstrated by our experimental results. --- ### **Q1. Main Contribution** We propose an integrated **exact SMC solver**, addressing the existing gap in this research community, which previously only had approximate solvers available. Our method achieves higher computational efficiency compared to vanilla exact solvers by utilizing our proposed ULW algorithm. Vanilla exact solvers directly combine SAT solving and probabilistic inference processes, resulting in slow performance due to frequent back-and-forth invocations between these two solvers. --- ### **Q2. Novelty of Our ULW Algorithm** Our ULW algorithm is inspired by the CSP, SAT, and optimization literatures. We have revised the related work section to further clarify the novelty of our contributions: 1. **Relevance between ULW and Upper and Lower Bound Computation**: ``` Our KOCO-SMC efficiently tracks upper and lower bounds for probabilistic constraints with partial variable assignments. In literature, (Dubray et al., 2024; Ge & Biere, 2024) compute approximate bounds for probabilistic constraints based on the DPLL algorithm. (Choi et al., 2022; Ping et al., 2015; Marinescu et al., 2014) provide exact bounds by solving marginal MAP problems, yet they are more time-consuming than our method. ``` 2. **Relevance between SMC and Stochastic Constraint Optimization Problems**: ``` Stochastic Constraint Optimization Problems (Latour et al., 2017; 2019) can be formulated as MILP or CP problems using stochastic constraints to solve SMC problems. Our KOCO-SMC is more specialized for this kind of problem. ``` The related works above bridge the connection to the existing theoretical foundations. --- Thank you once again for your valuable feedback. 
Please let us know if you have any further concerns, and we would be happy to discuss them.
Rethink the Role of Deep Learning towards Large-scale Quantum Systems
Accept (poster)
Summary: The authors conduct a thorough investigation of Deep Learning (DL) vs Machine Learning (ML) methods and all of the design choices surrounding them for the tasks of quantum system learning (QSL). They consider the tasks of quantum phase classification (QPC) and ground state property estimation (GSPE) and study three Hamiltonians. The authors point out that DL methods require more quantum resources. The authors conduct thorough experiments to test scaling laws of DL and ML methods, the effect of measurements as data features in GSPE and QPC tasks, and offer some observations to the QSL community. ML methods: linear regressors, kernel methods, and tree methods. DL methods: multilayer perceptrons, convolutional neural networks, and self-supervised learning. Raw measurement outcomes as embeddings show little improvement to DL models, per a randomization test. ## update after rebuttal I have monitored the rebuttal and will maintain my score. Thank you to the authors for their responses. Claims And Evidence: The authors have supported the claims of their work, showing that ML methods are on par with DL methods and rethinking the role of measurement outcomes as embeddings, scaling laws, etc. Methods And Evaluation Criteria: The methods and evaluations in the paper are appropriate and have captured the existing literature well. Theoretical Claims: This is an empirical study, so N/A. Experimental Designs Or Analyses: The design and analyses are sound and well-documented. Supplementary Material: Yes, all of it. Relation To Broader Scientific Literature: The paper is an important comparison of established methods in the field, and the key contributions are observed trends that will inform future research in the field. Essential References Not Discussed: Not that I am aware of. 
Other Strengths And Weaknesses: Very thorough experiments: * scaling laws * comparisons of ML and DL methods under the same quantum measurement budgets * effect of measurements on predictions * scaling the size of the model vs. the strength of the regularization. Other Comments Or Suggestions: The role of the measurement outcomes as embeddings should be discussed, e.g. maybe some exploration of the way these embeddings go into the model is important, and your studies may change when controlling for that. Ideally the quantum resources shouldn't be only measured as number of measurements and examples. Is there a more intrinsic measure to the simulations, akin to FLOPs in standard analyses in deep learning scaling? Questions For Authors: How does your analysis change if you lift the restriction on the quantum resources? Do your findings still hold? In Figure 1, interestingly, why is there a difference between the left half of the figure and the right half? It seems that the comparative scaling trends reverse as we go from left to right. Why do the different prediction objectives (correlation vs. entropy) provide different orderings of the scaling curves? Could you elaborate on this? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer 3fBm for the positive recognition of our work. Below, we address the remaining concerns. For clarity, questions in `Comments Or Suggestions` and `Questions For Authors` are referred to as `COS` and `QA`. All newly added simulations are attached to [*LINK*]. [*LINK*] https://anonymous.4open.science/r/ml4quantum-C80F/Rebuttal_icml_25.pdf > **Q1 [`COS`] Do results depend on how measurement outcomes are embedded?** As noted by the reviewer, an important future direction in quantum system learning (QSL) is the exploration of task-specific embedding strategies, which we have discussed in Line 437 of the main text. That is, **our empirical findings are not intended to dismiss the potential of using measurement outcomes as input features in QSL but rather to motivate the development of novel embedding strategies that can enhance learning performance**. To further address the reviewer’s suggestions, we have added two additional simulation tasks, as detailed below. - (1) For the three embedding strategies explored in the main text, we vary the number of measurement shots $M$ from $1$ to $512$ and evaluate the performance of DL models when applied to predict the correlations of $|\psi_{\rm{HB}} \rangle$ with $N\in \{63, 100, 127\}$. The results are summarized in Table 7 [*LINK*], where the performance of DL models does not vary too much. These results suggest fundamental limitations of the explored embedding methods in correlation prediction tasks. - (2) We further evaluate two additional embedding strategies commonly used in DL models for QSL tasks, beyond the three methods discussed in the main text. Specifically, these additional strategies include (i) using raw normalized measurement outcomes as input and (ii) using averaged normalized outcomes as input. We evaluate DL models using these two embedding strategies for correlation prediction of $|\psi_{\rm{HB}} \rangle$. The results are summarized in Table 8 [*LINK*].  
Among all embedding methods, we find that the averaged embedding, while still underperforming ML models, achieves better learning performance than the others in predicting the correlations of $|\psi_{\rm{HB}} \rangle$. > **Q2 [`COS`] Is there a more intrinsic measure to the simulations, akin to FLOPs in standard analyses in deep learning scaling?** We would like to address this question from two aspects. - (1) Quantum resources versus classical resources. In QSL, quantum resources (i.e., the total number of queries to the quantum system) are **significantly more scarce and expensive than classical resources** like memory, compute time, or FLOPs. This scarcity motivates our decision to unify quantum resource usage across all evaluated models, enabling fair and realistic comparisons. At the same time, as the reviewer rightly noted, classical resources are also important. Metrics like FLOPs remain valuable for assessing the computational complexity of different learning models. **Since quantum and classical resources are complementary**, future benchmarking efforts could benefit from jointly considering both to more fully characterize learning efficiency in hybrid quantum-classical systems. - (2) The way of acquiring information from quantum systems. Recall that most current QSL models rely on **incoherent and local measurements**, where the number of measurement shots is a natural and practical metric. However, if a QSL model is instead based on coherent measurements or intermediate quantum memory, a more comprehensive cost model becomes necessary. In such cases, one must also account for the differing costs associated with acquiring information from the quantum system. > **Q3 [`QA`] Do your findings hold without quantum resource constraints?** Yes, our findings remain valid even when the quantum resource constraint is lifted. 
As noted in our response to **Q5** from Reviewer HcSt, we conducted additional simulations using $n=10^5$ training examples for 8-qubit systems, with **an infinite number of measurements per sample** (see Tables 3 and 4 [*LINK*]). Even under these settings, the achieved results continue to exhibit the advantage of ML models. > **Q4 [`QA`] In Fig. 1, why is there a difference between the left half and the right half?** We would like to clarify that the apparent discrepancy between the left and right halves of Fig. 1 primarily stems from differences in hyperparameter settings across tasks. In particular, the learning models were independently tuned for correlation and entropy prediction, and thus their performance is not directly comparable across the two objectives. As a result, interpreting the reversal in scaling trends as a meaningful comparison between the tasks can be misleading. To further support this explanation, we include additional simulations. As summarized in Table 9 [*LINK*], by properly adjusting the input feature dimension of Ridge regression, the performance gap between TFIM and HB systems in the correlation task can be eliminated.
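The two additional embedding strategies evaluated in the response to Q1 — (i) raw normalized outcomes and (ii) averaged normalized outcomes — can be sketched as follows. This is a minimal illustration with synthetic ±1 shot data; the function names, array shapes, and normalization are assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for measurement data: n samples, M shots, N qubits,
# with each shot outcome in {-1, +1}.
n, M, N = 4, 512, 8
outcomes = rng.choice([-1.0, 1.0], size=(n, M, N))

def raw_embedding(x):
    # (i) raw normalized outcomes: flatten every shot into one long,
    # high-dimensional (and noisy) feature vector per sample
    return x.reshape(x.shape[0], -1) / np.sqrt(x.shape[1])

def averaged_embedding(x):
    # (ii) averaged normalized outcomes: mean over shots per qubit,
    # giving a far lower-dimensional feature vector per sample
    return x.mean(axis=1)

print(raw_embedding(outcomes).shape)       # (4, 4096)
print(averaged_embedding(outcomes).shape)  # (4, 8)
```

The dimensionality gap above is one plausible reason the averaged embedding fares better under label noise: it pre-denoises the shots before the learning model ever sees them.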
Summary: The paper examines the necessity and effectiveness of deep learning in quantum system learning (QSL), particularly in estimating ground state properties (GSPE) and quantum phase classification (QPC). The paper systematically benchmarks deep learning models against traditional machine learning approaches while maintaining equivalent quantum resource usage. The paper shows that machine learning and deep learning models both improve with more training data and measurement snapshots. Moreover, machine learning models often achieve performance comparable to or exceeding that of deep learning approaches in QSL tasks. Lastly, a proposed randomization test shows that measurement outcomes have minimal impact on deep learning models’ performance in GSPE but are important for QPC. Claims And Evidence: The paper provides extensive experimental results to enforce the claims: - The paper ensures fair comparisons between the deep learning and machine learning models by maintaining equivalent quantum resource usage while testing across three distinct Hamiltonian families. - The paper presents extensive empirical results on GSPE and QPC, showing clear trends in model performance when the data size and measurement snapshots are increased. Moreover, the experimental results show that machine learning models can outperform deep learning models under resource constraints. - The paper conducts a randomization test to analyze the impact of measurement outcomes as input features, providing strong statistical evidence that deep learning models do not effectively leverage this information in GSPE tasks. Meanwhile, for QPC tasks, measurement outcomes significantly affect model performance. Methods And Evaluation Criteria: Since the paper presents benchmarking settings to evaluate the role of machine learning and deep learning models, the provided benchmarks, as discussed in "Claims And Evidence", make sense for the problem. 
Theoretical Claims: The paper does not provide any theoretical proof of the claims. This is somewhat acceptable since the paper focuses on extensive empirical proofs. Experimental Designs Or Analyses: The experimental designs and analyses look reasonable. Supplementary Material: I have reviewed the supplementary material, including preliminaries, machine learning and deep learning model details, and datasets. Relation To Broader Scientific Literature: Many studies have applied DL to quantum tasks, assuming it provides superior performance [1,2,3,4]. The paper's empirical results question the claimed advantages of deep learning models in quantum tasks. The findings in the paper show that machine learning methods may be more suitable for real-world QSL applications due to the limited availability of quantum resources. [1] Wang, Haoxiang, et al. "Predicting properties of quantum systems with conditional generative models." arXiv preprint arXiv:2211.16943 (2022). [2] Tran, Viet T., et al. "Using shadows to learn ground state properties of quantum hamiltonians." Machine Learning and Physical Sciences Workshop at the 36th Conference on Neural Information Processing Systems (NeurIPS). 2022. [3] Zhang, Yuan-Hang, and Massimiliano Di Ventra. "Transformer quantum state: A multipurpose model for quantum many-body problems." Physical Review B 107.7 (2023): 075147. [4] Tang, Yehui, et al. "Towards LLM4QPE: Unsupervised pretraining of quantum property estimation and a benchmark." The Twelfth International Conference on Learning Representations. 2024. Essential References Not Discussed: There are no additional related works that are essential to understanding the key contributions of the paper. Other Strengths And Weaknesses: Besides the experimental results enforcing the claim that the machine learning models outperform the deep learning models, the paper should present the evaluation protocol limitations of the prior works. 
This could be crucial since the paper's results go against the prior results. The reader might want to see the reasons, and the difference could be the evaluation protocol. Other Comments Or Suggestions: No other comments or suggestions. Questions For Authors: I am kind of positive about the paper. Two things that I am concerned about are: - the theoretical explanation or proof of this phenomenon; - and the limitations of the prior evaluation protocol. Code Of Conduct: Affirmed. Overall Recommendation: 4
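The randomization test described in this review — checking whether shuffling the measurement-outcome features across samples degrades model performance — can be illustrated with a generic permutation-style sketch. The synthetic data and closed-form ridge fit below are a hypothetical stand-in, not the paper's protocol:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: features = [system parameters | measurement outcomes].
# Here the label depends only on the parameter block, so shuffling the
# measurement block across samples should barely change the fit quality.
n, d_param, d_meas = 200, 4, 16
X_param = rng.normal(size=(n, d_param))
X_meas = rng.normal(size=(n, d_meas))
y = X_param @ rng.normal(size=d_param) + 0.1 * rng.normal(size=n)

def ridge_r2(X, y, lam=1e-2):
    # Closed-form ridge regression, scored by in-sample R^2.
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    resid = y - X @ w
    return 1.0 - resid.var() / y.var()

base = ridge_r2(np.hstack([X_param, X_meas]), y)
shuffled = ridge_r2(np.hstack([X_param, X_meas[rng.permutation(n)]]), y)
print(f"R^2 drop after shuffling: {base - shuffled:.4f}")  # small gap -> redundant features
```

A near-zero drop is evidence that the shuffled feature block carried little usable signal, which mirrors the paper's GSPE finding; a large drop would mirror the QPC case.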
Rebuttal 1: Rebuttal: We appreciate the Reviewer m6FX's positive affirmation of our work. Below, we provide detailed responses to the remaining concerns. For clarity, questions in `Strengths and Weaknesses` and `Questions for Authors` are referred to as `S&W` and `QA`, respectively. > **Q1 [`S&W`, `QA`] The paper should present the evaluation protocol limitations of the prior works.** Thanks for the comment. We have followed the reviewer's suggestion to append further explanations about the limitations of evaluation protocols adopted from prior literature in related works and Appendix A.3. For self-consistency, let us first briefly recap how our work differentiates from previous studies. In Line 47 of our submission, we explicitly state that our study *"... heuristic advantages of DL models often ignore their dependence on quantum resources, resulting in an unfair comparison with ML approaches"*. Besides, as indicated in Line 787, prior works have primarily focused on proposing new neural network architectures for QSL tasks, which laid the groundwork for our benchmarking efforts.   In what follows, we provide a summary of our main revisions to further highlight the limitations of the evaluation protocols used in prior work. Prior studies on DL models in QSL typically rely on classical simulators to generate ideal labels [Wang et al., arXiv:2211.16943; Zhang & Di Ventra, PRB 107, 075147 (2023); Tang et al., ICLR 2024.], without accounting for the limitations imposed by a finite number of measurement shots. This overlooks a critical aspect of practical quantum settings, where obtaining accurate labels requires a substantial number of costly and limited quantum measurements, e.g., collecting $10^6$ examples with 1000 shots per example would take about 35 years on current ion-trap quantum chips. 
In contrast, ML-based QSL studies more often incorporate quantum resource constraints by using labels derived from classical shadows or from a limited number of actual quantum measurements, thereby better reflecting realistic deployment scenarios [Huang et al., Science 2022.; Cho et al., Nat. Commun. 2024.; Lewis et al., Nat. Commun. 2024.]. **This inconsistency in evaluation practices hampers fair comparisons between DL and ML models and risks overstating the practical effectiveness of DL approaches**. As stated in Line 47 of our manuscript, such inconsistencies lead to the problematic claim that DL models offer heuristic advantages over ML models. In our study, we explicitly address these limitations by **unifying quantum resource constraints across all models and evaluating their performance under this consistent cost framework**. This evaluation protocol allows meaningful comparison and motivates the development of DL models that are viable under practical quantum constraints. > **Q2 [`QA`] Provide a theoretical explanation or proof of this phenomenon.** Thanks for the insightful comment. Let us first address that our work represents the first systematic empirical study evaluating the performance of existing ML and DL models in quantum system learning (QSL) under realistic quantum resource constraints. Much like the influential study *"Rethinking Generalization in Deep Learning"* [Zhang et al., ICLR 2017.], our aim is not to theoretically dismiss the potential of DL models in QSL, but rather to **initiate and guide rigorous theoretical investigations into their capabilities and limitations in this setting**. The empirical findings reported in our work reveal the current challenges faced by standard DL architectures and offer concrete evidence that motivates the development of QSL-oriented DL models. These results underscore the need to better understand when and how deep learning can be effectively applied in QSL. 
While our study is primarily empirical, the observed phenomenon can be partially understood through known theoretical principles in statistical learning theory. - (1) Under fixed quantum resource budgets, there exists an inherent trade-off between the number of training examples $n$ and the number of measurement shots $M$ per example. Increasing $n$ reduces $M$, leading to higher label noise. DL models, which typically require large amounts of high-quality data to generalize well, are particularly sensitive to this noise. In contrast, ML models with hand-crafted feature maps are often more robust in such regimes and come with provable theoretical guarantees. - (2) The use of raw measurement outcomes as input features substantially increases input dimensionality while often introducing redundancy. This combination—high-dimensional and noisy input with limited supervision—poses a considerable challenge for DL models. Without task-specific architectural design, deeper networks may overfit or fail to extract meaningful structure from the data. --- Rebuttal Comment 1.1: Comment: The authors have addressed my questions. I have updated the score. --- Reply to Comment 1.1.1: Comment: Thanks for your reply and positive feedback. We sincerely appreciate the time and effort you have dedicated to reviewing our work and providing valuable insights, which we will thoughtfully incorporate into our revised manuscript.
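The trade-off in point (1) above — a fixed total query budget split between the number of examples $n$ and the shots $M$ per example — can be made concrete with a small numeric sketch. The budget value and the $1/\sqrt{M}$ shot-noise scaling for an averaged label are illustrative assumptions:

```python
# Under a fixed total query budget B = n * M, increasing the number of
# training examples n forces fewer shots M per example; the shot-noise on
# each averaged label then grows like 1 / sqrt(M).
budget = 10_000
rows = []
for n in [20, 100, 500, 2000]:
    M = budget // n
    rows.append((n, M, 1.0 / M ** 0.5))

for n, M, noise in rows:
    print(f"n={n:5d}  M={M:4d}  label noise ~ 1/sqrt(M) = {noise:.3f}")
```

DL models that need many high-quality labels sit at the bottom rows of this table, which is consistent with the noise-sensitivity argument in point (1).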
Summary: This paper considers the problem of learning state properties of quantum systems. In order to deal with the issue of unfair comparison, this paper benchmarks DL models against traditional ML approaches across the Hamiltonians. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: yes Relation To Broader Scientific Literature: benchmark DL models Essential References Not Discussed: yes Other Strengths And Weaknesses: This paper considers the problem of how to learn information about quantum systems with deep learning under limited quantum resources. In the process of revisiting, the total number of queries is fixed, and a randomization test is designed. The motivation is not clear in deriving the observation that the outcomes are largely redundant as input representations. Moreover, it is unclear whether this phenomenon is based on the quantum system or deep learning. Furthermore, in the deep learning experiments, a deeper network is not considered in the setting, which does not clearly validate the findings. Other Comments Or Suggestions: no Questions For Authors: 4 Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate Reviewer Wxrf's thoughtful review. For clarity, questions in `Strengths And Weaknesses` are abbreviated as `S&W`. All newly added simulations are attached to [*LINK*]. [*LINK*] https://anonymous.4open.science/r/ml4quantum-C80F/Rebuttal_icml_25.pdf > **Q1 [`S&W`] The motivation is not clear in deriving the observation that the outcomes are largely redundant as input representations.** Thanks for the comment. Let us first kindly remind the reviewer that as indicated by Reviewer m6FX, the key motivation of our work is *"...benchmarks deep learning models against traditional machine learning approaches while maintaining equivalent quantum resource usage..."*. To this end, we conducted systematic experiments to explore the capabilities and limitations of current learning models on standard QSL tasks. As the reviewer rightly noted, a key perspective in understanding the power of DL models lies in unveiling how data features influence their performance. In particular, assessing the necessity of using measurement outcomes as input features is important for the following three reasons: - (1) Recall that quantum resources are expensive as shown in Appendix A.4. If DL models with and without measurement outcomes as input features achieve similar performance, measurement-free DL models are preferred, as they **avoid quantum hardware interaction during inference** and substantially reduce quantum resource consumption. - (2) Feature selection impacts model size and computational efficiency. Removing unnecessary features of the model helps reduce the model size. For instance, in the case of $8$-qubit $|\psi_{\rm{HB}}\rangle$ with $512$ shots, the used MLP size is reduced over $2$ times when removing raw measurement outcomes as input features, leading to a substantial reduction in memory and inference costs on the classical side. - (3) The proposed randomization test helps identify effective embedding strategies for measurement outcomes. 
As noted in **Q1** by Reviewer 3fBm, our results suggest that a well-designed embedding can improve the performance of DL models. Following this approach, any newly proposed embedding strategy can be evaluated using the randomization test to assess its contribution to learning performance. > **Q2 [`S&W`] It is unclear whether this phenomenon is based on the quantum system or deep learning.** Before presenting the details to address the reviewer’s concern, we would like to respectfully highlight that **our work is the first to systematically evaluate the performance of current ML and DL models on QSL tasks under realistic quantum resource constraints**. The obtained empirical findings offer concrete evidence to guide the community in identifying **new scenarios where DL models may excel**, while also shedding light on **why existing DL models often underperform compared to ML models** in standard QSL settings. Below, we outline two possible explanations for the latter case. - (1) Under realistic and constrained quantum resource budgets, collecting large-scale and high-quality datasets for QSL is often prohibitively expensive. This introduces an inherent trade-off between the number of training examples $n$ and the quality of labels associated with each sample. When $n$ increases, a fixed total query budget necessitates allocating fewer measurement shots per example, resulting in increased label noise. In this context, expanding the input feature space by including raw measurement outcomes does not necessarily improve performance, since they are noisy and high-dimensional, which may introduce **redundant or irrelevant information**. - (2) As noted in our response to **Q6** of Reviewer HcSt, **current DL-based QSL models lack well-designed feature mappings**, limiting their ability to extract useful information from the measurement outcomes to enhance the learning performance. 
As a result, developing novel embedding maps for the measurement outcomes is **an active research field**. We have conducted additional experiments on embedding measurement outcomes for MLPs, shown in Table 8 [*LINK*]. The achieved results reveal that good pre-processing of measurement outcomes brings performance improvements. This motivates the exploration of **more effective embedding techniques** and **novel neural architectures** to enhance DL models' performance in QSL tasks. > **Q3 [`S&W`] In the deep learning experiments, a deeper network is not considered in the setting, which does not clearly validate the findings.** We have followed the reviewer's advice to extensively evaluate much deeper neural networks, such as MLP and CNN, incorporating well-known techniques such as dropout, residual connections, and appropriate $l_2$ regularization to ensure effective training. Results are summarized in Tables 1 and 2 (see [*LINK*]). The achieved results indicate that compared to DL models with increased depth, ML methods consistently retain their advantages.
Summary: The authors study supervised machine learning in the framework of quantum tasks, in particular Ground State Property identification and Quantum Phase Classification. The authors evaluate a set of shallow and deep classifiers on both tasks and evaluate the computational cost as well as accuracy. The authors study three types of Hamiltonian specifications of problems: the Heisenberg models, transverse Ising models, and the neutral Rydberg atom models. The tasks under consideration are the estimation of the ground state and the quantum phase classification. The authors provide a set of experiments showing that on average the DL models are not much better than the shallow classifiers used. Claims And Evidence: The authors claim that the DL models are either equivalent or worse when trained and evaluated along with a set of shallow classifiers on the two main quantum learning tasks. However, I feel that the evaluation criteria in particular are problematic. The reason that DL works is that the amount of data became large enough to allow the averaging of features and updates and the adjustment of millions of parameters. In this particular work I feel that DL models are unjustly compared in conditions that are not well suited to the basic principles of training DL, that is, the amount of data is not really a parameter available for manipulation. In addition, there are specific paradigms (such as few-shot learning, single-shot learning, and reduced-precision networks) that would probably behave better in the conditions described in this paper. On the other hand, the models evaluated use a single convolutional layer and as such do not explore the full potential of convolutional features. Methods And Evaluation Criteria: Yes, the general idea to evaluate the machine learning methods is consistent with the authors' claim. 
Theoretical Claims: There are no theoretical claims, but my problem remains that the compared models might not have been compared on a proper basis. Experimental Designs Or Analyses: Experiments are designed well for this type of work. Supplementary Material: Supplementary material provides enough information for one to replicate the experiments. Relation To Broader Scientific Literature: The work is well situated in the current works on the learning of quantum properties from classical observations. Essential References Not Discussed: Most of the relevant literature has been cited. Other Strengths And Weaknesses: I feel that this work is not deep enough in the experiments. The authors had a generator and thus should use its full potential to generate large amounts of data and relate the amount of data to the performance of the models on a larger scale. Other Comments Or Suggestions: I would suggest augmenting all experiments beyond reasonable real-world data availability and considering the problem space as a method for estimating the limits and the requirements for these methods to work to an expected error. Questions For Authors: So if one had more data, would the f\shallow classifiers under-perform, or is the nature of this data so global that local filters fail to extract the necessary information for learning? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank Reviewer HcSt for insightful comments. We have addressed all your concerns in the detailed responses below. For clarity, questions in `Claims and Evidence`, `Theoretical Claims`, `Strengths and Weaknesses`, and `Questions for Authors` are referred to as `C&E`, `TC`, `S&W`, and `QA`, respectively. All newly added simulations are attached to [*LINK*]. [*LINK*] https://anonymous.4open.science/r/ml4quantum-C80F/Rebuttal_icml_25.pdf >**Q1 [`C&E`, `TC`] The evaluation criteria are problematic.** We respectfully disagree with the reviewer’s concern regarding the validity of our evaluation criteria, particularly for dataset size $n$. **While many learning tasks assume access to large datasets, this assumption is impractical in quantum system learning (QSL) due to the high cost of quantum data generation**. As detailed in Appendix A.4, a rough estimation reveals that collecting $10^6$ examples would take about 35 years on current ion-trap quantum chips. Despite this, prior works often overlook quantum resource constraints and benchmark DL models using unrealistically large $n$, raising concerns about their realistic applicability. To address this gap, we **intentionally constrain** $n$ to enable a fair and realistic comparison between current ML and DL models in QSL, as stated in Line 44, Page 1. >**Q2 [`C&E`, `TC`] Are DL models unfairly compared by limited data?** As noted in **Q1**, we **have to use restrictive datasets in QSL** because of the practical constraints of current quantum chips, i.e., one must choose between a high-quality dataset with small $n$ or a low-quality dataset with large $n$. To address the latter, we conducted simulations with $n=10^5$. The results in Tables 3 & 4 [*LINK*], together with those in the main text, show that ML models **consistently outperform existing DL models** across commonly studied QSL datasets. > **Q3 [`C&E`] Would specialized DL models perform better here?** Yes, it is possible. 
In Appendix C.5, we have numerically shown that DL can outperform ML models for long-tailed datasets. Notably, there is **ongoing research** on exploring scenarios where DL models can surpass ML models in QSL tasks. However, the **primary goal** of our study is to revisit **existing DL and ML models on standard QSL tasks** under realistic conditions **instead of exploring new scenarios where DL models advance**. The achieved findings in this submission not only inform DL model design but also motivate further exploration into when and how DL can outperform ML models in practical QSL scenarios, which addresses the reviewer's concern. > **Q4 [`C&E`] How do CNNs perform as their depth increases?** We have conducted additional experiments to address this concern. We apply the four models introduced in the main text to estimate the correlation $\bar{C}$ of $|\psi_{\rm{HB}} \rangle$ and $|\psi_{\rm{TFIM}}\rangle$ with the qubit count $N$ ranging from $48$ to $127$, and varying $n$ from $20$ to $100$. The depth of DL models is increased up to $100$ layers. Other settings are the same as those in the submission. Results are summarized in Tables 1 & 2 [*LINK*]. As the depth of DL models increases, **they underperform shallower ones, widening the gap with ML models**. > **Q5 [`S&W`, `QA`] Use generators to scale data and assess performance trends.** As shown in Tables 3, 4 & 5 [*LINK*], we consider infinite measurement shots and include additional simulations with $n=10^5$ for 8-qubit $|\psi_{\rm{HB}}\rangle$, and $n=10^4$ for larger-scale systems. Even under these settings, the achieved results continue to demonstrate the advantage of ML models. Let us kindly remind the reviewer that while classical simulations can generate large noise-free datasets, they contradict QSL’s ultimate goal, which is using the proposed models to facilitate **real QSL tasks**. 
Due to the high cost of querying quantum systems explained in **Q1**, the accessible QSL dataset is very limited in practice. The use of classical simulators here is a convention, allowing researchers to bypass the need for scarce quantum resources and explore the fundamental capabilities of different models. > **Q6 [`QA`] Would DL models outperform ML with more data, or is the task fundamentally better suited to ML?** If 'f/ classifier' refers to ML models, the results in **Q2** and **Q5** confirm that **current DL models do not outperform ML models with more data**. Moreover, the reasons why current DL models underperform ML models remain largely unexplored. The results presented in this submission **further highlight the urgency of addressing this gap**. Besides the potential impact of global features as noted by the reviewer, there are two other possible factors: - Most DL models overlook the challenges posed by limited data availability, whereas many ML models explicitly account for this with theoretical guarantees; - DL models are often adapted from unrelated learning tasks and typically employ poorly tailored feature maps.
Unified K-Means Clustering with Label-Guided Manifold Learning
Accept (poster)
Summary: This paper introduces a framework that integrates K-means clustering, Kernel K-means clustering, and Fuzzy K-means clustering with manifold learning. This framework aims to tackle the challenges of initial centroid sensitivity, handling nonlinear datasets, and achieving balanced clustering in traditional K-means clustering.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes. The experiments on benchmark datasets and toy datasets demonstrate the method's excellent clustering performance, its ability to handle nonlinear data, and its capacity for balanced clustering.

Theoretical Claims: Yes, the paper presents several theoretical claims, which are overall correct and supported by the corresponding proofs.

Experimental Designs Or Analyses: Yes. The experimental design of the work follows existing works, and the analyses are overall reasonable.

Supplementary Material: Yes, and I have checked the appendix of the work.

Relation To Broader Scientific Literature: The unified framework proposed by the authors addresses both the sensitivity of K-means to cluster centers and its inability to handle nonlinearity, as well as the issue of balanced clustering.

Essential References Not Discussed: In both the introduction and the comparison algorithms, the authors discussed essential relevant works.

Other Strengths And Weaknesses: This paper introduces a unified framework that addresses both the sensitivity of K-means to cluster centers and its inability to handle nonlinearity, and proposes a low-pass filtering distance metric. It achieves superior experimental results. The shortcomings can be found in the comments and questions.

Other Comments Or Suggestions: There are some typos, such as the comparison algorithm K-sum-x being written as K-sum-X in Section 6.2.

Questions For Authors:
1. Formulas (20) to (24) are not very logical; please restate them.
2. Is the label matrix F a discrete hard label or a continuous label?
3. The sensitivity of K-means to centroid initialization is a well-recognized challenge and has been studied by several works. What differentiates the proposed method from existing approaches, and what advantages does it offer?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you very much for your recognition and valuable comments. We provide the following responses according to your questions:

**1. Explain formulas (20) to (24)**

**A**: We compute the $\ell_{2,1}$-norm of the cluster indicator matrix $\mathbf{F}^\top$ and then maximize this term to achieve balanced clustering. Equations (20) to (24) are used to prove that when maximizing the $\ell_{2,1}$-norm of matrix $\mathbf{F}^\top$, its optimal solution is a discrete matrix $\mathbf{F}$. We will add a detailed explanation for them.

**2. Is the label matrix F a discrete hard label or a continuous label?**

**A**: We have demonstrated that problem (24) reaches a maximum when $\mathbf{F}$ is a discrete indicator matrix, and thus the final optimized matrix $\mathbf{F}$ is a discrete hard label.

**3. The difference between the proposed method and existing works addressing centroid initialization sensitivity, and its advantages**

**A**: To avoid the impact of center initialization on K-means clustering, methods such as K-means++, K-sum, and K-sum-X have been proposed. K-means++ randomly selects a point from the dataset as the first cluster center. For all unselected points in the dataset, it calculates the minimum Euclidean distance to the currently selected centers and then randomly selects the next cluster center using the squared distance as a probability weight, repeating until all centers are selected. Although K-means++ alleviates the sensitivity of K-means to initial centers to some extent, it still relies on the computation of cluster centers and may be affected by outliers. K-sum and K-sum-X, on the other hand, do not directly compute cluster centers but perform clustering based on the relationships between data points. While these two methods address the sensitivity of K-means to initial centers, they do not consider balanced clustering. Compared with these methods, our method introduces the $\ell_{2,1}$-norm regularization term to prevent certain clusters from containing too few or too many data samples, addressing the impact of imbalanced datasets on the model and achieving more balanced clustering results.

---

Rebuttal Comment 1.1:

Comment: The authors' responses have addressed my concerns, and I have checked the responses to the questions of all the reviewers and have no further questions. Therefore, I decide to raise the rating accordingly.

---

Reply to Comment 1.1.1:

Comment: Thank you for your support and recognition of our work. We will carefully revise the final version based on your valuable feedback to further improve the quality of the paper. We sincerely appreciate the thoughtful suggestions you have provided and the time and effort you have dedicated to this process.
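As a side note on the seeding procedure recalled in the rebuttal, the D²-weighted selection of K-means++ can be sketched in a few lines. This is an illustrative implementation of the standard algorithm, not the code of any method compared in the paper:

```python
import numpy as np

def kmeanspp_seed(X, k, rng=None):
    """D^2 seeding: pick the first center uniformly at random, then pick
    each next center with probability proportional to the squared distance
    to the nearest center chosen so far, until k centers are selected."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]
    for _ in range(k - 1):
        # squared distance of every point to its nearest chosen center
        d2 = ((X[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1).min(axis=1)
        probs = d2 / d2.sum()
        centers.append(X[rng.choice(n, p=probs)])
    return np.array(centers)

# Two well-separated blobs: the second center lands in the other blob,
# since points in the first blob have zero D^2 weight.
X = np.vstack([np.zeros((5, 2)), np.ones((5, 2)) * 10])
C = kmeanspp_seed(X, 2, rng=0)
```

As the rebuttal notes, this only mitigates sensitivity to initialization; it still relies on explicit centers, unlike the centerless formulation proposed in the paper.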
Summary: In this paper, a novel K-means clustering method is proposed, named unified K-means clustering with label-guided manifold learning, to solve the problems of the traditional K-means algorithm, such as the sensitivity of initial centroid selection, the limited ability to recognize the intrinsic manifold structure of nonlinear datasets, and the difficulty of achieving balanced clustering in practical scenarios. The method does not need to calculate cluster centers and uses labels to guide the exploration of the nonlinear structure of the data. It realizes cluster balance with the $\ell_{2,1}$-norm and achieves high clustering accuracy and robustness. Extensive experiments are performed, whose results prove the superiority of the proposed method.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: Yes, I reviewed the correctness of the proofs for the theoretical claims presented in the paper, specifically focusing on Theorem 3.1 and Theorem 4.1, and there is no problem.

Experimental Designs Or Analyses: Yes, the authors selected 10 multi-view datasets and 6 comparison methods, using ACC, NMI, and Purity as clustering measures. The datasets range from small to large, with sample sizes up to 10,000 and 30,000. The comparison methods also include SOTA variants of K-means. There is no obvious problem with the experimental designs or analyses.

Supplementary Material: Yes, I have reviewed the supplementary material, which contains some acceleration strategies, convergence analysis, and some additional experiments.

Relation To Broader Scientific Literature: The paper's main contribution is the unified K-means clustering with label-guided manifold learning and diverse distance metrics. It enhances robustness and achieves broader applications on nonlinear data, supported by comprehensive experiments that validate the method's effectiveness.

Essential References Not Discussed: The related works covered in this paper are relatively comprehensive, involving traditional K-means, fuzzy K-means, kernel K-means for handling nonlinear data, the balanced clustering method RKM, the K-means variant CDKM, and the centerless methods K-sum and K-sum-X, etc.

Other Strengths And Weaknesses:

Strengths:
1. The authors proposed a novel K-means clustering method, named unified K-means clustering with label-guided manifold learning, which addresses the weakness of K-means that it is sensitive to the initialization of cluster centers and achieves broader application on nonlinear data.
2. The authors imposed an $\ell_{2,1}$-norm constraint on the clustering indicator matrix, allowing the model to achieve class balance during clustering. Detailed logical reasoning and proof processes were provided to enhance readability.
3. The authors also employed different distance metrics to enhance the model's generalization ability.

Weaknesses:
1. The paper is built on manifold learning but provides little discussion on how it is connected to manifold learning and its advantages.

Other Comments Or Suggestions:
1. What is the significance of balanced clustering? The authors should emphasize this issue in the Introduction.
2. The authors should carefully check the paper. For example, in Equation (16), it seems matrix F is missing in some minimization operations.
3. The unified framework is defined as min tr(GᵀDG), but the final model is min tr(FᵀDF) without the matrix P. The authors should provide a detailed explanation for this issue.

Questions For Authors:
1. In Equation (14), matrix G is constructed from matrices F and P. Why is matrix P not present in the final expression? What is the relationship between matrix P and matrix F?
2. How is the parameter a for the low-pass filtering distance chosen?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you very much for your recognition and valuable comments. Here are our responses:

**1. Connection to manifold learning and its advantages**

**A**: We have shown the equivalence between K-means and manifold learning by transforming K-means into the form of manifold learning, using the cluster indicator matrix to construct the similarity matrix $\mathbf{S}$. This transformation eliminates the need to compute cluster centers and allows K-means to explore the local manifold structure of the data, making it suitable for nonlinear datasets.

**2. The significance of balanced clustering**

**A**: Many distance-based clustering algorithms like K-means tend to assign more data samples to larger clusters, which may degrade performance because some datasets contain outliers or noise. K-means may group the majority of samples into one cluster while assigning outliers to another cluster. Balanced clustering avoids this by ensuring the number of samples in each cluster is roughly the same, thereby achieving better stability and reliability. In summary, balanced clustering enhances resistance to outliers.

**3. The reason the final model is min tr(FᵀDF) without the matrix P, and the relationship between matrix P and matrix F**

**A**: According to Theorem 3.1, $\mathbf{G} = \mathbf{F}\mathbf{P}^{-1/2}$, where $\mathbf{P} = \mathrm{diag}(p_1, \ldots, p_c)$ and $p_k = \sum_{i=1}^{n} \mathbf{F}_{ik}$. Since $\mathbf{F}$ is a cluster indicator matrix, $\mathbf{F}_{ik} = 1$ when the $i$-th sample belongs to the $k$-th cluster, and each column of the matrix represents a class. Therefore, the $k$-th diagonal element of matrix $\mathbf{P}$ is the number of samples in the $k$-th cluster, and $\mathbf{P}$ can also be rewritten as $\mathbf{P} = \mathbf{F}^\top \mathbf{F}$. Our model implements balanced clustering, ensuring that the number of samples in each cluster is equal. Thus, $\mathbf{P} = \mathbf{F}^\top \mathbf{F} = \frac{n}{c}\mathbf{I}$, where $n$, $c$, and $\mathbf{I}$ represent the number of samples, the number of clusters, and the identity matrix, respectively. Ignoring the constant term, the optimization problem $\min tr(\mathbf{G}^\top \mathbf{D} \mathbf{G})$ becomes $\min tr(\mathbf{F}^\top \mathbf{D} \mathbf{F})$.

**4. How to choose the parameter $a$ for the low-pass filtering distance**

**A**: Our low-pass filtering distance assigns a smaller distance to samples with high similarity and a larger distance to samples with low similarity, effectively pulling similar samples closer and pushing dissimilar samples farther apart. The advantage is that it increases the discriminability of samples by making dissimilar samples more distant. The parameter $a$ controls the degree of this pulling and pushing. However, a higher value of $a$ is not always better because it may also push similar samples farther apart, which is not conducive to clustering. Therefore, we choose $a = 1$ or $2$ for our low-pass filtering distance to better handle nonlinearly separable data.
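The relation $\mathbf{P} = \mathbf{F}^\top\mathbf{F}$ used in the answer above is easy to check numerically. A toy verification with a hand-built balanced indicator matrix (illustrative only, not the authors' code):

```python
import numpy as np

# n = 6 samples, c = 2 clusters, balanced one-hot indicator matrix F
F = np.array([[1, 0], [1, 0], [1, 0],
              [0, 1], [0, 1], [0, 1]], dtype=float)

P = F.T @ F                  # diagonal matrix of cluster sizes
n, c = F.shape

# For a balanced assignment, P = (n / c) * I, so G = F P^{-1/2} differs
# from F only by a constant factor and min tr(G^T D G) reduces to
# min tr(F^T D F) up to a constant, as stated in the rebuttal.
assert np.allclose(P, (n / c) * np.eye(c))
```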
Summary: The manuscript presents a new balanced K-means clustering framework based on manifold learning for the K-means clustering problem. The framework formulates balanced K-means as an optimization problem over the clustering label matrix and realizes clustering in one step by minimizing the objective function. The paper proves that K-means is equivalent to manifold learning under certain circumstances. Unlike traditional K-means, the model does not require centroid initialization, which the authors claim could lead to more robust results. The convergence can be accelerated by a pre-computation strategy.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: Yes. The paper presents Theorems 3.1 and 4.1, both of which are accompanied by proofs. Theorem 3.1 establishes the equivalence between K-means and manifold learning, with a proof that is clear and well-structured, and Theorem 4.1 discusses the method's ability to support balanced clustering.

Experimental Designs Or Analyses: Yes. The datasets used in the experiments cover a wide range of categories and quantities, enabling a comprehensive presentation of the results. Additionally, the compared methods include advanced models from the past three years, enhancing the credibility of the experimental results.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: Compared to previous studies, the highlights of this research lie in centerless clustering, which effectively reduces the impact of outliers on clustering results. Additionally, by incorporating manifold learning, the method better handles nonlinearly structured data. Lastly, the introduction of a balanced regularization term ensures more balanced clustering results and prevents trivial solutions.

Essential References Not Discussed: No.

Other Strengths And Weaknesses:

Strengths: The advantage of this research lies in the innovative combination of two traditional machine learning methods, K-means and manifold learning, and the rigorous derivation process that demonstrates the scientific validity of the approach. Furthermore, its effectiveness is well demonstrated through comprehensive datasets and experimental comparisons.

Weaknesses: However, the explanation of the necessity of balanced clustering is somewhat lacking. Providing additional clarification on this aspect would enhance the readability of the paper.

Other Comments Or Suggestions: The subheadings of Figures 6(b), (c), and (d) contain errors and should be replaced with the correct dataset names.

Questions For Authors:
1. In the comparative experiments, K-sum and K-sum-x should also be centerless clustering methods. Why does your method achieve better results?
2. The \(\ell_{2,1}\) norm regularization is generally a non-convex problem. What techniques or methods are used in the paper to achieve the optimization objective?
3. From the evaluation results on different datasets, why does the same method exhibit significant differences in performance across different datasets? What are the underlying reasons for this?

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you very much for your recognition and valuable comments. Here are our responses:

**1. Necessity of balanced clustering**

**A**: Many distance-based clustering algorithms like K-means tend to assign more data samples to larger clusters, which may degrade performance because some datasets contain outliers or noise. K-means may group the majority of samples into one cluster while assigning outliers to another cluster. Balanced clustering avoids this by ensuring the number of samples in each cluster is roughly the same, thereby achieving better stability and reliability. In summary, balanced clustering enhances resistance to outliers.

**2. The reason for better performance against other centerless methods (K-sum and K-sum-x)**

**A**: K-sum is a clustering algorithm based on a k-NN graph, and K-sum-X is a variant of K-sum that does not rely on a k-NN graph but directly uses features for clustering. These two comparison algorithms do not require the initialization and computation of cluster centers. Compared with these two methods, our method further introduces a balance regularization term, which has been demonstrated to be effective in improving clustering performance. Thus, our method achieves better performance than these two methods.

**3. The technique to optimize the non-convex $\ell_{2,1}$-norm regularization term**

**A**: The $\ell_{2,1}$-norm regularization is indeed a non-smooth problem, making our final model (26) difficult to solve directly using coordinate descent. Therefore, we perform a first-order Taylor expansion on the regularization term and transform it into an iterative problem (29), which is a convex function with respect to matrix $\mathbf{F}$, successfully addressing the non-convexity of the $\ell_{2,1}$-norm regularization term.

**4. The reason for the same method's different performance across different datasets**

**A**: Datasets may have vastly different distributions and structures. Some datasets may have clear and separable clusters, while others may have overlapped or nested clusters. The complex structure of a dataset can affect clustering performance. For example, the FERET and Mpeg7 datasets have dimensions of 6400 and 6000, respectively, and numbers of clusters of 200 and 70. Unsupervised clustering methods might not achieve satisfactory performance when directly applied to these high-dimensional datasets with many clusters. For the balanced CMUPIE dataset, comparison algorithms achieve an average clustering accuracy of 0.1920, while our method, with the balance regularization term, reaches 0.3478. Overall, the different structures of datasets directly affect clustering performance, but our method, with the introduction of the balance term, performs better than other K-means clustering methods.
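The effect of the balance term can be illustrated directly via the $\ell_{2,1}$-norm of $\mathbf{F}^\top$: for a hard assignment with cluster sizes $n_k$, each column of $\mathbf{F}$ has $n_k$ ones, so the norm equals $\sum_k \sqrt{n_k}$, which (for fixed total $n$) is largest when the sizes are equal. A tiny numerical check (illustrative, not the authors' code):

```python
import math

def l21_of_Ft(counts):
    """ell_{2,1}-norm of F^T for a hard assignment: each column of F has
    n_k ones, so its l2 norm is sqrt(n_k) and the total is sum_k sqrt(n_k)."""
    return sum(math.sqrt(n_k) for n_k in counts)

balanced = l21_of_Ft([5, 5])     # 2 * sqrt(5) ~ 4.47
imbalanced = l21_of_Ft([9, 1])   # 3 + 1 = 4
assert balanced > imbalanced     # maximizing the norm favors balance
```

This is why maximizing the $\ell_{2,1}$-norm term penalizes assignments where a few clusters absorb most of the samples.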
Summary: This work introduces an innovative centerless K-means clustering framework combined with manifold learning to improve clustering robustness and accuracy. By eliminating centroid initialization and utilizing a label matrix for similarity computation, the proposed method aligns manifold structures with class labels. Additionally, four distance metrics, including a low-pass filtering distance, are incorporated to enhance adaptability to complex data. Theoretical derivations and comprehensive experiments on benchmark datasets demonstrate the effectiveness of the approach.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: Yes

Experimental Designs Or Analyses: Yes

Supplementary Material: Yes

Relation To Broader Scientific Literature: K-means clustering has gained widespread attention for its simplicity and effectiveness. However, it inevitably struggles with nonlinear data and requires cluster centroid initialization, making it susceptible to noise. This paper reconstructs the K-means objective based on label-guided manifold learning, effectively addressing these shortcomings in the relevant field.

Essential References Not Discussed: I personally believe that the related work it mentions is quite comprehensive.

Other Strengths And Weaknesses: This paper proposes a centerless K-means clustering method based on manifold learning, which enhances clustering stability and robustness while improving adaptability to nonlinear data structures. Experimental results indicate that it outperforms traditional K-means and some improved methods. However, this approach relies on the selection of the parameter $\lambda$. Additionally, although its centerless strategy eliminates sensitivity to initial centroids, it may introduce extra computational overhead under certain data distributions.

Other Comments Or Suggestions: The authors should pay more attention to some formatting details. For example, equation (12) does not explain what the matrix D is; the notation for the trace (tr) in equation (16) is inconsistent, as it uses both text and character formats; and there are a series of similar issues.

Questions For Authors:
1. How is the matrix F initialized, and what are the convergence conditions for Algorithm 1 and Algorithm 2?
2. Why is there the conclusion that "The solution to the maximization problem (24) should be realized when F_i has only one element equal to 1 and the rest are 0, and the maximum value should be 1. Thus, we can conclude that the problem (24) only reaches a maximum when F is a discrete label matrix"?
3. The input to the low-pass filter distance is a graph; does the graph construction process introduce additional computational complexity?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you very much for your recognition and valuable comments. Here are our responses:

**1. The method relies on the selection of $\lambda$**

**A**: In the model proposed in this paper, the parameter $\lambda$ is a hyperparameter associated with the $\ell_{2,1}$-norm of the matrix $\mathbf{F}$. It regulates the balance of clustering results in the objective function. By adjusting the value of $\lambda$, we can control the extent to which the model focuses on the balance of clusters. When the value of $\lambda$ is large, the model will emphasize the balance of samples in each cluster more, making the number of data points in each cluster as close as possible. The selection of $\lambda$ depends on the structural characteristics of the dataset. A larger $\lambda$ is assigned when the dataset has equal sample sizes per class to ensure balance, while a smaller $\lambda$ is used for uneven distributions to balance results while respecting the original structure. The selection of $\lambda$ impacts the performance, demonstrating that the $\ell_{2,1}$-norm regularization term is effective.

**2. The centerless strategy may introduce extra computational overhead**

**A**: Our centerless strategy indeed eliminates the dependence on initial centroids, thereby enhancing the robustness of the algorithm. The core of the centerless strategy lies in computing the distances between sample pairs rather than the distances from samples to cluster centers. This requires us to construct a distance matrix $\mathbf{D} \in \mathbb{R}^{n \times n}$, and then build a similarity matrix $\mathbf{S} \in \mathbb{R}^{n \times n}$ through the cluster indicator matrix $\mathbf{F} \in \mathbb{R}^{n \times c}$, where $\mathbf{S} = \mathbf{F}\mathbf{F}^\top$. We initialize and update the cluster indicator matrix $\mathbf{F}$ instead of relying on the initialization and update of centroids.
The complexity required to update the cluster indicator matrix $\mathbf{F}$ through formula (36) is $\mathcal{O}(n^2 c)$ for each iteration. Here, we introduce an acceleration strategy by precomputing and storing $\mathbf{F}^\top \mathbf{D}$, which has a time complexity of $\mathcal{O}(n^2 c)$. Then, we iteratively solve for matrix $\mathbf{F}$ with a time complexity of $\mathcal{O}(n c)$. Therefore, the overall computational complexity of updating $\mathbf{F}$ is $\mathcal{O}(n^2 c + t_1 n c)$, where $t_1$ represents the number of iterations. The centerless strategy does increase the complexity, but the acceleration strategy helps to alleviate the computational burden. Although our complexity is $\mathcal{O}(n^2)$, the convergence experiment in the appendix shows that our model can converge quickly within 5 iterations.

**3. Initialization of $\mathbf{F}$**

**A**: We initialize $\mathbf{F} \in \mathbb{R}^{n \times c}$ by stacking identity blocks every $c$ rows. Specifically, we take the first $c$ rows as an identity matrix, then the next $c$ rows as an identity matrix, and so on.

**4. Convergence conditions for Algorithm 1 and Algorithm 2**

**A**: The convergence condition for Algorithm 1 is to check whether the difference between the updated matrix $\mathbf{F}$ and the previous matrix $\mathbf{F}$ is zero. If the difference is zero, the algorithm terminates. For Algorithm 2, we calculate the difference between the objective function values before and after each update of matrix $\mathbf{F}$. If the difference is less than $10^{-6}$, the loop terminates.

**5. The reason that "problem (24) only reaches a maximum when F is a discrete label matrix"**

**A**: Problem (24) is a convex maximization problem because the function $\sum_{k=1}^{c} \mathbf{F}_{ik}^{2}$ is convex, and the constraints $0 \leq \mathbf{F}_{ik} \leq 1$ and $\sum_{k=1}^c \mathbf{F}_{ik} = 1$ define a compact set.
According to convex optimization theory, a convex function attains its maximum over a compact convex set at one of the extreme points of that set. For the compact set $\lbrace \mathbf{f} \in \mathbb{R}^c \mid \mathbf{f}^\top \mathbf{1} = 1, \mathbf{f} \geq 0 \rbrace$, the extreme points can only be the $c$ one-hot vectors. Thus, we can conclude that problem (24) only reaches a maximum when $\mathbf{F}$ is a discrete indicator matrix.

**6. Does graph construction introduce additional computational complexity?**

**A**: In Appendix A.4, we provide details on how to construct the input graph for the low-pass filtering distance. The computational complexity for selecting anchor points using the DAS method and constructing the anchor graph is $\mathcal{O}(nrd + nr \log(r))$, where $n$, $r$, and $d$ represent the number of samples, anchor points, and dimensions, respectively. Therefore, the graph construction incurs only linear complexity with respect to the sample number, which is efficient and does not introduce additional computational complexity.

---

Rebuttal Comment 1.1:

Comment: The authors have addressed my previous concerns. Therefore, I decide to raise my score to 4.

---

Reply to Comment 1.1.1:

Comment: Thank you for your support and recognition of our work. We will carefully revise the final version based on your valuable feedback to further improve the quality of the paper. We sincerely appreciate the thoughtful suggestions you have provided and the time and effort you have dedicated to this process.
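The block-identity initialization of $\mathbf{F}$ and the one-hot maximizer claim from the rebuttal can both be checked in a few lines. This is a toy illustration under the stated definitions, not the authors' implementation:

```python
import numpy as np

n, c = 6, 2
# Initialization described in the rebuttal: stack identity blocks row-wise,
# one c x c identity matrix per group of c rows.
F0 = np.tile(np.eye(c), (n // c, 1))
assert F0.shape == (n, c)
assert np.all(F0.sum(axis=1) == 1)        # every row is a valid assignment

# The convex function sum_k f_k^2 over the simplex {f >= 0, sum f = 1}
# is maximized at the one-hot vertices (value 1), not in the interior:
one_hot = np.array([1.0, 0.0])
uniform = np.array([0.5, 0.5])
assert (one_hot ** 2).sum() > (uniform ** 2).sum()   # 1.0 > 0.5
```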
Improving Compositional Generation with Diffusion Models Using Lift Scores
Accept (poster)
Summary: This paper aims to improve compositional generation at inference time via rejection sampling using lift scores on each condition to be composed.

**Update after rebuttal** I appreciate the additional data (which I found convincing) and clarifications provided by the authors during the rebuttal. I was previously unaware of the prior work CAS mentioned by reviewer kyUu and agree that it may impact the novelty, but I also found the authors' clarifications about their focus specifically on compositions and systematic evaluation fairly convincing. Overall, I will keep my score at 3.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Quantitative metrics are important for this study. I feel that this was done thoroughly and mostly reasonable choices were made, although I have a few questions.

2D synthetic: (Q) Why did you choose Chamfer Distance (as opposed to other metrics, e.g., KL, Wasserstein, etc.)?

CLEVR: SAM2 and Liu's pretrained classifier both make sense; it is nice that they were both tried and compared.

SDXL: CLIP and ImageReward seem reasonable here. (Q) Could Segment Anything potentially be used in this context? (Why not?) (Q) Would it be feasible to evaluate with the TIFA score as in Karthik et al. (https://arxiv.org/pdf/2305.13308)? (Q) (L407) For assessing alignment via CLIP, do you use the entire prompt (including all conditions) or assess the CLIP alignment on each condition individually? I believe it's the former, but do you think the latter might work better? (Do you have any evidence on this?) (Q) Do you have any way to assess whether CLIP vs ImageReward is a more appropriate metric, and how well each of them actually checks whether *all* the conditions are present in the composition (e.g., for AND)?

Theoretical Claims: N/A

Experimental Designs Or Analyses: Please see Methods And Evaluation Criteria

Supplementary Material: No

Relation To Broader Scientific Literature: This approach builds on various existing approaches in diffusion, composition, rejection sampling, lift scores, etc. I don't feel that it represents a big conceptual leap beyond these existing methods; however, I think it cites prior work appropriately and focuses mainly on getting a conceptually simple idea to work well empirically with suitable validation.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

Strengths: I appreciate the focus on the missing objects problem. I find the samples and metrics fairly convincing (though please see Questions).

Weaknesses: A lot of the decisions feel a bit ad hoc (e.g., how many activated pixels "count", e.g., tau = 250 on L245, and the choice to replace epsilon by eps_theta(x, c_compose) in Fig 6), although I don't consider this a deal-breaker for a methods paper.

Other Comments Or Suggestions: I would appreciate a clearer discussion of the final probability you are actually optimizing for (or the energy you are minimizing) after your rejection-sampling procedure, if there is anything you can say theoretically about it. I did not find the variance-reduction interpretation of replacing epsilon by eps_theta(x, c_compose) very clear. I also wonder if there are any connections with CFG that you know of?

Questions For Authors: Please see the questions in Methods And Evaluation Criteria. A few others: Are the Figure 4 and 7 samples cherry-picked? If so, I feel this should be acknowledged in the caption, and possibly some not-so-great examples included in the appendix. Answers to these questions would help me find the samples and metrics more convincing.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for your valuable comments. We address your concerns as follows:

> Chamfer Distance

We chose Chamfer Distance because it (1) applies to uniform distributions and (2) is sensitive to out-of-distribution samples. KL is inapplicable for out-of-distribution samples with an undefined density ratio in uniform distribution settings. Wasserstein Distance is more robust to outliers, which doesn't meet our requirement to sensitively capture unaligned samples.

> Segment Anything for t2i

Vanilla Segment Anything lacks text prompt support. Grounded Segment Anything [1] would be a good candidate. We used CLIP/ImageReward following standards in previous works [2,3], but we will add a discussion in the Conclusion.

> TIFA score [2, 4]

New TIFA experiments show improvements across categories:

| Method | Animals | Object&Animal | Objects |
| --- | :---: | :---: | :---: |
| | TIFA ↑ | TIFA ↑ | TIFA ↑ |
| Stable Diffusion 1.4 | 0.692 | 0.822 | 0.629 |
| SD 1.4 + *Cached CompLift* | 0.750 | 0.886 | **0.685** |
| SD 1.4 + *CompLift* | **0.794** | **0.902** | 0.682 |
| | | | |
| Stable Diffusion 2.1 | 0.833 | 0.873 | 0.668 |
| SD 2.1 + *Cached CompLift* | 0.905 | 0.911 | **0.731** |
| SD 2.1 + *CompLift* | **0.927** | **0.912** | 0.726 |
| | | | |
| Stable Diffusion XL | 0.913 | 0.964 | 0.755 |
| SD XL + *Cached CompLift* | **0.949** | 0.972 | **0.790** |
| SD XL + *CompLift* | 0.946 | **0.974** | 0.782 |

> entire prompt vs individual condition

We used the entire prompt.
We add a new experiment with minCLIP (minimum CLIP score across subjects):

| Method | Animals | Object&Animal | Objects |
| --- | :---: | :---: | :---: |
| | minCLIP ↑ | minCLIP ↑ | minCLIP ↑ |
| Stable Diffusion 1.4 | 0.218 | 0.248 | 0.237 |
| SD 1.4 + *Cached CompLift* | 0.225 | 0.260 | 0.249 |
| SD 1.4 + *CompLift* | **0.228** | **0.263** | **0.252** |
| | | | |
| Stable Diffusion 2.1 | 0.237 | 0.258 | 0.247 |
| SD 2.1 + *Cached CompLift* | 0.248 | **0.265** | 0.260 |
| SD 2.1 + *CompLift* | **0.249** | **0.265** | **0.261** |
| | | | |
| Stable Diffusion XL | 0.243 | 0.269 | 0.264 |
| SD XL + *Cached CompLift* | 0.248 | **0.271** | 0.269 |
| SD XL + *CompLift* | **0.250** | **0.271** | **0.271** |

Similar performance gains were observed, indicating CompLift primarily improves the weaker condition (typically the missing object).

> CLIP vs ImageReward

We manually labeled 100 samples from Fig. 10 for the presence of both the black car and the white clock. With 62 positive and 38 negative samples, we calculated metric performance:

| Metrics | CLIP | ImageReward | TIFA |
| --- | :---: | :---: | :---: |
| ROC AUC | 0.949 | 0.955 | 0.857 |
| PR AUC | 0.972 | 0.968 | 0.901 |

CLIP and ImageReward perform similarly, both better than TIFA. CLIP is slightly preferred due to the imbalanced data.

> decisions feel a bit ad-hoc

We choose $\tau=250$ as the median activated pixel count among all images. Tests at the 25th and 75th percentiles showed the median works best: lower $\tau$ reduces accuracy due to estimation variance, while higher $\tau$ increases rejection rates.

Regarding $\epsilon_\theta(x_t, c_{compose})$: this design is from empirical observations. Intuitively, if object $c$ exists in image $x$, then for most noisy images $x_t$, $\epsilon_\theta(x_t, c)$ should be closer to $\epsilon_\theta(x_t, c_{compose})$ than the unconditional $\epsilon_\theta(x_t, \varnothing)$ in the corresponding pixels. We'll add this explanation to the paper.
> connections with CFG

Our paper uses the constrained distribution:
\begin{equation} x_0 \sim p_{\text{generator}}(x_0), \quad \text{s.t. } \log p(x_0 \mid c_i) - \log p(x_0) > 0, \quad \forall\, c_i. \end{equation}
We now show that [5,6] try to satisfy the constraint using soft regularization. Using Lagrangian relaxation with multipliers $\lambda_i \geq 0$, the objective can be transformed into:
\begin{equation} \mathcal{L}(x_0, \lambda) = \log p_{\text{generator}}(x_0) + \sum_{c_i} \lambda_i \Bigl( \log p(x_0 \mid c_i) - \log p(x_0) \Bigr), \quad \lambda_i \geq 0. \end{equation}
Since $\nabla_{x_t}\log p_\theta(x_0) \propto \epsilon_\theta(x_t, t)$, and [5, 6] assume an unconditional generator, the derivative matches Equation 11 in [6]:
\begin{equation} \nabla_{x_t}\mathcal{L}(x_0, \lambda) \propto \epsilon_\theta(x_t, t) + \sum_{c_i} \lambda_i \Bigl( \epsilon_\theta(x_t, t \mid c_i) - \epsilon_\theta(x_t, t) \Bigr), \quad \lambda_i \geq 0. \end{equation}
CFG [5,6] uses a fixed $\lambda_i=w$, which does not guarantee constraint satisfaction.

> Are Figure 4, 7 samples cherry-picked?

Yes: Figure 4 shows the ones with the most-improved CLIP scores. Figure 7 uses random prompts with clear pixel separation and aesthetic quality. We will add more samples to the Appendix and captions to make the selection clear.

[1] https://arxiv.org/abs/2401.14159
[2] https://arxiv.org/abs/2305.13308
[3] https://arxiv.org/abs/2301.13826
[4] https://arxiv.org/abs/2303.11897
[5] https://arxiv.org/abs/2207.12598
[6] https://arxiv.org/abs/2206.01714
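As a numeric sanity check of the Lagrangian view above, here is a minimal sketch of the relaxed gradient direction (the function name and toy arrays are ours); setting every multiplier to a fixed $\lambda_i = w$ recovers the composed classifier-free-guidance update:

```python
import numpy as np

def composed_cfg_direction(eps_uncond, eps_conds, lambdas):
    """Gradient direction of the Lagrangian relaxation:
    eps_uncond + sum_i lambda_i * (eps_cond_i - eps_uncond)."""
    out = eps_uncond.copy()
    for eps_c, lam in zip(eps_conds, lambdas):
        out += lam * (eps_c - eps_uncond)
    return out

eps_uncond = np.zeros(4)
eps_conds = [np.ones(4), 2 * np.ones(4)]

# Fixed lambda_i = w for all conditions: 7.5*(1-0) + 7.5*(2-0) = 22.5 per entry.
w = 7.5
cfg = composed_cfg_direction(eps_uncond, eps_conds, [w, w])
print(cfg)
```

Because the multipliers are fixed rather than adjusted per sample, nothing forces each term $\log p(x_0 \mid c_i) - \log p(x_0)$ to end up positive, which is the point of the rebuttal.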
Summary:
- The paper introduces a novel criterion, CompLift, for rejecting samples of conditional diffusion models based on lift scores.
- For compositional generation, i.e., cases in which the condition for sampling (e.g., a text prompt) can be described as a composition of conditions (like desired individual objects in the image), CompLift intuitively evaluates whether final samples are more likely given each individual condition than without it, and therefore whether the conditions have been properly considered throughout the generation process.
- Formally, this criterion can be described in terms of lift scores, an existing concept in data mining, for which the authors introduce an approximation using the same conditional diffusion model as for sampling.
- In an exploration of the design space, the authors evaluate the effect of noise and timestep sampling for the approximation and propose a more efficient algorithm that caches intermediate results from the generation process for later evaluation of the rejection criterion.
- An evaluation on synthetic data, a toy image dataset, as well as text-to-image generation shows improved alignment with the conditions for compositional generation.

Claims And Evidence: Most claims in the submission are supported by clear and convincing evidence except for:
- The paper claims "significantly improve[d] compositional generation" (lines 24 ff., left column) while the quantitative results on the CLEVR position dataset mainly show accuracy improvements with 4 and 5 constraints, for which the FID however is worse than the Composable Diffusion baseline, as also mentioned by the authors (lines 424 ff., left column).
- If the rejection of samples reduces sample diversity as hypothesized by the authors, the kind of improvements for compositional generation (condition / prompt alignment) should be specified to avoid misunderstandings.
- The paper compares quantitatively on the 2D synthetic dataset against additional baselines (EBM [1]), but limits qualitative comparisons to the Composable Diffusion baseline only.
- The main paper compares only on the 2D synthetic dataset against baselines (EBM [1]), but misses doing so on the CLEVR and text-to-image setups. The appendix provides comparisons on CLEVR.

[1] Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC. ICML 2023

Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense except for:
- For the text-to-image compositional task, the number of trials is different for the vanilla version and the cached version. While I understand that the number of trials for the cached version is equal to the number of sampling steps, a fair comparison of both versions in terms of number of trials would be interesting to see for this task in order to evaluate the effect of caching.
- While providing results on the 2D synthetic dataset makes sense, there is a severe lack of clarity regarding this benchmark:
  - Figure 1 showing results on this dataset is already on page 2 but first referenced on page 7. Its caption also does not describe the experimental setup. I found this figure to be unclear if the dataset is not introduced yet.
  - As a result of that, the Compose function (lines 119 ff., right column) together with Table 1 lack intuition. Having the synthetic 2D dataset described earlier for an example of different compositions of conditions (or possibly a text-to-image example in the introduction using different compose functions) would be helpful.
  - The introduction of the dataset in Section 6 and metrics after the design space exploration / ablation (Section 4), with quantitative results on this benchmark and Figures 2 and 3, also raises questions about the dataset and how the accuracy is measured while reading the paper.
- The description of the 2D synthetic dataset at the beginning of Section 6.1 is lacking information about how the dataset is generated.

Theoretical Claims: There are no theoretical claims that require proofs.

Experimental Designs Or Analyses: All experimental designs and analyses seem to be valid.

Supplementary Material: I reviewed the complete supplementary material, but did not check the pseudo-code Algorithms 3 to 6 in detail.

Relation To Broader Scientific Literature: Given complex conditions such as text prompts, diffusion models are known to hallucinate samples not following the correct conditional distribution. If conditions can be decomposed into smaller conditions, prior work like Composable Diffusion [1] has shown that this compositionality can be effectively leveraged to generate samples with better alignment to complex prompts. This paper proposes a method orthogonal to prior work by introducing a rejection / resampling criterion to ensure that the final sample is positively correlated with the conditions.

Essential References Not Discussed: I am not aware of any essential missing references.

Other Strengths And Weaknesses:
Strengths:
- The paper is mostly well-written and easy to follow. Abstract, introduction, and related work provide a good motivation and introduction into the topic of compositional generation.
- The method section includes formal derivations of the lift score approximation as well as the intuition behind the equations, which I found very helpful.
- The qualitative results are convincing:
  - Once the 2D synthetic dataset and task are understood, the qualitative results illustrate the effect of CompLift and the different Compose functions well.
  - The pixel-wise scores for text-to-image generation clearly decompose the image according to the individual conditions.
- The quantitative results show consistent improvements over the Composable Diffusion baseline.
Weaknesses:
- As already indicated in the review section "Methods And Evaluation Criteria", the structure of the paper w.r.t. the 2D synthetic dataset, Figure 1, the different compose functions with Table 1, and Section 4 consisting of ablations before the dataset description is suboptimal and results in a lack of clarity.
- The dataset description itself also lacks information about how it was generated.
- More lack of clarity:
  - In lines 154 ff., right column, the explanation for the small performance loss using noise sharing strategies in negation tasks is unclear to me.
  - I find the description of the cached CompLift version in Section 4.3 and Algorithm 2 difficult to understand, even though the idea itself is quite simple and intuitive.
- The paper never introduces the abbreviations for the baselines from EBM, and also not EBM itself.

Other Comments Or Suggestions:
- In line 194 f., right column, you reference Section 4.3 in Section 4.3 itself.
- In line 317, left column, the $z$ should be a $z_t$, if I am not mistaken.

Questions For Authors: I do not have any particular questions for the authors.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We address your concerns below.

> Overstatement of minimal

We will modify the abstract to state the contribution accurately as "significantly improved the condition alignment for compositional generation".

> Limited comparisons: no T=50 for vanilla CompLift; missing EBM baselines on CLEVR/text-to-image

We have conducted additional experiments to address both concerns: 1) We compared against EBM+ULA [1] on text-to-image generation (ULA is the default text-to-image sampler in their repo). 2) We ran vanilla CompLift with T=50 to enable fair comparison. The consolidated results are in the table below. CompLift outperforms EBM+ULA across all model variants, and the cached version performs similarly to vanilla CompLift with the same T=50: CLIP scores are very close, while ImageReward scores are slightly lower for the cached version. We wish to clarify that our main focus is demonstrating vertical improvement - how CompLift boosts the base method's performance. The horizontal comparison to other baselines serves as supportive evidence that this boost helps achieve state-of-the-art performance.
| Method | Animals | | Object&Animal | | Objects | |
|--------|---------|---------|---------|---------|---------|---------|
| | CLIP ↑ | IR ↑ | CLIP ↑ | IR ↑ | CLIP ↑ | IR ↑ |
| Stable Diffusion 1.4 | 0.310 | -0.191 | 0.343 | 0.432 | 0.333 | -0.684 |
| SD 1.4 + EBM (ULA) | 0.311 | 0.026 | 0.342 | 0.387 | 0.344 | -0.380 |
| SD 1.4 + *Cached CompLift* | 0.319 | 0.128 | 0.356 | 0.990 | 0.344 | -0.131 |
| SD 1.4 + *CompLift* (T=50) | 0.320 | 0.241 | 0.355 | 0.987 | 0.344 | -0.154 |
| SD 1.4 + *CompLift* (T=200) | **0.322** | **0.293** | **0.358** | **1.093** | **0.347** | **-0.050** |
||||||||
| Stable Diffusion 2.1 | 0.330 | 0.532 | 0.354 | 0.924 | 0.342 | -0.112 |
| SD 2.1 + EBM (ULA) | 0.330 | 0.829 | 0.357 | 0.981 | 0.348 | 0.218 |
| SD 2.1 + *Cached CompLift* | 0.339 | 0.880 | 0.361 | 1.252 | 0.354 | 0.353 |
| SD 2.1 + *CompLift* (T=50) | **0.340** | **0.992** | 0.361 | 1.263 | 0.354 | 0.454 |
| SD 2.1 + *CompLift* (T=200) | **0.340** | 0.975 | **0.362** | **1.283** | **0.355** | **0.489** |
||||||||
| Stable Diffusion XL | 0.338 | 1.025 | 0.363 | 1.621 | 0.359 | 0.662 |
| SD XL + EBM (ULA) | 0.335 | 0.913 | 0.362 | 1.676 | 0.361 | 0.872 |
| SD XL + *Cached CompLift* | 0.341 | **1.244** | **0.364** | 1.687 | 0.365 | **0.896** |
| SD XL + *CompLift* (T=50) | **0.342** | 1.222 | **0.364** | 1.700 | 0.365 | 0.842 |
| SD XL + *CompLift* (T=200) | **0.342** | 1.216 | **0.364** | **1.706** | **0.367** | 0.890 |

> Structure issues

Thank you for the constructive feedback. We will try our best to make the concepts clearer. In particular, we will make the following modifications:
1. Self-inclusive caption - summarize the experiment setup in the caption of Figure 1, including the component distribution, the algebra, the data generation, and the training.
2. Early introduction of the 2D dataset - briefly mention the 2D synthetic dataset in Section 3, including the description of the data generation, the component distribution, the algebra, and the accuracy metric.
We will add a sentence to refer readers to Section 6 and Appendix D for more details.
3. Intuitive explanation - add more explanations of the Compose function, using the examples in the 2D dataset.

> 2D dataset missing generation details

We will add more text to Section 6.1 about the dataset generation. In short, the distributions follow the generation procedure in [1]: they are either Gaussian mixtures or uniform distributions. We sample 8000 data points randomly for each component distribution, and train one diffusion model for each distribution. We will include more parameters of the distributions in Appendix D.

> Unclear noise sharing performance loss in negation

Our hypothesis is that sharing the same noise introduces some bias in the estimation, which makes CompLift over-reject samples as a conservative measure. With more trials, the bias of the estimation is amplified, so more samples are over-rejected. After taking a deeper look at Figure 2, we also observe similar sharing-noise regression for Product and Mixture, though the regression is very slight for those two algebras. We will modify the explanation in the paper to make this hypothesis clearer.

> Cached CompLift description unclear

We provide more details about the algorithm in Appendix C. Algorithms 5 and 6 are in pseudo-code style and might be easier for the reviewer to parse. We will add a sentence in Section 4.3 to refer readers to Appendix C for more context. Please let us know if the issue remains; we will keep making the paper easier to read.

> EBM-related abbreviations

Thanks. We'll add explanations in Section 6.1 for all abbreviations (EBM, ULA, U-HMC, MALA, HMC).

> Typos

Thanks. We'll remove the self-reference and change $z$ to $z_t$ in paragraph L317.

[1] https://arxiv.org/abs/2302.11552

---

Rebuttal Comment 1.1: Comment: I appreciate the rebuttal from the authors that addresses all my concerns from my review. I do not have any follow-up questions.
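The 2D dataset generation described in the rebuttal above can be sketched as follows; this is a minimal illustration assuming equal-weight Gaussian mixtures, and the mixture centers and covariance here are placeholders, not the paper's actual parameters.

```python
import numpy as np

def sample_gaussian_mixture(means, cov, n, rng):
    """Sample n 2D points from an equal-weight Gaussian mixture (toy sketch)."""
    comps = rng.integers(len(means), size=n)  # pick a mixture component per point
    return np.stack([rng.multivariate_normal(means[k], cov) for k in comps])

rng = np.random.default_rng(0)
means = [(-1.0, -1.0), (1.0, 1.0)]   # assumed mixture centers
cov = 0.05 * np.eye(2)               # assumed covariance
data = sample_gaussian_mixture(means, cov, 8000, rng)  # 8000 points per component distribution
print(data.shape)  # (8000, 2)
```

Per the rebuttal, one diffusion model would then be trained on each such component distribution.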
Summary: This work proposes CompLift, a resampling criterion based on the concept of lift scores, used to improve the compositional generation capabilities of pretrained diffusion models. CompLift approximates the lift scores with the diffusion model's noise estimation, without requiring any external reward modules to measure alignment with the given condition. The authors additionally propose a caching technique for CompLift, achieving a computationally efficient pipeline. Through evaluations on both simple synthetic generation and more complex text-to-image generation, the paper shows that CompLift leads to accurate compositional generation without additional training.

Claims And Evidence: The overall writing of the paper is well-structured, with a clear problem definition and a simple but effective solution. The idea of adopting the concept of lift scores for improving diffusion models' compositional generation is interesting. However, the paper lacks reference to and discussion of an important related work, as detailed below:

CompLift seems to resemble a closely related work, CAS [1], in which the authors define a novel "condition alignment score" CAS as $\log p (x_0 | c) - \log p (x_0)$ (Fig. 3 of the paper). The main argument of CAS is that this term can effectively measure the alignment between the generated output $x_0$ and the given condition $c$, and therefore it can be used as an alignment metric without the need for external modules. This claim is similar to the main contribution of this work. In this regard, the proposed formulation of lift scores in Eq. (2)-(4) of this work seems to be quite similar to CAS, and the authors will need to provide a discussion on the difference between the two approaches in order to claim the novelty of CompLift.

[1] CAS: A Probability-Based Approach for Universal Condition Alignment Score, Hong et al., ICLR 2024

Methods And Evaluation Criteria: 1.
For the requested justification of the novelty of CompLift, please refer to the section above.
2. The evaluation of the effect of CompLift on text-to-image generation is not very convincing, as it does not include comparisons with the baselines. While the authors evaluate using the benchmark from Attend-and-Excite [1], I couldn't find comparisons with Attend-and-Excite itself. Since Attend-and-Excite (or its follow-ups) also does not require external modules for measuring the alignment with the given condition, I believe it should be a valid baseline for comparison. Otherwise, as stated in the introduction, it would be nice if the authors showed that CompLift can indeed be applied together with such methods, yielding additional improvements.

[1] Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models, Chefer et al., SIGGRAPH 2023

Theoretical Claims: This paper doesn't provide a theoretical claim or proof, and instead focuses on empirical evidence across numerous types of data.

Experimental Designs Or Analyses:
1. The comparisons of running time in Fig. 5 clearly show the advantage of CompLift over the MCMC-based approaches in terms of efficiency.
2. The idea of "counting the activated pixels" in Section 5.1 seems quite confusing. I'm curious whether this design choice can accommodate a variety of objects. For instance, some objects are likely to take up a large area of the image, while other objects might be generated at smaller sizes. If the same threshold on the number of pixels is applied for checking existence, can it handle both cases? How did you set the threshold $\tau$?

Supplementary Material: The results of the 2D toy experiments in Fig. 11-13 seem quite intuitive and clearly show that CompLift has an advantage over Composable Diffusion. In addition to this, I'm also curious whether the same trend holds for the more recent previous work "Reduce, Reuse, Recycle" [1], which is based on MCMC.
[1] Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC

Relation To Broader Scientific Literature: As also mentioned in the paper, the idea of using the diffusion model itself as a source of a conditional reward function could be useful for inference-time scaling methods for diffusion models aiming for better condition alignment.

Essential References Not Discussed: A critical related work, CAS [1], is missing, as mentioned in the "Claims" section.

[1] CAS: A Probability-Based Approach for Universal Condition Alignment Score, Hong et al., ICLR 2024

Other Strengths And Weaknesses: I agree with the fundamental goal of the paper that "diffusion models should be able to assess their own alignment to the given condition". And the method is quite simple and intuitive. However, for now it is hard to give a positive score, as the paper fails to address a critical previous work that has proposed a similar claim and solution.

Other Comments Or Suggestions: typo: page 8, line 408: ImageResizer -> ImageReward

Questions For Authors: Crucial questions are included in the "Claims" section.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive feedback. We address your concerns as follows.

> Relationship to CAS [1]

Thank you for pointing out this important related work, which we previously overlooked. We will add a reference to this valuable work and incorporate a discussion of CAS in the "Related Work" section of the revised version. A summary of the relationship is as follows: our work can be seen as an extension of CAS, which investigates the potential of using CAS as a compositional criterion to decompose the alignment requirements of a complex prompt into multiple acceptance criteria. To approximate CAS, we employ ELBO estimation to reduce computational cost, as an alternative to the Skilling-Hutchinson estimator used in the original CAS paper. It would be interesting to explore how Skilling-Hutchinson-based estimation performs in a compositional setting. It may yield higher accuracy at the expense of greater computational overhead, which we plan to investigate in future work.

> Comparison with Attend-and-Excite [2] on text-to-image generation

Thank you for the great question. We conducted a new experiment using Attend-and-Excite [2], focusing on the additional improvement achieved by incorporating CompLift. We observed consistent performance gains with both SD 1.4 and SD 2.1. Note that SD XL is not included due to the lack of support in the original Attend-and-Excite code.
| Method | Animals | | Object & Animal | | Objects | |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| | CLIP ↑ | IR ↑ | CLIP ↑ | IR ↑ | CLIP ↑ | IR ↑ |
| A&E (SD 1.4) | 0.330 | 0.831 | 0.357 | 1.339 | 0.357 | 0.815 |
| A&E (SD 1.4) + *Cached CompLift* | **0.338** | 1.156 | **0.361** | **1.469** | **0.362** | 0.934 |
| A&E (SD 1.4) + *CompLift* | 0.337 | **1.160** | **0.361** | 1.458 | 0.361 | **0.990** |
||||||||
| A&E (SD 2.1) | 0.342 | 1.225 | 0.360 | 1.471 | 0.366 | 1.219 |
| A&E (SD 2.1) + *Cached CompLift* | 0.344 | 1.298 | 0.364 | 1.488 | **0.371** | 1.245 |
| A&E (SD 2.1) + *CompLift* | **0.346** | **1.337** | **0.365** | **1.516** | 0.370 | **1.246** |

> The idea of "counting the activated pixels" in Section 5.1 seems quite confusing... How did you set the threshold $\tau$?

Thank you for raising this thoughtful concern. We agree that performance could be further improved by making $\tau$ an object-specific hyperparameter. For simplicity, we currently set $\tau$ as a uniform threshold. We chose $\tau = 250$ as the median of the number of activated pixels across all images. We also experimented with the 25th and 75th percentiles, and found the median performed best in practice. A lower $\tau$ leads to less accurate rejection due to ELBO variance, while a higher $\tau$ increases the rejection rate. We will include more details on the derivation of $\tau$ in the Appendix. Intuitively, $\tau = 250$ corresponds to ~1.5% of the total number of latent pixels in SDXL (128x128 latent space) and ~6.1% in SD 1.4/2.1 (64x64). We find this small threshold sufficient, since our focus is on identifying "missing object" issues. If an object is missing, it tends to result in almost no activated pixels. Additional discussion will be added to the Appendix.

> Does the same trend hold for the recent work "Reduce, Reuse, Recycle" [3] based on MCMC?

Thank you for this suggestion.
We also observed a clear overall advantage of our method over MCMC-based methods like "Reduce, Reuse, Recycle" [3]. While certain MCMC variants such as U-HMC and MALA perform comparably in specific scenarios (e.g., the first test case in Product and the second in Mixture), they often generate samples outside the target distribution in other settings, unlike our method. We will update Appendix D to include visualizations comparing results from the MCMC-based methods.

> Typo: page 8 line 408: ImageResizer -> ImageReward

Thank you for catching this typo. We will correct it in the revised version.

[1] https://openreview.net/forum?id=E78OaH2s3f
[2] https://arxiv.org/abs/2301.13826
[3] https://arxiv.org/abs/2302.11552

---

Rebuttal Comment 1.1: Comment: I appreciate the authors' rebuttal and their efforts in answering all the raised questions. However, I still have concerns regarding the core technical contribution of LiftScore over CAS, outlined below:

While the authors distinguish between conditional generation (in CAS) and compositional generation (in LiftScore), it is unclear whether these tasks are fundamentally independent. The tasks in LiftScore (e.g., text-to-image generation, the position task) could be framed as conditional generation tasks, which means that they can also be addressed by CAS. Could the authors clarify why LiftScore could have an advantage over CAS specifically in the compositional setting? If the key difference between the two methods is the choice of the approximation, I am concerned whether this choice can be justified for the specific tasks.

---

Reply to Comment 1.1.1: Comment: Thank you for your acknowledgement of our rebuttal effort. We address your question as follows:

> advantage of compositional criteria

We acknowledge that the compositional acceptance / rejection task can also be framed using a single criterion that works directly on the whole prompt, as addressed by CAS.
To test how a CAS-like variant performs for prompts containing multiple objects, we have conducted a new ablation study. Here, the CAS variant means that we use the single criterion $\log p(z| c_{\text{compose}})-\log p(z| \varnothing)$ as the latent lift score, which replaces the composed criteria from multiple individual lift scores in CompLift. Note that this is a controlled experiment to check the advantage of compositional criteria; thus, we keep the same estimation method using ELBO. We provide the following table as the result. We observe only modest improvement when using the CAS variant. We hypothesize that the CAS variant might face a similar problem as the original diffusion model for multi-object prompts: the attention to the missing object is relatively weak in the attention layers. Similar discussions can be found in previous works such as Attend-and-Excite [1], where the diffusion model $\epsilon_\theta(x, c_\text{compose})$ sometimes has weak alignment with some condition $c_i$ in $c_\text{compose}$. We will add the new table and the related discussion to the Appendix.
| Method | Animals | | Object&Animal | | Objects | |
| --- | :---: | :---: | :---: | :---: | :---: | :---: |
| | CLIP ↑ | IR ↑ | CLIP ↑ | IR ↑ | CLIP ↑ | IR ↑ |
| SD 1.4 | 0.310 | -0.191 | 0.343 | 0.432 | 0.333 | -0.684 |
| SD 1.4 + *CAS Variant* | 0.312 | -0.153 | 0.348 | 0.708 | 0.337 | -0.373 |
| SD 1.4 + *CompLift* | **0.322** | **0.292** | **0.358** | **1.094** | **0.347** | **-0.050** |
||||||||
| SD 2.1 | 0.330 | 0.532 | 0.354 | 0.924 | 0.342 | -0.112 |
| SD 2.1 + *CAS Variant* | 0.333 | 0.626 | 0.355 | 1.080 | 0.347 | 0.144 |
| SD 2.1 + *CompLift* | **0.340** | **0.975** | **0.362** | **1.283** | **0.355** | **0.489** |
||||||||
| SD XL | 0.338 | 1.025 | 0.363 | 1.621 | 0.359 | 0.662 |
| SD XL + *CAS Variant* | 0.338 | 1.064 | 0.363 | 1.628 | 0.362 | 0.702 |
| SD XL + *CompLift* | **0.342** | **1.216** | **0.364** | **1.706** | **0.367** | **0.890** |

> side note: why compositional in general?

One cause of the missing object issue might be the training-inference mismatch: similar combinations of objects in $c_\text{compose}$ are rare in the training set. As more objects of interest are involved, this problem gets more significant as the whole composed condition grows more complex (e.g., the CLEVR experiment). Similarly, sampling/criteria based solely on $\epsilon_\theta(x, c_\text{compose})$ might not be as reliable as approaches that incorporate information from individual $\epsilon_\theta(x, c_i)$, as the table above shows. Compositional generation is one approach to generalize to more complex prompts. For example, the CLEVR model is trained with only one object position in the prompt. With Composable Diffusion + CompLift, we can extend it to combinations of 5 object positions with high accuracy, while such combinations are rarely seen in training.
> key contribution of our work

We would like to emphasize that our key contribution is a systematic exploration and application of LiftScore / CAS, specifically to compositional generation challenges. Our contribution is not the invention of LiftScore, since it is already an existing concept in data mining, and theoretically equivalent to the CAS concept. Our work can indeed be viewed as a complementary extension of LiftScore / CAS into the compositional generation domain - much like how science builds upon previous discoveries, we too stand on the shoulders of giants. The CAS paper provided valuable insights on condition alignment with a single condition, which we acknowledge. Our contribution extends this foundation by:
1. Developing the mathematical framework to apply these scores to compositions of multiple conditions, including algebras like Product, Mixture, and Summation.
2. Introducing novel engineering solutions (like caching and variance reduction) that make compositional evaluation practical.
3. Systematically evaluating on 2D, CLEVR, and text-to-image datasets.

We hope this clarification addresses your concerns about the relationship between our work and CAS. Our intention is to contribute meaningful extensions to this line of research by adapting and enhancing these techniques specifically for compositional generation tasks. Thank you again for your insightful feedback, which has helped us better articulate the positioning of our work. If our responses have addressed your concerns adequately, we would be sincerely grateful if you would consider raising your score accordingly as a recognition of our work and this rebuttal effort. Thank you once again for your time and support.

[1] https://arxiv.org/abs/2301.13826
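The distinction between the single composed criterion and the compositional criterion discussed in this thread can be sketched as two acceptance rules. This is a toy sketch: the function names are ours and the scores are illustrative placeholders.

```python
def accept_product(lift_scores):
    """Compositional (Product algebra) rule: accept a sample only if every
    individual condition has a positive lift score,
    i.e. log p(x | c_i) - log p(x) > 0 for all i."""
    return all(s > 0 for s in lift_scores)

def accept_single(composed_lift):
    """CAS-like variant: one criterion on the whole composed prompt,
    log p(x | c_compose) - log p(x) > 0."""
    return composed_lift > 0

# A sample that aligns with the composed prompt overall but misses one object
# can pass the single criterion while failing the compositional one:
print(accept_single(0.8))           # True
print(accept_product([0.9, -0.2]))  # False: the second condition is missing
```

This mirrors the ablation result above: a single composed criterion can overlook a weakly attended (missing) object that a per-condition check catches.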
Summary: This paper proposes a training-free post-processing approach, CompLift, to select images with specified concepts from diffusion-model-generated image candidates. The main idea is to use the lift score, which is equivalent to pointwise mutual information, to evaluate whether conditioning $c$ reduces the uncertainty of variable $\textbf{x}$. As a post-processing approach, the performance of CompLift hinges on the generative model (i.e., the composable diffusion model) it is based on. If the composable diffusion model cannot generate accurate images at all, then CompLift cannot make any improvement. Experimental results show improved generation accuracy using the proposed approach.

### Most of my concerns are addressed. I maintain the rating.

Claims And Evidence: The paper claims that it is "a novel resampling criterion using lift scores for compositional generation, requiring minimal computational overhead". This assertion seems somewhat overstated, and the term "minimal" is ambiguous without a clear criterion. While in some cases the cached strategy results in no additional computational overhead, this is not universally true. For text-to-image generation, when replacing $\epsilon$ with $\epsilon_\theta(z, c_{\text{compose}})$, an additional computational overhead of (n + 2) · T forward passes is involved.

Methods And Evaluation Criteria: The proposed method makes sense for the application.

Theoretical Claims: No theoretical claims are provided in the paper.

Experimental Designs Or Analyses: The paper states in Fig. 5 that the overhead introduced by the cached CompLift is negligible for the Composable Diffusion baseline (Liu et al., 2022). However, this experiment only shows running time without giving an accuracy evaluation. It is not clear if it is trading accuracy for running time. It might be helpful to show both in a single figure.

Supplementary Material: Did not review the supplementary material.
Relation To Broader Scientific Literature: The proposed approach uses the lift score as a criterion to evaluate whether a concept appears in an image. The estimated lift score in Equation (4) is actually equivalent to the pointwise mutual information discussed in Equation (5) of [1]. The proposed approach can be seen as an application of the pointwise mutual information in [1].

[1] Kong, X., Liu, O., Li, H., Yogatama, D., and Steeg, G. V. Interpretable diffusion via information decomposition. arXiv preprint arXiv:2310.07972, 2023.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

**Strengths**
1. The proposed approach is training-free and requires little or no additional computational resources thanks to the cache design.
2. Extensive experiments are conducted and show improvements over baselines.
3. The writing is smooth and easy to follow.

**Weaknesses**
1. Involving no extra training or guidance at inference time can be not only an advantage but also a limitation of the proposed model. It means that the proposed model cannot correct the generated images but can only select some of them. As a result, the performance of CompLift largely hinges on the generative model (like Composable Diffusion) it builds upon, because if Composable Diffusion does not generate accurate images, CompLift cannot select accurate images from the generated ones.
2. Though CompLift shows significant improvement over Composable Stable Diffusion on synthetic datasets, the improvement on real-world text-to-image generative models is trivial, as shown in Table 3. Again, this hinges on the performance of Composable Stable Diffusion, which has limited ability to generate accurate multi-object images.
3. Though called CompLift, the proposed approach itself is not compositional, because it evaluates individual concepts separately. It is not evaluating the joint appearance of all concepts. For text-to-image generation, only the AND operation is considered, by evaluating individual concepts.
4.
For text-to-image generation, when replacing $\epsilon$ with $\epsilon_\theta(z, c_{\text{compose}})$, an additional computational overhead of (n + 2) · T forward passes is involved. This contradicts the claim of minimal computational overhead and should be discussed to avoid overclaiming. Other Comments Or Suggestions: N/A Questions For Authors: It is somewhat unclear whether the comparison with the Composable Diffusion Model is fair. Suppose the Composable Diffusion Model generates 5 images and one of them is accurate, and CompLift selects the accurate one via the lift score. How, then, should one determine whether the Composable Diffusion Model or CompLift is more accurate? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and questions. We address your concerns below: > On CompLift's dependence on the underlying generative model We agree and will add this theoretical limitation to our Conclusion. While theoretically CompLift cannot improve if the base method produces no accurate images, in practice even weak generators often improve with ≤5 candidate images. > On the "minimal computational overhead" claim We'll remove the ambiguous term "minimal" and claim only "requiring no additional training." For text-to-image generation, the (n + 2) · T additional forward passes can be parallelized to reduce latency. Currently, generation takes ~15s and lift score calculation ~30s on a 4090 GPU, with GPU memory as the bottleneck. Ideally, given enough GPU memory, we can further parallelize down to the latency of O(1) forward passes. > On cached CompLift's accuracy-speed tradeoff Every column in Fig. 5 has a corresponding accuracy row in Table 2 (Cached CompLift has T=50 by default). We'll add a footnote clarifying this. In practice, we observe a small accuracy regression on the Mixture and Negation tasks when switching from vanilla CompLift to Cached CompLift. However, the accuracy remains significantly higher than that of other baselines. The tradeoff exists, but seems mild and acceptable given the substantial speed improvement. > On the lift score's equivalence to point-wise mutual information [1] We'll update the Related Work section to reflect this equivalence. Our paper applies point-wise mutual information (PMI) as an acceptance/rejection criterion, focusing on missing-object cases and composing PMI for multiple objects. > On real-world text-to-image improvement The seemingly trivial CLIP improvement is due to the low magnitude of CLIP scores. Here, we provide another perspective to interpret the numbers. We compare the CompLift selector to the perfect best-of-n selector, which has direct access to the metric function.
The percentage gain is calculated as (CompLift metric - baseline metric) / (perfect selector metric - baseline metric). On average, the gains are ~40% for vanilla CompLift and ~30% for cached CompLift. We will add more explanation to the Appendix.

| Method | Animals CLIP gain% ↑ | Animals IR gain% ↑ | Object&Animal CLIP gain% ↑ | Object&Animal IR gain% ↑ | Objects CLIP gain% ↑ | Objects IR gain% ↑ |
|--------|---------|---------|---------|---------|---------|---------|
| SD 1.4 + *Cached CompLift* | 31.58 | 30.58 | 42.62 | 62.74 | 37.04 | 54.99 |
| SD 1.4 + *CompLift* | **42.11** | **46.40** | **49.18** | **74.32** | **47.14** | **63.05** |
| SD 2.1 + *Cached CompLift* | 35.71 | 44.13 | 28.34 | 55.07 | 37.97 | 57.82 |
| SD 2.1 + *CompLift* | **39.68** | 56.18 | **32.39** | **60.28** | **41.14** | **74.73** |
| SD XL + *Cached CompLift* | 16.95 | **55.88** | **26.13** | 46.15 | 24.49 | **46.06** |
| SD XL + *CompLift* | **22.60** | 48.74 | **26.13** | **59.44** | **32.65** | 44.88 |

> On CompLift's compositionality Our approach is to (1) evaluate individual concepts separately, and (2) compose an acceptance/rejection criterion from these multiple individual criteria, as in Algorithm 3. Thus, the CompLift criterion seems compositional from our perspective. We wish to mention Composable Diffusion [2] as an example that elucidates this perspective. Essentially, that approach (1) computes an individual score on each concept, and (2) composes the score from these multiple individual scores. Such a factorize-and-compose approach helps reduce the difficulty of complying with complex prompts. As shown in our experiments, it improves performance on complex multi-object generation by evaluating individual conceptual alignment before making a composed decision. > On limited algebraic operations for text-to-image We acknowledge this limitation. While we tested all algebras in the 2D dataset, we found no existing mature benchmark for the OR/NOT algebras in text-to-image generation.
We'll note this in our Conclusion for future work. > On fair comparison with Composable Diffusion We recognize the challenge in comparing these approaches. Composable Diffusion generates candidates with no internal selection mechanism, while CompLift is a post-hoc filter. They serve different purposes and are not mutually exclusive—CompLift can enhance generation models by leveraging semantic alignment for filtering results. Our main goal in the experiments is to show such an enhancement, instead of a direct replacement, of other baselines such as Composable Diffusion. [1] https://arxiv.org/abs/2310.07972 [2] https://arxiv.org/abs/2302.11552
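The percentage-gain normalization described in the rebuttal can be sketched as follows; this is a minimal illustration, where the function name and the toy metric values are illustrative rather than taken from the paper:

```python
def normalized_gain(method_metric, baseline_metric, perfect_metric):
    """Fraction (in %) of the best-of-n headroom recovered by a selector.

    Implements the rebuttal's formula:
    (selector metric - baseline metric) / (perfect selector metric - baseline metric).
    All names here are illustrative; they do not come from the paper's code.
    """
    headroom = perfect_metric - baseline_metric
    if headroom <= 0:
        raise ValueError("the perfect best-of-n selector must improve on the baseline")
    return 100.0 * (method_metric - baseline_metric) / headroom

# Toy numbers: a selector recovering 40% of the available headroom.
gain = normalized_gain(method_metric=0.70, baseline_metric=0.50, perfect_metric=1.00)
```

Normalizing by the headroom of a perfect selector makes gains comparable across metrics with very different magnitudes, which is the point the rebuttal makes about low-magnitude CLIP scores.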
OV-MER: Towards Open-Vocabulary Multimodal Emotion Recognition
Accept (poster)
Summary: This paper proposes a new platform for emotion recognition studies. It extends the previously released MER2023 dataset, with GPT-3.5 heavily utilized to group emotions meaningfully. For comparison, the evaluation benchmark includes many existing LMMs. Claims And Evidence: This research aims to overcome the limitations of previous studies that rely on predefined taxonomies, in order to capture more complex, subtle emotions. However, the grouping and emotion recognition performance results are not so different from those of the existing studies. LLMs may have learned limited taxonomies (such as those in Emotion Wheels) from the literature. While this research provides a more detailed vocabulary set of emotions, it is still unclear what the new findings of this paper are. Methods And Evaluation Criteria: The methods and evaluation criteria are sound and reasonable, but limited to identifying L1-level groups, which is not very different from existing classification frameworks. Theoretical Claims: This paper is theoretically sound. Experimental Designs Or Analyses: The experiments are well-designed and the ablation studies are good. Supplementary Material: The supplementary material is so extensive that the paper is difficult to understand without reading it. It would have been better to clarify the focus. Relation To Broader Scientific Literature: This paper challenges universal symbol grounding problems, which are significant for analyzing human behavior. Thus, it relates to the broader scientific literature. Essential References Not Discussed: I believe the survey in this paper is thorough. Other Strengths And Weaknesses: This paper proves that collaborative annotation between humans and AI is feasible and fruitful. Other Comments Or Suggestions: As the authors claim, human emotion includes many different elements. Therefore, emotion recognition should be a "multi-label" problem. I strongly recommend that the proposed framework be extended in this direction.
## update after rebuttal ## Thank you for the rebuttal. I did not change my score. Questions For Authors: I wonder how much agreement there is among annotators (Kappa value) in the annotation framework. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1:** This research aims to overcome the limitations of previous studies that rely on predefined taxonomies to capture more complex, subtle emotions. However, the grouping and emotion recognition performance results are not so different from those of the existing studies. The methods and evaluation criteria are sound and reasonable but limited to identifying L1-level groups, which is not very different from the present classification framework. **A1:** It seems the reviewer may have some misunderstandings about our work. The L1-level group is just one of several grouping techniques used in this paper. Specifically, Table 2 presents results using GPT-based grouping, Table 3 provides results using L1-level grouping, and Table 12 shows results using L2-level grouping. In this paper, our prediction approach follows an open-vocabulary setting, which allows for the recognition of unseen emotions. This is fundamentally different from traditional MER, which relies on fixed taxonomies. However, this flexibility introduces new evaluation challenges. Since there is no predefined label space, the model may predict closely related but differently expressed emotions (e.g., joyful and happy). To address this issue, we grouped similar emotions before computing evaluation metrics. **Thus, the grouping operation is not intended to restrict emotion recognition to L1-level groups. Instead, to align with our open-vocabulary setup, we designed grouping techniques to facilitate performance evaluation. Of course, other evaluation metrics that do not rely on grouping techniques could also be used.** **Meanwhile, due to the open-vocabulary setting of our OV-MER, we need to design not only new evaluation metrics but also novel dataset construction methods and baselines. These innovations are also key contributions of our work. 
Therefore, OV-MER is fundamentally different from traditional MER in terms of the task, dataset, evaluation metrics, and solution approach.** **Q2:** As the authors claim, human emotion includes many different elements. Therefore, emotion recognition should be a "multi-label" problem. I strongly recommend that the proposed framework be extended in this direction. **A2:** Thanks for your comment. **We would like to clarify that OV-MER has already attempted to address MER in a multi-label manner.** As shown in Figure 2 and Appendix E, a single instance can be associated with multiple emotion labels simultaneously. Furthermore, we argue that our proposed OV-MER task encompasses the concept of a multi-label problem but extends beyond it by incorporating flexible and expandable emotion expressions. Consequently, conventional evaluation protocols (e.g., L2 loss) cannot be directly applied to the OV-MER task. **Q3:** I wonder how much agreement there is among annotators (Kappa value) in the annotation framework. **A3:** Unlike the traditional single-label-based annotation method with a fixed label space, OV-MER employs a multi-label-based annotation method without a fixed label space. Therefore, we cannot directly compute the Kappa value between different annotators. **Therefore, we draw inspiration from Section M and utilize the Jaccard similarity coefficient to measure the inter-annotator agreement.** Specifically, assume there are $N$ samples and $K$ annotators. For each pair of annotators $A_m$ and $A_n$, their annotation results for each sample $x_i$ are denoted as $Y_m^i$ and $Y_n^i$, respectively. Here, $Y_m^i$ and $Y_n^i$ contain a set of emotion labels. 
We calculate the agreement score between annotators $A_m$ and $A_n$ as: $$ Similarity_{m,n} = \frac{1}{N}\sum_{i=1}^{N}\frac{|Y_m^i \cap Y_n^i|}{|Y_m^i \cup Y_n^i|} $$ In our annotation process, we hired 8 annotators and conducted two rounds of checks, with no overlap among annotators in each round (see Appendix K). For the first round, the inter-annotator agreement is shown as follows:

| | A₁ | A₂ | A₃ | A₄ |
|------|------|------|------|------|
| **A₁** | 1.00 | 0.57 | 0.47 | 0.51 |
| **A₂** | 0.57 | 1.00 | 0.49 | 0.48 |
| **A₃** | 0.47 | 0.49 | 1.00 | 0.46 |
| **A₄** | 0.51 | 0.48 | 0.46 | 1.00 |

For the second round, the inter-annotator agreement is shown as follows:

| | A₅ | A₆ | A₇ | A₈ |
|------|------|------|------|------|
| **A₅** | 1.00 | 0.66 | 0.71 | 0.77 |
| **A₆** | 0.66 | 1.00 | 0.64 | 0.67 |
| **A₇** | 0.71 | 0.64 | 1.00 | 0.69 |
| **A₈** | 0.77 | 0.67 | 0.69 | 1.00 |

We observe that through multi-round checks, the inter-annotator agreement gradually increases. These results demonstrate the necessity of multi-round checks, which help enhance label reliability.
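The Jaccard-based agreement computation described in the rebuttal can be sketched as follows; this is a minimal illustration in which the function name and the toy label sets are made up, not taken from the paper:

```python
def pairwise_agreement(annotations_m, annotations_n):
    """Mean Jaccard similarity between two annotators' per-sample label sets.

    annotations_m[i] and annotations_n[i] are sets of open-vocabulary emotion
    labels for sample i. Names and data here are illustrative.
    """
    scores = []
    for labels_m, labels_n in zip(annotations_m, annotations_n):
        union = labels_m | labels_n
        # Two empty label sets are treated as perfect agreement.
        scores.append(len(labels_m & labels_n) / len(union) if union else 1.0)
    return sum(scores) / len(scores)

# Toy example: two annotators labeling the same two samples.
a1 = [{"happy", "joyful"}, {"angry"}]
a2 = [{"happy"}, {"angry", "frustrated"}]
```

On this toy pair, each sample shares one of two labels in the union, so the mean agreement is 0.5; unlike Cohen's Kappa, this measure needs no fixed label space, which is why it suits the open-vocabulary setting.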
Summary: This paper extends traditional MER and introduces a novel task called open-vocabulary MER (OV-MER). The primary motivation behind this is to expand the scope of emotion recognition to encompass more fine-grained emotion labels. Since OV-MER is a newly proposed task lacking datasets, metrics, and baselines, the authors further construct a dataset (OV-MERD), define metrics (set-level metrics based on GPT and the emotion wheel), and establish baselines (baselines based on LLMs). In summary, this paper extends traditional MER tasks to OV-MER, offering a new research direction in this field. Claims And Evidence: Yes. This paper contains extensive experiments to support its conclusions. Methods And Evaluation Criteria: Yes. Theoretical Claims: This is not a theoretical paper. Experimental Designs Or Analyses: Yes. Their experiment designs and analyses are clear and sound. This paper first presents the baseline results on OV-MER. Subsequently, it verifies the impact of different annotation methods, different baseline generation strategies, the correlation among different metrics, and the rationality of labels in OV-MERD. Meanwhile, there are extensive experiments in the appendix that explore more aspects. Supplementary Material: Yes. The code, dataset, and some baseline results are provided in the supplementary material. Relation To Broader Scientific Literature: The key contribution of this paper lies in extending traditional MER, which has a fixed label space, to OV-MER. OV-MER encompasses a more diverse range of emotion labels, thereby facilitating a more accurate description of emotions. Essential References Not Discussed: No, essential related work has been correctly cited. Other Strengths And Weaknesses: 1. In Table 2, some baseline models include both 7B and 13B versions. Please specify which version is used in the leaderboard. 2. Please explain the reason for reporting both English and Chinese results in Table 2. 3.
Please discuss the correlation between English and Chinese results. 4. Please explain how the OV-MERD dataset is annotated. Traditionally, due to the fixed label space, MER datasets usually employ multiple annotators and use majority voting to determine the final label. In OV-MER, please explain the method for determining the final label. 5. Please discuss why the GPT-based scores are highly correlated with the EW-based metrics but less correlated with the matching-based scores. 6. In Table 12, it seems that M*-L2 is always more correlated with the GPT-based metrics than M*-L1. Please explain this further. Other Comments Or Suggestions: Please refer to my comments in the weaknesses part. Questions For Authors: Please refer to my comments in the weaknesses part. Ethical Review Concerns: Not needed. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your positive feedback and recognition of our contributions to advancing MER research. Your comments on our work are truly valuable to us. **Q1:** In Table 2, some baseline models include both 7B and 13B versions. Please specify which version is used in the leaderboard. **A1:** Thank you for your suggestion. In this paper, **we use the 7B model by default**, and we will clarify this in the revised manuscript. **Q2:** Please explain the reason for reporting both English and Chinese results in Table 2. **A2:** In Figure 2, we observe that there are certain differences in the labels extracted from different languages. **To study the impact of language differences**, we report results for both English and Chinese descriptions in Table 2. **Q3:** Please discuss the correlation between English and Chinese results. **A3:** We utilize the results from Table 2 to compute Pearson correlation coefficients (PCC) between the English and Chinese results for each metric. As illustrated in the following table, **all metrics demonstrate strong cross-linguistic correlations.**

| Metric | Fₛ | Precisionₛ | Recallₛ |
|--------------|--------|------------|--------|
| PCC scores | 0.9896 | 0.9738 | 0.9817 |

**Q4:** Please explain how the OV-MERD dataset is annotated. Traditionally, due to the fixed label space, MER datasets usually employ multiple annotators and use majority voting to determine the final label. In OV-MER, please explain the method for determining the final label. **A4:** We appreciate your comment regarding our annotation process. **As detailed in Appendix K, our labeling procedure involved eight annotators familiar with emotion definitions and utilized a two-round verification pipeline.** In the first round, four randomly selected annotators performed independent annotations.
In the second round, the labels reviewed by the first group of annotators were merged, and the remaining four annotators conducted a second round of checks. This approach ensures that each preserved label receives confirmation from at least one annotator per round, thereby guaranteeing both the comprehensiveness and accuracy of the annotation results. **Q5:** Please discuss why the GPT-based scores are highly correlated with the EW-based metrics, but less correlated with the matching-based scores. **A5:** GPT-based and EW-based metrics are two grouping techniques used in our work, both focusing on calculating emotion label similarity. In contrast, matching-based scores emphasize word-level matching between two descriptions, including non-emotional words. Consequently, matching-based metrics exhibit lower correlation with GPT-based and EW-based metrics. **Q6:** In Table 12, it seems that M*-L2 is always more correlated with the GPT-based metrics than M*-L1. Please explain this further. **A6:** M*-L1 emphasizes coarse-grained clustering information, whereas M*-L2 emphasizes fine-grained clustering information. The higher correlation between M*-L2 and GPT-based metrics suggests that **GPT-based metrics primarily rely on fine-grained emotion clustering during the metric calculation.**
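The cross-linguistic correlation check in A3 boils down to a plain Pearson correlation over per-model metric values. A minimal sketch follows; the numbers below are made up for illustration and are not taken from Table 2:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    std_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    std_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (std_x * std_y)

# Toy per-model F-scores under English vs. Chinese descriptions.
english = [55.1, 60.3, 48.7, 52.0]
chinese = [54.0, 61.2, 49.5, 51.1]
```

A value near 1.0, as in the rebuttal's table, means model rankings are essentially preserved across the two languages.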
Summary: The paper presents a novel paradigm for Open-Vocabulary Multimodal Emotion Recognition (OV-MER), addressing the limitations of existing MER systems that rely on predefined emotion taxonomies. The key contributions include: 1. A new MER paradigm (OV-MER): Unlike traditional MER, which limits emotions to a fixed set of labels, OV-MER enables models to predict emotions beyond predefined categories, allowing for a more nuanced and flexible representation of human emotions. 2. The OV-MERD dataset: A newly curated multimodal emotion dataset that supports open-vocabulary annotation, leveraging a human-LLM collaboration strategy to improve label richness. 3. New evaluation metrics: Since OV-MER allows flexible labeling, the paper proposes set-based evaluation criteria, including similarity-based grouping methods and modified precision-recall metrics. 4. Benchmarking and analysis: The paper provides extensive experiments evaluating state-of-the-art multimodal large language models (MLLMs) on the OV-MER task, showing that current models struggle to handle the complexity of open-vocabulary emotion recognition. The study suggests that OV-MER can significantly improve the generalizability of MER systems and facilitate a more human-like emotional understanding in AI applications. Claims And Evidence: The claims made in the paper are generally well-supported by experiments and analysis. Methods And Evaluation Criteria: The methods and evaluation criteria are well-justified for the problem, particularly: • Dataset Construction: The OV-MERD dataset introduces a novel human-LLM hybrid annotation process, which improves label diversity. • Evaluation Metrics: The study introduces set-based precision-recall metrics tailored for open-vocabulary tasks, which is appropriate given the nature of OV-MER. • Experimental Design: The evaluation compares multiple state-of-the-art MLLMs, making the benchmarks robust. 
A potential limitation is that human evaluations (e.g., expert reviews on model-generated labels) are not included, which could provide a more qualitative assessment of OV-MER’s effectiveness. Theoretical Claims: The paper does not focus on formal theoretical proofs but rather on empirical evaluation. Therefore, there are no formal proofs to verify. Experimental Designs Or Analyses: The experimental design is robust and well-executed, with appropriate baselines. However, two areas could be improved: 1. Ablation studies on the impact of different modalities (text, audio, video) are limited. While results indicate multimodal inputs improve performance, further analysis on their individual contributions would be valuable. 2. Generalization to real-world data is not extensively discussed—datasets are primarily sourced from movies and TV shows, which may not fully capture spontaneous human emotions. Supplementary Material: The supplementary material includes additional dataset details, evaluation methodologies, and model results, which enhance reproducibility. Relation To Broader Scientific Literature: The paper builds upon existing MER work but extends it toward open-vocabulary recognition, aligning with recent advances in LLM-driven perception models. Essential References Not Discussed: The paper provides a solid literature review on multimodal emotion recognition (MER) and open-vocabulary learning. Other Strengths And Weaknesses: Strengths Novelty: The introduction of open-vocabulary emotion recognition is a significant departure from traditional MER and provides greater flexibility in capturing nuanced human emotions. Dataset Quality: The OV-MERD dataset is well-constructed, with human-LLM collaboration ensuring diverse and high-quality annotations. Evaluation Metrics: The paper carefully designs set-based evaluation metrics, which are more appropriate for open-vocabulary settings than traditional classification metrics. 
Comprehensive Benchmarking: The study evaluates state-of-the-art MLLMs and provides thorough performance comparisons, offering valuable insights into current limitations in emotion AI. Weaknesses Limited real-world validation: The dataset is primarily sourced from movies and TV series, which may not fully represent spontaneous, real-life emotional expressions. A discussion on potential domain adaptation strategies would strengthen the work. No explicit human evaluation of model predictions: While human reviewers refine dataset labels, there is no separate human assessment of final model outputs. A user study evaluating how well OV-MER aligns with human perception would be beneficial. Other Comments Or Suggestions: No other comments or suggestions. Questions For Authors: I recommend conducting an ablation study to analyze the individual contributions of text, audio, and video modalities in OV-MER. This would help clarify the relative importance of each modality and provide insights into how multimodal integration enhances emotion recognition. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive feedback on our work. We greatly appreciate your recognition that OV-MER represents a significant advancement in MER systems, enhancing their generalizability and enabling more human-like emotional understanding in AI applications. **Q1:** While human reviewers refine dataset labels, there is no separate human assessment of final model outputs. A user study evaluating how well OV-MER aligns with human perception would be beneficial. **A1:** Thanks for your insightful comment. To assess how well OV-MER aligns with human perception, we conducted a user study. Specifically, we hired 9 annotators and randomly selected 20 samples from our dataset. Each annotator was presented with (sample, OV-MERD label) pairs and asked to judge their alignment with human perception using a binary (Yes/No) response format. To ensure annotation quality, we also included inspection data consisting of (sample, incorrect label) pairs. **We observe that 96% of the annotations confirm the alignment between OV-MERD labels and human perception. Considering potential annotator errors, this result demonstrates that our OV-MERD labels align well with human perception.** **Q2:** Ablation studies on the impact of different modalities (text, audio, video) are limited. While results indicate multimodal inputs improve performance, further analysis of their individual contributions would be valuable. **A2:** Thanks for your suggestion. In Table 2, we observe that CLUE-Video outperforms CLUE-Text, consistent with the nature of our OV-MERD dataset. To be specific, OV-MERD is derived from MER2023, where the textual modality contributes less than the visual modality in emotion recognition [1].
Meanwhile, CLUE-Audio achieves superior performance over both CLUE-Text and CLUE-Video, suggesting that although textual expressions may be ambiguous for emotion recognition, combining them with audio cues can effectively resolve these ambiguities, leading to better performance. [1] Lian, Zheng, Licai Sun, Yong Ren, Hao Gu, Haiyang Sun, Lan Chen, Bin Liu, and Jianhua Tao. "Merbench: A unified evaluation benchmark for multimodal emotion recognition." arXiv preprint arXiv:2401.03429 (2024). **Q3:** The dataset is primarily sourced from movies and TV series, which may not fully represent spontaneous, real-life emotional expressions. A discussion on potential domain adaptation strategies would strengthen the work. **A3:** OV-MERD is derived from MER2023, which is sourced from high-rated movies and TV shows. **The high ratings serve as an implicit validation of the actors' performances, ensuring spontaneous and realistic emotional expressions.** Currently, this type of dataset is the mainstream in the MER research community, as it provides a cost-effective means to expand dataset scale. **In the future, we plan to apply for additional funding to collect data featuring spontaneous, real-life emotional expressions by recruiting participants. Furthermore, we will employ domain adaptation techniques (e.g., Domain-Adversarial Neural Networks, DANN) to address potential domain gaps between different data sources.** These will be incorporated into the future work section of the revised manuscript.
Summary: This paper proposes a novel paradigm by integrating the open-vocabulary concept into Multimodal Emotion Recognition (MER), which facilitates emotion prediction without relying on predefined categories. Specifically, the authors introduce a new dataset generated via their proposed CLUE-Multi Generation method, accompanied by novel evaluation metrics and preliminary benchmarks designed to improve MER applicability in real-world scenarios. Claims And Evidence: Strengths: 1. The paper is comprehensive and well-organized. 2. The figures are clear, rich and concise. 3. Extensive experimental evaluations have been conducted. Limitations: 1. The authors introduce a new task, Open-Vocabulary MER (OV-MER), accompanied by a self-constructed dataset and evaluation metrics. However, the details regarding data collection and processing lack sufficient transparency. The authors compare their results, obtained on their dataset, against several general multimodal large language models (MLLMs) without fine-tuning, raising concerns about fairness and validity. For example, GPT-4V achieves scores of 55.51, 48.52, 64.86, 57.21, 54.61, and 60.07 (Table 2), whereas CLUE-Multi obtains significantly higher scores of 80.05, 80.03, 80.07, 85.16, 87.09, and 83.31, respectively. This substantial performance gap suggests potential methodological issues and undermines the credibility of the reported results. 2. Despite advocating for an open-vocabulary framework, the authors still categorize emotions into 236 classes (Table 5). While understandable for task evaluation purposes, the existence of finite categories means that traditional MER methods (such as [1-2]) could also be evaluated on the constructed dataset. However, the authors have not provided comparative experiments with conventional MER methods. Furthermore, all baselines listed in the main experiments are MLLM-based, neglecting the traditional MER approaches. 
Such an omission significantly limits the robustness and comprehensiveness of the presented analysis. [1] Multimodal transformer for unaligned multimodal language sequences. ACL 2019. [2] Decoupled multimodal distilling for emotion recognition. CVPR 2023. 3. During dataset construction, the authors perform manual verification only for labels generated by the ALLM/VLLM ("check×2"), while textual data generated by the LLM (through "merge & analysis") undergo no manual checks, as explicitly stated by the authors. Considering the hallucination problem commonly observed in LLM-generated texts, adding manual verification for these texts is necessary. The lack of this critical step casts doubt on the reliability and rationality of the dataset construction process. 4. The authors state, "This dataset is an extension of MER2023 (Lian et al., 2023), from which we randomly select a portion of samples for further annotation." However, the description of dataset scale—including dataset size and proportions for training, validation, and testing—is absent both from the dataset description and from Table 1. This omission raises concerns regarding potential overfitting due to insufficient data, thus weakening confidence in the reported experimental outcomes. The vague dataset description fails to convincingly support the reliability of the experimental results. 5. Regarding the definition of the evaluation metrics, the authors introduce a subscript "s" but do not explain its meaning or significance. Clarification of this notation is necessary to ensure clear understanding and reproducibility. 6. Although I acknowledge the comprehensive nature of this paper and appreciate its overall completeness, I find the heavy reliance on large language models for both dataset and methodological construction lacking sufficient theoretical underpinning—an essential criterion for ICML submissions.
Given its current state, this work may be more suitable for submission to the NeurIPS Datasets and Benchmarks track, as it does not fully meet the rigorous theoretical standards expected for ICML (and of course this needs to be evaluated by the AC and PC as well). Methods And Evaluation Criteria: Please see Limitations 1 and 2. Theoretical Claims: I cannot evaluate the theoretical claims of this paper, as it contains none. Experimental Designs Or Analyses: Please see Limitations 1 and 2. The experimental designs and results are not convincing. Supplementary Material: The supplementary material contains some demo code. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: Please see Limitation 2. Other Strengths And Weaknesses: No other strengths or weaknesses. Other Comments Or Suggestions: No other comments or suggestions. Questions For Authors: Please see the Limitations. Ethical Review Flag: Flag this paper for an ethics review. Ethics Expertise Needed: ['Discrimination / Bias / Fairness Concerns', 'Inappropriate Potential Applications & Impact (e.g., human rights concerns)', 'Privacy and Security', 'Legal Compliance (e.g., GDPR, copyright, terms of use)', 'Responsible Research Practice (e.g., IRB, documentation, research ethics, participant consent)'] Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1:** CLUE-Multi achieves higher scores than GPT-4V. This performance gap suggests potential methodological issues and undermines the credibility of the results. **A1:** We believe the reviewer may have misunderstood the results in Table 2. As explained in Section 4.1 and illustrated in Figure 2, CLUE-Multi is the ground truth derived from **manually verified visual and acoustic clues**. In contrast, as shown in Figure 3, GPT-4V belongs to the CLUE-MLLM category, where **it does not utilize manually verified clues**. Therefore, **CLUE-Multi is the upper-bound performance, while GPT-4V is a baseline.** To enhance the clarity, we will revise the manuscript as follows: (1) The caption for the second part will be updated from "CLUE-MLLM" to "CLUE-MLLM (Baselines)". (2) The caption for the third part will be revised from "CLUE-M/A/T/V" to "CLUE-M/A/T/V (Upper-Bound Performance)". **Q2:** Comparison with conventional MER methods. **A2:** Traditional MER methods are not applied due to fundamental differences in our experimental setup. Specifically, conventional methods require identical label spaces $\mathcal{Y}$ for both training and testing sets. They cannot predict unseen emotions, i.e., $y \notin \mathcal{Y}$. However, we use an open-vocabulary annotation manner for each sample, which inherently cannot guarantee alignment between training and testing label spaces. **Since MLLM-based methods give us more freedom in emotion prediction, our work primarily leverages MLLM-based solutions.** **If we force the traditional MER approach, these models can only predict labels that belong to their training label space**. To address your concerns, we train on the IEMOCAP (or MELD) dataset and test on OV-MERD, following the zero-shot experimental setup for a fair comparison. For the model architecture, we evaluate both the attention model and MulT. 
|Model|M3-W1 L1|M3-W1 L2|M3-W2 L1|M3-W2 L2|M3-W3 L1|M3-W3 L2|M3-W4 L1|M3-W4 L2|M3-W5 L1|M3-W5 L2|
|-|-|-|-|-|-|-|-|-|-|-|
|**Traditional Discriminative Models**|||||||||||
|MELD+MulT|30.74|17.76|30.67|18.45|28.08|23.58|29.89|23.68|24.72|20.79|
|MELD+Attention|33.61|23.16|32.27|23.42|35.17|30.41|30.88|25.75|33.72|29.53|
|IEMOCAP+MulT|42.67|30.27|43.50|30.79|42.10|37.21|40.75|34.31|41.00|36.55|
|IEMOCAP+Attention|45.64|32.23|46.18|32.31|44.42|39.23|43.40|36.67|43.65|38.49|
|**MLLM-based Generative Models**|||||||||||
|Chat-UniVi|57.00|42.25|57.50|42.43|56.80|45.66|55.86|41.97|55.81|43.61|

**Q3:** Manual checks were performed only for ALLM/VLLM outputs, not for texts generated by LLMs. **A3:** During our pipeline design, we observed noticeable errors and hallucinations in the outputs from ALLM/VLLM. In contrast, when it comes to the text merging task, given that GPT-series models exhibit impressive performance in reading comprehension [1] (close to human performance) and considering that multi-clue merging is a fundamental function in reading comprehension, we directly adopt the merging results from GPT-series models. This decision balances dataset reliability and construction cost. The user study in our response to Reviewer 2UWh further validates the high quality of LLM-based MER outputs. [1] Brown, Tom, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan et al. "Language models are few-shot learners." Advances in neural information processing systems 33 (2020): 1877-1901. **Q4:** Training, validation, and testing are absent. This omission raises concerns regarding potential overfitting due to insufficient data. **A4:** We believe the reviewer may have misunderstood some aspects of our work. This paper focuses on establishing a **zero-shot benchmark** for OV-MER. Consequently, all data is used for inference, meaning every sample serves as part of the testing set.
**Since our method does not involve any training process, the overfitting problem does not exist at all.** In this paper, our primary contributions are to propose a new task (OV-MER) and lay the groundwork for OV-MER. In our follow-up work (as discussed in Section B. Limitations), we plan to develop more effective frameworks to better address OV-MER. **Q5:** Meaning of the subscript "s". **A5:** The subscript "s" indicates that these metrics are **set-based**, distinguishing them from the traditional **single-label** metrics. **Q6:** Lack of theoretical parts; more suitable for the NeurIPS Datasets and Benchmarks track. **A6:** This paper goes beyond merely proposing a dataset or establishing benchmarks within existing paradigms. Instead, it represents **a significant task-wise innovation** in the MER field by introducing a novel paradigm that enables more accurate emotion modeling. Additionally, this paper employs a **psychological theory-based method** to address the challenging evaluation problem of OV-MER. Given these substantial contributions to the MER research community, we firmly believe this work meets ICML's standards. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response. I have carefully read this rebuttal, and some of my concerns are addressed. I also have some questions and suggestions below: 1. As answered in A2, providing results from traditional MER methods can further enable readers to understand the value of this work and avoid ambiguity and misunderstanding. Therefore, I suggest the authors discuss more traditional MER methods in the revision. 2. What is the attention model? To my knowledge, MulT is also an attention-based method. Please clarify this. 3. As answered in A4, the key contribution of this work is to build a zero-shot benchmark for OV-MER.
Therefore, the authors should open-source this benchmark with high quality (including the dataset and code for obtaining the results of Tables 2, 3, etc.) if this paper is accepted; otherwise this paper will be meaningless. Based on this rebuttal, I would like to raise my rating. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for improving the score. **Q1:** As answered in A2, providing results from traditional MER methods can further enable readers to understand the value of this work and avoid ambiguity and misunderstanding. Therefore, I suggest the authors discuss more traditional MER methods in the revision. **A1:** Based on your suggestions, we have conducted additional experiments using traditional MER methods.

|Model|M3-W1 L1|M3-W1 L2|M3-W2 L1|M3-W2 L2|M3-W3 L1|M3-W3 L2|M3-W4 L1|M3-W4 L2|M3-W5 L1|M3-W5 L2|
|-|-|-|-|-|-|-|-|-|-|-|
|**Traditional Discriminative Models**|||||||||||
|MELD+MFM[1]|22.28|13.51|21.77|13.51|19.59|17.67|22.10|18.20|16.72|14.82|
|MELD+MISA[2]|28.72|21.75|27.59|22.43|34.31|28.50|26.19|21.80|34.79|29.24|
|MELD+GMFN[3]|34.28|22.16|33.77|22.47|32.40|29.16|33.43|28.18|29.43|26.50|
|MELD+MFN[4]|31.19|21.57|30.66|21.66|31.26|28.02|32.42|25.54|27.97|24.89|
|MELD+MulT[5]|30.74|17.76|30.67|18.45|28.08|23.58|29.89|23.68|24.72|20.79|
|MELD+LMF[6]|41.47|27.70|40.86|28.43|42.29|37.36|38.54|32.83|40.05|35.16|
|MELD+TFN[7]|31.91|20.54|31.41|20.56|31.15|26.75|28.41|23.81|29.68|25.36|
|MELD+Attention[8]|33.61|23.16|32.27|23.42|35.17|30.41|30.88|25.75|33.72|29.53|
|IEMOCAP+MFM[1]|45.46|32.86|47.55|33.12|46.37|39.90|43.03|36.97|43.97|39.28|
|IEMOCAP+MISA[2]|49.14|35.98|48.80|36.53|48.66|43.86|47.31|39.82|48.21|43.37|
|IEMOCAP+GMFN[3]|49.35|35.85|49.57|36.09|49.18|43.29|46.72|39.28|47.30|42.71|
|IEMOCAP+MFN[4]|50.56|36.82|50.86|36.72|49.97|44.70|48.69|40.55|48.97|44.11|
|IEMOCAP+MulT[5]|42.67|30.27|43.50|30.79|42.10|37.21|40.75|34.31|41.00|36.55|
|IEMOCAP+LMF[6]|46.34|32.44|46.42|32.94|44.19|39.22|44.23|36.78|43.57|38.57|
|IEMOCAP+TFN[7]|46.13|33.45|46.66|33.91|46.27|41.27|42.31|35.95|45.82|40.69|
|IEMOCAP+Attention[8]|45.64|32.23|46.18|32.31|44.42|39.23|43.40|36.67|43.65|38.49|

[1] Tsai, Yao-Hung Hubert, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency, and Ruslan Salakhutdinov. "Learning Factorized Multimodal Representations." ICLR.
[2] Hazarika, Devamanyu, Roger Zimmermann, and Soujanya Poria. "MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis." ACM Multimedia.
[3] Zadeh, AmirAli Bagher, Paul Pu Liang, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. "Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph." ACL.
[4] Zadeh, Amir, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. "Memory Fusion Network for Multi-View Sequential Learning." AAAI.
[5] Tsai, Yao-Hung Hubert, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. "Multimodal Transformer for Unaligned Multimodal Language Sequences." ACL.
[6] Liu, Zhun, and Ying Shen. "Efficient Low-rank Multimodal Fusion with Modality-Specific Factors." ACL.
[7] Zadeh, Amir, Minghai Chen, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. "Tensor Fusion Network for Multimodal Sentiment Analysis." EMNLP.
[8] Lian, Zheng, Licai Sun, Yong Ren, Hao Gu, Haiyang Sun, Lan Chen, Bin Liu, and Jianhua Tao. "MERBench: A Unified Evaluation Benchmark for Multimodal Emotion Recognition." arXiv.

**Q2:** What is the attention model? To my knowledge, MulT is also an attention-based method. Please clarify this. **A2:** The "Attention" model refers to a foundation model architecture in MERBench [8]. Specifically, let $f_i^a \in \mathbb{R}^{d_a}$, $f_i^v \in \mathbb{R}^{d_v}$, and $f_i^l \in \mathbb{R}^{d_l}$ denote the acoustic, visual, and lexical features for a sample $x_i$, respectively.
This model first converts all inputs into the same dimension and then computes importance scores $\alpha_i$ for each modality. Subsequently, it employs weighted fusion to obtain multimodal features $z_i$, which are utilized for emotion prediction. For more details, please refer to MERBench.
\begin{equation} h_i^m = \mbox{ReLU}\left(f_i^m W_m^h + b_m^h\right), \quad m \in \{a, l, v\}, \end{equation}
\begin{equation} h_i = \mbox{Concat}\left(h_i^a, h_i^l, h_i^v\right), \end{equation}
\begin{equation} \alpha_i = \mbox{softmax}\left(h_i^T W_\alpha + b_\alpha\right), \end{equation}
\begin{equation} z_i = h_i \alpha_i. \end{equation}
Here, $W_m^h \in \mathbb{R}^{d_m \times h}$, $b_m^h \in \mathbb{R}^{h}$, $W_\alpha \in \mathbb{R}^{h \times 1}$, and $b_\alpha \in \mathbb{R}^{3}$ are trainable parameters. For the output, we have $h_i^m \in \mathbb{R}^{h}$, $h_i \in \mathbb{R}^{h \times 3}$, $\alpha_i \in \mathbb{R}^{3 \times 1}$, and $z_i \in \mathbb{R}^{h}$. **Q3:** As answered in A4, the key contribution of this work is to build a zero-shot benchmark for OV-MER. Therefore, the authors should open-source this benchmark with high quality (including the dataset and code for obtaining the results of Tables 2, 3, etc.) if this paper is accepted; otherwise this paper will be meaningless. **A3:** Thank you for your comments. We promise to publish all the data and baselines if this paper is accepted.
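For concreteness, the weighted attention fusion described in A2 above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the MERBench implementation: the function and variable names are ours, and random weights stand in for trained parameters.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over all entries
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fusion(f_a, f_l, f_v, params):
    """Project each modality to a shared h-dim space with ReLU,
    score the three modalities with softmax attention, then fuse."""
    hs = []
    for m, f in zip("alv", (f_a, f_l, f_v)):
        W, b = params[m]
        hs.append(np.maximum(f @ W + b, 0.0))   # h_i^m = ReLU(f_i^m W + b)
    H = np.stack(hs, axis=1)                    # h_i: (h, 3), Concat
    W_alpha, b_alpha = params["alpha"]
    alpha = softmax(H.T @ W_alpha + b_alpha)    # alpha_i: (3, 1) scores
    return (H @ alpha).squeeze(-1)              # z_i: (h,), weighted fusion

# Toy usage with random stand-in weights
rng = np.random.default_rng(0)
d_a, d_l, d_v, h = 5, 7, 4, 8
params = {m: (rng.normal(size=(d, h)), np.zeros(h))
          for m, d in zip("alv", (d_a, d_l, d_v))}
params["alpha"] = (rng.normal(size=(h, 1)), np.zeros((3, 1)))
z = attention_fusion(rng.normal(size=d_a), rng.normal(size=d_l),
                     rng.normal(size=d_v), params)
```

Because the attention weights sum to one, the fused feature is a convex combination of the three per-modality projections, which is what makes the importance scores interpretable.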
Gandalf the Red: Adaptive Security for LLMs
Accept (poster)
Summary: The paper introduces a crowdsourced platform (RedCrowd) to test model security defences against prompt attacks and quantify their usability impact. To that end, they propose D-SEC, a model for assessing the security-utility trade-off. Using 279,000 attacks, they demonstrate the strengths and weaknesses of various defences. Claims And Evidence: The claims are partially supported, but there are also points with insufficient evidence. I found the claim that stricter security measures reduce usability supported, as well as the defence-in-depth finding. I found the "adaptiveness" rather simplistic, so I'm not sure if we can claim generality in this case. Moreover, I am not convinced that the attacks are realistic enough to draw conclusions about the behaviour/response of the models in high-risk prompts. Methods And Evaluation Criteria: I found the idea of crowdsourcing data an excellent way of collecting traces from various types of "adversaries" with different approaches and strengths. However, I have concerns about the validity of the results. In particular: 1. I don't think that the security/utility trade-off (formula 1) provides any actionable information. Such metrics have been studied for some time in the computer security & usability field, and identifying high-quality metrics is more nuanced (e.g., Savola, Reijo M. "Quality of security metrics and measurements." Computers & Security 37 (2013): 78-90.) 2. As I mentioned earlier, I don't think the tasks are reliable enough to extract conclusions about the real capabilities of the models tested. One of my concerns is sandbagging. In this case, the model may not take the task seriously (e.g., password) and thus does not actually "do its best". Another (smaller) issue is with assessing the utility of the model, as the quality of the responses may degrade without refusal. Cosine similarity and response length may give an indication, but working with embeddings would probably give more reliable results.
Theoretical Claims: N/A Experimental Designs Or Analyses: Please see "Methods And Evaluation Criteria", but I would like to ask why newer models such as 4o (which was available at the time) were not included in the experiments. It is possible that more capable models will exhibit distinctly different trade-offs. Supplementary Material: I validated the adaptive_defence implementation and worked through the rest of the experiments included in the source code. Relation To Broader Scientific Literature: There are only a few works in this space that involve such a high number of participants. This is a strong point for this work, as the attackers are considerably more realistic than what's used in many other works. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I found the paper well written and accessible. It develops and introduces concepts in an intuitive manner, making it easy to follow. Other Comments Or Suggestions: N/A Questions For Authors: 1. I wonder whether it'd be possible to replay traces of past attacks on newer models? This would help update the findings without having to involve participants again. While, of course, the exchange will not be tailored, it may still result in successful attacks (or blocks), which would still be informative (but qualitatively different to the live human attacks). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We group your comments and questions below and respond to each separately. **Generalizability of the results** - The D-SEC framework is not specific to any one application and generalizes beyond the experimental “password extraction” setup presented in the paper. Practitioners can apply security and utility metrics tailored to their use case. While the evaluation focuses on the narrow use case of password extraction, the findings provide generalizable insights as we observe consistent patterns across different target LLMs. For example, selecting a more relevant benign distribution has a consistent and measurable impact on the security-utility tradeoff across models. - We appreciate your comment regarding the use of embeddings in measuring utility drop beyond over-refusal. We would like to clarify that our method does use embeddings to compute cosine similarity, as described in Figure 4. [We are happy to make this clearer in the main text if helpful.] - To acknowledge the limitations of the task, we will add the following sentence to the Discussion section: “The insights from RedCrowd are limited by the narrow application to password extraction, and a broader empirical analysis is needed to draw application-agnostic conclusions.” **Actionable insights from the security-utility trade-off** - An advantage of Eq 1 is that it can be explicitly optimized, providing practitioners with a principled method for selecting defenses based on their specific goals. - In the “Defense in depth” section, we show how optimizing the aggregation of multiple defenses for different $\lambda$ yields a strictly better tradeoff than naive “or” or “and” strategies. - Beyond its practical optimization benefits, the framework highlights the often subtle but important ways in which security interventions can affect user utility, something we believe deserves more attention in the field. 
**Experiments with newer models** We limited our game design and analysis to three target LLMs. We selected models based on two criteria: (a) widespread use in real-world applications, and (b) variation in quality and size. GPT-4 was chosen over GPT-4o, which was not yet available when we designed the game protocol. Additionally, we were concerned that a stronger model might make it too hard for players to succeed, leading them to give up early, though we didn’t test this extensively. Expanding to a larger set of models remains an important direction for future work. We appreciate the suggestion to resubmit prompts to newer models. It is important to note that we cannot replicate our full setup: the adaptive, multi-turn nature of gameplay is lost, and password reveals (central to our success metric) are difficult to detect automatically, especially for obfuscated outputs (see Appendix G). However, two parts of our study can be re-run: the defense-in-depth and utility experiments. We re-ran both on GPT-4o and will include full figures and tables in the appendix. We provide a summary of findings below. **Defense-in-depth**: GPT-4o shows a higher “unblocked” rate (see Figure 5), indicating more frequent circumvention of explicit refusals. Many responses avoid giving the password without triggering simple refusal heuristics, reinforcing the need for stronger semantic detection. **Utility**: The table below reports SCR-derived false positive rates for GPT-4o alongside GPT-4 and GPT-4o-mini, including upper bounds of 95% confidence intervals. GPT-4o is consistently less likely to refuse benign inputs, particularly under strong system prompt defenses (C3). 
| Dataset | Model | B | C2 | C3 |
|----------------|-------------|--------------|--------------|---------------|
| BasicUser | GPT-4o-mini | 0.0% (0.37%) | 0.3% (0.88%) | 0.1% (0.56%) |
| | GPT-4 | 0.3% (0.89%) | 0.5% (1.19%) | 0.2% (0.73%) |
| | GPT-4o | 0.2% (0.72%) | 0.1% (0.56%) | 0.0% (0.37%) |
| BorderlineUser | GPT-4o-mini | 0.0% (6.06%) | 1.7% (9.09%) | 11.9% (22.9%) |
| | GPT-4 | 1.8% (9.39%) | 3.5% (12.1%) | 3.5% (12.1%) |
| | GPT-4o | 0.0% (6.27%) | 1.8% (9.40%) | 0.0% (6.27%) |

We also compare utility metrics based on relative prompt length and cosine similarity to undefended responses. GPT-4o outperforms GPT-4 across all settings except prompt length under the weakest defense (B). As with other models, we observe a drop in utility with stronger defenses, but GPT-4o exhibits smaller degradation overall.

| Model | Length B | Length C3 | CosSim B | CosSim C3 |
|--------|----------|-----------|----------|-----------|
| GPT-4 | 0.982 | 0.852 | 0.962 | 0.933 |
| GPT-4o | 0.960 | 0.892 | 0.967 | 0.953 |
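To illustrate how the two utility metrics above could be computed for a single benign prompt, here is a minimal sketch. It assumes an external `embed` function mapping a response to a vector (supplied by the caller; the toy letter-frequency embedding below is only a stand-in for a real embedding model), and the length ratio is one plausible definition, not necessarily the exact one used in the paper.

```python
import math

def cosine_similarity(u, v):
    # Standard cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def utility_metrics(defended, undefended, embed):
    """Compare a defended response against the undefended response to the
    same benign prompt: relative length and embedding cosine similarity."""
    rel_len = min(len(defended), len(undefended)) / max(len(defended), len(undefended))
    return rel_len, cosine_similarity(embed(defended), embed(undefended))

def toy_embed(text):
    # Letter-frequency vector; epsilon avoids a zero norm for edge cases
    text = text.lower()
    return [text.count(c) + 1e-9 for c in "abcdefghijklmnopqrstuvwxyz"]
```

A defense that leaves responses untouched scores (1.0, 1.0); degraded-but-not-refused responses show up as a similarity drop even when no explicit refusal occurs, which is the effect the table above measures.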
Summary: This paper points out that current defenses against jailbreaking cannot block adaptive attacks yet impose usability penalties on common users. They propose D-SEC, a threat model which models the attackers and the common users in a session view. They then build a platform called RedCrowd to collect prompt attacks and perform analysis. Claims And Evidence: Yes, the paper provides detailed statistics to characterize the interplay among attack, defense, and utility from 279K prompt attacks on the RedCrowd platform. Methods And Evaluation Criteria: Yes, the authors constructed a crowd-sourced platform, RedCrowd, which has been running since May 2023 to collect real-world data against GPT-series models. Theoretical Claims: The threat model is reasonable; it characterizes an attacker who learns from feedback to improve its attack, which is a key reason that current defenses can be broken. Experimental Designs Or Analyses: Yes, the results are convincing. Supplementary Material: Yes, I have read the designs of the tasks on RedCrowd and some of the system prompts. Relation To Broader Scientific Literature: Yes, it would provide the community with a good dataset for studying the relation between attack intensity and the safety level. Essential References Not Discussed: None. Other Strengths And Weaknesses: The paper is well-written. Other Comments Or Suggestions: None Questions For Authors: - I'd like to know the safety policy that RedCrowd cares about. For example, what types of safety violation are collected? And the statistics. - What is the privacy and ethics policy on your platform? Would the jailbreaking topics cause harm to the participants? ===== Post-Rebuttal: I had not noticed that RedCrowd mainly experiments with the password scenario. The authors should highlight this point more prominently. Besides, I'd like to see how this can be extended to more content safety categories, which is not addressed in this work but would be of more practical value.
This explains why I turned down the score to 3. Ethical Review Flag: Flag this paper for an ethics review. Ethics Expertise Needed: ['Responsible Research Practice (e.g., IRB, documentation, research ethics, participant consent)'] Ethical Review Concerns: I think the authors should address the ethics problem in their article, as the RedCrowd platform involves common users in jailbreaking commercial LLM applications. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We appreciate the reviewer’s interest in the safety policy used in RedCrowd. We have now clarified our ethics, privacy and safety policy in the manuscript by adding the following to the introduction: “RedCrowd is a white-hat red-teaming system designed to identify vulnerabilities in commercial LLMs before they can be exploited maliciously. It does not promote or expose users to unsafe content. The password extraction use case focuses on identifying security weaknesses, not on generating or disseminating harmful outputs.” Our focus was narrowly scoped to password extraction, which does not involve the generation of harmful or unsafe content. As such, we did not conduct an explicit analysis of broader safety violations committed by players. A more comprehensive taxonomy and analysis of safety violations is an important direction for future work, particularly as we extend RedCrowd to cover a wider range of misuse scenarios.
Summary: - This paper tackles a really important issue in LLM security – how do we stop prompt attacks without making the user experience terrible? I really liked how the authors separate attackers from regular users in their D-SEC model. A lot of past work just looks at how well a defense blocks attacks, but this paper actually cares about how it affects legit users too, which is super important. - The RedCrowd platform is a really cool idea. Instead of just testing defenses with pre-made benchmarks, they set up a gamified system where real people try to break the model in creative ways. That’s way more realistic than static datasets. Plus, they collect a massive dataset (279k attacks), which makes the analysis feel very solid. - They make a strong case for adaptive defenses. The idea that security shouldn’t just be "block everything suspicious" but should adjust based on the attack patterns is very convincing. Their experiments show that stacking multiple defenses together (defense-in-depth) and making them adaptive works much better than relying on just one method. This is one of the biggest takeaways from the paper. - One thing that could be clearer is how well these ideas apply to real-world LLM applications. They run all tests in a controlled setting with their RedCrowd platform, which is great for research, but I wonder how this would work in actual deployed systems. Like, do companies need to build their own version of RedCrowd to get similar results? Or can they just apply these techniques out of the box? Some discussion on practical deployment would be helpful. - Overall, really solid work with a clear real-world motivation. The focus on balancing security and usability is refreshing, and their dataset + analysis are strong contributions. Some details on real-world applicability could make the work even stronger, but the paper definitely moves the conversation in the right direction. 
### Nits and Prior Relevant Work That Has Not Been Cited: - Relevant to cite for possible adversarial attacks against LLMs https://arxiv.org/abs/2407.14937 Claims And Evidence: - The paper shows that pre-made attack benchmarks often give an overly optimistic view of security because they don’t account for how attackers adapt. Using RedCrowd, they find that non-adaptive defenses fail more often when attackers adjust their strategies across multiple attempts. - Their experiments measure both attacker failure rates (AFR) and how often normal users get blocked (SCR). They show that some defenses built inside the LLM (like strong system prompts) don’t just block attacks but also degrade normal responses, even when they aren’t outright refusing user requests. - The results show that combining multiple defenses (e.g., system prompts + input/output filters + adaptive blocking) is far better than using a single one. Methods And Evaluation Criteria: - Instead of relying on pre-made attack benchmarks, they let real users try to bypass security in a gamified setting. This helps them collect adaptive, realistic attacks rather than just one-off static prompts. - They separate attackers from normal users and analyze how defenses impact both. Their model tracks multiple interactions (not just one-time attacks) to see how attackers adapt over time. - AFR shows how often defenses successfully block attackers from getting the LLM to reveal sensitive info. SCR tracks how often normal users get blocked or experience degraded responses due to security measures. - They test whether adding multiple defenses together (defense-in-depth) and making them adaptive improves security without hurting usability too much. Theoretical Claims: While the paper is largely empirical, it makes some theoretical arguments about security modeling and evaluation that are tested through experiments. They introduce a developer utility function that balances security (AFR) and usability (SCR).
The D-SEC framework argues that evaluating LLM defenses requires modeling attackers and legitimate users separately. This is a conceptual claim about how security should be assessed rather than an empirical result. Experimental Designs Or Analyses: - The authors use a large-scale, crowd-sourced red-teaming approach (RedCrowd) to collect adaptive attacks. Instead of using static benchmarks, they let real users try to break the LLM defenses in a gamified setting. This results in a dataset of 279k real-world attacks, making the evaluation more realistic than pre-scripted attack lists. - They compare different types of defenses: single-layer, multi-layer (defense-in-depth), and adaptive defenses. They test system prompts, input/output filters, and combined approaches to see which strategies work best. They also evaluate adaptive defenses that adjust based on session history. - Security and usability are evaluated separately using Attacker Failure Rate (AFR) and Session Completion Rate (SCR). AFR measures how often attacks fail, while SCR checks how often normal users are blocked. This dual-metric evaluation helps balance security vs. usability. - They analyze attack strategies by categorizing them based on techniques used (e.g., direct requests, obfuscation, persuasion, context override). This helps understand what types of attacks are most effective and how defenses should be designed to counter them. - They also test the impact of restricting LLM application domains (general chatbot vs. specific tasks like summarization or topic-based bots). Results show that more restricted LLM use cases are generally easier to secure. 
The experimental design and analysis are generally sound, with a strong empirical approach using real-world adaptive attacks, well-defined evaluation metrics, and a thoughtful comparison of different defense strategies. Supplementary Material: I went through the Appendix. Relation To Broader Scientific Literature: - Prior work on prompt attacks mostly relies on fixed benchmarks or synthetic attack generation. This paper improves on that by using a dynamic, real-world attack dataset collected via RedCrowd, making it more applicable to real-world security challenges. - The idea of balancing security and usability is well-known in traditional cybersecurity, but it's less explored in LLM defenses. The paper applies these principles to language model security, similar to how adversarial robustness is studied in image recognition or malware detection. - Work like OpenAI’s red-teaming efforts and adversarial testing in NLP show that human-in-the-loop security evaluation is necessary. This paper adds to that by structuring red-teaming in a crowd-sourced, gamified manner, creating a large and diverse attack dataset. - Similar to work on prompt injections, jailbreak attacks, and adversarial NLP techniques, this paper helps categorize how users attempt to break LLM security and which defenses are most effective. Essential References Not Discussed: The paper could be strengthened by talking about https://arxiv.org/abs/2407.14937, which provides a structured threat model for red-teaming LLMs. It organizes red-teaming attacks into different stages of LLM development and deployment, offering a more systematic way to think about security risks. The paper also covers various defense strategies and practical methods for adversarial testing, which align with the goals of this study. Including insights from this work could help place RedCrowd’s findings within a broader security framework and make the comparisons to existing red-teaming methodologies clearer.
Other Strengths And Weaknesses: In the threat model, the goals and capabilities of actors are missing. Other Comments Or Suggestions: Recommend the authors to include relevant citations and a few lines about goals and capabilities of actors. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We have addressed both of your suggested improvements in the manuscript and respond explicitly to the comments below. **Additional citation and discussion on real-world applications** - We will make the implications of RedCrowd for real-world deployments clearer by adding the following in the discussion: “While our experiments use RedCrowd in a controlled setting, the D-SEC framework is designed for real-world deployment. Developers can apply the core strategies—domain restriction, defense aggregation, and adaptive defenses—using their own user data, without needing to replicate RedCrowd in full. While large-scale crowdsourcing remains the gold standard for red teaming, it is resource-intensive; advancing highly capable automated attackers is an important direction for making these evaluations more practical at scale.” **Goals and capabilities of actors** - We appreciate the suggestion to incorporate adversary capabilities and goals into the threat model as discussed in [https://arxiv.org/abs/2407.14937]. We will add the following sentence to the Threat Model section: “Following the framework in [1], we restrict the attacker’s capabilities to sending direct inputs to the target LLM via user messages. While indirect attacks via data (e.g., prompt injections in retrieved content) are also compatible with D-SEC, we focus on the direct setting for simplicity. Attacker goals are broad and application-specific.”
Summary: - the paper studies LLM prompting attacks (crafting prompts to (adversarially) manipulate model behavior) - the paper provides the following: - "D-SEC", a threat model for prompting attacks that: - encompasses an attacker, a model user (who wishes to use the system for benign purposes), and the model developer (who wishes to balance the system's utility to the user by avoiding over-refusal, while improving its resistance to the attacks). - considers the *adaptive* nature of realistic attack & usage scenarios - RedCrowd: a deployed platform that gamifies the data collection of adversarial prompts and defenses - The collected datasets from RedCrowd - the paper justifies the D-SEC threat model based on the collected data and in-the-wild experimentation, and presents useful findings such as - the inefficacy of system prompts (alone) as a defense strategy - the necessity and effectiveness of "in-depth" defense strategies (where combinations of individual strategies outperform their constituents) ## update after rebuttal I appreciate the authors' rebuttal and will keep my score and my assessment that the paper should be accepted. Claims And Evidence: Yes, the claims of the paper are generally well-supported by evidence. Some comments: - The paper claims that a novel threat model (D-SEC) to account for user utility is necessary, and subsequently provides justification through experiments. - However, a key idea behind the security-utility trade-off is *reducing over-refusal* (due to safety training), and this itself is not an entirely new idea; e.g., Anthropic explored techniques to maintain high-quality defenses while reducing over-refusal rates [1, 2]. - The paper claims to provide the collected dataset, which can be found in the appendix. I believe this is a very valuable contribution to the community, specifically for future learning-based defenses.
Refs: - [1] https://www.anthropic.com/news/constitutional-classifiers - [2] https://www.anthropic.com/news/claude-3-family Methods And Evaluation Criteria: Overall the proposed methods and evaluation make sense. Since the paper focuses on a conceptual framework (D-SEC threat model), an in-the-wild data collection and gaming platform, and dataset collection, there is less of an "evaluation of a proposed algorithm" in the traditional sense. Theoretical Claims: Overall the paper does not make theoretical claims. I have one minor question about Eq 1 (security-utility trade-off), which is an abstract formulation: while the general formulation makes sense, why should one expect the "security" and "utility" to be quantifiable in the "same space" as implied by the equation, and that they have a linear relationship? - That is, why should a fixed amount of security (say some constant number like 0.1) be equivalent to a fixed amount of utility? Why not some non-linear relationships? - I understand that the authors opt for "attacker failure rate (AFR)" and "session completion rate (SCR)" to quantify security and utility, and because these are numbers between 0 and 1, it makes sense to put them in the same equation. But I'm curious if the authors explored other relationships between these quantities. Experimental Designs Or Analyses: Overall the experimental design and analyses look reasonable. The experiments of the paper look comprehensive. Supplementary Material: Yes. In particular: - Appendix A: the prompt templates and examples - Appendix H: the provided datasets Relation To Broader Scientific Literature: The paper is related to the jailbreaking literature. The key idea of the paper (threat model to account for more than just attack success) is seen in prior work (e.g. see "Claims And Evidence"), though the key contribution of the paper is novel (an in-the-wild interaction dataset and the platform to collect it).
Essential References Not Discussed: None that I am aware of.

Other Strengths And Weaknesses:

Strengths:
- The paper is well-written and easy to follow.
- References to prior work are comprehensive.

Weaknesses:
- The paper structure could use better clarity; right now the sections are perhaps a bit loosely connected. For example, it would help for the paper to clearly list out what the research questions are for the experiments (if any), and what the key findings are.
- The significance of D-SEC as a novel threat model is debatable. The community is aware that a security-only defense is bad (because it rejects too many benign requests; e.g., [1]).

[1] https://arxiv.org/abs/2311.16119

Other Comments Or Suggestions:
- The paper, in my opinion, can also be (and is arguably better) positioned as a datasets/benchmark paper, as the bulk of the analysis is performed with the in-the-wild RedCrowd platform and the collected session data therein, and the findings (e.g., system prompts are ineffective) can be reported independently of the proposed threat model.
- Consider tightening the use of the words "dynamic" vs. "adaptive"; e.g., in L121, "adaptive" is better suited for "Viewing the threat model as dynamic...". In general, the core of the paper seems to focus on "adaptive" attacks (which has roots in the differential privacy literature, e.g., [1]).
- In Section 2, there are several forward references to Section 4.1, which breaks the flow if the reader does not yet know what Section 4 is about. Consider providing a synopsis of that section before the forward references.

[1] https://differentialprivacy.org/privacy-composition/

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for taking the time to review our paper. Below, we respond to all of the concerns and questions in your review. Please let us know if we missed anything or should expand on our responses.

**Comments regarding the significance and novelty of the D-SEC framework**
- We appreciate the reviewer's observation and agree that prior work on over-refusals is highly relevant. We will incorporate an explicit reference to the suggested papers and add the following sentence to the main text: "User utility has previously been discussed in the context of model over-refusals [1, 2], a key way in which security interventions can degrade model usability. We analyze this effect in depth in the experimental section of this paper."
- We would also like to clarify that the notion of utility in D-SEC is broader than over-refusals alone. D-SEC facilitates simple modifications to the utility function to account for general types of user experience degradation. We propose such example utility metrics in Figure 4, where we measure changes in response content independently of over-refusals. We expect these effects to be even more pronounced in agent deployments, where the security layer can meaningfully impact the program's execution flow.

**Question about Eq (1)**
- You are correct that for Eq (1) to make sense, both security and utility should be measured in comparable units. While this might appear limiting at first glance, it actually only means that the metrics for security and utility need to be appropriately (nonlinearly) transformed before analyzing the security-utility trade-off. We opted for this representation for two reasons: (i) it makes it easier for readers to understand the security-utility trade-off; (ii) it encourages practitioners to select an appropriate transformation of their desired metric for which units have a comparable meaning.
That said, as you allude to, it may not always be as easy as with AFR and SCR to transform metrics onto a comparable scale. We extended footnote 3 by adding: "Both $\mathcal{Q}_{\mathcal{M}}$ and $\mathcal{R}_{\mathcal{M}}$ are assumed to have been transformed appropriately so they have comparable units."

**Suggestions on improving paper structure**
- Listing research questions and findings in experiments: We agree that making our research questions and key findings more explicit will improve the paper's clarity. We will revise the introduction to (a) clearly state our goal of using large-scale crowdsourced red teaming to evaluate the security and utility trade-offs of leading commercial LLMs, and (b) connect it to the main experimental findings regarding the three strategies that consistently improve the trade-off: restricting the application domain, aggregating multiple defenses in a defense-in-depth manner, and using adaptive defenses.
- In the current manuscript, we use "dynamic" to describe systems or assumptions that evolve over time (e.g., the threat model) and "adaptive" to describe mechanisms that incorporate past observations to modify behavior (e.g., attacks or defenses that respond to prior outcomes). We double-checked all occurrences and made sure they align with these definitions.
- Forward references: We agree that introducing later sections early will provide clarity. For example, we will revise the text to include: "Section 4.1 summarizes how utility metrics vary with the choice of benign user distribution and highlights that utility degradation can extend beyond over-refusals."

---

Rebuttal Comment 1.1:

Comment: I appreciate the authors' rebuttal and will keep my score and the assessment that the paper should be accepted.
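The transformation point discussed in the rebuttal above can be made concrete with a small sketch. The logit transform and the weight `lam` below are purely illustrative choices (not the paper's actual metrics or equation); the point is only that AFR and SCR are mapped onto a comparable scale before forming a linear trade-off as in Eq (1):

```python
import math

def logit(p, eps=1e-6):
    # Map a rate in (0, 1) onto the real line; an illustrative choice of
    # nonlinear transform so "units" of security and utility become comparable.
    p = min(max(p, eps), 1 - eps)
    return math.log(p / (1 - p))

def tradeoff(afr, scr, lam=1.0):
    # Linear combination (in the spirit of Eq (1)) of the *transformed*
    # attacker failure rate (security) and session completion rate (utility).
    return logit(afr) + lam * logit(scr)

# Comparing two hypothetical defenses: one raises AFR but hurts SCR.
baseline = tradeoff(afr=0.70, scr=0.95)
defended = tradeoff(afr=0.90, scr=0.80)
```

With these made-up numbers, the "defended" configuration scores lower overall: the utility lost near the high end of SCR outweighs the security gained, which is exactly the kind of judgment the choice of transform encodes.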
Maintaining Proportional Committees with Dynamic Candidate Sets
Accept (poster)
Summary: This paper considers a series of problems about modifying the winner set of a multi-winner election when there are changes to the candidates. The paper investigates three types of voter preferences: rankings, metric-space, and approval (0-1). Both positive (algorithms satisfying certain fairness axioms) and negative (some fairness axioms cannot even be approximated) theoretical results are given.

Claims And Evidence: Yes, the claims are supported by the theorems and proofs.

Methods And Evaluation Criteria: Yes (provided the proofs are correct).

Theoretical Claims: I checked Theorem 3.1 and Theorem 4.1. I think Theorem 3.1 is correct. For Theorem 4.1, I would be careful about drawing the same conclusion. The proof is extremely confusing, as many steps are unexplained. The proof starts by creating clusters, while many details are omitted (such as what the $p_j(i)$ are, and what happens if two clusters with the same diameter come together). Then it goes on to show that the envy relation stays invariant under the modifications, but I don't see where this is applied in the proof. Finally, the part actually concerning the axioms seems to have nothing to do with the dynamic changes. In the right column of lines 277-286, $\rho$ in the formula is undefined; I suspect it is the approximation ratio $\Gamma$ for proportional fairness.

--------------

After rebuttal: Theorem 4.1 is further explained and looks better. I encourage the authors to improve the presentation of the proof.

Experimental Designs Or Analyses: This paper has no experiments.

Supplementary Material: The supplementary materials are mostly the full proofs of the paper. Carefully examining all of them is beyond my capacity.

Relation To Broader Scientific Literature: This paper expands the study of multi-winner elections to the scenario where the candidates may change. The potential significance of the problem is illustrated in the introduction of this paper, where a series of scenarios with changing candidates is given.
Essential References Not Discussed: No.

Other Strengths And Weaknesses:

Strengths: The paper studies an interesting and well-motivated extension of multi-winner voting. The authors study the problem under three different preference models and prove a number of non-trivial theoretical results, giving a somewhat thorough characterization. The paper is also well placed within the previous literature.

Weaknesses: There is large room for improvement in the presentation of the paper, especially the proofs. The proofs lack intuition, intention, and explanations of each step, making it non-trivial to understand what each step is doing. This applies to the contradiction argument in Theorem 3.1 (what makes the contradiction? specify it) and to almost everything in Theorem 4.1. The paper would benefit greatly from intuitive proof sketches, running examples, and more discussion. For now I will give a negative score. If the authors can persuade me that their Theorem 4.1 is correct, I am happy to raise my score.

------

Score updated.

Other Comments Or Suggestions: No.

Questions For Authors:
1. Explain your proof for Theorem 4.1 and the contradiction for Theorem 3.1. Address my questions in the 'correctness' part.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal:

- Explain your contradiction for Theorem 3.1.

Answer: We are given a profile with $n$ voters and a committee $f(t-1)$ that previously satisfied PSC. At time $t$, a new candidate $c_t$ becomes feasible, and $f(t-1)$ may now fail to satisfy PSC -- we would like to fix that with a single swap involving $c_t$. Assume we cannot do so; then $f(t-1)\cup \{c_t\}\setminus \{b\}$ violates PSC for all $b\in f(t-1)$. In total, this yields $k+1$ committees that violate PSC. For each of these committees, we prove that we can reserve $\frac{n}{k}$ voters. Thus, in total there are $(k+1)\frac{n}{k}$ voters -- more than $n$, a contradiction.

- Explain your proof for Theorem 4.1.

Answer: We apologize for the confusion created here; we agree that we were a bit sloppy in writing the proof. We are sorry for this. On a high level, our proof works as follows: we start off with a pre-clustering phase, leading to $k$ groups of agents $N_1$ to $N_k$. In particular, in the way we define it, an agent can be part of multiple groups. To construct these groups, we use a common technique from other papers: we give each of the $n$ agents a budget of $k/n$. With $k$ groups to be opened at a cost of $1$ each, this intuitively means that each agent can open their "proportional share" of a group. We now open these groups by letting the agents buy groups using their share of the budget, and we assign each agent to a group they paid for. After creating these groups, we go one by one and let them pick the cluster center that is closest to them. When a point is added, we update these picked cluster centers and potentially swap cluster centers between the groups (that is the invariant envy relation). In the end, we use the fact that every group has their "favourite" cluster center to bound the factor by which any set of $n/k$ agents can improve.

- The proof is extremely confusing as many steps are unexplained.
The proof starts with creating clusters, while many details are omitted (such as what the $p_j(i)$ are).

Answer: In general, we will make sure to clearly mark the properties that are important for the high-level idea of the proof, and state which arguments are only required to prove these properties. E.g., it does not really matter how the prices are chosen, and we agree that we should explain that. To show that it is possible to define prices, one way to do so is: (1) check for the first voter in $N'$, i.e., $i_1 = \min\{i\in N': b_i > 0\}$, whether she has a budget of $1$ on her own. If yes, we reduce her budget by $1$ and only let her pay for the cluster. (2) If no, voter $i_1$ pays all she has and we set her remaining budget to zero. Consider the next voter $i_2$ in $N'$ with positive budget. If $i_2$ can pay for the remaining cost, she does so and we reduce her budget accordingly. (3) Otherwise, she contributes all of her remaining budget, we set her budget to zero, and we continue.

- And what if two clusters with the same diameter come together?

Answer: If there are ties, any of them can be chosen, i.e., we assume some arbitrary tie-breaking. We will note this in the paper.

- Then it goes on to show the envy relation stays invariant under the modifications, but I don't see where this is applied in the proof.

Answer: The envy relation is technical but guarantees us something important: the chosen committee contains the closest candidate to each cluster $N_i$. We apply this in line 320. Here, $i$ belongs to a cluster $N''$, and hence we can assume that a cluster member is closer to the chosen committee than $i$ is to $c$. We will state our goals more clearly in the proof.

- Finally, the part concerning the axioms seems to have nothing to do with the dynamic changes.

Answer: Indeed, the guarantee holds at every time step.
Note that the tricky, dynamic part of the algorithm is described within the envy cycle procedure -- here, when candidates join or leave the election, we determine, depending on how the envy of the clusters changes, whether we need to modify the committee for this time step.

- For the right column of lines 277-286, $\rho$ in the formula is undefined. I suspect it is the approximation ratio $\Gamma$ for proportional fairness.

Answer: Sorry about this; indeed, $\rho$ is supposed to be $\gamma$, i.e., the improvement factor or approximation ratio. We will declare more clearly where the variable comes from.

---

Rebuttal Comment 1.1:

Comment: Thank you for your explanation. Though I am not totally confident in my understanding of the proof, your high-level explanation helps. I will raise my score.
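The three-case price-assignment procedure described in the rebuttal above amounts to a greedy charging scheme: voters, in order, contribute their remaining budget until the cluster's opening cost is covered. A minimal sketch (the function name and list representation are our own, purely illustrative):

```python
def assign_prices(budgets, cost=1.0):
    # Greedy charging scheme from the rebuttal: each voter in turn pays
    # min(remaining budget, residual cost) until the cost is covered.
    prices = [0.0] * len(budgets)
    remaining = cost
    for i, b in enumerate(budgets):
        if b <= 0 or remaining <= 0:
            continue  # skip voters with no budget; stop paying once covered
        pay = min(b, remaining)
        prices[i] = pay
        remaining -= pay
    return prices

# Three voters with budget 0.4 each open a cluster of cost 1:
# the first two pay in full, the third covers the residual 0.2.
prices = assign_prices([0.4, 0.4, 0.4])
```

The prices always sum to the cluster cost whenever the voters' total budget suffices, which is the only property the proof actually uses.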
Summary: This paper introduces a temporal element to multi-winner voting by studying a model wherein the set of candidates changes over time. This is separated into three settings: incremental (candidates are added over time), decremental (candidates are removed over time), and fully dynamic (candidates can be added or removed over time). These settings are each studied under three different paradigms of multi-winner voting: ranking-based, approval-based, and clustering. Within each paradigm two proportionality axioms are studied. The paper shows that, across paradigms, proportionality can typically be achieved or approximated when candidates are added over time. When candidates are removed or in the fully dynamic setting proportionality can sometimes be approximated. Claims And Evidence: The claims in the paper are described quite clearly. Some theorems are supported by a proof in the main text (while other proofs are relegated to an appendix) and the claims all seem to be quite reasonable and attached to some evidence. Methods And Evaluation Criteria: The paper is entirely theoretical and conceptual. The proofs that I considered were well-written and are the suitable method for supporting a theorem. Theoretical Claims: I lightly reviewed the proofs that are included in the main text and found no issues. Experimental Designs Or Analyses: N/A Supplementary Material: I briefly skimmed the material. I might often object to including definitions of voting rules only in the appendix but find it fairly reasonable in this paper. I do find the definitions fairly abstract though -- while they are of relatively low importance to the paper, more detail would be appropriate (having recently programmed STV, this definition is very far from complete). Relation To Broader Scientific Literature: The paper builds upon three existing paradigms of multi-winner voting and a specific priority (proportionality) which is popular within the multi-winner setting. 
Much work has been done on these topics; this paper adds what seems to be a fairly novel -- but quite natural -- component of dynamicity. The paper fits quite neatly into contemporary work on multi-winner elections within computational social choice.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: In general I find the paper to be quite well written. Put bluntly, I typically find purely theoretical papers to be quite a slog and this was much easier to read than I had anticipated. In considering three paradigms of multi-winner voting and three types of dynamic candidates, the paper manages to quite naturally fit in an impressive amount of content while remaining fairly readable. Many temporal settings have been studied in the single-winner setting (e.g. in iterative voting settings) but I have not previously seen work on dynamic candidates in multi-winner settings. The motivation is quite natural and makes understanding the task seem a useful step towards approaching real-world applications. Personally, I would consider the paper to be more useful in a setting where page limits are not an issue. This might be better as a journal paper but I recognize that in our current time a conference is the expected venue for publication. Relatedly: while the paper fits in quite a bit already and is focused on establishing early theoretical results, I can easily imagine some experiments being informative. In particular, I would be interested in understanding how often the various axioms you study are violated by various voting rules under different parameters (e.g. number of voters, number of candidates, preference distributions).

Other Comments Or Suggestions: Tiny issues to adjust:
- missing a space on line 3 of section 3
- line 336 should probably include W: "a committee $W$ satisfies"

Questions For Authors: I am unlikely to update my evaluation based on your response. Feel free to respond, or not, to any portion of my review.
I would be interested to hear your perspective on the informativeness of any experiments that might be done on this setting. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1:

Rebuttal: Thank you very much for the kind review!

- I would be interested to hear your perspective on the informativeness of any experiments that might be done on this setting.

Answer: We have thought about this for a bit; there are a few possible experiments we could see. Firstly, we already did a bit of experimentation using preference sampling: namely, we tried to find simple examples akin to Observation 1 or Example 3 in sampled data (e.g., we used some preference distributions for approval preferences, ran common voting rules such as MES or PAV, and tried to see if the committees selected by them are not robust to a single deletion). However, we were not able to find any such violations in the short experiments we conducted.

Secondly, what might be a bit more interesting would be to experiment on real-world data. Namely, there are two recently "uncovered" large datasets from last year, one being a dataset of approval-based multiwinner voting elections for so-called proof-of-stake elections by Boehmer et al. [1], and the other a dataset of over 1000 multiwinner ordinal preference elections from local Scottish government elections by McCune and Graham-Squire [2]. In particular, we would find it quite interesting to evaluate the effect of so-called "by-elections", i.e., elections to replace a candidate who dropped out of the parliament. These by-elections are not done using a proportional voting method, but by employing the single-winner instant runoff voting rule. It would be interesting to see if one can somehow measure the impact of this on the proportionality of the outcome.

Interestingly enough, the data provided by Boehmer et al. is actually temporal: in these blockchain elections, there is one election per day. We would find it interesting to see if one can perhaps use this data to evaluate our findings and to actually check whether dynamic multiwinner voting is hard.
(We note that their data comes with the added difficulty that it allows voters to change their preferences.)

References:
[1] Boehmer, Brill, Cevallos, Gehrlein, Sánchez-Fernández, Schmidt-Kraepelin, 2024, AAAI, Approval-based committee voting in practice: a case study of (over-)representation in the Polkadot blockchain.
[2] McCune and Graham-Squire, 2024, Social Choice and Welfare, Monotonicity anomalies in Scottish local government elections.

---

Rebuttal Comment 1.1:

Comment: Thank you for the response. This empirical evaluation you discuss sounds like it would be quite interesting.
Summary: The paper considers multiwinner voting rules when the candidate sets are dynamic in one of 3 ways - candidates arrive one at a time, leave one at a time, or a mix of both. They consider 3 different types of preference classes - ordinal, distance-based, and approval-based. They show that in some settings, there exist voting rules that achieve proportionality (which is defined differently for each preference class) whereas in others, there cannot exist any such rule.

Claims And Evidence: Yes, the claims are appropriately cited or proven.

Methods And Evaluation Criteria: N/A

Theoretical Claims: As far as I can tell, the proofs in the main body of the paper are correct.

Experimental Designs Or Analyses: N/A

Supplementary Material: I briefly skimmed Proposition 3.2 and the proofs of Theorems 5.2-5.4.

Relation To Broader Scientific Literature: There appear to be several related works in similar online settings, but none in their specific models.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: The online models they consider are reasonable, and their results in these models appear promising. They also study several preference models. I also particularly like all the negative results they have, especially for existing voting rules. My only (minor) complaint would be that the proofs don't seem very technically novel, or at least it is unclear to me what the novel parts are. If there are any new ideas, it would be better if the authors could highlight them more.

Other Comments Or Suggestions:
- Page 4, near Line 184: ".As there are .." is missing a space after the period.
- Page 5, proof of Theorem 4.1: I got briefly confused because $i$ is used for both the voters and the index of the clusters $N_i$, which have no relation to the individual voters. I would suggest a different indexing variable.
- Page 13, proof of Proposition 3.2: it should say $t<s$, not $s<t$.
- Page 14, proof of Theorem 5.3: the target size should be $k=6$, not $k=4$.
Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the comments and suggestions. We implemented the changes (for the next version).
Latent Mamba Operator for Partial Differential Equations
Accept (poster)
Summary: This paper introduces LaMO, an SSM-based neural operator designed to overcome the computational limitations of traditional neural operators for solving PDEs. It establishes a kernel integral interpretation of the SSM framework, proving its equivalence to integral kernel neural operators. It achieves an average 32.3% improvement over existing neural operators across multiple PDE benchmarks, including Navier–Stokes, Darcy flow, and elasticity problems.

## update after rebuttal

Thank you for addressing the concerns. I am keeping my original score.

Claims And Evidence:
- Weaknesses:
  - Computational complexity analysis: while LaMO is claimed to be more efficient, a more precise breakdown of runtime (e.g., FLOPs, GPU memory usage per operator) would improve the claims.
  - Ablation studies on kernel choices: the study does not fully explore different kernel configurations for SSM parameterization.

Methods And Evaluation Criteria:
- Strengths: multiple benchmark PDEs and baselines are tested. The evaluation metric (relative L2) also complies with standard practice.
- Weaknesses: while the benchmarks are diverse, an application to real-world turbulent fluid dynamics (e.g., weather modeling, aerodynamics) would strengthen the evaluation.

Theoretical Claims: Claims look good in general. Details of the math were not checked line by line.

Experimental Designs Or Analyses:
- Weaknesses:
  - Limited discussion of hyperparameter selection: the role of latent dimension choices, SSM state sizes, and discretization steps could be better explained.
  - More runtime benchmarks needed: while Figure 4 suggests efficiency, a breakdown of training vs. inference time cost would be helpful.

Supplementary Material: N/A

Relation To Broader Scientific Literature: Sparse kernel methods (e.g., Gaussian Processes for PDEs) could be referenced for comparison.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: Impressive work!
Besides the theoretical contributions, this work adds a novel type of kernel operator (Mamba) to the family of neural operators for PDEs. Its experiments also cover many SOTA baselines, providing a good benchmarking framework for future work on neural operators and Mamba-like kernels. The community would appreciate it if the authors made their code publicly available. (I have gone through the Supplementary Material.) I believe the authors will, yeah^^?

Questions For Authors: See the "Weaknesses" items above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for the positive comments. Please see the responses to your questions below.

>Computational complexity analysis: While LaMO is claimed to be more efficient, a more precise breakdown of runtime (e.g., FLOPs, GPU memory usage per operator) would improve the claims.

**A:** Table 3 in the response above presents the range of parameters per operator. Additionally, Figure 4 in the main text and Appendix Section E.3 offer a detailed analysis of the training, inference, and memory consumption, providing a precise breakdown of the computational requirements.

>Ablation studies on kernel choices: The study does not fully explore different kernel configurations for SSM parameterization.

**A:** In our experiments, we primarily investigate the effect of directionality in kernel parameterization, as presented in Table 6 of the Appendix. The kernel formulation is inspired by Mamba, which employs time-variant parameterization for matrices B and C. Additionally, matrix A follows a diagonal structure, demonstrating superior performance compared to time-invariant parameterization in computer vision and LLM tasks.

>Weaknesses: While the benchmarks are diverse, an application to real-world turbulent fluid dynamics (e.g., weather modeling, aerodynamics) would strengthen the evaluation.

**A:** In response to your suggestion, we have conducted additional experiments on the ERA dataset, which is derived from the fifth generation of ECMWF reanalysis data. Following standard weather forecasting practices, we selected the geopotential height at 500 hPa (Z500) with a resolution of 2.5° and a time interval of 3 hours as our data. The objective is to predict the geopotential height for 10 timesteps given 2 observations. Due to time constraints and the computational limitations of the experiments, we adopted a data split of 1000/200, consistent with the NS benchmark. We used 4 layers for both the Transolver and LaMO architectures.
Our results indicate that LaMO outperforms Transolver by 25%, demonstrating its superior accuracy in this setting. We plan to explore this direction further as part of our future work.

| Operator | Train error | Test error |
| --- | --- | --- |
| Transolver | 0.04197 | 0.31665 |
| LaMO | 0.04072 | 0.23746 |

***Table 6: Relative L2 error on the ERAZ500 dataset.***

>Limited discussion on hyperparameter selection: The role of latent dimension choices, SSM state sizes, and discretization steps could be better explained.

**A:** We employed a learning rate selected from the set {1e-3, 5e-4, 1e-4}, while the remaining hyperparameters are consistent with those used in Transolver. For the SSM, we used a state dimension (DState) of 64 and varied the number of heads in the range {1, 4}. The influence of the latent dimension and SSM state size, which follow the same configuration as Mamba, is presented in Table 2 of the main text and Table 7 in the Appendix. Our ablation indicates an optimal latent dimension, beyond which performance decreases before increasing again. Regarding discretization, we employed the ZOH scheme, similar to Mamba.

>More runtime benchmarks needed: While Figure 4 suggests efficiency, a breakdown of training vs. inference time cost would be helpful.

**A:** We have included the efficiency analysis, covering training time, inference time, and memory consumption, in Appendix Section E.3, where we compare our method with Transolver.

>Sparse kernel methods (e.g., Gaussian Processes for PDEs) could be referenced for comparison.

**A:** We highlight that extending sparse kernel methods, such as Gaussian Processes for PDEs, is computationally demanding and may become impractical for high-dimensional or large-scale PDE problems. Additionally, these methods often struggle with capturing complex, non-stationary dynamics, which neural operators are better equipped to handle.
We will ensure that the relevant work is appropriately cited in the final version of the manuscript.

>Impressive work! Besides the theoretical contributions, this work adds a novel type of kernel operators (Mamba) to the family of neural operators for PDEs. Its experiments also cover many SOTA baselines, providing a good benchmarking framework for future work on neural operators and Mamba-like kernels. The community would appreciate it if the authors make their code publicly available. (I have gone through the Supplementary Material.) I believe the authors will, yeah^^?

**A:** Yes, we plan to open-source our codebase, which will contribute to benchmarking future operators and foster further research and development in this area. Please note that our code is already part of the supplementary material.

With these, we hope to have addressed all the comments. Please let us know if you have any further concerns or questions. If not, we request that you support the manuscript by raising the score.
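As a quick sanity check on the 25% figure quoted in the rebuttal above, the relative reduction in test error can be computed from the ERAZ500 numbers in Table 6:

```python
transolver_test = 0.31665  # relative L2 test error, from the rebuttal's Table 6
lamo_test = 0.23746

# Relative improvement of LaMO over Transolver on the test error.
improvement = (transolver_test - lamo_test) / transolver_test
print(f"{improvement:.1%}")  # prints "25.0%"
```

So the quoted "outperforms by 25%" matches the tabulated errors.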
Summary:
- The Latent Mamba Operator (LaMO) is a scalable state-space model integrated with a kernel integral formulation.
- The authors also provide a theoretical foundation for their approach.
- LaMO demonstrates state-of-the-art performance across various problems.

Claims And Evidence: The authors present a novel approach that achieves state-of-the-art performance across different PDEs, conducting extensive experiments on both regular and irregular domains, including turbulence-related problems. They compare their model against multiple baselines, providing a solid empirical foundation. However, additional evaluations on PDEs such as the Poisson equation, Wave equation, and problems involving temporal dynamics would further strengthen their claims. They also conduct scaling experiments, particularly focusing on the amount of training data. While Figure 2 implicitly provides scaling behavior with model size, a more detailed analysis of how the model scales with parameter count—whether it follows a power law or logarithmic trend—would be valuable.

Methods And Evaluation Criteria: The study includes several important ablation experiments, such as the comparison between unidirectional and bidirectional approaches. This is a critical aspect of the analysis. Moreover, the authors investigate resolution invariance, a fundamental property for neural operators, and provide supporting evidence for it. Additionally, a clearer examination of model scaling with respect to parameter count would enhance the discussion of efficiency.

Theoretical Claims: The paper includes an accessible theoretical study of the proposed model, demonstrating that its SSM formulation approximates a class of integral linear operators. However, a deeper exploration of the theoretical implications, particularly in comparison to existing SSM-based PDE solvers, would further solidify the claims.
Experimental Designs Or Analyses: See above.

Supplementary Material: I examined the Supplementary Material but did not analyze the proofs in detail.

Relation To Broader Scientific Literature: See my comments above.

Essential References Not Discussed: The paper does not mention prior works on SSMs for PDEs, despite their relevance. Several recent studies explore SSM-based neural operators:

[1] Cheng, C. W., Huang, J., Zhang, Y., Yang, G., Schönlieb, C. B., & Aviles-Rivero, A. I. (2024). Mamba neural operator: Who wins? Transformers vs. state-space models for PDEs. arXiv preprint arXiv:2410.02113.
[2] Zheng, J., Li, W., Xu, N., Zhu, J., & Zhang, X. (2025). Alias-free Mamba neural operator. Advances in Neural Information Processing Systems, 37, 52962-52995.
[3] Hu, Z., Daryakenari, N. A., Shen, Q., Kawaguchi, K., & Karniadakis, G. E. (2024). State-space models are accurate and efficient neural operators for dynamical systems. arXiv preprint arXiv:2409.03231.

Including discussions of these works would provide a more comprehensive positioning of the proposed approach within the broader landscape of SSM-based PDE solvers.

Other Strengths And Weaknesses: /

Other Comments Or Suggestions: I recommend accepting the paper, provided the authors incorporate prior work on SSMs for PDEs. The study is robust and well-executed, with no significant gaps.

Questions For Authors:
- Were the baseline errors sourced from existing literature, or did you train the baseline models yourselves? If you conducted the training, how were the architectures chosen, and what training procedures were followed?
- How does your approach compare to previous works on SSMs for PDEs?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the positive comments. Please see the responses to your questions below.

>They also conduct scaling experiments, particularly focusing on the amount of training data. While Figure 2 implicitly provides scaling behavior with model size, a more detailed analysis of how the model scales with parameter count—whether it follows a power law or logarithmic trend—would be valuable.

**A:** We provide the parameter count scaling for the Darcy and NS datasets in Table 5. To better illustrate the scaling trend, we plot the parameter count on a log scale against the number of layers. The plot shows that the parameter growth follows a consistent linear trend, confirming the proportional relationship between layer count and model complexity.

| # Layers | 2 | 4 | 8 | 12 | 24 |
| -------------------------- | ---- | ---- | ----- | ----- | ----- |
| Navier-Stokes (Transolver) | 2.93 | 5.70 | 11.23 | 16.76 | 33.35 |
| Navier-Stokes (LaMO) | 2.72 | 5.17 | 10.06 | 14.95 | 29.62 |
| Darcy (Transolver) | 0.74 | 1.43 | 2.82 | 4.21 | 8.38 |
| Darcy (LaMO) | 0.38 | 0.63 | 1.14 | 1.65 | 3.18 |

***Table 5: Parameter count (in M) vs. layer count for NS (ν = 1e-5) and Darcy.***

>Additionally, a clearer examination of model scaling with respect to parameter count would enhance the discussion of efficiency.

**A:** Please refer to Table 2, Table 4, and Figure 2 in the main text for details on model scaling with respect to parameters and efficiency on the Darcy and Navier-Stokes datasets. As shown, model performance improves as the number of parameters (layers) increases.

>However, a deeper exploration of the theoretical implications, particularly in comparison to existing SSM-based PDE solvers, would further solidify the claims.

**A:** We acknowledge the importance of further exploring the theoretical implications, particularly in comparison with existing SSM-based PDE solvers. We will consider this as part of our future work.
>Several recent studies explore SSM-based neural operators:

**A:** We have cited [3] under SSMs for PDEs in the related work section. We will ensure that the recent works on SSMs for PDEs [1, 2] are included in the related work section of the final version, along with a relevant discussion.

>Were the baseline errors sourced from existing literature, or did you train the baseline models yourselves? If you conducted the training, how were the architectures chosen, and what training procedures were followed?

**A:** The baseline errors were sourced from existing literature, except for Transolver. Since Transolver represents the SOTA, we ran its official codebase (keeping the hyperparameters, data splits, and other settings the same as in the original paper) and reported the best errors from multiple runs. We adhered to the standard training procedures described in Appendix Section C for all experiments.

>How does your approach compare to previous works on SSMs for PDEs?

**A:** Our approach differs from previous works on SSMs for PDEs in several key aspects. While prior studies such as [3] employed unidirectional Mamba models for ODEs (1D baselines), we utilize a latent bidirectional Mamba architecture designed explicitly for PDEs (2D baselines). Furthermore, recent works [1, 2] also adopt unidirectional Mamba models (combined with convolution) but are limited to regular-grid PDEs only. In contrast, our method leverages a latent bidirectional Mamba, making it applicable to both regular and irregular benchmark datasets. We will include additional discussion in the manuscript to address these points.

With this, we hope to have addressed all your comments. Please let us know if you have any further concerns or questions. If not, we kindly request that you support the manuscript by raising the score.
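As a quick check on the linear parameter-vs-layer trend reported in Table 5 above, one can fit a least-squares line to the table's values; the sketch below (ours, using the LaMO Navier-Stokes row copied from Table 5) confirms the near-perfect linearity:

```python
# Sanity check of the linear trend claimed for Table 5: ordinary least-squares
# fit of parameter count (in M) against layer count for LaMO on Navier-Stokes.
layers = [2, 4, 8, 12, 24]
params_lamo_ns = [2.72, 5.17, 10.06, 14.95, 29.62]  # values from Table 5

def linear_fit(xs, ys):
    """Return (slope, intercept) of the ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

slope, intercept = linear_fit(layers, params_lamo_ns)
residuals = [abs(intercept + slope * x - y) for x, y in zip(layers, params_lamo_ns)]
print(f"slope={slope:.3f} M/layer, intercept={intercept:.3f} M, "
      f"max residual={max(residuals):.3f} M")
```

The maximum residual is below 0.01 M, i.e., the parameter count grows almost exactly linearly at roughly 1.22 M per layer.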
Summary: This paper introduces a new approach to solving PDEs via an SSM-based neural operator on the latent space. The proposed method achieves a good balance of performance and efficiency. The authors provide a theoretical analysis that reveals the equivalence of LaMO with kernel integration. With extensive experiments, the proposed method outperforms the SOTA transformer-based model while achieving comparable computational efficiency.
Claims And Evidence: Results in Supplementary E.3 show that the efficiency of your model is only comparable to TRANSOLVER, but the main text presents it as if your model is superior in all aspects/datasets, which is misleading.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the application.
Theoretical Claims: I have checked all the proofs and found no obvious issues.
Experimental Designs Or Analyses: 1. The performance of TRANSOLVER on the Pipe dataset differs significantly from the results reported in the original paper. The original result is 0.0033, which is better than yours. Please explain this discrepancy. 2. The effect of the latent encoder/decoder and the SSM has not been quantified, making it hard to distinguish their individual contributions.
Supplementary Material: I have reviewed the whole supplementary material.
Relation To Broader Scientific Literature: They used Mamba (Gu & Dao, 2023) as the key component of the model, and their Latent Encoder is inspired by the Perceiver (Jaegle et al., 2021).
Essential References Not Discussed: Essential references are included.
Other Strengths And Weaknesses: Strengths: 1. The paper is well written. 2. Comprehensive experiments, outperforming SOTA. 3. Establishes a connection between SSMs and kernel integral operators.
Other Comments Or Suggestions: 1. In the paragraphs "Latent Tokens" and "Efficiency" on Page 8, the authors do not indicate which exact table/figure they are referring to. 2.
In the analysis of computational complexity (Page 5, Computational Analysis, and Supplementary E.4), while M is theoretically constant, in practice it is a hyperparameter set according to the size of N and other factors. You could compare the size of M and log N to give readers a better sense of the complexity of your model.
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the positive comments and for supporting the work. Please see the responses to your questions below.

>Results in Supplementary E.3 show that the efficiency of your model is only comparable to TRANSOLVER, but the main text presents it as if your model is superior in all aspects/datasets, which is misleading.

**A:** Thank you for pointing this out. We agree that LaMO's efficiency is indeed comparable to that of TRANSOLVER. However, in the main text, our reference to efficiency pertains specifically to comparisons with transformer-based baselines such as GNOT and ONO, where LaMO demonstrates superior efficiency. We will revise the text in the final version to clarify this distinction and avoid any potential misinterpretation.

>The performance of TRANSOLVER on the Pipe dataset differs significantly from the results reported in the original paper. The original result is 0.0033, which is better than yours. Please explain this discrepancy.

**A:** All the baseline errors were sourced from existing literature, except for Transolver. Since Transolver represents the SOTA, we ran its official codebase (keeping the hyperparameters, data splits, and other settings the same as in the original paper) and reported the best errors from multiple runs in our manuscript. However, despite our best efforts, we could not replicate the result of 0.0033 reported in the original paper. This discrepancy may be due to differences in the computing environment or random initialization settings. To demonstrate this further, we have made the log file for multiple runs on the Pipe dataset available in our anonymous GitHub repository. We shall clarify this in the final version of the paper.

>The effect of the latent encoder/decoder and the SSM has not been quantified, making it hard to distinguish their individual contributions.
**A:** Following your suggestion, in Table 4 (below), we present an ablation study on the latent encoder/decoder, where we observe that ViT patches outperform the Perceiver encoder on regular grids. Additionally, the effects of the SSM and the encoder are detailed in Table 2 of the main text and further elaborated in Appendix Section E.1.

| Encoder/Decoder type | Relative L2 |
| -------------------- | ----------- |
| ViT | 0.0039 |
| Perceiver (Unshared) | 0.0051 |
| Perceiver (Shared) | 0.0056 |

***Table 4: Ablation on the effect of the latent encoder/decoder on Darcy.***

>In the paragraphs "Latent Tokens" and "Efficiency" on Page 8, the authors do not indicate which exact table/figure they are referring to.

**A:** In the "Efficiency" paragraph, we are referring to Figure 4 in the main text. However, in the "Latent Tokens" paragraph, we are not referencing any specific table or figure, as it presents an additional ablation study on using latent tokens on a regular grid for the foundation model.

>In the analysis of computational complexity (Page 5, Computational Analysis, and Supplementary E.4), while M is theoretically constant, in practice it is a hyperparameter set according to the size of N and other factors. You could compare the size of M and log N to give readers a better sense of the complexity of your model.

**A:** We acknowledge that while $M$ is theoretically constant, it functions as a hyperparameter in practice, influenced by the size of $N$ and other factors. We will enhance the discussion by comparing $M$ to $\log N$ to better illustrate its impact on the model's computational complexity.

We hope to have addressed all your concerns. Please let us know if you have any further comments or questions.
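To give a concrete sense of the M versus log N comparison discussed above, here is a small illustrative computation. The grid sizes N and latent sizes M below are hypothetical examples of ours, not values from the paper:

```python
import math

# Illustrative only: N (number of mesh points) and M (latent size) are
# hypothetical. The point: a constant M is typically far larger than log N,
# yet O(N*M) latent-token cost still beats O(N^2) full attention.
for N, M in [(64 * 64, 64), (85 * 85, 128), (221 * 51, 256)]:
    print(f"N={N:6d}  log N={math.log(N):5.2f}  M={M:4d}  "
          f"N*M={N * M:>12,}  N^2={N * N:>14,}  ratio N/M={N / M:7.1f}")
```

So while M is nowhere near log N in magnitude, treating it as a constant independent of N is still what yields the claimed linear-in-N complexity.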
Summary: This paper introduces a methodology that uses a Mamba-based architecture to model the spatial dynamics of a PDE in an operator-based setting. The authors take inspiration from the Perceiver model and use a latent space in addition to directly modeling the input physical properties; however, instead of a Transformer layer, the authors use a Mamba layer. The key motivation here is to reduce the quadratic complexity of the Transformer to the linear complexity of Mamba. Furthermore, the hope is that a Mamba-based model is also able to effectively model the long-range spatial dynamics of the input physical process. The authors also theoretically argue that, in general, an SSM-based architecture should be able to model the kernel integral operator on a domain, showing that, parametrically at least, the Mamba layer matches the required structure. The authors back their claims with extensive empirical analysis, where across a series of benchmarks (like Darcy Flow and 2D Navier-Stokes on regular grids, as well as airfoil, plasticity, etc. on irregular meshes) the Latent Mamba Operator is able to outperform baselines such as Transolver.
Strengths:
- The experiments are extensive, and the authors show that their method outperforms the baselines consistently.
- The authors perform ablation studies to explain the importance of the different architectural design choices they make in their architecture.
- They show that their method is efficient and often requires 5-7 times fewer parameters when compared to baselines.
A Few Limitations:
- They establish that an SSM is a Monte Carlo approximation of a kernel integral operator. However, the theorem does not give any kind of guarantee on the number of samples (given some assumptions on the kernel $\kappa$, such as boundedness or smoothness) or on the error.
I guess the proof is based on the parametric form (Equation 57 of the Appendix) of how the SSM approximates the kernel, but even then, a result that establishes the approximation capacity would help establish the true equivalence.
- In general, the equivalence is stated using $\approx$, which is not formal.
- The scaling results are on Darcy Flow, which potentially tells us nothing. I think similar scaling results should be shown on relatively complex PDEs (perhaps even Navier-Stokes).
- In fact, Darcy Flow is quite a simple dataset that can potentially be learned with very few layers.
A Few Suggestions:
- It would be useful to add the number of parameters (or FLOPs) of the various methods used in Table 1.
Claims And Evidence: The claims regarding the empirical benefits of the method are appropriately validated through experiments. However, the claim that the SSM layers can approximate the kernel integral operator is lacking, given the absence of either sample-count or error bounds.
Methods And Evaluation Criteria: The benchmarks used are on 2D datasets, though it would be interesting to see how the methods perform in 3D. Other than that, the authors also show results on irregular meshes, which is good.
Theoretical Claims: There are theoretical claims, though the statement could be made more formal and precise. I think it is somewhat straightforward to show, though. Please see the main review for more details.
Experimental Designs Or Analyses: Yes, the empirical results are extensive. The authors show scaling results on the relatively simple Darcy Flow dataset; it would be interesting to see the scaling behaviour on more complex dynamics (such as Navier-Stokes).
Supplementary Material: Yes, most of it.
Relation To Broader Scientific Literature: Very relevant, given the key point that the quadratic complexity of transformers may be prohibitive in higher dimensions.
Essential References Not Discussed: There are a few recent works that try to use SSM-based architectures that the authors may have missed:
Hu, Zheyuan, et al. "Deepomamba: State-Space Model for Spatio-Temporal PDE Neural Operator Learning." *Available at SSRN 5149007*.
Ruiz, Ricardo Buitrago, et al. "On the Benefits of Memory for Modeling Time-Dependent PDEs." *arXiv preprint arXiv:2409.02313* (2024).
Other Strengths And Weaknesses: Mostly discussed in the main review.
Other Comments Or Suggestions: N/A
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the positive comments and for supporting the work. Please see the responses to your questions below.

>Sample Complexity: However, the theorem does not...

**A:** A Monte Carlo integral approximation typically has an $O\left(\frac{1}{\sqrt{n}}\right)$ convergence rate, which is empirically corroborated by Figure 2. We do have a working proof (outlined below), subject to the stated assumptions on $\kappa$. We will add it to the supplementary material of the manuscript. A formal proof will be explored as part of future work.

**Statement:** A Monte Carlo integral approximation of an SSM, with suitable assumptions on the kernel (e.g., boundedness and smoothness), admits an upper bound on the error of $\epsilon \sim O\left( \frac{1}{\sqrt{n}} \right)$.

**Proof sketch:** Boundedness assumption: $\kappa(x, y) \le B$. In an SSM layer, the approximation of the operator computed via a Monte Carlo sum is $[T_n f](t) = \frac{1}{n} \sum_{i=1}^{n} \varphi(t)^\top A \psi(s_i) f(s_i)$. Define the error operator as $E_n = T_n - T$. With probability at least $1-\delta$ (with $0<\delta<1$), Bernstein's inequality gives $\|T_n - T\| \le C \sqrt{\sigma^2 \, \frac{\log(1/\delta)}{n}}$, where the variance parameter $\sigma^2$ is given by $\sigma^2 = \left\| \sum_{i=1}^{n} \mathbb{E}\left[(Z_i - T)^2\right] \right\|$ and $C$ is a constant. Hence, to achieve an error of at most $\epsilon$, substituting $\epsilon$ on the LHS, we note that $n \sim O\left( \frac{M}{\epsilon^2} \right)$ where $M$ is a constant. Therefore, $\epsilon \sim O\left( \frac{1}{\sqrt{n}} \right)$.

>Scaling results on relatively complex PDEs (perhaps even Navier-Stokes).

**A:** Following your suggestion, we performed additional experiments to show the scaling results on the Navier-Stokes (NS) dataset in Tables 1 and 2. We note that the operator's performance improves as the number of layers increases on the NS dataset.
Notably, even with only 4 layers, LaMO achieves superior performance compared to Transolver (see Tab. 1). Furthermore, Tab. 2 demonstrates that LaMO trained on 400 NS samples delivers performance comparable to that of Transolver trained on 1000 samples, highlighting LaMO's efficiency and effectiveness with fewer training samples. For a more comprehensive examination of scalability across different sample sizes, please refer to Appendix Section E.5, which presents the results for all benchmark datasets.

| # Layers | 2 | 4 | 6 | 8 |
| ---------- | ------ | ------ | ------ | ------ |
| Transolver | 0.1601 | 0.1518 | 0.1241 | 0.0957 |
| LaMO (Ours) | 0.1038 | 0.0608 | 0.0524 | 0.0460 |

***Table 1: Relative L2 error vs. layer scaling on NS (ν = 1e-5).***

| # Training samples | 200 | 400 | 600 | 800 | 1000 |
| ------------------ | ------ | ------ | ------ | ------ | ------ |
| Transolver | 0.2330 | 0.1874 | 0.1552 | 0.1120 | 0.0957 |
| LaMO (Ours) | 0.1490 | 0.0972 | 0.0648 | 0.0570 | 0.0460 |

***Table 2: Relative L2 error vs. training samples on NS (ν = 1e-5).***

>Number of parameters (or FLOPs).

**A:** In Table 3, we present the number of parameters for a range of neural operators across all benchmarks. The exact number of parameters used for each dataset is available and will be included in the Appendix. We will incorporate these changes into the manuscript and ensure they are reflected in the final version.

| Operator | FNO | U-FNO | LSM | GNOT | Galerkin | Transolver | LaMO |
| -------- | -------- | -------- | -------- | ---- | -------- | ---------- | ------- |
| Parameters (M) | 0.9-18.9 | 1.0-19.4 | 4.8-13.9 | 9-14 | 2.2-2.5 | 2.8-11.2 | 1.1-4.0 |

***Table 3: Baseline parameter ranges (in M).***

>Performance on 3D.

**A:** As shown in Table 4 of the Appendix, the benchmark datasets used, such as Navier-Stokes (regular) and Plasticity (point cloud), consist of 2D spatial and 1D temporal dimensions.
However, it would be interesting to explore the performance of the methods on datasets with higher physical dimensions (e.g., 3D spatial and 1D temporal) as part of future work.

>Recent literature.

**A:** Thank you for highlighting the relevant references we missed. We will include these recent works on SSMs for PDEs in the related work section of the final version.

We hope to have addressed all your concerns. Please let us know if you have any further comments or questions.
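The $O(1/\sqrt{n})$ Monte Carlo rate invoked in the proof sketch above is easy to illustrate numerically. The toy example below (ours, not code from the paper) uses a bounded, smooth kernel, estimates the kernel integral at a fixed point with $n$ uniform samples, and checks that the RMSE shrinks roughly tenfold when $n$ grows a hundredfold:

```python
import math
import random

# Toy illustration of the O(1/sqrt(n)) Monte Carlo rate: approximate
# (T f)(x0) = integral_0^1 kappa(x0, y) f(y) dy with n uniform samples
# and measure the RMSE over repeated trials.
random.seed(0)
kappa = lambda x, y: math.exp(-(x - y) ** 2)   # bounded, smooth kernel
f = lambda y: math.sin(2 * math.pi * y)
x0 = 0.3

# Reference value via a fine midpoint rule.
m = 100_000
truth = sum(kappa(x0, (i + 0.5) / m) * f((i + 0.5) / m) for i in range(m)) / m

def mc_rmse(n, trials=200):
    sq_errs = []
    for _ in range(trials):
        total = 0.0
        for _ in range(n):
            y = random.random()
            total += kappa(x0, y) * f(y)
        sq_errs.append((total / n - truth) ** 2)
    return math.sqrt(sum(sq_errs) / trials)

e100, e10k = mc_rmse(100), mc_rmse(10_000)
print(f"RMSE(n=100)={e100:.4f}  RMSE(n=10000)={e10k:.4f}  ratio={e100 / e10k:.1f}")
```

The observed error ratio is close to $\sqrt{10000/100} = 10$, matching the claimed rate.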
Guardians of Image Quality: Benchmarking Defenses Against Adversarial Attacks on Image Quality Metrics
Accept (poster)
Summary: This paper presents the first comprehensive benchmark study on defense mechanisms for image quality assessment (IQA) metrics, systematically evaluating the performance of 30 defense strategies against 14 adversarial attacks on 9 IQA models.
Claims And Evidence: All the content submitted has corresponding evidence to support it.
Methods And Evaluation Criteria: The methods and evaluation criteria proposed have great novelty and pioneering significance.
Theoretical Claims: The paper lacks substantial theoretical proof.
Experimental Designs Or Analyses: The paper provides ample experiments; the abundant experimental results strongly support the authors' viewpoints.
Supplementary Material: The supplementary materials further help me understand the authors' method and contributions.
Relation To Broader Scientific Literature: The authors present an interesting application prospect in the field of quality evaluation, using defense models to safeguard IQA.
Essential References Not Discussed: The authors have cited the vast majority of relevant literature. Weakness 3 below notes an additional reference paper that should be discussed.
Other Strengths And Weaknesses: Strengths: 1. The paper has a clear structure and rich content, with valuable experiments provided in the appendix. 2. The authors have introduced large-scale subjective studies, and the substantial feedback results further validate the effectiveness of the defense mechanisms. Weaknesses: 1. The paper states that diffusion defense is effective in classification tasks but performs poorly in IQA tasks. However, it seems to lack in-depth analysis, merely mentioning that "future research needs task-specific adjustments." 2. In Table 2, the time used by DiffPure is 691.42 seconds. However, it appears to only be theoretically feasible and difficult to apply in real time. 3.
Can the authors highlight the significant differences between this work and the existing work [1]? A more detailed response from the authors would better help me resolve my confusion and evaluate this paper. 4. Have existing attack methods considered targeted attacks (e.g., disguising low-quality images as specific high-scoring ones)? Such methods would have a greater impact on ranking.
[1] Kovalev, E., Bychkov, G., Abud, K., et al. Exploring adversarial robustness of JPEG AI: methodology, comparison and new methods. arXiv preprint arXiv:2411.11795, 2024.
Other Comments Or Suggestions: My suggestions are already included in the weaknesses discussed earlier.
Questions For Authors:
Q1. In Table 3, why is the training time provided for Cert Defense but not for AT Defense?
Q2. When forcibly discretizing regression-based metrics into classification problems, have you considered the potential bias that may arise? This could potentially affect the defense performance.
Q3. The defense results show significant fluctuations on the AGIQA-3K dataset, but the authors seem to have not conducted an in-depth analysis of the impact of data bias.
Q4. If the issues are clarified, I will consider increasing the score.
Ethical Review Flag: Flag this paper for an ethics review.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their thorough evaluation and constructive feedback, and will apply their recommendations to refine the revised paper. We answer your questions below:

1. For adversarial training, there is no inference overhead since the fine-tuning is done during the training phase. In contrast, purification and certified defenses require additional computation during inference, which is why their training time is explicitly reported.

2. Yes, we conducted additional experiments to identify the optimal number of classes for discretization. The main challenge is balancing the trade-off between correlation (SROCC) and certified radius R. As the number of classes increases, SROCC increases, but R decreases. This occurs because a higher number of classes makes it easier to cross class borders during Monte Carlo sampling. The results of these experiments can be found in Table 9, located in Appendix A.3.4.

3. Our analysis does not indicate significant fluctuations on the AGIQA-3K dataset specifically. The differences in ranking across datasets are minor and observed on all datasets; the closest one to AGIQA-3K is KADID1K. While data bias is important, our results suggest no substantial difference between AGIQA-3K and natural-image datasets. Further investigation into potential biases remains a promising direction for future work.

We address your concerns below:

1. In classification tasks, diffusion-based defenses only need to generate images that preserve class-relevant features, making perceptual quality less critical. In contrast, IQA tasks require preserving fine image details while removing adversarial noise, which standard diffusion models struggle with. To improve their applicability, future research should focus on task-specific adjustments, such as incorporating perceptual constraints or adapting diffusion models to minimize distortions introduced during defense.

2.
We report time complexity in milliseconds, so DiffPure processes an image in ~0.7 seconds. This makes it applicable in real-world applications.

3. The mentioned study focuses on the robustness of neural-based image codecs, which creates two significant differences from our work: the area of research in our paper is IQA models, and instead of focusing on the robustness of the models themselves, we focus on the effectiveness of defenses. Compared to [1], we use a significantly different methodology, including datasets, evaluation metrics, and a subjective study. Moreover, we evaluate 30 defense methods of three types (adversarial training, purification, and certified methods), while the authors of [1] used only 7 purification defenses.

4. Indeed, we considered exactly this kind of attack. The formal definition is in Section 3.1, page 2.

[1] Kovalev, E., Bychkov, G., Abud, K., et al. Exploring adversarial robustness of JPEG AI: methodology, comparison and new methods. arXiv preprint arXiv:2411.11795, 2024.

If you have no further concerns, we would be sincerely grateful if you could consider raising the rating of our submission.
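The mechanism behind the class-count trade-off described in answer 2 above (more classes make it easier for smoothing noise to cross a class border during Monte Carlo sampling) can be illustrated with a toy simulation. The clean score (0.37), noise level, and bin counts below are arbitrary choices of ours, not values from the paper:

```python
import random

# Toy simulation: discretize a regression score in [0, 1] into K equal-width
# classes and estimate how often Gaussian smoothing noise keeps the noisy
# score in the same class. Finer discretization -> easier border crossings.
random.seed(0)
score, sigma, samples = 0.37, 0.1, 20_000  # hypothetical clean score and noise

def same_bin_prob(k):
    """P(noisy score falls in the same of k equal-width bins as the clean score)."""
    clean_bin = min(int(score * k), k - 1)
    hits = 0
    for _ in range(samples):
        noisy = min(max(score + random.gauss(0, sigma), 0.0), 1.0)
        hits += min(int(noisy * k), k - 1) == clean_bin
    return hits / samples

p2, p5, p20 = same_bin_prob(2), same_bin_prob(5), same_bin_prob(20)
print(f"K=2: {p2:.3f}  K=5: {p5:.3f}  K=20: {p20:.3f}")
```

The top-class probability drops steadily as K grows, which is exactly what shrinks the certified radius in randomized smoothing while finer bins raise SROCC.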
Summary: This paper addresses the vulnerability of neural-network-based Image Quality Assessment (IQA) metrics to adversarial attacks. The authors present the first comprehensive benchmark for evaluating defense mechanisms against such attacks. The study systematically evaluates 30 defense strategies—including purification, training-based, and certified methods—against 14 adversarial attacks in both adaptive and non-adaptive settings across 9 no-reference IQA metrics. The authors also conduct an extensive subjective study with over 60,000 responses to evaluate the perceptual quality of defended images. The benchmark aims to guide the development of more robust IQA defense methods and is open to submissions. ## update after rebuttal Thanks for the authors' reply. They addressed my questions. After referring to the opinions of other reviewers, I decide to keep my score. Claims And Evidence: The authors make several claims that are generally well-supported by their evidence: 1. The claim that neural-network-based IQA metrics are vulnerable to adversarial attacks is established through references to prior work and demonstrated in their experiments. The evidence provided is convincing. 2. The claim that this is the first comprehensive benchmark for IQA defense evaluation seems accurate based on the literature review provided on page 2, where they note that existing comparisons focus primarily on object classification rather than IQA metrics. 3. The effectiveness of various defense methods is substantiated through extensive experiments across multiple attack types, datasets, and defense strategies. The authors provide robust statistical analysis to support their conclusions. 4. The authors' observation that compression-based defenses (DiffJPEG, JPEG) are particularly effective is supported by both objective metrics and subjective evaluation. 5. 
The claim that different defense methods work better for different IQA model architectures (page 7-8) is well-supported by the analyses in Figure 4. A minor limitation is that while the authors emphasize the importance of their subjective study for evaluating perceptual quality, they don't fully integrate these findings into their main recommendations. Figure 5 (page 13) shows that Real-ESRGAN performs best in subjective evaluation, yet this finding isn't prominently emphasized in the conclusion. Methods And Evaluation Criteria: The methodology is generally sound and appropriate for the problem: 1. The authors' approach of evaluating across multiple datasets (Page 4, section 3.4) is commendable, as it ensures results are not dataset-specific. 2. The attack parameter selection method (Page 14-15, section A.3.4) is thoughtfully designed, ensuring fair comparison across attack types by aligning them to target "weak", "medium", and "strong" perturbation levels. 3. The evaluation metrics (Page 4-5, section 3.5) are well-chosen to capture different aspects of defense effectiveness (robustness, perceptual quality preservation, correlation with human judgment). 4. The subjective study design (Page 12, section A.2) using the Bradley-Terry model for pairwise comparisons is appropriate for assessing perceptual quality. However, I found some methodological concerns: 1. Page 13, Table 5 shows validation of the sampling methodology, but p-values of 0.1449 and 0.1958 suggest the differences between samples might be approaching significance (if using α=0.05). This deserves more discussion. 2. In the evaluation of adversarial training (Page 15-16, section A.3.5), the authors acknowledge that using ground-truth labels from clean images is inaccurate for adversarial examples, yet they don't fully address how this limitation affects their conclusions about adversarial training methods. Theoretical Claims: The paper is primarily empirical rather than theoretical. 
The authors reference theoretical guarantees for certified defense methods (page 6, page 7) but do not develop new theoretical results of their own. The authors correctly note on page 7 that "Despite the questionable applicability of randomized smoothing-based defenses to IQA, they remain effective, as the certified radii are sufficiently high and the number of abstentions is relatively low." This is an important observation about the practical utility of theoretical guarantees in this context. Experimental Designs Or Analyses: The experimental design is comprehensive and thorough: 1. The authors evaluate defenses across multiple dimensions: effectiveness against different attacks (Table 14), performance on different IQA models (Tables 12-13), and behavior on different datasets (Table 19). 2. The statistical significance testing using Wilcoxon Signed Rank Test with Bonferroni correction (Page 18-19, section A.4) strengthens the validity of their comparisons. 4. The analysis of defense performance across different attack strengths (Table 11) provides valuable insights about defense robustness. A few concerns about the experimental design: 1. In page 5, section 3.6, the authors mention using 40 NVIDIA Tesla A100 GPUs, but don't specify how computational constraints influenced their experimental decisions, such as limiting certified defense evaluations to only 10 images. 2. The authors don't fully explore potential trade-offs between defense effectiveness and inference time for real-time applications. While they report computation times (Tables 2 and 3), the implications for practical deployment aren't thoroughly discussed. Supplementary Material: I thoroughly reviewed the supplementary material, which includes: 1. Details of the study methodology (Sections A.3.1 through A.3.5) 2. Statistical tests (Section A.4) 3. Visual examples of attacks and defenses (Section A.5) 4. 
Additional experimental results (Section A.6) The supplementary material is well-organized and provides valuable context. The visualizations in Figures 8 and 9 are particularly helpful in understanding the visual artifacts introduced by different defense methods. Relation To Broader Scientific Literature: The authors position their work well within the broader literature: 1. They acknowledge prior work on adversarial attacks against IQA metrics (page 1, paragraph 2) while noting the gap in defense mechanism research. 2. They correctly observe that most defense methods have been developed for object classification rather than IQA tasks (page 2, paragraph 1). 3. The authors appropriately reference relevant work on adversarial training, purification methods, and certified defenses (page 2). One area where the connection to broader literature could be strengthened is in relating their findings to the general principles of adversarial robustness. The paper focuses on the specific domain of IQA metrics, but could better articulate how their findings might generalize to other regression-based neural network tasks. Essential References Not Discussed: no, the citation is appropriate Other Strengths And Weaknesses: Strengths: 1. The paper addresses a significant practical problem, as vulnerabilities in IQA metrics could affect search engine rankings, benchmarking, and content quality assessment. 2. The benchmark methodology is thoroughly documented and reproducible, with implementation details and code availability mentioned. 3. The large-scale subjective study (60,000+ responses) adds significant value by assessing perceptual quality, which cannot be fully captured by objective metrics. 4. The paper provides nuanced insights, such as the observation that transformer-based IQA metrics have greater intrinsic robustness (page 7, paragraph 2). Weaknesses: 1. The paper occasionally lacks clarity in explaining the practical implications of its findings. 
For example, the discussion of certified methods (page 6-7) focuses on technical metrics without clearly articulating the trade-offs involved in deploying such methods. 2. In paragraph 2 of page 3, the authors state they increase IQA scores during attacks "to reflect real-life applications," but don't adequately explain why this is more realistic than decreasing scores. 3. The limitations section in the appendix (page 12) feels somewhat disconnected from the main paper. Some of these limitations, particularly regarding attack parameter handling, should be integrated into the main discussion. 4. The paper introduces many evaluation metrics, which occasionally makes it difficult to determine which metrics should be prioritized when comparing defenses. Other Comments Or Suggestions: 1. Page 2, paragraph 1: The authors state, "making the development of a universally efficient and robust IQA metric impractical." This claim would benefit from more substantiation, as it's a significant assertion. 2. Page 2, paragraph 1: The sentence "To address this, we present a systematic comparison..." seems to imply that their work addresses the impracticality of creating universally robust metrics, but instead it focuses on enhancing existing models through defense mechanisms. This connection should be clarified. 3. Figure 1 provides a good overview, but the explanation of how the components interact could be improved. 4. Page 20, lines 1230-1231: The sentence "In summary, results do not change much across strength" contradicts some of the data in Table 11, where performance differences between weak and strong attacks are substantial for some methods. 5. The paper would benefit from a clearer discussion of the practical deployment considerations for these defense methods in real-world applications. Questions For Authors: 1. In your subjective study, Real-ESRGAN emerged as the top-performing method for perceptual quality (page 8), yet it doesn't perform as well on objective metrics. 
How do you reconcile this discrepancy, and what implications does this have for developing or selecting defense methods in real-world applications where human perception is the ultimate measure of quality? 2. Your findings show that transformer-based metrics demonstrate greater intrinsic robustness against adversarial attacks (page 7). Could you elaborate on the architectural features that might contribute to this robustness, and do you believe future IQA metrics should prioritize transformer-based architectures specifically for their robustness properties? 3. You mention on page 6 that certified methods are highly impractical due to computational overhead. Given the trade-off between security guarantees and computational efficiency, what modifications to certified methods might make them more viable for real-world IQA applications? 4. Figure 3(b) suggests some defense methods perform differently on AI-generated images versus natural images. As AI-generated content becomes increasingly prevalent, how might defense strategies need to evolve specifically to address the unique characteristics of such content? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable suggestions and thoughtful feedback. We will use these suggestions to enhance the revised version of the paper. Your questions are answered below: 1. The discrepancy between subjective and objective metrics is not uncommon in IQA tasks. While Real-ESRGAN may not achieve the best scores on metrics like PSNR or SSIM, its superior performance in the subjective study indicates that it preserves natural image characteristics. In practical applications, these findings imply that defense methods should be evaluated using both objective and subjective measures. Ultimately, this balance is crucial for developing defenses that improve numerical performance and align with how humans perceive images. 2. Transformer-based IQA metrics likely benefit from their global attention mechanisms, which enable them to capture long-range dependencies and contextual relationships. Also, pretraining on large datasets can contribute to their generalization capabilities and robustness. While transformer-based architectures offer robustness advantages, they also have increased computational costs. Future IQA research may explore hybrid approaches that combine the strengths of transformers and CNNs, rather than solely prioritizing one architecture based on robustness alone. 3. Certified methods provide strong security guarantees but are extremely slow because of their design. To optimize them, one could leverage more efficient certification techniques balancing the trade-off between guarantee and number of samples, or incorporate lightweight architectural modifications. Additionally, hybrid approaches that combine certified defenses with empirical ones may offer a balance between robustness and efficiency. Our benchmark provides a foundation for assessing such trade-offs and guiding future improvements. 4.
According to Figure 3, the performance on the AI-generated AGIQA dataset is very similar to the performance on the natural KONIQ dataset for most of the defenses, so we disagree that there is a difference in the results. We suggest following general ML practice and covering all possible cases during training, including both types of content, to help defenses generalize better. However, our results show no significant difference in defense efficiency between natural and AI-generated content. We address your concerns below: 1. Statistical tests cannot prove two samples are identical; a p-value above the alpha level means we cannot reject the null hypothesis. In our case, the p-values, combined with the small variance of means reported in Table 5, indicate a high level of consistency in our sampling methodology. Figure 6 supports this, showing that score distributions across samples are nearly identical, with closely aligned means. Together, these results demonstrate that any observed differences are statistically negligible and do not compromise the validity of our sampling approach. 2. Our study compares two existing solutions for adversarial training of IQA metrics: in one of them, MOS values for adversarial images are penalized, and in another, original labels are used for training on clean data. We highlighted that in general, adversarial training methods that use ground-truth labels for adversarial images may yield decreased correlations, which is not a limitation of our procedure, but encourages future defenses to consider this issue. 3. Computation time grows dramatically with the number of source images, IQA models and attack methods. Thus, even with only 10 source images, we need to apply each certified defense method 1260 times (10 images × 9 IQA models × 14 attacks). Overall, we spent about 3000 GPU-hours on certified defenses and ~25000 GPU-hours on all calculations. 4. There is a distinct difference in inference time across defense types.
For adversarial training, there is no inference overhead since the fine-tuning is done during training. For purification defenses, most methods add only about 0.05–0.2 s per image. This overhead is generally manageable for real-time cases, e.g. quality screening for CCTV, video streaming. Certified defenses offer strong theoretical guarantees but are significantly slower than other methods. This trade-off makes them most suitable for highly sensitive applications where robustness is a necessity, e.g. in benchmarks. 5. Our reported computation times in Tables 2, 3 enable developers to choose a suitable defense method based on their deployment constraints. In conclusion, for real-time practical deployment adversarial training and fast purification methods are the best fit. 6. Practical applications usually seek to inflate the quality of content. For example, cheating in benchmarks that influence project investments, increasing bitrate after transcoding in a target network, or inflating results (e.g., Google’s libaom encoder with its --tune-vmaf option). Decreasing scores does not lead to these problems because most real-world systems aim for higher quality. Given these scenarios, we prioritized score increases in our approach.
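The statistical procedure referenced in the rebuttal's first point (the Wilcoxon Signed Rank Test with Bonferroni correction described in Section A.4 of the paper) can be sketched as follows. The sample data, number of comparisons, and significance level here are illustrative assumptions, not the paper's actual values:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Hypothetical per-image scores for a baseline sampling and 3 alternative
# samplings; in the paper these would come from Table 5 / Figure 6 data.
baseline = rng.normal(0.70, 0.05, size=30)
samples = [baseline + rng.normal(0.0, 0.01, size=30) for _ in range(3)]

alpha = 0.05
n_tests = len(samples)
bonferroni_alpha = alpha / n_tests  # correct for multiple comparisons

for i, s in enumerate(samples):
    stat, p = wilcoxon(baseline, s)
    # p >= bonferroni_alpha: we cannot reject the null hypothesis of equal
    # medians -- as the rebuttal notes, this does NOT prove the samples
    # are identical, only that no significant difference was detected.
    print(f"sample {i}: p={p:.3f}, reject null={p < bonferroni_alpha}")
```

A paired test is appropriate here because each alternative sampling is compared against the same baseline images, and the Bonferroni division keeps the family-wise error rate at the nominal alpha.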
Summary: The manuscript presents a comprehensive benchmark on defenses against adversarial attacks targeting neural network-based Image Quality Assessment (IQA) metrics. It evaluates 30 defense strategies across three categories (purification, adversarial training, and certified methods) against 14 adversarial attacks. The study covers adaptive and non-adaptive attacks on nine no-reference IQA metrics, with extensive experimentation on multiple datasets. The paper introduces a benchmark, a novel dataset of adversarial images, and an online leaderboard, aiming to guide future research in robust IQA metric development. ## update after rebuttal The positive judgment, also shared by the other reviewers, on this manuscript is confirmed having seen the responses to my comments. Claims And Evidence: - They argue that defenses such as compression-based methods are particularly effective against adversarial noise, and the experimental results (e.g., DiffJPEG outperforming other purification methods) substantiate this. - The manuscript asserts that transformer-based IQA models are naturally more robust than CNN-based ones; the reported robustness scores and attack performance trends align with this claim. - The claim that randomized smoothing methods provide theoretical guarantees is valid but could be expanded with more discussion on its practical applicability to IQA. Methods And Evaluation Criteria: - The methodology is thorough, covering multiple attack strengths, defense configurations, and datasets. - The evaluation is rigorous, using a mix of objective robustness metrics (e.g., R_{score} and D_{score}) and perceptual quality measures (PSNR, SSIM, crowd-sourced MOS study). - Adaptive and non-adaptive attack scenarios are considered, adding realism to the evaluation. - The computational complexity analysis of defenses is a strength but could be extended with a discussion on real-time applicability in practical IQA settings. 
Theoretical Claims: - The authors correctly differentiate empirical and certified defense methods. However, the manuscript could benefit from a more detailed theoretical discussion on why certified methods are less practical for IQA, given their computational overhead. - The claim that task-specific adaptations are needed for diffusion-based defenses is reasonable and supported by empirical evidence. - Some theoretical claims regarding robustness improvements from adversarial training could be better justified. Experimental Designs Or Analyses: - The experimental design is robust, with a diverse dataset selection and comprehensive attack-defense comparisons. - The use of adaptive attacks strengthens the validity of the results. - The subjective evaluation study (60,000+ responses) adds significant value but lacks a detailed breakdown of the participant demographics and potential biases. - A minor concern is that some defenses (e.g., adversarial training) were only evaluated on select IQA metrics, which could impact generalizability. Supplementary Material: The supplementary material includes an analysis of the limitations of the submitted work and a more detailed description of the subjective study (without addressing concerns about participant characteristics). Furthermore, additional details on the experimental setup are provided that improve the clarity of the work and enhance its reproducibility. Relation To Broader Scientific Literature: - The study fills an important gap in adversarial robustness research for IQA. - The comparison with adversarial robustness benchmarks in classification tasks is insightful. - It would be beneficial to discuss potential implications for other domains using perceptual metrics, such as medical imaging or deepfake detection. 
Essential References Not Discussed: While the manuscript is well-referenced, some recent works on adversarial robustness in vision models beyond IQA (e.g., adversarial defenses for GAN-based image generation) could provide additional context. Other Strengths And Weaknesses: Strengths - First systematic study of defenses for adversarial IQA attacks. - Extensive experimentation across diverse datasets and metrics. - Well-defined benchmark with clear evaluation criteria. - Subjective quality assessment adds real-world validity. - Open-source dataset and leaderboard promote reproducibility. Weaknesses - Computational complexity of defenses is high, limiting real-world deployment discussion. - Some defenses were only tested on select IQA metrics. - More details needed on subjective study methodology. Other Comments Or Suggestions: - The manuscript could discuss whether findings extend to full-reference or reduced-reference IQA models. - The impact statement could be expanded to address ethical considerations of adversarial robustness in applications like image forensics or content moderation. Questions For Authors: 1. How were the attack strength levels chosen, and do they align with real-world adversarial scenarios? 2. How do the authors envision practical deployment of the best-performing defenses given their computational costs? 3. Would the inclusion of hybrid defense strategies (e.g., combining purification and adversarial training) improve robustness further? 4. Have the authors considered analyzing transferability of adversarial attacks across different IQA metrics? 5. What steps were taken to ensure diversity and reliability in the subjective study participants? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the valuable comments and questions. We appreciate the recognition of our study and analysis and will address the questions below: 1. We employed three metrics for attack strength estimation depending on the attack type. For $L_{\infty}$, we chose $\frac{2}{255}$, $\frac{4}{255}$, and $\frac{8}{255}$ as the most common thresholds in similar studies [1,2] and aligned them with potential real-world cases where imperceptibility is critical. For perceptual metrics-based attacks, we used SSIM and PSNR to quantify attack strength. To ensure alignment across different attacks, we computed average SSIM and PSNR values on a subset of 1000 images from KonIQ-10k corresponding to each selected $L_{\infty}$ threshold. For example, the average SSIM value for $L_{\infty}=\frac{2}{255}$ is approximately 0.9. [1] Croce, F., et al. "RobustBench: a standardized adversarial robustness benchmark," in NeurIPS 2020, arXiv: abs/2010.09670. [2] Dong Y. et al. "Benchmarking adversarial robustness on image classification," in proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 321-331. 2. Adversarial training requires no additional computation cost during inference, so this type is suitable for real-time quality measurement scenarios, e.g., image quality screening in CCTV, or for frame-by-frame video quality assessment. Purification methods have a wide range of computational overhead from 6 ms (Real-ESRGAN) to $\sim$140 ms (DISCO), with the slowest being diffusion-based methods ($\sim$700 ms for DiffPure). However, they can still be applied to image hostings to measure the quality of new images. Certified methods are much slower (3 to 40 seconds per image).
They are best suited for security-sensitive scenarios where robust guarantees outweigh computational costs, for example in benchmarks to prevent cheating. 3. Combining purification and adversarial training could enhance robustness by leveraging their complementary strengths. Purification handles unseen perturbations, while adversarial training improves resilience to known attack patterns. This hybrid strategy may offer stronger robustness against adaptive attacks while minimizing computational costs. However, AT and purification methods decrease the correlation of IQA metrics with subjective quality, so a combined defense may yield a lower correlation. In this work, we focus on existing methods to establish a baseline before exploring more complex, combined approaches in future work. 4. We did not analyze the transferability of adversarial attacks across IQA metrics as the paper focuses on evaluating defense mechanisms. Some studies focused on attack methods did analyze transferability [1]. This is recognized as an important direction for future work on adversarial attacks to understand cross-model vulnerabilities. [1] A. Ghildyal, F. Liu, "Attacking Perceptual Similarity Metrics," in Transactions on Machine Learning Research, 2023 5. To ensure diversity and reliability, we conducted a large-scale crowd-sourced study using the Subjectify.us platform, which enabled us to gather opinions from a broad and diverse participant pool. This platform helps reduce sampling bias and expands the range of participants. Answer quality control measures were also implemented via verification questions. Furthermore, the platform ensures that each assessor can only participate once and that the total number of answers for each pair is at least 10. This process minimizes the impact of unreliable inputs and ensures the robustness of our subjective evaluations.
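The $L_{\infty}$-to-PSNR alignment described in the rebuttal's first point can be sanity-checked with a toy calculation: for images scaled to [0, 1], a perturbation bounded by eps = 2/255 yields a PSNR of roughly 42 dB. The image below is random rather than a KonIQ-10k sample, so this only illustrates the correspondence, not the paper's measured averages:

```python
import numpy as np

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB for arrays with values in [0, peak]."""
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))               # stand-in for a real image
eps = 2.0 / 255.0                           # L-infinity attack budget
# Worst-case-magnitude perturbation: every pixel moved by +/- eps.
delta = eps * np.sign(rng.standard_normal(img.shape))
adv = np.clip(img + delta, 0.0, 1.0)

# 10*log10(1/eps**2) is about 42.1 dB; clipping can only shrink the error,
# so the measured PSNR is at or slightly above that value.
print(f"PSNR at eps=2/255: {psnr(img, adv):.1f} dB")
```

Larger budgets lower the bound accordingly: doubling eps to 4/255 subtracts about 6 dB, which is why the attack-strength levels map to progressively lower PSNR and SSIM values.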
Summary: The paper proposes a benchmark for defending neural network-based image quality assessment (IQA) against adversarial attacks. The paper makes an extensive study with numerous datasets, IQA models and adversarial attacks of different types and discusses the evaluation results. Claims And Evidence: I think the claim of proposing a comprehensive benchmark for evaluating defences against adversarial attacks on image quality assessment is supported in the paper. Up to my knowledge, there were no such benchmarks up to now. Methods And Evaluation Criteria: I think the datasets, IQA models, adversarial attacks and evaluation metrics make sense for the problem. Theoretical Claims: I have not seen any theoretical claims or proofs that need to be checked. Experimental Designs Or Analyses: The validity of experimental designs appears to be adequate, but it was not carefully checked. Supplementary Material: I have looked through the paper appendix. Relation To Broader Scientific Literature: The paper relies on existing adversarial attacks and previously proposed IQA methods. Essential References Not Discussed: I am not aware of any essential references that were not discussed. Other Strengths And Weaknesses: Strengths 1. An extensive defense evaluation including adaptive and non-adaptive adversarial attacks. 2. A large-scale subjective study with many participants to provide additional evaluation. 3. Since the proposed benchmark is claimed to be open for submissions, it may facilitate the development of the field. Weaknesses 1. I appreciate the effort made in this paper to provide a benchmark for the IQA defences. The main concern is whether the methodological novelty of this paper is sufficient for the main track of the conference where this paper was submitted. The paper proposes some novel metrics for the considered task (e.g. (8), (9)) but in general the novelty seems to be rather limited. I think the proposed benchmark can be valuable for the community.
But in my opinion, after some adaptation and formulating a clear position on the current state of the IQA defences based on the performed studies, the paper would fit much better into the Position Paper Track of the conference. 2. Apart from the novelty concerns formulated above, I am not fully convinced regarding the significance of the IQA metric defences in general. The paper makes an example based on the image processing and compression competitions where the competitors might exploit the metrics to land higher on the leaderboard. I am not sure whether it is a significant concern for the ML community which requires developing specific defenses for image quality assessment. Thus, the significance of the proposed benchmark is also questionable. Other Comments Or Suggestions: $-$ Questions For Authors: No further questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful and detailed feedback. We truly value your questions and your time reviewing our paper. We address your concerns below: 1. Our contributions include a novel methodology for comparing defenses for IQA tasks, addressing a critical gap in the field, an extensive subjective study with 60,000+ responses, and an in-depth analysis of the results. The novelty of the methodology includes: a sampling strategy for the dataset, a wide range of attacks of various types (including WB and BB, restricted and unrestricted, adaptive and non-adaptive attacks), comparing different types of defenses with each other, aligning attack parameters by attack strength, varying defense parameters, new metrics for evaluation, and a subjective comparison. Notably, our work directly aligns with the main topics from the call for papers for ICML 2025 — specifically, Evaluation (Methodology) and Trustworthy Machine Learning (Robustness, Safety). By targeting an underexplored area with clear real-world impact, our benchmark lays the groundwork for future contributions. To our knowledge, no prior work has systematically examined defenses in this context, underscoring the novelty and importance of our approach. 2. The manipulation of IQA metrics is a significant concern that extends beyond image processing competitions to numerous critical applications in the ML community. IQA metrics are foundational in detecting copyrighted content, ranking results in internet search engines (e.g., Bing, as noted in Section 1), assessing medical imaging quality for diagnostics [1,2], enhancing face recognition systems [3], and nearly all preprocessing techniques for images and videos. The significance of attacking IQA metrics extends beyond manipulating benchmarks and is already recognized in research and industry. Existing papers address issues with IQA robustness because IQA is foundational in detecting errors in medical imaging, ranking the results in search engines, etc.
Exploring and enhancing the robustness of IQA models is an active area of research [4,5,6]. Moreover, after the paper [8] on the adversarial vulnerability of the VMAF metric and the implementation of this attack in Google's libaom video codec, the VMAF developers from Netflix had to release a more robust version, VMAF NEG [7], highlighting the importance of robust IQA metrics for the industry. We want to establish a standard to evaluate emerging methods and boost the development in this area by publishing our benchmark. If you have no further concerns, we would be sincerely grateful if you could consider raising the rating of our submission. [1] Yuan S. et al. “A Deep-Learning-Based Label-free No-Reference Image Quality Assessment Metric: Application in Sodium MRI Denoising” [2] Dong X., Fu L., Liu Q. “No-reference image quality assessment for confocal endoscopy images with perceptual local descriptor,” Journal of Biomedical Optics, 2022 [3] Terhorst P. et al. “SER-FIQ: Unsupervised estimation of face image quality based on stochastic embedding robustness,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020 [4] Ghazanfari S. et al. “R-LPIPS: An adversarially robust perceptual similarity metric” [5] Kettunen M. et al. “E-LPIPS: robust perceptual image similarity via random transformation ensembles” [6] Chistyakova A. et al. “Increasing the Robustness of Image Quality Assessment Models Through Adversarial Training” [7] Netflix blogpost: https://netflixtechblog.com/toward-a-better-quality-metric-for-the-video-community-7ed94e752a30 [8] Zvezdakova A. et al. “Hacking VMAF with Video Color and Contrast Distortion” --- Rebuttal Comment 1.1: Comment: Thank you for addressing the concerns on the novelty and significance of the paper raised in the review. After reading the rebuttal I have decided to raise my score.
Summary: Image Quality Assessment (IQA) mostly uses DNNs to calculate the score, leaving space for attackers to perturb the image to manipulate the score for commercial advantage in ranking. Compared to (Antsiferova et al., 2024), which benchmarks attacks on IQA, this paper benchmarks defenses for this task. It considers 17 purification (pre-processing) defenses, 2 adversarial training defenses (with variations), and 6 certified defenses. The evaluation is extensively performed on 4 image datasets, 9 IQA models, and 14 attacks (with adaptive variations). Authors publish the benchmark for the community to assess upcoming defenses/attacks. ## update after rebuttal I thank the authors for providing the rebuttal in a limited time. I think it is good to dedicate a paragraph to organizing existing insights as promised, but overall, more insights from this work are encouraged. I did not question the novelty against RobustBench, but more discussion of the findings on this benchmark and RobustBench should be presented. I decided to keep my score. Claims And Evidence: Claims are clear. Methods And Evaluation Criteria: Yes. Theoretical Claims: No theoretical claims made. Experimental Designs Or Analyses: Yes, see Other Strengths And Weaknesses. Supplementary Material: I did not go through the details. Relation To Broader Scientific Literature: Authors publish the benchmark for the community to assess upcoming IQA defenses/attacks. Essential References Not Discussed: Not found by me. Other Strengths And Weaknesses: Strengths 1. Attacks on IQA models are well motivated, contributing to the value and timeliness of a defense benchmark. To achieve a high ranking in a search engine, an image releaser has the motivation to perturb the image to bypass the DNN-based quality ranking system, so that a specific image receives unmatched attention. 2. I am impressed by the comprehensiveness of the experiments. For each of the assessed defenses, all common attacks are tested.
Notably, adaptive attacks that backpropagate attack gradients through the defense are also considered. The authors also study extensively how the attack/defense hyperparameters affect the results. 3. Various metrics are adopted. Robustness scores reflect how well a defense can restore the IQA scores of adversarial images to match their original values. Quality scores measure the perceptual similarity between purified images and their original images. Performance scores assess an IQA metric's performance in the presence of an adversarial defense. Weaknesses 1. The takeaways for IQA designers are not clear enough. The benchmark is meant to provide guidance for defenders on how to secure their system. However, only limited discussion is put in Section 5. As a researcher not working in this field, I struggle to learn useful insights from the many numbers presented. 2. I am not sure how this work links to previous benchmarks on securing image classifiers, e.g., RobustBench. Both dealing with images, a lot of attacks/defenses use the same techniques. Are there similar conclusions about a defense method? What is new in IQA? There seem to be only a few sentences on that, but I think a concentrated paragraph would be helpful. This gives hints on whether it is necessary to develop benchmarks for future new DNN-based image tasks. 3. I believe the manuscript would benefit from improved presentation with a clearer prioritization. Other Comments Or Suggestions: N/A Questions For Authors: As a late reviewer, I do not expect interactions with the authors. Feel free to deprioritize my opinions if rebuttal is important in AC's judgement. Code Of Conduct: Affirmed. Overall Recommendation: 3
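To make the review's description of the robustness scores concrete: a minimal, purely illustrative instantiation (not the paper's actual definitions in equations (8)-(9), which are not reproduced in this review) would measure how closely a defended metric's scores on adversarial images track the original clean scores:

```python
import numpy as np

def restoration_error(clean_scores, defended_adv_scores):
    """Illustrative robustness proxy: mean absolute gap between the IQA
    scores of clean images and of defended adversarial images.
    Lower is better; 0 means the defense fully restores the scores."""
    clean = np.asarray(clean_scores, dtype=float)
    defended = np.asarray(defended_adv_scores, dtype=float)
    return float(np.mean(np.abs(clean - defended)))

# Toy example: the attack inflates quality scores; a purification defense
# mostly brings them back toward the clean values.
clean = [0.62, 0.71, 0.55]
attacked = [0.95, 0.97, 0.90]
defended = [0.65, 0.70, 0.58]

print(restoration_error(clean, attacked))  # large gap without any defense
print(restoration_error(clean, defended))  # small gap with the defense
```

The actual benchmark aggregates such per-image restoration behavior into its R and D scores, alongside separate quality scores (perceptual similarity of purified images) and performance scores (metric correlation in the presence of a defense).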
Learning Mean Field Control on Sparse Graphs
Accept (poster)
Summary: The paper studies mean field multi-agent methods on sparse graphs. Claims And Evidence: The paper is clearly written, and the reviewer is not aware of any misleading claims. However, some definitions could be improved. For instance, in Definition 2.1, it is unclear what the expectation on the right-hand side is defined over. Since \( f \) is a deterministic mapping to the real number space and \( G \) is a fixed variable in this definition, the source of randomness needs to be explicitly stated. Methods And Evaluation Criteria: The reviewer is primarily uncertain about the applicability of the mean-field method in the studied scenarios. Specifically, mean-field theory relies on an assumption that holds when agents are strongly influenced by their neighbors. However, this assumption does not hold in sparse graphs. This raises a fundamental question: What is the rationale for applying mean-field theory in this context? Can it accurately capture the underlying relationships expressed by these graphs? Theoretical Claims: The reviewer does not find errors in theoretical claims. Experimental Designs Or Analyses: The experimental design is well-structured, but error bars are essential for accurately evaluating the performance of the proposed method. Supplementary Material: The reviewer briefly checked Appendix B, which contains the proofs for the theoretical results. Relation To Broader Scientific Literature: NA Essential References Not Discussed: The paper effectively references related works. Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: Please see the questions mentioned in the above sections. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the careful reading and constructive evaluation of our work. The reviewer raised the concern that “[…] in Definition 2.1, it is unclear what the expectation on the right-hand side is defined over. […] the source of randomness needs to be explicitly stated.” Thank you for pointing out the missing clarity in Definition 2.1! The randomness in Definition 2.1 is induced by the randomness in the graph limiting object $G$ which is a random variable over the set of isomorphism classes $\mathcal{G}^*$ and not just an element of $\mathcal{G}^*$ as currently stated in Definition 2.1. The expectation is then over the law of this random variable $G$. We will update Definition 2.1 accordingly to accurately outline the source of randomness. Furthermore, the reviewer raises the following question: “The reviewer is primarily uncertain about the applicability of the mean-field method in the studied scenarios. Specifically, mean-field theory relies on an assumption that holds when agents are strongly influenced by their neighbors. However, this assumption does not hold in sparse graphs. This raises a fundamental question: What is the rationale for applying mean-field theory in this context? Can it accurately capture the underlying relationships expressed by these graphs?” For the many low degree agents, as the reviewer indicates, the standard MF assumption does not hold. However, even for these low degree agents, the same local neighborhood configurations appear very often in large graphs, formalized by local weak convergence. The locally weak converging neighborhoods result in an (almost) deterministic evolution of the finite degree mean fields and entail the rigorous theoretical results we derive in the paper. The rationale of applying MF theory in the context of sparse graphs with finite expected average degree and diverging degree variance lies in the structure of the considered graphs. 
As we discuss in Example 2, these graphs contain a relatively small fraction of high degree nodes which are crucial for the overall system dynamics as they have many neighbors. Thus, we leverage the MF principle to model the important fraction of high degree agents who are close to the standard MF assumptions. Overall, the LWMFC model and our algorithms bridge the gap between rather dense topologies typically considered in MF setups, and ultrasparse graphs where the maximal degree is bounded. As our evaluations indicate, this gap contains many empirical networks which are modeled more accurately by LWMFC than by existing methods. Finally, the reviewer comments that “[…] error bars are essential for accurately evaluating the performance of the proposed method.” We do not report error bars in Table 2 because our learning approaches consistently outperform the IPPO benchmark on all problem-network combinations.
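The graph regime this rebuttal describes (finite expected average degree but diverging degree variance, with a small yet influential fraction of high-degree nodes) can be illustrated numerically with a heavy-tailed degree distribution; the power-law exponent below is an assumed value for illustration and is not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 2.5  # assumed exponent in (2, 3): finite mean degree, infinite variance

for n in (10_000, 1_000_000):
    degrees = rng.zipf(gamma, size=n)      # heavy-tailed degree sample, k >= 1
    frac_hubs = np.mean(degrees >= 100)    # small fraction of high-degree nodes
    print(f"n={n:>9}: mean={degrees.mean():.2f}  "
          f"var={degrees.var():.0f}  max={degrees.max()}  "
          f"P(degree >= 100)={frac_hubs:.5f}")
```

As the sample grows, the empirical mean degree stabilizes while the empirical variance and maximum degree keep growing, which is the structure that makes the few hub agents close to standard mean field assumptions even though most agents have low degree.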
Summary: The main focus of this paper is a variation of mean field control (MFC) problems in which the agents' interactions are encoded by a graph-like structure which is not necessarily uniform, contrary to standard MFC. After studying the foundation of the problems (well-posedness and connection with finite-agent problems), two reinforcement learning algorithms are proposed and tested over several examples. Claims And Evidence: It seems fine to me. Methods And Evaluation Criteria: It seems fine to me. Theoretical Claims: I had a look at the proofs and they seem fine. Experimental Designs Or Analyses: Yes; they seem fine to me. Supplementary Material: I had a look. While I have not checked all the details, the ideas seem sound to me. Relation To Broader Scientific Literature: So far there are few papers on RL for MFC and on discrete-time graphon games. The literature provided seems fine. Essential References Not Discussed: The ones cited seem fine. Other Strengths And Weaknesses: Nothing specific beyond the points discussed in the other textboxes. Other Comments Or Suggestions: While the theoretical analysis of the control problem is interesting, I am not sure if this conference is the best fit. It would be better to develop further the learning aspects. Questions For Authors: Is it possible to analyze the theoretical convergence of the proposed algorithms? This would strengthen the contributions in terms of machine learning. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the positive evaluation and the constructive feedback. The reviewer noted that: “While the theoretical analysis of the control problem is interesting, I am not sure if this conference is the best fit. It would be better to develop further the learning aspects.” We agree with the reviewer to the extent that our work is an initial investigation on the topic and that the algorithm performance can likely be improved further. However, the conducted theoretical analysis and our subsequent two systems approximation provide essential insights for the learning algorithm design. As our empirical results indicate, the conceptual insights in the first sections translate into mathematically well-motivated learning algorithms that outperform existing approaches like LPGMFGs, GXMFGs and IPPO. Thus, our framework closes a crucial gap in the existing learning and mean field literature. The reviewer also raised the question “Is it possible to analyze the theoretical convergence of the proposed algorithms?” In Algorithm 1, we apply RL methods to the limiting LWMFC such that theoretical results from the corresponding RL literature transfer to our case. For Algorithm 2, a theoretical analysis is challenging due to the direct interaction with the empirical networks. We instead focus on the theoretical motivation for our learning approach and the extensive empirical evaluation in comparison to existing methods. For future work, a next step could be to mathematically analyze the error of the two systems approximation in terms of the number of agents. --- Rebuttal Comment 1.1: Comment: Thank you for your response.
Summary: The paper proposes a Local Weak Mean Field Control (LWMFC) model to address cooperative control in sparse networks, leveraging a local weak convergence framework to overcome limitations of traditional graph-theoretic methods in scenarios with finite average degree but diverging variance. Experiments demonstrate that LWMFC consistently outperforms baseline methods on both synthetic datasets and real-world networks (e.g., social network topologies). Claims And Evidence: The paper asserts LWMFC’s superiority in sparse networks, supported by empirical evidence from comparative analyses of mean-field dynamics and quantitative evaluations of global average rewards. Methods And Evaluation Criteria: Methodological soundness: The two-system approximation reduces computational complexity from O(K) to O(k*+1) by decoupling low-degree and high-degree agents, aligning with sparse network characteristics. Evaluation criteria: Comparisons of mean-field trajectories and global reward metrics provide intuitive and rigorous quantification of cooperative efficiency in complex topologies. Theoretical Claims: Theoretical claims are well-supported: The convergence of finite systems to mean-field limits and objective functions is rigorously established under local weak convergence assumptions, ensuring algorithmic validity. Corollaries demonstrate that optimal policies derived from limit systems can be directly applied to large-scale real-world networks. Experimental Designs Or Analyses: Experimental design is rigorous: Systematic comparisons with baseline methods validate LWMFC’s performance advantages in sparse networks. The inclusion of independent learning baselines ensures the reliability of evaluation results. Supplementary Material: Appendix A provides complete proofs of theorems but relies on idealized assumptions (e.g., N→∞) without discussing finite-N error bounds. Appendix B derives extended approximations. 
Relation To Broader Scientific Literature: The work extends graph mean field games to sparse networks, bridging gaps in existing approaches. Comparisons with decentralized paradigms align with interaction complexity reduction. Essential References Not Discussed: The references cited in the article are basically complete. Other Strengths And Weaknesses: Strengths: Theoretical Innovation: First integration of local weak convergence into mean field control, addressing sparse graphs (e.g., power-law networks). Practical Algorithm Design: LWMFMARL interacts directly with real-world networks (e.g., YouTube with >3M nodes) without model knowledge. Computational Efficiency: Two-system approximation reduces complexity from exponential to polynomial (O(k*+1)), enabling real-time applications. Other Comments Or Suggestions: No Questions For Authors: How is k* determined? Is it based on cross-validation or degree distribution inflection points? What is the impact of varying k*? If sensitive, robustness analysis is needed. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for carefully reading our work and the detailed positive comments. The answer with respect to the reviewer’s questions on $k^*$ is as follows. In all computational examples we set $k^* = 10$ for the standard LWMFC approximation, and $k^* = 4$ for the extensive LWMFC* approximation due to the considerably higher computational complexity of LWMFC*. In general, the choice of $k^*$ is a trade-off. On the one hand, choosing a higher $k^*$ means that agents with moderately low degrees are depicted more accurately which should enhance the accuracy of the two systems approximation. On the other hand, a higher $k^*$ also entails higher computational costs and an increasing algorithmic runtime as one must sum over more potential neighborhood distributions. While tuning the choice of $k^*$ will likely further improve the performance of our algorithms, we leave it to future work as our initial choice of $k^*$ already outperforms existing methods such as LPGMFGs and GXMFGs in all considered problem-network combinations.
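To make the $k^*$ trade-off concrete, here is a minimal sketch (hypothetical helper names, not the authors' code) of the split underlying the two systems approximation: agents with degree at most $k^*$ are tracked per degree class, while the heavy-tailed remainder is handled by the mean-field bulk.

```python
import numpy as np

def split_agents_by_degree(degrees, k_star):
    """Group agent indices into explicit low-degree classes (degree <= k_star)
    and a high-degree bulk treated via the mean-field approximation.
    Illustrative only; the paper's two systems approximation is more involved."""
    degrees = np.asarray(degrees)
    low = {k: np.flatnonzero(degrees == k) for k in range(k_star + 1)}
    high = np.flatnonzero(degrees > k_star)
    return low, high

# A heavy-tailed (power-law-like) degree sequence: finite mean, huge variance
rng = np.random.default_rng(0)
degrees = rng.zipf(2.5, size=10_000)
low, high = split_agents_by_degree(degrees, k_star=10)
```

Raising `k_star` models more moderate-degree agents explicitly, at the cost of summing over more neighborhood distributions, which is exactly the runtime trade-off described in the rebuttal.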
Efficient Time Series Processing for Transformers and State-Space Models through Token Merging
Accept (poster)
Summary: This paper introduces local token merging, a novel algorithm for accelerating time series processing in transformers and state-space models (SSMs). The domain-specific method computes token similarities within a constrained neighborhood (size k), reducing complexity from quadratic to linear (when k=1). This enables scaling to long sequences while preserving locality and causality. The work bridges token merging—previously limited to vision—to time series, addressing efficiency challenges in long-sequence modeling. Claims And Evidence: Most claims are well-supported. Speedup metrics (Table 1, 2, 5) and FLOPs analysis (Appendix B.1) validate complexity reduction. Successful application to Chronos (decoder-only) and encoder-decoder models (Table 2) supports causality claims. Issues: While Figure 7 shows slight improvements, the paper does not compare dynamic merging to fixed r in batch settings, leaving practical trade-offs unclear. The upper bound in Section B.1 assumes merging half the tokens per layer, but real-world speedups depend on hardware and implementation (e.g., merging overhead). Methods And Evaluation Criteria: Local merging is well-motivated for time series, where locality and causality are critical. Restricting merging to neighborhoods (k) balances redundancy exploitation and computational cost. Chronos experiments use a subsampled test set (7000 series). While practical for compute constraints, full-test-set validation would strengthen claims. Theoretical Claims: The derivation of local merging complexity (O(t + (k−1)(t−k))) is correct. The upper-bound speedup estimate for L-layer transformers is reasonable but idealized (ignores layer-wise overhead). Experimental Designs Or Analyses: Results on 7000 samples may not generalize to full test sets. The ablation (Figure 9) uses only Autoformer; broader validation across architectures would strengthen conclusions.
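To illustrate the neighborhood-restricted matching discussed above, here is a minimal sketch (my own simplified reimplementation with uniform averaging and greedy conflict resolution, not the authors' code). With neighborhood size k=1 only adjacent tokens are compared, so the number of candidate pairs grows linearly in sequence length.

```python
import numpy as np

def local_merge_once(tokens, k, r):
    """One local-merging step: compare only tokens at most k positions apart,
    then average-merge up to r of the most similar pairs (left token absorbs
    its right neighbor, so token order is preserved)."""
    t = len(tokens)
    x = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    # For k=1 this list has t-1 entries; in general O(t + (k-1)(t-k)) pairs.
    pairs = [(i, j) for i in range(t) for j in range(i + 1, min(i + k + 1, t))]
    sims = [float(x[i] @ x[j]) for i, j in pairs]
    used, removed = set(), set()
    out = list(tokens)
    for idx in np.argsort(sims)[::-1][:r]:
        i, j = pairs[idx]
        if i in used or j in used:  # greedy: skip pairs touching merged tokens
            continue
        out[i] = (tokens[i] + tokens[j]) / 2
        used.update((i, j))
        removed.add(j)
    return np.array([tok for n, tok in enumerate(out) if n not in removed])
```

This is only meant to show why restricting the neighborhood changes the matching cost, not to reproduce the paper's exact merge rule.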
Supplementary Material: Reviewed A-D Relation To Broader Scientific Literature: The key contribution of this paper is a technique switch from CV to time series processing. Essential References Not Discussed: Adaptive Token Reduction: Liu et al. (2021, DynamicViT) dynamically prunes tokens in vision; comparison could highlight trade-offs between pruning/merging. Time Series Tokenization: Nie et al. (2023, PatchTST) uses patching for efficiency; discussing how merging complements patching would contextualize contributions. Recent SSMs: Mamba (Gu & Dao, 2023) achieves linear-time inference; analyzing merging in Mamba could strengthen SSM evaluations. Other Strengths And Weaknesses: Strengths: First token merging method for time series, enabling causal merging and SSM acceleration. Massive speedups (54×) in Chronos, relevant for real-world deployment. Connects spectral properties to merging efficacy, providing actionable insights. Weaknesses: No evaluation in batched settings or comparison to threshold-based pruning. Lack of code/details for SSM merging. Limited to Autoformer; broader validation needed. Other Comments Or Suggestions: Page 5, Table 1: “MSE△” formatting inconsistencies. Section 5.4's spectral analysis could better differentiate signal entropy and noise. Questions For Authors: My main concern lies in the somewhat incremental motivation of applying the token merging technique from CV to time series processing. What is the main gap between the application of token merging in these two domains (it definitely exists), and how has this paper resolved this core issue? These clarifications should be highlighted. How does dynamic merging perform in batched inference? Would variable token counts per batch element hinder practical usage? Could local merging accelerate Mamba? Testing this would broaden SSM applicability. Would results hold on the full ETTh1 test set?
I am willing to raise my final rating after the rebuttal phase if the authors resolve my concerns well. Code Of Conduct: Affirmed. Overall Recommendation: 3
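The idealized upper-bound reasoning questioned in this review can be reproduced with a few lines. This sketch counts only attention FLOPs and ignores merging overhead, MLPs, and hardware effects, so it is an upper bound in the same spirit as Section B.1, not a wall-clock prediction.

```python
def attention_flops(t, d):
    """Rough FLOP count of the two t x t matmuls in self-attention."""
    return 2 * t * t * d

def speedup_upper_bound(t, d, r, layers):
    """Idealized attention-only speedup when r tokens are removed per layer.
    Assumption-laden sketch: all per-layer overheads are ignored."""
    base = layers * attention_flops(t, d)
    tokens, reduced = t, 0
    for _ in range(layers):
        reduced += attention_flops(tokens, d)
        tokens = max(tokens - r, 1)
    return base / reduced
```

For example, removing a quarter of 512 tokens per layer over 4 layers already more than doubles the idealized attention speedup, which is why realized wall-clock numbers (with overhead included) are the more meaningful metric.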
Rebuttal 1: Rebuttal: Dear Reviewer 6Amp, Thank you for taking the time to read our paper and for your valuable questions. We are happy to answer them in the following. To this end, we conducted 5 new experiments. Please find anonymous results here: https://figshare.com/s/679d2c1d825228385b2d **Q:** Gap between token merging in CV and time series. \ **A:** Prior work in CV only used quadratic token merging in non-causal transformer encoders. We design token merging for time series: - Long sequences: Time series often consist of very long sequences, as opposed to CV (L122). Here, quadratic global merging introduces a significant computational burden. We propose local merging with linear complexity (tab. 5, 16k long sequences). - Decoders: In CV, token merging is applied to non-causal encoders. However, for time series, encoder-decoder or decoder-only models are common. Local merging is the first causal merging scheme for decoders. Further, causality is a good inductive bias for time series (tab. 5, L417-426), improving MSE. In contrast to images, real-world time series are generated by causal processes. - Forecasting: In CV, most works focus on classification where only 1 cls token needs to be preserved. We use forecasting as a dense task where reducing tokens might be more difficult. To this end, we propose causal unmerging to restore a required number of output tokens. - SSMs: Prior work focuses on transformers. We are the first to merge tokens in SSMs. - We propose dynamic merging to mitigate the issue of dissimilar tokens being merged. Prior work utilized fixed merging rates per layer. - Besides these technical aspects, our detailed analysis of local merging, such as why it improves MSE, goes well beyond the existing literature. We will highlight these differences in our paper. **Q:** Dynamic merging in batch settings. \ **A:** Thank you for this nice suggestion. We conducted new experiments for dynamic merging in batched settings.
To retain rectangular tensors, we average the number of tokens to be merged among the elements in a batch. For batch size 10, dynamic merging is still better than fixed r merging. Variable token counts per batch element do not hinder practical usage. **Q:** Would results hold on the full ETTh1 test set? \ **A:** We subsample 7000 time series for Chronos experiments. We have made additional efforts to compare results of the subsampled test set to the full ETTh1 test set. Results also hold for the full set. **Q:** Section B.1 upper bound for speedup ignores overhead. \ **A:** You are correct. As the overhead varies between models and implementation, we derive this upper bound theoretically. For our experiments, however, we report the execution time rather than the FLOPs as it includes all overheads. **Q:** Fig. 9 uses only Autoformer. \ **A:** This is a good point. For this ablation, we only borrow the trained position embedding from Autoformer. It is quite similar to the ones Transformer, Informer, FEDformer, and Non-stationary use. The actual model has no effect. We therefore hope to make broad statements. **Q:** Merging and pruning. \ **A:** This is a great addition. We conducted new experiments to compare merging and pruning, which is used by DynamicViT. Merging results in better MSE, as it retains more information, and a comparable speedup. Further, as opposed to DynamicViT, our pruning does not require any training and is zero-shot-applicable. **Q:** Tokenization and token merging \ **A:** In our experiments, the models use 3 different types of tokenization: - Models in tab. 1 use multivariate tokens - Chronos uses a discrete vocabulary - Hyena builds tokens from nucleotides Autoformer and FEDformer further use the frequency domain. Local merging works on top of all token types. We argue that the tokenization method is of minor importance. We conduct new experiments to include PatchTST. Preliminary results show that local merging also works on patches.
\ **Technical differences:** Patching compresses only the input data and requires model training (architectural change). Token merging is a general method that accelerates a variety of models in zero-shot setting. It exploits redundancy in every layer (L807-809, transformers generate redundancy). Thanks for pointing us to this missing detail. **Q:** Mamba as SSM \ **A:** Thank you for this suggestion. We added Mamba for this rebuttal. Local merging achieves over 4x acceleration with only 1.6% drop in accuracy. **Q:** Lack of details for SSM merging \ **A:** We are happy to include more details. Do you have specific aspects in mind? We already analyzed the wall-clock time in more detail: The similarity computation of local merging adds 14% of additional wall-clock time to each hyena block, while global merging takes additional 68%. This highlights the value of linear complexity. Thank you also for pointing us to sec. 5.4 spectral analysis. We will clarify "signal entropy" and "noise". We are happy to further discuss any open questions. --- Rebuttal Comment 1.1: Comment: Thank you for addressing the feedback and incorporating additional simulations, which provide valuable empirical support to the methodology. Nevertheless, regarding the pivotal technical concern about adapting token merging mechanisms from computer vision (CV) to time-series analysis, the authors' responses still lack substantial technical depth and fail to clarify fundamental theoretical discrepancies. Specifically, the critical distinction between 2-D patch-level image token merging—which inherently balances local-global semantics through spatial hierarchies—and 1-D vector-based token merging in time series remains underexplored. While CV methods exploit geometric correlations across image patches, temporal sequences exhibit distinct properties like periodicity, trend continuity, and stochastic volatility. 
This raises questions about whether the proposed adaptation truly addresses the unique challenges of time-series tokenization or merely replicates CV paradigms without domain-specific innovation. Moreover, the technical rationale for directly transplanting 2-D patch merging strategies to 1-D time-series slices requires rigorous justification. The original mechanism relies on spatial redundancy reduction in images, whereas time-series redundancy often manifests through temporal autocorrelation or frequency-domain sparsity. Without a systematic analysis of how merging operations interact with these temporal characteristics (e.g., phase distortion in aggregated tokens, information loss in non-stationary segments), the claimed efficiency gains risk being contextually decoupled from the core challenges of time-series modeling. In summary, while the empirical results are compelling, the fundamental rationale for this adaptation—spanning theoretical alignment between CV and time-series token merging, as well as operational validity in 1-D contexts—remains inadequately justified and demands deeper technical scrutiny. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 6Amp, Thank you for appreciating our new results and for elaborating on your questions. We agree that time series have distinct properties, such as periodicity, trend, and sparsity in the frequency domain. In the following, we would like to address your questions: **Q:** Analysis on the effect of token merging regarding time series specific properties \ **A:** Thank you for pointing us to these details. We conduct an extensive analysis where we identify time series metrics that benefit local merging in sec. 5.4. We think connecting these metrics to the distinct time series properties you mentioned improves our paper: - The spectral entropy measures how concentrated power is in the frequency domain. Low values indicate strong periodicity, while high values indicate randomness.
Further, the spectral entropy directly relates to sparsity in the frequency domain. - The total harmonic distortion measures the distortion in a signal caused by harmonics. The more a periodic signal deviates from a pure sine wave, the higher the harmonic content and thus the higher the THD. - To explore stationarity we utilize the Augmented Dickey-Fuller test and report the percentage of stationary variates on commonly used datasets. As almost all variates are stationary, we cannot draw meaningful conclusions regarding the effect of token merging.

| Dataset | % Stationary variates |
| -------- | -------- |
| ETTm1 | 100 |
| ETTh1 | 100 |
| Traffic | 99.8 |
| Electricity | 91.2 |
| Weather | 100 |

Linking the periodicity, sparsity in the frequency domain, and shape of the time series to the spectral entropy and total harmonic distortion makes our analysis more intuitive. **Q:** Time series specific properties we integrate in our token merging method \ **A:** Thank you for pointing us to this. We will include the following discussion on which time series specific inductive biases we introduce in our method in our paper. Our token merging mechanism exploits two core properties of time series, which are not present in CV: - It preserves temporal causality, as real-world time series are generated by causal processes - It maintains linear complexity, as time series often consist of far more tokens than images in CV (Godahewa et al., 2021; Grešová et al., 2023) This way, we design a very universal token merging scheme, applicable to many model architectures and datasets, as we show in our experiments. We conduct new investigations where we trace the tokens being merged throughout the transformer model. This experiment demonstrates that our merging can exploit distinct properties like periodicity and trend. https://figshare.com/s/679d2c1d825228385b2d As shown in the sine-curve example, our global merging for time series also trades off local and global information.
However, we did not implement these properties as hard inductive biases to maintain the universality of our algorithm: - Token merging can exploit trend and periodicity, as our new experiments show, but it is not tied to these properties. This way, it also performs well on sequential data that exhibits neither trend nor periodicity, such as DNA sequences (Grešová et al., 2023), as we show in sec. 5.8. Stock prices typically also don't have regular periodic patterns. - Adding a periodic bias to the neighborhood of our local merging algorithm would further break causality. This way, it would not be applicable to decoders. - Autoformer and FEDformer transform the tokens to the frequency space. Autoformer specifically focuses on the autocorrelation. Token merging exploits sparsity in frequency domain and autocorrelation space in these architectures, which you correctly pointed out as two other properties of time series. Our universal algorithm can capitalize on this frequency-based sparsity. However, adding a token-level trend or periodicity bias would be suboptimal here, as tokens are already transformed to the frequency domain. We therefore see the universality of our algorithm as its strength. It can exploit inductive biases for time series (periodicity, trend, sparsity in frequency or autocorrelation space), but it is not fixed to those. This way, it is applicable to many architectures and datasets. Further, it features causality and low complexity for long sequences as inductive biases for 1-D sequence processing. We agree that future work can explore more specialized merging schemes tailored to a specific type of time series, such as to periodic series. However, we think there needs to be initial work on universal, broadly applicable token merging for sequences first. Thank you for discussing time series specific properties with us. We think this makes our paper more valuable. We hope we could answer your questions.
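The spectral entropy referenced throughout this thread is easy to compute; the sketch below uses one common normalization (my own choice, not necessarily the paper's exact definition). Power concentrated in few frequency bins (strong periodicity) gives values near 0, while white noise gives values near 1.

```python
import numpy as np

def spectral_entropy(x):
    """Normalized spectral entropy of a 1-D signal: near 0 when power sits
    in a single frequency bin (pure periodicity), near 1 for white noise."""
    psd = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
    p = psd / psd.sum()
    p = p[p > 0]  # drop empty bins so 0*log(0) terms vanish
    return float(-(p * np.log(p)).sum() / np.log(len(psd)))

t = np.arange(1024)
sine = np.sin(2 * np.pi * 8 * t / 1024)                 # strongly periodic
noise = np.random.default_rng(0).standard_normal(1024)  # broadband
```

This also makes the link to frequency-domain sparsity explicit: a sparse spectrum is exactly a low-entropy one.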
Summary: This paper introduces a novel local token merging algorithm for time series models aimed at reducing the computational burden of processing long sequences in transformers and state-space models. By merging tokens within local neighborhoods, the method scales the complexity from quadratic to linear while preserving causality—making it suitable even for transformer decoders. Claims And Evidence: I think that the claims made by the authors are supported by a variety of experiments. The idea of token merging seems to be effective at improving computational efficiency while retaining good performance. Methods And Evaluation Criteria: The authors benchmarked on standard datasets used in the literature, and the evaluation is thorough. Theoretical Claims: This is mostly an empirical paper, so the authors did not make many theoretical claims. Experimental Designs Or Analyses: Yes, the experiments are sound. Supplementary Material: The submission does not contain supplementary material. Relation To Broader Scientific Literature: I think one of the key contributions of this paper lies in the fact that this can be integrated into pre-trained models like Chronos. I think this type of technique could become a widely used heuristic to be integrated into time series models. Essential References Not Discussed: I think that while the authors claim that this is the first local merging technique, they do not discuss a recent work that integrates both global and local information of the time series using path signatures [1]. I think that this should be included. [1] Moreno-Pino, Fernando, et al. "Rough Transformers: Lightweight and Continuous Time Series Modelling through Signature Patching." Advances in Neural Information Processing Systems 37 (2024): 106264-106294. Other Strengths And Weaknesses: - Other Comments Or Suggestions: None.
Questions For Authors: Do the authors feel that similar techniques could be used in language, or is token merging something exclusively usable in the context of time series? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer mSjb, Thank you for your valuable feedback and effort. The "Rough Transformers" paper is very interesting and we will further discuss it in our related work section. Thank you for pointing us to that. We would like to emphasize that we see local merging as a method to accelerate a variety of models without needing to retrain or finetune them. Local merging is not only applicable to transformers but also to state space models. "Rough Transformers", on the other hand, is a specifically designed model architecture. **Q:** Do the authors feel that similar techniques could be used in language? \ **A:** This is a great point. We do think that local merging might help to boost the efficiency of language models. Our causal merging and causal unmerging are the first token merging technique applicable to decoders, making it very valuable for decoder or encoder-decoder language models. Previous work in vision breaks causality and is not applicable to decoder models as used in NLP. For long contexts, local merging's linear complexity is particularly helpful. Further, we have made several improvements to our work and present new results in our comments to review 6Amp. We are happy to further discuss any open questions. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I will raise my score, as I believe this can be a valuable technique for the time-series forecasting community.
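A toy sketch of the causal merging and unmerging idea discussed in this thread (my own simplified version with hypothetical names, not the authors' implementation): each merge combines a token only with its immediate neighbor, so left-to-right order is preserved, and a stored index map lets unmerging restore one output per original position.

```python
import numpy as np

def merge_adjacent(tokens, merge_mask):
    """Where merge_mask[i] is True, average token i+1 into token i.
    Tokens are never mixed across distant positions, preserving sequence
    order; index_map records which merged token covers each original slot."""
    out, index_map = [], []
    i = 0
    while i < len(tokens):
        if i < len(tokens) - 1 and merge_mask[i]:
            out.append((tokens[i] + tokens[i + 1]) / 2)
            index_map += [len(out) - 1, len(out) - 1]
            i += 2
        else:
            out.append(tokens[i])
            index_map.append(len(out) - 1)
            i += 1
    return np.array(out), index_map

def unmerge(merged, index_map):
    """Duplicate each merged token back to every original position it covers."""
    return merged[np.array(index_map)]
```

For a dense task like forecasting, the unmerge step is what restores the required number of output tokens after the sequence was shortened internally.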
Summary: This paper proposes to apply Token Merging (ToMe), which was originally developed for vision transformers, to time series models. The main difference between the authors' work and standard token merging is the use of local neighborhoods, i.e., local merging. The original ToMe formulation allowed tokens from different regions to be merged together if they were similar enough, but this paper restricts the similarity comparison to small local windows or even only adjacent tokens, which they find to be more efficient and better suited to time series modeling. The authors then show that this has similar results to base ToMe on standard time-series datasets and conduct ablations to demonstrate the value of the local merging. Due to the large number of input tokens in time series models, local merging leads to large speedups, with particularly large speedups compared to those shown in the original ToMe paper. Claims And Evidence: Claim 1. Local merging is better suited to time series data, since it is not of quadratic complexity and does not rely on computing pairwise similarities between a large number of tokens. The original ToMe method is quadratic complexity. Evidence: It’s certainly true that the matching step of ToMe has quadratic complexity. What is less clear is whether this actually matters for wall-clock time. I am not particularly convinced it does, since in the original paper, measurements showed that the computation time of this step was negligible, especially compared to self attention. There is no clear analysis that demonstrates this point in the paper besides Table 5, which does not provide much detail. Claim 2. Local merging allows significant acceleration of time series models without reducing their performance. Evidence: This is well supported by experiments. Claim 3. Local merging preserves causality, which is important for time series data.
Evidence: If I understand correctly, “causality” here means that any token T_i is only predictable from tokens directly before it. In this sense, I see that local merging does preserve causality. What is not clear is whether this actually matters for time series data; there is no ablation in the paper that clearly demonstrates this. Claim 4: Local merging improves forecasting quality by selectively reducing noise. Evidence: I don’t really see how applying a Gaussian filter demonstrates that Token Merging is reducing noise. I think the experiment here needs a bit more clarity; I elaborate on this in the experiments section. Methods And Evaluation Criteria: The basic setup makes sense, which is testing several state of the art time series models that rely on attention mechanisms and measuring the downstream performance metric (MSE) and whether there is a speedup. However, it’s not clear what the “acceleration” measure is - is this wall-clock time, throughput, GFLOPS? Given the results from the token merging paper, it’s certainly surprising to see such large improvements, so some clarity would be good here. Theoretical Claims: The only theoretical claims are related to the complexity of the proposed methods. These proofs are relatively straightforward and are included in the supplementary material, and seem reasonable to me. Experimental Designs Or Analyses: An issue I found is that many of the important plots that support claims in the analysis section were punted to the supplementary material. I think the paper could be better organized so that the core results were included in the main text and the supplement only contained additional results that would be helpful for building intuition. Main Experiment: The basic setup here makes sense. However, I don’t understand why the “global merging” (or standard ToMe) was not included here, since comparing local merging to it supports the main novelty in the paper.
Secondly, I’m confused about what “acceleration” means, and what measure is used here - is it wall clock time? Throughput? GFLOPS? All those base numbers should also be reported, and if that would take too much space, the results should be split into multiple tables. Finally, the Token Merging method (and local merging) depends on a parameter r that controls the number of tokens merged per layer. I don’t see any discussion of this anywhere except in Figure 3; what was the value of r used? Ablation of local merging: There isn’t really an ablation study, since there aren’t many components. However, an important experiment to include would be the effect of increasing the neighborhood size k. There is no such analysis in the main text or the supplement, which is an issue - the main point of the paper is that locality is an important property and this should be the main focus of the experiments apart from the main result. Additional understanding of token merging improvement through signal processing: I think this set of experiments was interesting. The premise of this experiment is that local merging improves forecasting quality, which was not observed in the vision setting but seems to occur according to Figure 2 (which should be much larger, as it is one of the more interesting results of the paper). However, the experimental design to validate this doesn’t make that much sense to me. Applying Gaussian filtering does not demonstrate that this is what token merging actually does, nor does showing that combining it with Gaussian filtering improves the MSE. I may be missing something here from the explanation, so please feel free to correct me if my understanding is wrong. Supplementary Material: I reviewed all of the supplementary material, as it seems that lots of the core results of the paper were found in the supplement.
I think the paper should be reorganized so that readers do not need to frequently consult tables and plots in the supplement to find evidence supporting the main claims. Relation To Broader Scientific Literature: Token merging showed that for inputs with many redundant tokens, similar tokens can be combined to get essentially the same results while significantly speeding up the model. This has been shown mostly in computer vision, but also in language models and the vision-language case. This paper demonstrates that the effect occurs in time series models too. Furthermore, the paper’s analysis section attempts to explain why local merging actually improves performance rather than keeping it the same, as occurs in other merging works. This paper, if all experiments are properly conducted and the claims fully validated, would provide a clean way to accelerate time series models while also supporting the method with empirical intuitions. Essential References Not Discussed: To my knowledge, the main references were all included in the related works section. Some papers that could also be discussed are those that learnably prune tokens, but this is not essential. Other Strengths And Weaknesses: Strengths: The results of the paper are quite impressive. Most works on token merging do not show improvements to the base model, just a speedup. This paper shows impressive speed-ups and a clear improvement in the downstream metrics. I also think that the idea of analyzing why local merging works better is great, and if executed correctly it would provide some key intuitions for others who use this method. Weaknesses: Ultimately this paper is an incremental improvement over the original token merging paper, applying a known technique to a different domain with a small tweak to the original algorithm; there’s not a lot of novelty besides adding the locality.
While I see the value in local merging for time series data, the change is extremely minimal: the method consists of restricting the window to small local neighborhoods and is a couple-line code change. This would be a more valuable contribution if it were better motivated. In the original ToMe work, the matching process has almost negligible runtime; computing cosine similarity is extremely cheap. Is the complexity of this operation actually meaningful? Theoretical complexity does not really translate to wall-clock time, which is what matters for running these models at scale; there is no analysis that justifies this part. Finally, as discussed in the experiments section, the experiments and analysis need to be cleaned up and clarified; I think a lot of important details are currently missing. In the rebuttal phase, I would like to see convincing explanations from the authors of why the experiments as currently shown are sufficient to justify the claims made in the paper. Other Comments Or Suggestions: Typo: L298 "filer" should be "filter". Questions For Authors: Questions: In Table 1, why do Autoformer and Informer have many rows with no speedup? Was token merging not applicable in these cases? Why or why not? The Gaussian + merging line seems to be missing from Figure 4b (Electricity). Is this intentional? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer eLzT, Thank you for taking the time to read our paper and for your valuable comments. We are happy to answer them in the following. **Q:** Local merging is a minimal contribution. \ **A:** We see our main contribution in investigating token merging for time series in great detail. Additionally, we propose new mechanisms to extend token merging to causal decoders, state-space models, and long sequences. (Please see our rebuttal on Review 6Amp for a full list of novelties.) **Q:** Does linear complexity translate to wall-clock time? \ **A:** We think this is a great point. We compare global merging with local merging in sec. 5.8 and tab. 5. Using a subquadratic SSM and a very long context of 16k tokens, we showcase the advantage of linear complexity. Computing the cosine similarity of all tokens in global merging has a significant overhead, resulting in only a 2.92x speed-up, while local merging achieves a greater acceleration of 3.62x. For this rebuttal, we investigated the wall-clock time in more detail. The similarity computation of local merging adds 14% of additional wall-clock time to each Hyena block. For global merging, however, this is an additional 68% of wall-clock time. Thank you for pointing us to this missing detail; we will clarify it in our paper. **Q:** Is preserving causality important for time series? \ **A:** Thanks for bringing this up; you understood it correctly, and we will explain it more explicitly. Preserving causality has two important aspects: First, it enables token merging in decoder models, which are commonly used in time series processing. Previous work only focused on encoders. Second, causality and order seem to be a good inductive bias for time series, as we show in tab. 5. Here, it is better to rely on a smaller merging pool while preserving causality (74% vs. 69% accuracy, L417-418, L424-432). Intuitively, real-world time series are generated by causal processes.
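The complexity and causality arguments above can be sketched in a few lines: with neighborhood size k=1, each token is compared only to its immediate predecessor, so the similarity computation is linear in sequence length and the token order is preserved, whereas a global merging pool needs all-pairs similarities. This is an illustrative simplification, not the authors' implementation: real token merging removes a fixed number r of tokens per layer, while the `threshold` below is a hypothetical stand-in for that schedule.

```python
import numpy as np

def local_merge_step(tokens, threshold=0.9):
    """One causal local-merging pass with neighborhood size k=1.

    Each token is compared only to its immediate predecessor, so the
    similarity computation is O(n) rather than the O(n^2) of a global
    merging pool, and the token order (causality) is preserved.
    """
    merged = [tokens[0]]
    for t in tokens[1:]:
        prev = merged[-1]
        sim = float(np.dot(prev, t) /
                    (np.linalg.norm(prev) * np.linalg.norm(t) + 1e-8))
        if sim > threshold:
            merged[-1] = (prev + t) / 2.0  # average the two similar neighbors
        else:
            merged.append(t)
    return np.stack(merged)

# Four tokens where the 1st/2nd and 3rd/4th are nearly identical.
x = np.array([[1.0, 0.0], [0.99, 0.05], [0.0, 1.0], [0.05, 0.99]])
y = local_merge_step(x)
print(y.shape)  # → (2, 2): fewer tokens, original order preserved
```

Averaging only adjacent, similar tokens is also what gives the adaptive low-pass-filtering behavior discussed elsewhere in this thread: dissimilar neighbors are left untouched.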
**Q:** Gaussian filtering does not demonstrate that this is what token merging does. \ **A:** Gaussian filtering smooths the time series, acting as a low-pass filter. We argue that averaging two tokens also reduces noise, but in an adaptive manner, as only similar tokens are merged. To validate our intuition, we compare local merging to Gaussian filtering on 5 datasets. On datasets where Gaussian filtering improves forecasting quality, local merging also improves quality. On Electricity, both Gaussian filtering and local merging are not effective (fig. 4, 15). We further validate the low-pass-filtering hypothesis by correlating local merging's MSE improvements with dataset properties related to noise (see Dataset properties in sec. 5.4). Please note that we try to show that local merging acts as an adaptive filtering algorithm, not that it is the same as Gaussian filtering. We agree that our comparison is not absolute, but we gather a lot of evidence that local merging can be compared to Gaussian filtering. **Q:** Table 1, Autoformer and Informer have rows with no speed-up. \ **A:** In these experiments, we apply token merging only during inference. This disturbs some models too much. Thus, we report the results without merging (see L228-233, L251-255). However, in sec. 5.2 we show that we can accelerate these models when applying token merging during training. This way, the models can adapt to token merging. **Q:** The Gaussian + merging line is missing from Figure 4b. \ **A:** This is intentional; as you correctly guessed, the Gaussian + merging line exactly followed the Gaussian line. **Q:** Acceleration measure? \ **A:** Thank you for pointing this out. We base our acceleration on the wall-clock time (see L703), as we think it is the practically most relevant measure. However, although not reported in the paper, we notice even larger accelerations in FLOPs in all experiments, as FLOPs do not measure GPU overhead. **Q:** Important plots in the appendix.
\ **A:** Due to the page limit, we moved many results to the appendix. We will restructure the final paper. **Q:** Ablate neighborhood size k. \ **A:** In our preliminary experiments, we find that either k=1 (linear complexity, causality, locality bias) or k=tl/2 (global merging pool) leads to the best results. In many transformer encoders that already exhibit large computational cost, we utilize k=tl/2 to profit from a global merging pool (L679-683). In decoders, SSMs, and for long-sequence processing, we find local merging with k=1 to work best (sec. 5.8). Other values for k were suboptimal. We will add this crucial information to the paper. **Q:** How we chose hyperparameter r. \ **A:** We run 185 hyperparameter optimization trials to find the best r (see L692-694) satisfying our tolerated MSE drop (see L227-230) for every model. In fig. 2, 3, 4, 5, 7, r is plotted as the color bar and the different points in the diagram. Thank you for also finding a typo. We are happy to further discuss any open questions. We present new results in our comment on Review 6Amp. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for taking the time to address my concerns. Unfortunately, I will retain my rating at Weak Reject. The main issue with the paper is that the contribution is minimal. The paper makes a small modification to token merging and applies it to a new domain, without many new insights. In the rebuttal, the authors state that their main contribution is that it applies ToMe to a new domain. However, the experiments of the paper are not really focused on this - they are more focused on understanding and measuring local merging. The authors responded to my wall-clock time concerns by pointing to L703, which states that they focus on FLOPs since it is hardware agnostic. FLOPs are known to be a suboptimal measure of speed - many "efficient" attention methods significantly decrease GFLOPs but lead to no actual wall-clock speedup.
The only experimental measure of wall-clock time is (as the authors state) the SSM-focused analysis in Table 5. I think if the paper is focused on efficiency, there should be many more experiments devoted to understanding the speedup and its effect on quality - this is missing from the paper as it is. The authors addressed my concerns about the hyperparameter r; the analysis is located in the supplement. Finally, I think the paper needs serious reorganization. The experiment section is extremely heavy on text and focuses on experiments that do not support or add to the paper's focus on time-series analysis. Furthermore, a lot of important details are relegated to the supplement, resulting in a supplement that contains more useful information than the main paper. The figures and tables in the main text should be much larger and clearer so that readers can easily tell what the conclusions of experiments are and how they support the main claim of the paper. I would not go so far as Reviewer 1 to say that it is sloppy, but I think it could be seriously improved. If the authors have time to respond to this rebuttal comment, I'd be happy to answer any further questions and reconsider my rating as needed. --- Reply to Comment 1.1.1: Comment: Dear Reviewer eLzT, Thank you for your questions. We will answer them in the following: **Q:** Contribution \ **A:** We summarize the novel aspects of our work below: **Methodological contributions:** - **Adaptation to long sequences:** Time series often consist of very long sequences, as opposed to CV (L122). Here, quadratic global merging introduces a significant computational burden. We propose local merging with linear complexity. - **Decoder-compatible merging:** Existing merging schemes cannot be directly applied to causal decoders. We address this limitation and propose local merging as the first causal merging scheme. In our experiments, causality is a good inductive bias for time series (tab. 5, L417-426), improving MSE.
- **Forecasting:** We extend token merging's application from classification tasks in CV to dense forecasting tasks. To this end, we propose causal unmerging to restore a required number of output tokens. - **SSM:** Prior work focuses on transformers. We are the first to show the effectiveness of token merging in SSMs. - **Dynamic merging:** We propose dynamic merging to mitigate the issue of dissimilar tokens being merged. This adjusts for possible differences between multiple samples in a batch. Prior work utilized fixed merging rates. **New time series specific insights:** Beyond methodological contributions, we investigated dataset- and model-specific properties that predict possible benefits from token merging: - Datasets with high spectral entropy and total harmonic distortion are particularly amenable to token merging. - Some model architectures learn more similar tokens than others, benefiting local merging. - In our updated manuscript we now connect these signal processing metrics to periodicity, sparsity in the frequency domain, and the shape of the time series (please see our last reply to Reviewer 6Amp). - We gain more insights into local merging and argue that it has low-pass-filtering effects. **Actions:** Thank you for pointing out that our contributions should be more prominently listed. We make the following adjustments: - List our contributions in the introduction - In our method, we will have a distinct paragraph for the technical contributions (local merging, causality, unmerging, dynamic merging) **Q:** Wall-clock time \ **A:** We are sorry for this misunderstanding. You correctly pointed out that wall-clock time is the practically most relevant quantity. This is why we *always* report wall-clock-time-based accelerations (see L703: "Besides the inference time as practically most relevant quantity, ..."). Dynamic merging in sec. 5.8 is the *only* case where we report FLOPs. Wall-clock time is used for *all* other experiments.
For the SSM experiment we provided in the rebuttal, we specifically measured the wall-clock time the similarity computation of local vs. global merging takes in a Hyena block. (Highlighting this might have caused confusion.) **Actions:** We will make it clearer in L703 that almost all accelerations are based on wall-clock time (except dynamic merging). **Q:** Restructuring of the paper \ **A:** Thank you for making detailed suggestions on how to improve the write-up of our paper. We are already working on that. Due to the page limit and the large number of experiments we did, we had to resize many figures and move some experiments to the appendix. However, for the final paper, there will be one extra page available for the main text. **Actions:** - We will resize the most important figures and move important details back from the appendix to the main text using the one extra page available - To restructure our experiments, we will describe each experiment at the beginning of sec. 5 and how it is interconnected with the claims. In every experiment subsection, we will further add a final sentence mapping the result to the respective claim / overarching goal. - **Our new structure:** We first investigate the effectiveness of local merging in: 5.1) pretrained transformer models, 5.2) during model training, 5.3) in foundation models, and 5.4) in state-space models. \ Next, in 5.5) we investigate how and in which cases token merging has the biggest benefits and find model- and dataset-specific properties that predict token merging effectiveness. \ Lastly, we explore design choices within our algorithm, such as reducing the sequence length in 5.6) or applying merging dynamically instead of at a fixed rate in 5.7). We think that clarifying these aspects will make our paper more valuable. We now present our results in this structured way. We hope that we could address your concerns, and we think that local merging is a valuable addition for the time series community.
Summary: The paper proposes local token merging to improve transformer efficiency. Building on Bolya 2023, the proposed method appears to identify similar tokens within a local neighborhood (as opposed to all-to-all) and merge them. There are other techniques mentioned in the text, but the write-up is not organized enough to point them out explicitly -- either in the introduction or in the methods section. The method and experiment descriptions make it really difficult to identify and summarize the proposed concepts. There seems to be an attempt to evaluate on many datasets and compare with many models, but the disorganized and sloppy write-up makes it difficult to identify a single narrative or theme. Claims And Evidence: The text of the paper makes it difficult to understand a consistent and clear narrative of the claim. Token merging would reduce the computation of both transformer and state-space models; this is intuitive and follows directly from the model design. The paper seems to be having difficulty taking a single stance on its claims. Lines 243~246, in both columns, seem to claim the proposed local merging improves both accuracy and efficiency. But in Lines 354~356, some results in Table 1 imply reducing tokens would decrease accuracy. I would stick to one narrative: either we say token merging reduces model processing time and the accuracy is not affected, i.e., the accuracy drop is tolerable and accuracy sometimes even increases; or we say we fix the MSE at a certain level and show how much efficiency we can improve at this level. Otherwise, these two issues get tangled up very quickly and obscure the message the manuscript is trying to convey, e.g., Lines 243~255, right column; Sections 5.5, 5.6. Methods And Evaluation Criteria: **Contribution:** I am not even 50% clear on what the contributions of the paper are. From Section 3, it appears the contribution is local similarity computation for merging.
The section does not clarify other important details, e.g., how often tokens are reduced, by how much, and how the tokens are merged. From Line 175, right column, I am guessing the proposed method adopts the exact setting of Bolya 2023 for these. Is restricting the similarity computation and merging to a local neighborhood the main contribution that sets the proposed method apart from Bolya 2023? It seems like the manuscript also proposed a merging technique for decoders, which I am guessing Bolya 2023 does not do, as it is encoder-only. Is this a novel idea? If so, why was it not described in the methods section instead of deferred to the supplement? Is it not something that distinguishes this method from Bolya 2023? Or is it not important enough, or merely incremental? If it is indeed an important distinction, the experiment section should clearly show empirical evidence of its benefits under a separate section. Now, the reader has to search to find out that the experiments on Chronos pertain to the encoder-decoder merging technique. Also, is dynamic merging another technique that is novel and being proposed here? Then why is it mentioned on page 7 under the experiments section? **Exposition:** The exposition of the manuscript is too poor to convey even the appreciable findings and efforts of the authors. Apart from what is already mentioned above, the title does not mention local token merging, which is probably the primary claim the submission makes. Figure 1 confuses readers more than it clarifies: are the numbers ids of tokens? If so, the matrix should be 8x8, if I understood correctly. How do the different colors illustrate anything about the technique? Line 056 seems to give the impression that this is an analysis paper, not a model-design proposal. In the Section 3 methods, it is not clear how token merging in computer vision is relevant for the model description, whereas the token merging for decoders, which sounds novel, is pushed to the supplementary material.
**Experimental results:** The experiment section looks like a dump of "all experiments we did" rather than empirical evidence for the claims made as contributions. See the comments in the section above. Theoretical Claims: --- Experimental Designs Or Analyses: --- Supplementary Material: --- Relation To Broader Scientific Literature: --- Essential References Not Discussed: --- Other Strengths And Weaknesses: --- Other Comments Or Suggestions: I would rethink the whole approach, ponder what approach the paper should suggest to improve how we use transformers or state-space models for time series, and reorganize accordingly. Right now, the only thing readers take away from the submission is that local token merging reduces computation -- and I don't understand why that is not obvious already. Questions For Authors: --- Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Dear Reviewer HVPc, Thank you for taking the time to read our paper. We would like to address your concerns in the following: **Q:** Write-up of the paper \ **A:** We are sorry that the write-up confused you. We will rework the write-up of our Methods and Introduction sections to point out our contributions more clearly. **Q:** Contributions \ **A:** Previous work in CV only used quadratic token merging in non-causal transformer encoders. We design token merging for time series: - Long sequences: Time series often consist of very long sequences, as opposed to CV (L122). Here, quadratic global merging introduces significant computational overhead. We propose local merging with linear complexity (see tab. 5, 16k-long sequences). - Decoders: In CV, token merging is applied to non-causal encoders. However, for time series, encoder-decoder or decoder-only models are commonly used. Local merging is the first causal merging scheme for decoders. Further, causality is a good inductive bias for time series (tab. 5, L417-426), improving MSE. In contrast to images, real-world time series are generated by causal processes. - Forecasting: In CV, most works focus on classification, where only one cls token needs to be preserved. We, however, use forecasting as a dense task where reducing tokens might be more difficult. To this end, we propose causal unmerging to restore a required number of output tokens. - SSM: Previous work only focuses on transformers. We are the first to merge tokens in SSMs. - We propose dynamic merging to mitigate the issue of dissimilar tokens being merged. Previous work utilized fixed merging rates per layer. - Besides these technical aspects, our detailed analysis of local merging, such as why it improves MSE, goes well beyond the existing literature. Due to the page limit and the large number of experiments, we had to move some aspects to the appendix. However, this seems to have caused confusion.
We will list our contributions more clearly in the Method and Introduction sections. **Q:** No experiments regarding causality \ **A:** Preserving causality has two advantages. First, it enables token merging in transformer decoders. Our main experiment in tab. 1 therefore demonstrates causal merging implicitly, as we merge tokens in the encoder and the decoder. Second, causality is a good inductive bias for time series. We show this when comparing local merging with k=1 to global merging in sec. 5.8 and tab. 5. Global merging can exploit a larger merging pool with more similar tokens. However, local merging is still better due to its causal locality bias (74% vs. 69% accuracy, L417-418, L424-432). Reviewers mSjb and 6Amp find our causality claim well supported. Thank you for pointing us to that. We will mention the link to causality more explicitly in these experiments. **Q:** The paper seems to be having difficulty taking a single stance on its claims. Lines 243~246, in both columns, seem to claim the proposed local merging improves both accuracy and efficiency. But in Lines 354~356, some results in Table 1 imply reducing tokens would decrease accuracy. \ **A:** We investigate token merging from different points of view. Sometimes token merging improves MSE while accelerating the model at the same time. This is mostly true for Chronos models in sec. 5.3 and L82-89. In other settings, token merging accelerates the model at some cost in MSE. This is mostly true for the models in tab. 1. We find empirical and theoretical explanations for these different behaviors in sec. 5.4 and 5.5. We think that an evaluation from both points of view (finding the best MSE and finding the fastest model at some MSE cost) is very valuable. Both settings relate to different practical applications, i.e., finding the best or the fastest model. We argue that our objective evaluation is therefore especially valuable, rather than focusing on a single setting.
We will disentangle both settings in the write-up of the final paper. **Q:** Colors in Figure 1 \ **A:** Thank you for pointing us to this figure. The colors illustrate how different merging distances correspond to entries in the similarity matrix and to different values of k, i.e., which merging correspondences are valid for a given k and which reduced similarity set has to be computed. To avoid further confusion, we will reduce the matrix in fig. 1 to 3x3 (tokens 1 to 6). We hope we could address your concerns, and we are happy to answer any follow-up questions. Further, we have made several improvements to our work and present new results in our comments on Review 6Amp. --- Rebuttal Comment 1.1: Comment: I have to politely point out that the rebuttal is as sloppy as the paper itself. Authors, please try to look from a reader's point of view and do not be offended by a reader's honest opinion. - One of my main concerns, i.e., how the submission differs from Bolya 2023, is not precisely answered in the rebuttal. - The reason for this question: *Q: The paper seems to be having difficulty taking a single stance on their claims. Lines 243~246, in both columns, seem to claim the proposed local merging improves both accuracy and efficiency. But Lines 354 ~356, some results in Table 1 imply reducing tokens would decrease accuracy.* was to understand when a researcher in the field would like to use it. The way I am thinking is, when a researcher/practitioner reads a paper, s/he asks how the work helps solve the problem s/he is dealing with. Will it improve accuracy? Will it make her method more efficient? From this write-up, as well as the rebuttal, it is difficult to answer conclusively what the benefit would be. --- Reply to Comment 1.1.1: Comment: Dear Reviewer HVPc, Thank you for detailing your concerns. We will address them in the following: **Q:** Gap between token merging in CV (Bolya 2023) and time series (ours).
\ **A:** Bolya 2023 proposed token merging for computer vision. Prior work only used quadratic token merging in non-causal transformer encoders. We, however, design token merging for time series: **Technical differences:** - Long sequences: Time series often consist of very long sequences, as opposed to CV (L122). Here, quadratic global merging introduces a significant computational burden. We propose local merging with linear complexity. - Decoders: Existing merging schemes cannot be directly applied to causal decoders. We address this limitation and propose local merging as the first causal merging scheme. In our experiments, causality is a good inductive bias for time series (tab. 5, L417-426), improving MSE. - Forecasting: We extend token merging's application from classification tasks in CV to dense forecasting tasks. To this end, we propose causal unmerging to restore a required number of output tokens. - SSM: Prior work focuses on transformers. We are the first to show the effectiveness of token merging in SSMs. - We propose dynamic merging to mitigate the issue of dissimilar tokens being merged. Prior work utilized fixed merging rates. **New time series specific insights:** \ Beyond methodological contributions, we investigated dataset- and model-specific properties that predict possible benefits from token merging: - Datasets with high spectral entropy and total harmonic distortion are particularly amenable to token merging. - Some model architectures learn more similar tokens than others, benefiting local merging. - In our updated manuscript we now connect these signal processing metrics to periodicity, sparsity in the frequency domain, and the shape of the time series (please see our last reply to Reviewer 6Amp) **Actions:** Thank you for pointing out that our contributions should be more prominently listed in our paper.
We make the following adjustments: - List our contributions in the introduction - In our method, we will have a distinct paragraph for the technical contributions (local merging, causality, unmerging, dynamic merging) - In our experiments, we will clarify to which technical aspect the respective experiment contributes. Further, we will describe each experiment at the beginning of sec. 5. - **Our new structure:** We first investigate the effectiveness of local merging in: 5.1) pretrained transformer models, 5.2) during model training, 5.3) in foundation models, and 5.4) in state-space models. Next, in 5.5) we investigate how and in which cases token merging has the biggest benefits and find model- and dataset-specific properties that predict token merging effectiveness. Lastly, we explore design choices within our algorithm, such as reducing the sequence length in 5.6) or applying merging dynamically instead of at a fixed rate in 5.7). **Q:** Will it improve accuracy? Will it make the method more efficient? \ **A:** We appreciate your practical approach, and we will make the practical benefits clearer. We observe two extremes in token merging: - For most models, local merging boosts efficiency at some MSE cost (tab. 1). - On Chronos models and Hyena, we additionally observe improvements in MSE. Here, token merging boosts efficiency and simultaneously improves MSE (faster and better). This results in two interesting settings for practitioners: - Maximum acceleration at an upper-bounded MSE cost - Best MSE while still accelerating the model - We distinguish between both cases in tab. 2 and tab. 5, naming them "fastest" and "best". Overall, token merging leads to Pareto-optimal efficiency-utility trade-offs in all of our experiments. Further, we can predict local merging's benefit from model- and dataset-specific properties, as described earlier.
**Actions:** Following our answer above, we will make the two different approaches clearer (Will it improve accuracy and efficiency? Will it improve efficiency at some MSE cost?) - We define the overarching goal of making models more efficient (in the introduction and at the beginning of the experiments) - In our experiments, we observe everything from faster and considerably worse to faster and even better. - Motivated by this novel behavior, we perform our investigation of why local merging can improve MSE in sec. 5.4 - We will add overarching comments such as "local merging boosts the efficiency of every architecture" and "on Chronos and Hyena, local merging boosts efficiency and simultaneously improves MSE" We now present our results in the structured way described above. We think this makes our paper more valuable, especially due to the large number of experiments. We hope we could resolve your concerns. Further, we would like to point you to our new results: https://figshare.com/s/679d2c1d825228385b2d
Weakly-Supervised Contrastive Learning for Imprecise Class Labels
Accept (spotlight poster)
Summary: This paper proposes a graph-theoretic framework for contrastive learning with weakly-supervised information. The framework proves effective, achieving superior results in noisy label learning and partial label learning by introducing continuous semantic similarity to define positives and negatives. Theoretical analysis shows that the framework can approximate supervised contrastive learning under mild conditions. Claims And Evidence: Yes, this paper provides sufficient theorems and outstanding experimental results to support its claims convincingly. Methods And Evaluation Criteria: Yes, the proposed method adopts a novel view of contrastive learning by framing it as a graph spectral clustering problem, and the edge weights are constructed based on self-supervised connectivity and weakly-supervised information, thereby learning expressive weakly-supervised representations. Theoretical Claims: I have checked all the theoretical claims and proofs, which are correct from my perspective. However, there are still some typos in the proofs. Experimental Designs Or Analyses: The experimental designs adopt two weakly-supervised learning paradigms and several well-known datasets to validate the superiority of the model. However, the analyses are not in-depth enough. Supplementary Material: I followed the paper to review the supplementary material, including the code for reproducing the results. Relation To Broader Scientific Literature: This paper relates to the field of weakly-supervised learning. The excellent experimental results and rigorous theorems may contribute to the broader scientific literature. Essential References Not Discussed: The key references are provided to follow the main idea of the paper, and I have no essential references to add. Other Strengths And Weaknesses: Strengths - The technique is novel.
This paper proposes a graph-theoretic framework for weakly-supervised contrastive learning, which mines the relationships between sample pairs by integrating self-supervised and weakly-supervised information into the augmentation graph. - The theoretical analysis is sound. This paper provides sound theorems to explain how to realize weakly-supervised contrastive learning, and offers a rigorous performance guarantee showing that the proposed framework can approximate supervised learning. - The model performance is outstanding. Many SOTA models are compared against the proposed WSC, and the experimental results show the superiority of WSC in both noisy label learning and partial label learning. Weaknesses - Lack of in-depth analyses of experiment results. The analyses in Section 4.1 and Section 4.2 focus only on the performance improvements, but do not explain why the model performs so well, e.g., how the model works in the weakly-supervised scenario. So more in-depth analyses should be provided. - Some typos. There are some typos in the paper, for example, $\mathcal{D} and \mathcal{D}_{\mathcal{Q}} $ in line 625. Other Comments Or Suggestions: No, I have no other comments or suggestions. Questions For Authors: - Question on hyperparameter settings. I find that there are significant differences in the numerical settings of $\alpha$ and $\beta$, so I want to know how to balance the self-supervised and supervised information in Eq. (5), and how to find ideal $\alpha$ and $\beta$ jointly. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer LHXf: Thanks for your valuable suggestions; we will try to address your concerns, and we are eager to engage in a more detailed discussion with you. > **W1: Lack of in-depth analyses of experiment results...** - We analyze the effectiveness of the proposed method in detail in the theoretical understanding of Section 3. It can be roughly understood in the following order: - **Supervised information improves representation learning.** Theorem 3.4 describes how much the quality of learned features can be improved when using accurate supervised information. It can be seen that the features learned by the graph constructed with supervised information usually have a smaller error upper bound than those learned by only self-connection. - **Weakly supervised information can approximate supervised information.** Through Theorem 3.7, we show that the features estimated using weakly supervised information can approximate, to a certain extent, the features obtained with supervised information. This theorem also indirectly illustrates how the proposed method works in the weakly supervised scenario. - Your comment made us realize that some supplementary explanation is lacking between our theoretical analysis and the experimental verification. In the revised version, we will emphasize in the experimental section why the theoretical results imply that the model is effective. > **W2: Some typos.** We are very sorry for this. We will check typos carefully and correct them in the next revision. > **Q1: Question on hyperparameter settings.** The setting of the hyperparameters $\alpha$ and $\beta$ is related to the number of categories in the dataset. Denoting by $C$ the number of classes, setting the ratio of $\beta$ to $\alpha$ at $2C$ generally achieves promising performance.
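The balancing rule from the rebuttal's answer to Q1 can be sketched as follows. Modeling Eq. (5) as a linear blend of the two edge weights, and the function and parameter names, are our assumptions for illustration, not the paper's exact formulation:

```python
# Hypothetical sketch of the alpha/beta balancing rule described in the rebuttal:
# the ratio beta/alpha is tied to the number of classes C (beta/alpha = 2C).
# Treating Eq. (5) as a linear blend of self-supervised and weakly supervised
# edge weights is an assumed reading, not necessarily the paper's exact formula.

def blended_edge_weight(w_self, w_weak, num_classes, alpha=2.0):
    """Combine a self-supervised and a weakly supervised similarity score."""
    beta = 2 * num_classes * alpha  # rule of thumb reported in the rebuttal
    return alpha * w_self + beta * w_weak

# For a 100-class dataset with alpha = 2, this gives beta = 400.
w = blended_edge_weight(w_self=0.8, w_weak=0.1, num_classes=100)
```

Under this reading, the weak-supervision term dominates more strongly as the number of classes grows, which is consistent with the rebuttal tying the ratio to the dataset's class count.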
Summary: The paper introduces a graph-theoretic framework for weakly supervised contrastive learning, leveraging continuous semantic similarity to better utilize ambiguous supervisory signals from imprecise class labels. This approach enhances model performance on multiple benchmark datasets in noisy and partially labeled settings. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I have gone through the theorems and found no obvious errors Experimental Designs Or Analyses: The paper experimentally evaluates its weakly-supervised contrastive learning framework through extensive testing on noisy and partial label datasets (e.g., CIFAR-10 and CIFAR-100), demonstrating performance improvements over existing methods via quantitative metrics and t-SNE visualizations of learned representations. However, for PLL, it would be better to add experiments on instance-dependent PLL data. Supplementary Material: I have gone through all parts of the supplementary material, especially the PLL and NLL parts. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The paper introduces a graph-theoretic framework for weakly supervised contrastive learning named WSC. 2. Extensive experiments demonstrate that WSC consistently outperforms existing baselines across diverse noise levels. 3. The manuscript is clearly written and well-structured, ensuring ease of readability. Weaknesses: 1. It looks like there's a discrepancy between the caption and the content of Table 1. The caption mentions that both the mean and standard deviation values are reported, but only the mean values are shown. 2. In Table 7, PiCO outperforms the proposed method with a significant performance advantage under the partial ratio setting of 0.1 on the CUB-200 dataset. The 5.89% performance advantage is remarkable.
It would be beneficial to provide an explanation for why the proposed method performs poorly in this scenario. 3. For PLL, the authors lack validation on instance-dependent data. Other Comments Or Suggestions: Please refer to the weaknesses Questions For Authors: 1. It looks like there's a discrepancy between the caption and the content of Table 1. The caption mentions that both the mean and standard deviation values are reported, but only the mean values are shown. 2. In Table 7, PiCO outperforms the proposed method with a significant performance advantage under the partial ratio setting of 0.1 on the CUB-200 dataset. The 5.89% performance advantage is remarkable. It would be beneficial to provide an explanation for why the proposed method performs poorly in this scenario. 3. For PLL, the authors lack validation on instance-dependent data. 4. Only one state-of-the-art (SOTA) method published in 2024 is adopted for comparison, while the others were published in 2022 or earlier. It would be better to adopt more SOTA methods as baselines to further validate the effectiveness of the proposed method. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer kufB: Thank you for your valuable suggestions. We truly appreciate your review and are committed to addressing your concerns with care and attention. We sincerely look forward to engaging in a more in-depth discussion with you, as your insights are essential in helping us improve and refine our approach. > **W1 & Q1: There's a discrepancy between the caption and the content of Table 1.** We are very sorry for this; it is a typo on our part, and we will correct the inconsistency between the caption and the table in the next revision. >**W2 & Q2: In Table 7, PiCO outperforms the proposed method with a significant performance advantage under the partial ratio setting of 0.1 on the CUB-200 dataset. The 5.89% performance advantage is remarkable. It would be beneficial to provide an explanation for why the proposed method performs poorly in this scenario.** - In addition to using contrastive learning to learn similar representations for samples from the same class, PiCO also uses a class-prototype-based pseudo-labeling mechanism. The synergy between contrastive learning and this mechanism has been proven very effective in several fine-grained classification tasks [1]. - Our approach avoids the need for additional complex techniques. By incorporating our contrastive loss into the training objective, we achieve an 8.69% improvement over GFWS in this setting, highlighting the potential for even greater performance gains. - Although no other complex techniques are used, our method achieves the best performance in most of the remaining experiments, which also shows the effectiveness of our method in most scenarios. Reference: [1] Partial label learning: Taxonomy, analysis and outlook. *Neural Networks* 161 (2023): 708-734.
> **Q3: Lack of validation on instance-dependent data in PLL.** Following the works in [1, 2], we have expanded our experimental results to include the biased-label settings associated with four fine-grained instance-dependent partial label datasets. The table below demonstrates that our method achieves marginally better performance than the current state-of-the-art (SOTA) approaches, and notably, without relying on any additional tricks or techniques. In the next version of our paper, we plan to incorporate more comprehensive comparative experiments to further validate our findings. | | WSC(Our) | CEL[1] | DIRK[2] | NEPLL[3] | IDGP[4] | | :-----: | :-------: | :-------: | :-----: | :------: | :-----: | | CUB200 | **69.10** | 68.60 | 66.60 | 62.88 | 58.16 | | CARS196 | **87.96** | 86.22 | 85.31 | 85.05 | 79.56 | | DOGS120 | **79.65** | 78.18 | 75.97 | 74.84 | 74.86 | | FGVC100 | 77.94 | **78.36** | 76.86 | 75.36 | 72.48 | The experimental details are shown in the table below ($\mathrm{resnet34}^{*}$ denotes resnet34 pre-trained on ImageNet): | Hyper-parameter | CUB200 | CARS196 | DOGS120 |FGVC100| | :---: | :---: | :---: | :---: | :---: | | Model | $\mathrm{resnet34}^{*}$ | $\mathrm{resnet34}^{*}$ | $\mathrm{resnet34}^{*}$ |$\mathrm{resnet34}^{*}$| | Image Size | 224 | 224 | 224 |224| | Batch Size | 256 | 256 | 256 |256| | Learning Rate | 0.1 | 0.1 | 0.1 |0.1| | Weight Decay | 1e-4 | 1e-4 | 1e-4 |1e-4| | LR Scheduler | Cosine | Cosine | Cosine |Cosine| | Training Epochs | 500 | 500 | 500 |500| | Classes | 200 |196 | 120 |100 | |$\alpha$|2|2|2|2| |$\beta$|0-400|0-400|0-240|0-200| > **Q4: Only one state-of-the-art (SOTA) method published in 2024 is adopted for comparison, while the others were published in 2022 or earlier.** - Recent advances in partial label learning have primarily concentrated on instance-dependent partial label problems, exemplified by several notable methodologies: CEL [1], a class-wise embedding-guided disambiguation approach; DIRK [2], a
self-distillation-based label disambiguation framework; NEPLL [3], which employs normalized entropy for sample selection; and IDGP [4], a generative method modeling candidate label generation processes. - While these methods provide thorough investigations into label disambiguation mechanisms for instance-dependent scenarios, they notably neglect the critical exploration of leveraging ambiguous information for representation learning, a research gap addressed by our work. Comprehensive comparative experiments under instance-dependent partial label settings will be included in our subsequent manuscript version, accompanied by detailed discussions in the related work section. Reference: [1] Mixed Blessing: Class-Wise Embedding guided Instance-Dependent Partial Label Learning (KDD'2025) [2] Distilling Reliable Knowledge for Instance-Dependent Partial Label Learning. (AAAI'2024) [3] Candidate-aware Selective Disambiguation Based On Normalized Entropy for Instance-dependent Partial-label Learning. (ICCV'23) [4] Decompositional Generation Process for Instance-Dependent Partial Label Learning. (ICLR'2023) --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the response. They have answered my questions well. I will raise my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, We appreciate your invaluable suggestions for enhancing the quality and clarity of the paper. We will incorporate your comments in the next version of the paper. Thank you again for your time and effort! Best regards, The Authors
Summary: This paper tackles a key challenge in contrastive learning: handling real-world datasets with messy or ambiguous labels. The authors propose replacing traditional binary positive/negative pairs with "continuous semantic similarity," modeled via a graph where edge weights reflect how likely two examples belong to the same class. This framework integrates weak supervision (e.g., noisy/partial labels) into contrastive learning and shows strong empirical results across tasks like noisy label learning (NLL) and partial label learning (PLL). Claims And Evidence: 1. Continuous similarity > binary labels for weak supervision The graph-based approach replaces rigid class labels with similarity scores, allowing gradual refinement of supervision. Theoretical analysis (Prop 2.1, Thm 3.4) shows this approximates supervised contrastive learning under mild conditions. 2. Versatility across weak supervision settings The proposed framework adapts to both NLL (noisy labels) and PLL (ambiguous candidate labels). Results on CIFAR-10/100, CUB-200, and Clothing1M show consistent gains, especially under high noise/ambiguity. Methods And Evaluation Criteria: - The proposed method builds a graph where nodes are augmented samples, and edge weights blend self-supervised similarity (from augmentations) and weakly supervised similarity (derived from labels/noise patterns). - The proposed method is tested on standard NLL/PLL benchmarks (CIFAR, CUB-200, Clothing1M) against 10+ baselines under varying noise/partial-label ratios. Theoretical Claims: I checked the proof of Thm 3.4 which seems okay. I skimmed through Appendix C (proof of Section 3) but did not check thoroughly. Experimental Designs Or Analyses: yes Supplementary Material: yes Relation To Broader Scientific Literature: Weak supervision is very common in the field of machine learning literature, but contrastive learning with weak supervision has not been studied in prior works. 
Essential References Not Discussed: All essential references are discussed. Other Strengths And Weaknesses: ## Strengths 1. This paper directly addresses the "label quality bottleneck" in real-world data. The graph framework elegantly unifies self-supervised and weakly supervised signals. 2. This paper proposes to connect the method to spectral graph theory, providing error bounds (Thm 3.4) and showing how label information improves clustering. 3. Outperforms specialized methods (e.g., DivideMix for NLL, PiCO for PLL) even in high-noise regimes. ## Weaknesses 1. It seems this paper relies on uniform class distribution for theoretical guarantees—how does this hold for imbalanced data? 2. While t-SNE plots (Fig 2) suggest better representations, deeper qualitative insights are missing. Other Comments Or Suggestions: None. Questions For Authors: 1. How does the method handle class imbalance, given the uniform class assumption in theory? 2. What’s the computational overhead of building/maintaining the similarity graph compared to standard contrastive learning? 3. Are there scenarios where discrete labels would still outperform continuous similarity (e.g., clean labeled data)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer x5zp: Thank you for your valuable suggestions. We truly appreciate your review and are committed to addressing your concerns with care and attention. We sincerely look forward to engaging in a more in-depth discussion with you, as your insights are essential in helping us improve and refine our approach. > **W1 & Q1: It seems this paper relies on uniform class distribution for theoretical guarantees—how does this hold for imbalanced data?** **(1) Practical Applicability to Imbalanced Data:** - Our Algorithm 2 explicitly handles imbalanced scenarios by estimating $\mathcal{L}_{wsc}$ without any class prior assumption ($\mathbb{P}(\boldsymbol{y})$). This ensures practical validity under arbitrary class distributions. Algorithm 1 and Proposition 2.3 are merely simplified special cases under class balance, not prerequisites for imbalance handling. **(2) Theoretical Scope:** - The theoretical guarantee of unbiased representation learning under weak supervision (as formalized in Theorem 3.7: "Learning from Weak Supervision Approximates Supervised Learning") is inherently class-prior-invariant, requiring no assumptions about label distribution. However, the downstream generalization analysis under supervised information (Theorem 3.4) currently requires a class-balance assumption for tractability. Hence, the adverse impacts of extreme long-tailed distributions on our framework remain unclear. We will conduct further research on the framework's application under extreme long-tailed conditions in future work. >**W2: While t-SNE plots (Fig 2) suggest better representations, deeper qualitative insights are missing.** In addition to Figure 2, we also show theoretically that the proposed method improves the quality of the learned features.
- Theorem 3.4 bounds the linear probe error of features derived from the perturbation graph through properties of the self-supervised connectivity graph and the perturbation coefficients, and demonstrates that the perturbation graph increases the density of intra-class connections without adding extra-class connections. - In Theorem 3.7, we further show that the features learned from finite samples and weakly supervised information can approximate the features learned from supervised information. - Finally, we present the linear probe error of the features learned by our framework in Corollary 3.8, which can be regarded as a qualitative analysis of the learned features. >**Q2: What’s the computational overhead of building/maintaining the similarity graph compared to standard contrastive learning?** Compared with standard contrastive learning, our additional computational overhead mainly comes from the single-step matrix multiplication used to construct the semantic similarity, that is, $S((\tilde{x},\tilde{q}), (\tilde{x}',\tilde{q}'))$. This step adds little computational cost and is negligible compared to standard contrastive learning, so the computational overhead of our method is essentially the same as that of standard contrastive learning. > **Q3: Are there scenarios where discrete labels would still outperform continuous similarity (e.g., clean labeled data)?** - The concept of continuous semantic similarity we propose is designed for weakly supervised learning, and our method is empirically very effective under weakly supervised information. - Theorem 3.7 explains the difference between the two kinds of features.
The third and fourth terms on the right side of the inequality characterize the error induced by the use of weakly supervised information: the third term can be regarded as an estimation error term, and the fourth term describes the extent to which the weakly supervised information affects the learned representation. If the supervision is completely accurate, the analysis shows that our proposed framework is approximately equivalent to supervised contrastive learning.
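The "single-step matrix multiplication" mentioned in the rebuttal's answer to Q2 can be sketched as follows. Representing each weakly labeled sample by a class-probability vector is our assumption for illustration; the paper's exact construction of $S((\tilde{x},\tilde{q}), (\tilde{x}',\tilde{q}'))$ may differ:

```python
import numpy as np

# Hypothetical sketch: if each augmented sample carries a soft class-probability
# vector q derived from its weak label (noisy or partial), continuous semantic
# similarity for a whole batch reduces to one matrix multiplication, as the
# rebuttal describes. Names and construction are assumptions for illustration.

def semantic_similarity(Q, Q_prime):
    """Q, Q_prime: (batch, num_classes) arrays whose rows sum to 1.

    Returns a (batch, batch) matrix whose (i, j) entry is the probability
    that samples i and j belong to the same class under the soft labels.
    """
    return Q @ Q_prime.T

rng = np.random.default_rng(0)
Q = rng.dirichlet(np.ones(10), size=4)  # 4 samples, 10 classes
S = semantic_similarity(Q, Q)           # symmetric here, entries in [0, 1]
```

The cost is one `(batch, C) @ (C, batch)` product per batch, which supports the rebuttal's claim that the overhead is negligible next to the forward and backward passes of standard contrastive learning.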
Summary: This work rethinks contrastive learning for noisy real-world settings by replacing rigid class-based positive/negative sampling with adaptive, graph-driven “semantic similarity”. By blending self-supervised augmentations with weak supervision signals (e.g., noisy/partial labels), the method achieves state-of-the-art performance while offering theoretical guarantees. Claims And Evidence: Claim #1: Instead of treating labels as binary (same/different class), the authors model similarity as a continuous measure derived from label noise patterns or partial candidate sets. Evidence #1: This avoids brittle assumptions about label correctness, which is crucial for messy real-world data. On Clothing1M (real-world noisy labels), the method achieves best performance, outperforming prior works as reported in experiments. Claim #2: Augmented samples form nodes in a graph, with edges weighted by both augmentation-based similarity (e.g., two crops of an image) and label-derived signals (e.g., estimated noise patterns). Evidence #2: By framing the problem through spectral graph theory, the authors prove the learned representations approximate supervised contrastive learning (Corollary 3.8), a novel theoretical bridge. Methods And Evaluation Criteria: Method: Constructs a graph where nodes are augmented samples, and edge weights combine (1) self-supervised similarity (via data augmentations) and (2) weakly supervised similarity (derived from label patterns or noise estimates). From this graph, a contrastive learning objective is derived by solving a spectral clustering problem. Evaluation: Evaluated on multiple benchmarks spanning synthetic noise (CIFAR), real-world noise (Clothing1M), and fine-grained ambiguity (CUB-200). This paper compares with many strong baselines in LNL and PLL, which demonstrates its superiority. Theoretical Claims: I have checked the proof of Proposition 2.3. and Corollary 3.8 which appears valid. 
Experimental Designs Or Analyses: yes Supplementary Material: yes Relation To Broader Scientific Literature: The paper builds on prior work (i.e., Provable guarantees for self-supervised deep learning with spectral contrastive loss) and extends it to weak supervision tasks. Essential References Not Discussed: All essential references are discussed. Other Strengths And Weaknesses: **Strengths** S1. Weak supervision is ubiquitous in real-world data, yet contrastive learning under such conditions remains underexplored. This work fills that void. S2. Connects spectral graph theory with representation learning, providing error bounds while delivering strong empirical results. S3. Outperforms specialized methods even in extreme noise regimes (e.g., 90% label noise on CIFAR-100). **Weaknesses** W1. Contrastive learning is designed to improve representation learning; however, except for Figure 2, the paper does not provide sufficient evidence that shows its superiority in representation learning. Other Comments Or Suggestions: None. Questions For Authors: Q1. Can the proposed method be applied to weakly-supervised multi-label learning problems? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer jhEr: Thank you for your valuable suggestions. We truly appreciate your review and are committed to addressing your concerns with care and attention. We sincerely look forward to engaging in a more in-depth discussion with you, as your insights are essential in helping us improve and refine our approach. > **W1. The paper does not provide sufficient evidence that shows its superiority in representation learning.** In addition to Figure 2, we also show theoretically that the proposed method improves the quality of the learned features. - Theorem 3.4 bounds the linear probe error of features derived from the perturbation graph through properties of the self-supervised connectivity graph and the perturbation coefficients, and demonstrates that the perturbation graph increases the density of intra-class connections without adding extra-class connections. - In Theorem 3.7, we further show that the features learned from finite samples and weakly supervised information can approximate the features learned from supervised information. - Finally, we present the linear probe error of the features learned by our framework in Corollary 3.8, which can be regarded as a qualitative analysis of the learned features. > **Q1. Can the proposed method be applied to weakly-supervised multi-label learning problems?** - The proposed method is based on the semantic similarity between samples, whose construction depends on the transfer conditions approximately satisfied by each sample. It is not specially designed for multi-label classification tasks and cannot be directly applied to multi-label classification. - We believe that by further extending the concept of semantic similarity, it is possible to apply our method to weakly-supervised multi-label learning problems. We will explore this in future work.
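Since the reviews above identify the spectral contrastive loss of HaoChen et al. (2021) as the basis of this framework, a minimal NumPy sketch of that loss (not the paper's weakly supervised variant) may help ground the spectral-clustering view they mention:

```python
import numpy as np

# Population spectral contrastive loss (HaoChen et al., 2021):
#   L(f) = -2 * E[f(x)^T f(x+)] + E[(f(x)^T f(x'))^2]
# Minimizing it performs spectral clustering on the augmentation graph. WSC's
# weakly supervised edge weights would change how positive pairs (z, z_pos)
# are drawn; that part is omitted in this simplified sketch, which also keeps
# all cross pairs (including matching indices) in the negative term.

def spectral_contrastive_loss(z, z_pos):
    """z, z_pos: (batch, dim) embeddings of two augmentations of the same inputs."""
    pos_term = -2.0 * np.mean(np.sum(z * z_pos, axis=1))
    neg_term = np.mean((z @ z_pos.T) ** 2)
    return pos_term + neg_term

# Sanity check with orthonormal embeddings of a 2-sample batch:
# pos_term = -2 * 1, neg_term = (1 + 1) / 4, so the loss is -1.5.
loss = spectral_contrastive_loss(np.eye(2), np.eye(2))
```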
LaMAGIC2: Advanced Circuit Formulations for Language Model-Based Analog Topology Generation
Accept (poster)
Summary: This paper introduces LaMAGIC2, a circuit formulation approach for analog topology generation. The authors identify limitations in previous methods, particularly LaMAGIC, which used inefficient circuit representations with quadratic token length complexity and showed low sensitivity to numeric input precision. Experimental results show that LaMAGIC2 achieves 34% higher success rates under tight tolerance conditions (0.01) and 10X lower MSEs compared to LaMAGIC. Claims And Evidence: Claim: LaMAGIC2 achieves higher success rates under tight tolerance conditions. Evidence: The authors provide comprehensive experiments comparing success rates across different tolerance levels (0.01-0.1), showing LaMAGIC2 outperforming baselines. Claim: SFCI reduces token length complexity to O(|V| + |E|). Evidence: The authors provide theoretical analysis and empirical measurements of token lengths (Table 1), showing significant reduction compared to matrix-based formulations. Claim: Component-type tokens improve circuit structure learning. Evidence: The ablation study comparing SFCI with SFCI-NCT (without component-type tokens) provides clear evidence supporting this claim. Methods And Evaluation Criteria: - Circuit formulations: The authors provide clear descriptions and analyses of different formulations, highlighting their advantages and disadvantages. - Comparison with baselines: The authors compare with both previous formulations from LaMAGIC and an RL-search method, providing a comprehensive evaluation. - Transferability evaluation: Testing on more complex 6-component circuits with limited training data is a practical approach to evaluate generalization capabilities. Theoretical Claims: The paper does not present formal mathematical proofs but makes theoretical claims about computational complexity.
Experimental Designs Or Analyses: The experimental design is generally sound: The authors use the same dataset as LaMAGIC for fair comparison, with appropriate splits for training and evaluation. Model architecture: Using the same Flan-T5-base architecture as LaMAGIC ensures fair comparison. The simulation-based evaluation using NGSPICE provides realistic performance metrics. The approach of training on 3,4,5-component circuits and fine-tuning on limited 6-component circuits is a reasonable test of transferability. Supplementary Material: Yes. Relation To Broader Scientific Literature: It builds upon prior work in search-based approaches for analog topology design. Essential References Not Discussed: Recent work on graph generation using language models beyond circuit design, which could provide additional context for the token-based graph representation approach. Other Strengths And Weaknesses: Strengths: The paper presents a clear analysis of the limitations of existing formulations before proposing improvements. The proposed SFCI formulation is elegant and addresses multiple issues simultaneously. The experimental results are comprehensive and convincingly demonstrate the advantages of the proposed methods. Weaknesses: The paper could benefit from more discussion of failure cases or limitations of the proposed formulations. While the paper shows improved performance on 6-component circuits, it's unclear how well the approach would scale to even larger circuits. Other Comments Or Suggestions: NA Questions For Authors: Have you investigated how the choice of language model architecture influences the performance of various formulations? For instance, do models with longer context windows alter the relative benefits of SFCI compared to matrix-based formulations? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### Addressing limitations Thanks for your advice on discussing the weaknesses of our methods. This is a helpful idea to contribute more to the community. The circuit space grows exponentially as the number of nodes increases. In addition, the simulation time for larger circuits also increases, causing scarcity of training data and limiting the model's generalizability. Thus, this one-shot generation approach may become less effective for significantly larger circuit designs. To address these challenges, in future work we are developing search-based decoding methods with our models to generate optimized circuits for larger and more diverse design spaces. For example, Monte-Carlo tree search (MCTS) is a promising method to expand language model capability through test-time computation. This integration can balance the strengths of generative and search-based approaches, enhancing the quality and practicality of automated analog circuit design solutions. ### Response to Question on Architecture Impact In our experiments, all circuit representations (both SFCI and matrix-based) fit within a single 1024-token context window, and no context-window shifting is necessary during generation. Under this setting, we observed that our SFCI consistently outperformed matrix-based formulations. The reduced token length and the use of component-type tokens in SFCI improve model performance and yield faster convergence (23.0% fewer training steps compared to SFM, as mentioned in our response to Reviewer 2sgz). Given that our experimental setup does not require multiple context windows, increasing the window size (e.g., from 1024 to 2048 tokens) would not directly influence the relative benefits of SFCI. However, we expect that as circuit size scales up (e.g., larger circuits exceeding one context window), SFCI would provide greater advantages.
Specifically, when matrix-based formulations need context-window shifting, this will lead to degraded performance, since the model cannot see the entire circuit during generation. In contrast, SFCI would be more suitable for generating larger circuits without context-window shifting due to its compact token length.
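As a rough illustration of the asymptotic gap discussed in this review and rebuttal, the sketch below counts encoding length for an adjacency-matrix formulation versus an edge-list (SFCI-style) formulation of a small circuit graph. The per-vertex and per-edge token constants are invented for illustration and are not LaMAGIC2's actual tokenizer counts:

```python
# Hypothetical token-count comparison for a circuit graph with |V| nodes and
# |E| connections. A matrix-style formulation emits one token per adjacency-
# matrix entry, O(|V|^2); an edge-list formulation emits a few tokens per
# vertex and per edge, O(|V| + |E|). Constants here are assumptions.

def matrix_formulation_tokens(num_vertices):
    # one token per adjacency-matrix entry
    return num_vertices * num_vertices

def edge_list_formulation_tokens(num_vertices, num_edges,
                                 tokens_per_vertex=2, tokens_per_edge=3):
    # e.g., a component-type token plus an identifier per vertex;
    # two endpoints plus a separator per edge
    return tokens_per_vertex * num_vertices + tokens_per_edge * num_edges

# A sparse circuit: 10 nodes, 12 connections.
dense = matrix_formulation_tokens(10)          # 10 * 10 = 100
sparse = edge_list_formulation_tokens(10, 12)  # 2*10 + 3*12 = 56
```

Because real circuits are sparse (|E| grows roughly linearly with |V|), the gap widens quadratically as circuits scale, which is why the rebuttal expects SFCI's advantage to grow for circuits that would otherwise exceed one context window.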
Summary: This paper introduces LaMAGIC2, an improved approach for language model-based analog topology generation. It proposes SFCI, which enhances component recognition, reduces token complexity from $O(|V|^2)$ to $O(|V| + |E|)$, and improves numerical precision sensitivity. The method achieves a 34% higher success rate under strict tolerance and 10X lower MSE compared to prior work. It also generalizes better to complex circuits, improving MSE by up to 58.5%. Future work includes extending SFCI to transistor-based circuits and integrating search-based techniques. Claims And Evidence: The paper claims that LaMAGIC2 reduces token complexity, improves component recognition, enhances numerical precision, and increases transferability through a sparse representation. These claims are well-supported, as the method effectively streamlines circuit encoding, improves model learning, and aligns with real-world circuit sparsity. Methods And Evaluation Criteria: The proposed methods and evaluation criteria align well with language model-based analog topology generation. SFCI effectively reduces token complexity, improves component recognition, and enhances numerical precision, all essential for accurate circuit generation. Success rate and MSE metrics appropriately measure performance, while the benchmark dataset from LaMAGIC provides a diverse set of 3–6 component circuits for evaluation. Theoretical Claims: The paper does not present formal proofs for its theoretical claims but instead supports its methodology through empirical results. The key theoretical claim is that reducing token complexity from $O(|V|^2)$ to $O(|V| + |E|)$ improves efficiency and model performance. While this claim is reasonable given the sparsity of real-world circuits, it is not formally proven. 
The paper demonstrates the impact of this reduction through token length statistics (Table 1) and empirical success rate/MSE comparisons (Tables 2–5), but it does not provide a mathematical proof showing how this complexity reduction affects training dynamics or inference efficiency. Experimental Designs Or Analyses: The experimental design is generally sound, with well-defined metrics, meaningful comparisons, and ablation studies. Success rate and MSE effectively assess circuit generation, and the LaMAGIC dataset provides a reasonable benchmark. Comparisons with prior methods, including RL-based search and different formulations, demonstrate performance gains, while ablation studies on SFCI variants validate key design choices. Supplementary Material: I have briefly read all the supplementary materials. Relation To Broader Scientific Literature: The paper builds on LaMAGIC and related work in language model-based circuit generation, improving token efficiency, numerical precision, and transferability by incorporating structured tokenization, float-input formulations, and efficient graph encoding, aligning with broader trends in transformer-based design automation. Essential References Not Discussed: As far as I am concerned, the references are sufficient. Other Strengths And Weaknesses: The paper is clearly written, effectively identifying the limitations of previous methods and explaining how each issue is addressed. The structured presentation makes it easy to follow the improvements and their impact on circuit generation. Weaknesses are shown in other parts. Other Comments Or Suggestions: The claim that reducing token complexity from $O(|V|^2)$ to $O(|V| + |E|)$ improves efficiency is intuitive, as shorter sequences generally lead to faster inference and reduced memory usage in transformer models. However, the paper lacks explicit empirical analysis to substantiate this impact. 
A more detailed discussion on how token reduction influences training time, convergence speed, and inference efficiency would strengthen the argument. Questions For Authors: Can you provide some results in terms of computational efficiency, e.g., training time or convergence speed compared to previous methods? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## Question for computational efficiency Based on the question, we further record the number of training steps required for the matrix formulation (SFM) and the succinct canonical formulation with identifier (SFCI). Specifically, SFM saturates at 8943 steps, and SFCI converges at 6886 steps. This shows that our SFCI reduces training steps by 23.0% compared to SFM, thus enhancing computational efficiency. We will include these experimental results if the paper is accepted! --- Rebuttal Comment 1.1: Comment: Thank you for your reply, which addressed my concerns. I will retain the score.
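As a quick check, the 23.0% figure in the rebuttal above follows directly from the reported step counts:

```python
# Training step counts reported in the rebuttal above.
sfm_steps, sfci_steps = 8943, 6886

# Relative reduction of SFCI over SFM.
reduction = (sfm_steps - sfci_steps) / sfm_steps
print(f"SFCI reduces training steps by {reduction:.1%}")
```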
Summary: This paper addresses the automation of analog topology design, which aims to determine the optimal connections between given nodes while satisfying various constraints. Existing methods, including search-based and reinforcement learning-based approaches, are often inefficient. This paper analyzes the structure of a large language model-based approach, LaMAGIC. To tackle the challenges of low-precision sensitivity and long token lengths, the paper introduces two circuit representations: SFM and SFCI. SFM simplifies numerical input representation, while SFCI preserves a sparse graph structure with improved component-type recognition, leading to more efficient and accurate circuit generation. The proposed method, LaMAGIC2, outperforms baseline approaches by achieving higher success rates under strict tolerances, reducing mean squared errors by up to 10×, and demonstrating superior transferability to more complex circuits, with improvements of up to 58.5%. Claims And Evidence: The proposed methods, SFM and SFCI, are well supported by Figure 3, which also effectively illustrates the disadvantages of previous methods. Methods And Evaluation Criteria: The proposed method is reasonable, but it primarily involves prompt engineering, which is a relatively naive mechanism. The approach presented by the author is an extension of the previous method, but a more advanced algorithm is needed for further improvement. Nevertheless, the evaluation criteria are appropriately set. Theoretical Claims: There is no theoretical validation required, but it would be beneficial to analyze or establish a connection to the LLM structure to explain why the proposed method achieves a higher success rate. Additionally, providing a reference that supports the main idea would further strengthen the argument. Experimental Designs Or Analyses: The author adhered to the experimental design of previous studies, ensuring the validity of the results.
Additionally, while the ablation studies are appropriately conducted, it would be necessary to perform experiments using a different LLM model architecture, given that this paper is primarily experimental. Supplementary Material: The paper provides only the training details, including a brief description of the model structure and experimental settings. To enhance clarity and support the findings, more detailed information about the model, as well as additional cases similar to Figure 3, which highlight the differences between the proposed and previous methods, are needed. Relation To Broader Scientific Literature: The author discusses search-based and RL-based methods; however, to provide a comprehensive understanding of the automation of analog topology design, more related work should be included. Additionally, sufficient information on analog topology design using LLMs is needed. The author refers to "one such approach" for the direct mapping from performance requirements to circuit topologies, but a brief explanation of why LaMAGIC was chosen for this study would further clarify the rationale behind the selected method. Essential References Not Discussed: All essential references are covered; however, additional related works should be included to provide a more comprehensive background and context for the study. Other Strengths And Weaknesses: This paper presents an effective method that integrates large language models (LLMs) to automate analog topology design. The proposed approach is practical and has a significant impact on automation system design. However, the method is somewhat naive, as it primarily relies on prompt engineering. Other Comments Or Suggestions: There are no further comments for the author. Questions For Authors: 1. Why the author specify 'LaMAGIC' for the baseline algorithm? Is there no other similar research to handle the limitation of the LaMAGIC? 2. 
Can the author give a more detailed explanation of how the authors reduced the token length complexity to $O(|V|+|E|)$? It is quite confusing. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: ## Clarification on previous work and our methodology We thank the reviewer for this important comment. Our paper focuses on developing supervised fine-tuning (SFT) methods for language models in the analog topology generation task. Our SFCI formulation contributes these key innovations: 1. It proposes a compact canonical form with component-type tokens to enhance the learning of different circuit devices. 2. It increases the attention on the float-valued inputs of circuit specifications, thus effectively capturing the relations between numerical circuit specifications and circuit topologies. LaMAGIC was selected as the primary baseline because, to the best of our knowledge at submission time, it is the only prior work using SFT with transformer-based LMs for analog topology generation. Additionally, we compare with an RL search method for power converters to comprehensively demonstrate the benefits of our generative approach over conventional search techniques. These two methods (LaMAGIC and RL) cover the previous works on power converter topology generation well, and we have thoroughly discussed other related methods in Section 2.3 of our paper. ## Question for complexity We appreciate the reviewer’s question and apologize for any confusion caused by our initial complexity characterization. In our original submission, we described the token length complexity as $O(|V| + |E|)$, assuming a simple graph scenario where the SFCI representation is a vertex-to-edge adjacency list for a graph with $|V|$ vertices and $|E|$ edges. Upon careful re-examination, we realized this characterization was inaccurate, as our formulation is actually a hypergraph represented by an edge-to-vertex adjacency list. ### Correct Hypergraph Complexity Analysis Each hyperedge $e_i$ has $k_i$ vertices.
Therefore, the total token length, representing the sum of vertex incidences across all hyperedges, is: $$ \sum_{i=1}^{|E|} k_i $$ For our power converter devices, each vertex appears in at most two hyperedges: $d(v) \leq 2$. Thus, the total number of vertex incidences, which equals the sum of vertex degrees over all nodes, is: $$ \sum_{i=1}^{|E|} k_i = \sum_{v \in V} d(v) \leq 2|V| $$ This upper bound implies the total token length is at most $2|V|$, which simplifies to $O(|V|)$. This result indicates that the token length complexity of SFCI is independent of the number of edges $|E|$. The $O(|V|)$ token length of SFCI demonstrates its compactness compared to the $O(|V|^2)$ length of matrix formulations. We will revise the token length complexity to $O(|V|)$ and include the above proof in our manuscript accordingly, to improve clarity and readability for the readers.
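The bound above is easy to sanity-check numerically. The snippet below uses a made-up toy hypergraph (node names are illustrative, not taken from the paper's dataset) in which every vertex has degree at most 2, as assumed for power-converter devices:

```python
# Toy circuit hypergraph in edge-to-vertex adjacency-list form:
# each hyperedge lists the vertices it connects. Names are hypothetical.
EDGES = [("VIN", "Sa0"), ("Sa0", "Sb0", "C0"), ("C0", "VOUT"), ("VOUT", "Sb0", "VIN")]

def token_length(hyperedges):
    """Token count of the adjacency list: the sum of k_i = |e_i| over all hyperedges."""
    return sum(len(e) for e in hyperedges)

def degrees(hyperedges):
    """d(v) = number of hyperedges in which each vertex appears."""
    verts = {v for e in hyperedges for v in e}
    return {v: sum(v in e for e in hyperedges) for v in verts}

print(token_length(EDGES), degrees(EDGES))
```

With $d(v) \leq 2$, the token length equals the degree sum and is bounded by twice the vertex count, matching the $O(|V|)$ claim.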
Summary: This paper proposes LaMAGIC2, introducing succinct formulations (SFM and SFCI) for language-model-based analog circuit topology generation. Compared to the previous LaMAGIC approach, these formulations effectively reduce output sequence length and improve component-type recognition. Experiments demonstrate that LaMAGIC2 achieves higher success rates and lower MSE under strict tolerance conditions. Claims And Evidence: The claims made in the submission are well-supported by clear and convincing evidence. Methods And Evaluation Criteria: The methods and evaluation criteria are well-aligned with the problem, effectively measuring improvements in circuit topology generation. Theoretical Claims: No theoretical claims were involved. Experimental Designs Or Analyses: The experimental design is sound, using benchmark datasets and appropriate metrics. Supplementary Material: The supplementary material was reviewed, primarily focusing on additional experimental details and results. No major issues were identified. Relation To Broader Scientific Literature: The paper builds on prior work in language-model-based analog circuit topology generation, particularly LaMAGIC. It improves upon previous formulations by reducing output sequence length and enhancing component-type recognition. These contributions align with broader trends in applying machine learning, especially transformer-based models, to circuit design automation. However, expanding comparisons to other ML-based circuit design approaches could further contextualize its impact. Essential References Not Discussed: The paper provides a sufficient discussion of prior work, particularly LaMAGIC, and appropriately cites relevant literature. No essential references appear to be missing. Other Strengths And Weaknesses: The paper presents a well-structured study with clear contributions to refining analog circuit topology generation using language models. 
The proposed formulations effectively improve efficiency and component-type recognition. While the methodological novelty is somewhat incremental, the improvements in output sequence length and numerical precision make it a valuable contribution. Other Comments Or Suggestions: 1. The paper focuses primarily on power converter circuits. Expanding experiments to other types of analog integrated circuits or evaluating the method’s transferability to new circuit types would strengthen its generalizability. 2. The proposed approach is tested on circuits with up to six components, which is only a slight scalability improvement over previous work (five components). Evaluating larger-scale circuits would better demonstrate the method’s scalability. Questions For Authors: 1. The paper uses T5 as the underlying model. Have the authors tried or considered extending their formulations to other language models (e.g., GPT variants)? How generalizable is the proposed approach across different model architectures? 2. Have the authors conducted comparative experiments with advanced commercial LLMs (e.g., GPT-o1, Claude 3.7, etc.) in circuit topology generation tasks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### Q1: Choice of Model Architecture We appreciate the suggestion to evaluate our proposed formulations using other model architectures. In this work, we adopted T5 to maintain architectural consistency with the previous work LaMAGIC, enabling a fair comparison focused on the impact of new formulations. We have considered extending to decoder-only models (e.g., GPT-2), which are often more suitable for scaling up the parameter count compared to encoder-decoder models. However, due to the limited rebuttal timeline, we were unable to complete the experiments. Our proposed float-input setting and SFCI formulation are architecture-agnostic and can be applied to any transformer model with sequential inputs. Specifically, the float-input setting directly feeds numerical specifications into the model without discretization by traditional tokenizers, improving numerical representation. Also, the SFCI formulation further provides a compact representation of circuit topologies, which improves the learning for larger circuits. Another reason we chose T5 is its smaller model size compared to more general models, which makes it more suitable for application-specific use cases like this work while also being more energy-efficient and low-cost. ### Q2: Comparison with commercial LLMs Based on your suggestion, we evaluate o1 using a few-shot setup. We provide 100 example circuits as context and evaluate performance on the same 350 specifications used in our main experiments (Figure 5 in the paper).
Below are the results.

Success rates at thresholds from 0.01 to 0.1:

|Threshold| 0.01 | 0.02 | 0.03 | 0.04 | 0.05 | 0.06 | 0.07 | 0.08 | 0.09 | 0.1 |
| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |
|Success rate of o1 | 0.27 | 0.32 | 0.36 | 0.39 | 0.40 | 0.41 | 0.42 | 0.43 | 0.44 | 0.46 |
|Success rate of SFCI | 0.90 | 0.96 | 0.98 | 0.99 | 0.99 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |

MSE of voltage conversion ratio:

| Metric | SFCI | o1 |
|:------:|:----:|:--:|
| MSE | 8e-5 | 0.602 |

This result shows that our model significantly outperforms o1, indicating that our supervised fine-tuning is needed to tackle analog circuit design, which demands high precision and interaction with custom SPICE simulators.
BlockDialect: Block-wise Fine-grained Mixed Format Quantization for Energy-Efficient LLM Inference
Accept (poster)
Summary: This paper introduces BlockDialect, a block-wise fine-grained mixed format quantization technique designed to enhance the energy efficiency of large language model (LLM) inference. Unlike traditional quantization methods that focus on scaling values, BlockDialect assigns a number format to each block using a predefined formatbook, DialectFP4, which consists of multiple FP4 variants (dialects) tailored to different data distributions. The proposed two-stage selection process aims to determine the best dialect for each activation block in real-time, ensuring compatibility with low-precision integer arithmetic. Experimental results demonstrate that BlockDialect outperforms MXFP4 format, achieving up to a 10.78% accuracy gain on LLaMA3-8B while maintaining lower bit usage and only a 5.45% drop compared to full precision. Claims And Evidence: Please refer to **“Methods And Evaluation Criteria”**. Methods And Evaluation Criteria: The accuracy of the proposed method BlockDialect (DialectFP4) has been verified across different models and evaluation tasks, such as perplexity (PPL) and zero-shot downstream tasks. The evaluation results demonstrate BlockDialect can preserve the model’s accuracy better than baselines such as MXFP4. However, one of the important baselines, NVFP4, is missing in the evaluation. As [NVFP4](https://docs.nvidia.com/deeplearning/cudnn/frontend/latest/operations/BlockScaling.html) supports floating-point micro-scaling factors, it can preserve the model’s accuracy better than MXFP4. For the efficiency benchmarks, the paper mainly presents the area and energy consumption of MAC units, while the costs (latency, memory, and energy) of on-the-fly quantization & dequantization units are missing. Theoretical Claims: N/A. Experimental Designs Or Analyses: Please refer to **“Methods And Evaluation Criteria”**. Supplementary Material: Yes. I have read the appendix of this paper, particularly the extra experiment results. 
The authors also provide source code for accuracy simulation in the supplementary material. Relation To Broader Scientific Literature: The paper contributes to accelerators for quantized LLM inference, with a particular focus on improving FP4 data format’s accuracy. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Please refer to **“Methods And Evaluation Criteria”**. Other Comments Or Suggestions: N/A. Questions For Authors: 1) How is the accuracy of BlockDialect compared to NVFP4 quantization? 2) What is the overhead for the 2-stage block-wise quantization in BlockDialect? 3) Is the per-block dialect design compatible with prevalent GEMM accelerator architectures such as tensor core? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Performance comparison of BlockDialect (BDFP4), NVFP4, and MXFP4 across various model sizes and architectures** * We compare accuracy and perplexity for both `Linear` layer quantization and `Full`-path (including activation-activation multiplication) quantization. Perplexity (PL) and common reasoning tasks (CR) evaluations follow the BlockDialect paper; for GLUE (GL), we average six tasks (MRPC, SST2, RTE, QQP, MNLI, QNLI), and for MMLU (ML), we use the representative accuracy from lm-eval-harness framework. We test both 16- and 32-block sizes for a comprehensive comparison. * Overall, BlockDialect outperforms both data types, demonstrating its versatility, with NVFP4 falling between MXFP4 and BlockDialect. Also, in the full-path results, the accuracy gap widens, solidifying BlockDialect’s superiority. * Note that unlike BDFP4 and MXFP4, which use a power-of-two shared exponent, NVFP4’s floating-point scale factor requires costly floating-point operations (e.g., normalization, scale factor multiplication), with overhead increasing as block size decreases. 
###### N/A: Too low to compare
###### **Bold**: Best result among comparable effective bitwidths (NVFP4-32, MXFP4-16, BDFP4-32)

||`Linear`||LLaMA3-1B|OPT-6.7B|Phi-2.7B|MobileLLM-125M|`Full` LLaMA3-1B|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|**Format**|**BlkSize**|**Eff.bit.**|PL↓\|CR↑\|ML↑\|GL↑|PL↓\|CR↑\|ML↑\|GL↑|PL↓\|CR↑\|ML↑\|GL↑|PL↓\|CR↑\|ML↑\|GL↑|PL↓\|CR↑\|ML↑\|GL↑|
|FP16|-||9.8\|60.4\|37.6\|52.6|10.9\|62.6\|N/A\|52.8|9.7\|72.4\|54.5\|64.3|12.5\|46.3\|N/A\|51.2|9.8\|60.4\|37.6\|52.6|
|NVFP4|16|4.5|12.4\|55.2\|32.0\|52.7|12.4\|59.3\|N/A\|47.9|11.3\|68.8\|52.1\|64.9|14.9\|44.2\|N/A\|49.6|17.5\|49.2\|27.2\|50.8|
||32|4.25|12.8\|54.1\|29.1\|53.1|12.5\|59.2\|N/A\|**51.6**|**11.6**\|68.3\|51.0\|**64.9**|15.4\|43.6\|N/A\|**50.5**|20.0\|48.5\|**26.1**\|49.9|
|MXFP4|16|4.31|15.7\|51.4\|27.1\|50.7|19.2\|50.6\|N/A\|49.9|12.6\|69.4\|50.0\|61.4|18.3\|42.2\|N/A\|49.8|53.8\|41.8\|24.2\|49.3|
||32|4.16|15.9\|50.6\|26.3\|50.6|19.2\|50.8\|N/A\|50.3|12.8\|69.1\|50.0\|61.1|18.2\|42.2\|N/A\|49.2|60.0\|40.0\|24.1\|48.9|
|BDFP4|16|4.56|11.5\|56.3\|31.4\|52.7|11.3\|61.3\|N/A\|51.3|11.1\|70.6\|52.1\|62.9|14.3\|44.5\|N/A\|50.4|14.8\|52.2\|27.9\|51.4|
||32|4.28|**12.1**\|**55.5**\|**30.2**\|**53.4**|**11.3**\|**60.3**\|N/A\|51.3|11.8\|**69.9**\|**52.1**\|64.7|**15.1**\|**43.6**\|N/A\|50.4|**17.4**\|**49.5**\|25.9\|**51.2**|

**Overhead for the 2-stage format selection**
* We evaluate the overhead of the 2-stage format selection to highlight its superiority over the conventional Mean Square Error method. Please see our response to Reviewer 2 (ID: 6poj) in the “Resource overhead of real-time MSE calculation” section.

**Compatibility with prevalent GEMM accelerator architectures**
* Given that BlockDialect uses 4-bit unsigned integer MACs and a logical operation-based quantization method, it can be integrated into existing GEMM accelerators.
Moreover, as block-wise operations become more common and commercial accelerators start supporting block-wise quantization formats, integrating BlockDialect will become increasingly seamless. **Overhead of on-the-fly quantization/dequantization** * We synthesize and evaluate an implementation of BlockDialect modules for 32-element processing in SystemVerilog, as an alternative to implementing the GPU kernel within the time constraints. Since we share six values across dialects, we keep the number of cases manageable, enabling a compact combinational logic implementation. Even with a register file, the overhead remains minimal at 600B for 32-element parallel processing. The reported numbers use the 130nm process node at 100MHz. * Since the latency and energy benefits - improved computation efficiency through low-precision MACs and reduced data movement via 4-bit quantization - are clear$^{[1,2]}$, assessing whether the overhead offsets these gains is crucial. As shown in the results, quantization and dequantization logic takes only a few clock cycles, which can be further overlapped with pipelining. Their power and area are comparable to or lower than that of our 32 MAC units, indicating minimal overhead. The overhead of on-the-fly activation quantization can also be amortized as the quantized activation block is reused across a large number of weight blocks. Compared to the resources required for INT8 MACs, the practicality of BlockDialect becomes more evident. To fully realize BlockDialect’s potential, we are taping out an optimized accelerator that will provide end-to-end measurements. - ###### [1] Yuan et al., "LLM Inference Unveiled: Survey and Roofline Model Insights," arXiv, 2024. [2] Argerich & Patiño-Martínez, "Measuring and improving the energy efficiency of large language models inference," IEEE Access, 2024. 
|Module|Latency ($clock\ cycle$)|Power ($mW$)|Area ($\mu m^2$)| |-|:-:|:-:|:-:| |Quantization (including format selection)|5|0.7|42833.6| |Dequantization|1|0.2|6319.8| |32 MACs (Ours)|1|2.2|41319.6| |32 MACs (INT8)|1|6.1|85482.0|
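For intuition, below is a minimal software sketch of the kind of block quantization discussed in this thread: a generic FP4-style (E2M1) value set combined with a power-of-two shared scale. It is an illustrative approximation only, not the authors' DialectFP4 formatbook, format-selection logic, or hardware implementation.

```python
import math

# Positive magnitudes of a generic FP4 (E2M1)-style format. Every level is
# 0.5 * integer, which is what allows scaled values to be multiplied on
# low-precision integer MACs. Illustrative levels, not the DialectFP4 tables.
FP4_LEVELS = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block(block):
    """Fake-quantize one block using a shared power-of-two scale (MX-style)."""
    amax = max(abs(x) for x in block)
    if amax == 0.0:
        return [0.0] * len(block)
    # Smallest power of two mapping the block max into the representable range.
    scale = 2.0 ** math.ceil(math.log2(amax / FP4_LEVELS[-1]))
    out = []
    for x in block:
        level = min(FP4_LEVELS, key=lambda v: abs(v - abs(x) / scale))
        out.append(math.copysign(level * scale, x))
    return out

print(quantize_block([0.8, -0.4]))  # -> [0.75, -0.375], i.e. {3.0, 1.5} * 2^-2
```

Because the shared scale is a power of two, dequantization needs only shifts and the scaled values remain integer multiples of 0.5, matching the efficiency argument made for integer MACs above.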
Summary: This work presents BlockDialect, a block-wise mixed format quantization method for energy-efficient LLM inference. It assigns each block an optimal number format from a predefined formatbook to better capture data distributions. The proposed DialectFP4, a set of FP4 variants, enhances flexibility while maintaining hardware efficiency. A two-stage online format selection method enables efficient activation quantization without costly MSE calculations. Claims And Evidence: Most of the claims in the paper are supported by strong empirical evidence and theoretical analysis, particularly regarding accuracy improvements, hardware efficiency, and training stability. However, while the method is evaluated on LLaMA3-8B, LLaMA2-7B, and Mistral-7B, it remains unclear how well BlockDialect generalizes to other architectures such as GPT, OPT, or hybrid transformer-based models. Additional experiments on more diverse model families would strengthen this claim. Also, not entirely: the method claims to be energy-efficient, but the evaluation of energy consumption provided in Table 5 does not seem to support this claim. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are well-structured and mostly relevant to the quantization challenges in LLM inference. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes, I have reviewed the soundness and validity of the experimental designs and analyses presented in the submission. Overall, the experimental setup is reasonable. However, there are still some points that can be improved: - While the paper claims that BlockDialect is energy-efficient, it does not provide concrete measurements of energy savings in actual LLM inference workloads. The inclusion of MAC unit power analysis is useful but does not fully capture real-world energy consumption across the entire inference pipeline.
A more rigorous validation of this claim would involve reporting end-to-end energy usage when running inference on real hardware accelerators, such as GPUs or TPUs. - The experiments focus primarily on LLaMA and Mistral models, leaving uncertainty about whether BlockDialect generalizes to other architectures, such as GPT, OPT, or hybrid transformer-based models. Given that different model families may exhibit varying sensitivities to quantization, additional evaluations on a broader range of LLM architectures would provide a more comprehensive validation of the approach. - Since BlockDialect assigns per-block format identifiers, there is an inherent trade-off between format flexibility and additional metadata storage requirements. However, the paper does not quantify how much extra memory is needed to store these identifiers or discuss scalability concerns for very large models. Providing a detailed breakdown of the memory overhead and its impact on inference efficiency would enhance the clarity of this trade-off. - While the hardware synthesis results demonstrate energy and area efficiency at the MAC unit level, there is no empirical validation of how BlockDialect affects overall inference latency when deployed on real hardware. Without these benchmarks, it remains unclear whether the proposed method actually accelerates practical LLM inference. Including end-to-end latency measurements would significantly strengthen the practical relevance of BlockDialect. Supplementary Material: The supplementary material provides extended experiments, including additional zero-shot accuracy and perplexity evaluations on the OPT-6.7B model, as well as ablation studies on dialect numbers and block sizes. Relation To Broader Scientific Literature: This work builds on prior research in block-wise and mixed-precision quantization, extending methods like MXFP4 and LLM-FP4 by introducing fine-grained format selection with DialectFP4. 
It aligns with recent advancements in adaptive numerical precision and hardware-efficient LLM training, particularly in optimizing low-power MAC operations. Essential References Not Discussed: N/A Other Strengths And Weaknesses: One of the key strengths of the paper is its practical approach to energy-efficient quantization by introducing a block-wise mixed-format strategy that balances flexibility and efficiency. However, I still have questions about the novelty of the paper, as it builds upon existing mixed-precision and block-wise quantization techniques rather than introducing fundamentally new principles. While the formatbook selection and DialectFP4 variants add flexibility, the core idea of assigning optimal numerical formats based on block-wise distributions has been explored in various forms before, such as [1-3]. [1] Liu R, Wei C, Yang Y, et al. Block-wise dynamic-precision neural network training acceleration via online quantization sensitivity analytics[C]//Proceedings of the 28th Asia and South Pacific Design Automation Conference. 2023: 372-377. [2] Wu X, Hanson E, Wang N, et al. Block-Wise Mixed-Precision Quantization: Enabling High Efficiency for Practical ReRAM-based DNN Accelerators[J]. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2024. [3] Dettmers T, Lewis M, Shleifer S, et al. 8-bit optimizers via block-wise quantization[J]. URL https://arxiv.org/abs/2110.02861, 2022. Other Comments Or Suggestions: See Other Strengths And Weaknesses Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Evaluating the impact of BlockDialect on inference latency and energy consumption** * That’s a valid point. Given the clear energy and latency benefits from low-precision MACs and reduced data movement due to 4-bit weight/activation quantization (including KV cache), we assess the overhead from BlockDialect's additional logic across various aspects to determine if these benefits can be offset. We address this in our response to Reviewer 4 (ID: 8c5N) (see the "Overhead of on-the-fly quantization/dequantization" section). **Broader applicability of BlockDialect to architectures other than LLaMA and Mistral** * In appendix E, we provide experiment results for the OPT-6.7B model. Also, you can refer to our response to Reviewer 4 (ID: 8c5N), section "Performance comparison of BlockDialect, NVFP4, and MXFP4 across various model sizes and architectures," for further experiments including OPT and Phi models. **Metadata storage requirements** * As mentioned on page 6, line 306, "Effective bitwidth" indicates the per-block metadata overhead and we provide the detailed calculation method in Appendix H (page 16). Our results show that BlockDialect achieves better accuracy than the MX format, even with a lower effective bitwidth. Additionally, metadata overhead is mitigated by sharing it across the block, reducing it to less than 0.3 bits per data. Note that even with larger models, the per-block data and metadata ratio remains constant. **Novelty of BlockDialect compared to existing block-wise mixed-precision quantization approaches** * Thanks for providing relevant references. We have carefully reviewed them to clarify the novelty of BlockDialect. 
While block-wise mixed-precision (MP) quantization methods and our mixed-format (same precision) approach share similarities, they differ in key aspects: * MP requires either multiple types of MAC units or a complex modularized MAC with different precision modes, whereas our approach operates efficiently with a single low-precision MAC unit, improving hardware efficiency. * MP introduces irregular memory access patterns or requires reordering mechanisms to manage distributed precision blocks, whereas our method maintains regular memory access without added complexity. * MP relies on complex online tracking, pre-training or calibration with sample datasets to determine precision allocation before quantization, which may limit scalability or adaptability, whereas our approach avoids these requirements entirely. * Due to these factors, MP demands extensive modifications to computation paths and kernel code, while our method only requires integrating quantization/dequantization functions before or after matrix multiplication, making it more portable and easier to deploy. * Additionally, BlockDialect introduces several novel aspects compared to existing quantization techniques: * **Sample dataset agnostic**: Many activation quantization methods rely on sample dataset calibration or pre-training to avoid the high overhead of online processing, sacrificing adaptability. In contrast, BlockDialect eliminates the need for calibration or training by employing an efficient two-stage format selection (the efficiency of the two-stage approach is analyzed in the response to Reviewer 2 (ID: 6poj), section "Resource overhead of real-time MSE calculation") and logical operation-based quantization, enabling practical online processing. 
* **Handling unstructured outliers**: Unlike many activation quantization methods that rely on easily tracked or calibrated structured outliers (e.g., channel-wise magnitude mean), BlockDialect can also handle unstructured outliers through fine-grained block-wise outlier localization. * **Awareness of fine-grained block distribution**: To the best of our knowledge, this is the first mixed-format approach that constructs a diverse set of format candidates based on profiling fine-grained block distributions, rather than selecting from a few predefined standard formats or modifying exponent bias. * **Addressing MX format limitations**: We identify and mitigate the limitations of the MX format, a block-wise quantization format already adopted in commercial products. * **Efficient INT MAC utilization**: By carefully selecting a 0.5 granularity for representable values, BlockDialect enables floating-point representation using scaled integers ($0.5\cdot integer$), allowing direct computation with efficient low-precision integer MAC units. Note that we do not require floating-point operations even for the scaling factor, as we use a power-of-two scaling factor, which can be handled by addition and shifting logic while preserving accuracy. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. I appreciate the authors' clarifications and additional results. I still encourage more systematic evaluation across diverse model families (e.g., GPT, hybrid architectures) to better establish the robustness and adaptability of BlockDialect. Overall, the authors' response improves my understanding of the method’s strengths and novelty, but I believe further validation on real-world deployment aspects is necessary for stronger confidence. I maintain my overall score as Weak Accept. --- Reply to Comment 1.1.1: Comment: * Thank you for your additional comments. 
To address your concerns, we extended our evaluation to include the GPT model and observed that BlockDialect consistently outperforms other data formats across most cases, consistent with our results on LLaMA, Mistral, OPT, and Phi architectures. While time constraints limited us to evaluating only the GPT model, this additional experiment further supports our claim that BlockDialect's block-wise format assignment - being independent of architecture-specific characteristics - does not rely on assumptions about structural outlier patterns or computation flow. As a result, we expect it to extend naturally to hybrid transformer architectures, which often interleave heterogeneous layer types. We believe this strengthens the case for BlockDialect’s broad applicability.

###### MMLU is omitted due to low scores; perplexity is measured with 1024-token sequences

||GPT2-1.5B||`Linear`|`Full`|
|:-:|:-:|:-:|:-:|:-:|
|**Format**|**BlkSize**|**Eff.bit.**|PL↓\|CR↑\|GL↑|PL↓\|CR↑\|GL↑|
|FP16|-||17.4\|53.2\|48.7|17.4\|53.2\|48.7|
|NVFP4|16|4.5|18.6\|50.9\|47.7|18.8\|50.4\|47.8|
||32|4.25|18.5\|50.4\|47.2|18.8\|49.8\|47.2|
|MXFP4|16|4.31|19.0\|51.3\|48.3|20.3\|49.2\|47.7|
||32|4.16|19.0\|51.3\|48.0|20.1\|50.0\|47.5|
|BDFP4|16|4.56|17.9\|51.8\|48.1|18.0\|51.4\|47.6|
||32|4.28|18.1\|51.9\|47.3|18.4\|51.1\|46.8|
Summary: The authors propose BlockDialect, a block-wise fine-grained mixed-format technique that assigns a per-block optimal number format from a formatbook for better data representation. DialectFP4 ensures energy efficiency by selecting representable values as scaled integers compatible with low-precision integer arithmetic. 1. They introduce DialectFP4, a formatbook of FP4 variants (akin to dialects) that adapt to diverse data distributions. The formatbook follows three core principles: 1) minimizing wasted or underestimated ranges, 2) prioritizing the representation of larger magnitudes, and 3) ensuring hardware efficiency. 2. They propose a two-stage approach for online DialectFP4 activation quantization.

Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.

Methods And Evaluation Criteria: The methods make sense.

Theoretical Claims: Probably correct.

Experimental Designs Or Analyses: Experimentally sound.

Supplementary Material: Yes. All.

Relation To Broader Scientific Literature: Efficiency, model compression, and quantization.

Essential References Not Discussed: N/A.

Other Strengths And Weaknesses: The experimental design of this paper considers the latest methods (Quarot) and model series (LLaMA3-8B), making the comparison comprehensive. The authors deserve credit for providing the code.

Other Comments Or Suggestions: I suggest moving Figure 2 to page 5 for easy viewing. In addition, Figure 2 lacks an explanation for some labels, such as "sink".

Questions For Authors: 1. The proposed method eliminates real-time MSE calculation for activation quantization and claims that MSE overlooks the magnitude of data elements. The authors should provide a comparison of resource overhead. 2. The experiments are conducted on models with up to 8B parameters. How does BlockDialect scale to even larger models, such as those with 70B+ parameters?
Are there any additional challenges or optimizations needed to maintain its efficiency and accuracy in such scenarios? 3. The paper explores various block sizes and their impact on performance. However, does the optimal block size vary across different layers? Could a dynamic block size strategy be more effective, where different layers use different block sizes based on their specific characteristics? 4. Given the growing interest in hybrid quantization techniques, how does BlockDialect interact with other methods? Please discuss the possibility of further combining with other strategies like SmoothQuant. 5. While the paper focuses on accuracy and energy efficiency, real-time inference latency is also crucial for practical deployment. How does BlockDialect impact the inference latency compared to full-precision models and other quantization methods? Are there any specific hardware accelerators that could further optimize the latency of BlockDialect? 6. The evaluation focuses on common-sense reasoning tasks. How does BlockDialect affect performance on other downstream tasks, such as machine translation and text summarization? Are there any specific tasks where BlockDialect might show more pronounced benefits or limitations? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:

Rebuttal: **Resource overhead of real-time MSE calculation**
* Qualitatively, the MSE-based method requires 16 rounds (one per dialect) of quantization, each involving FP16 squared-error accumulations for every block element, whereas our 2-stage selection operates efficiently in a single pass using 5-bit fixed-point values, logical operations, and simple counting.
* Quantitatively, we designed the quantization logic in SystemVerilog and synthesized it with a 130nm process. For a fair comparison, we aim to match the latencies as closely as possible while evaluating area and power. Additionally, since FP16 operations in the exact MSE-based approach incur significant overhead, we convert them into a fixed-point representation and truncate the lower bits to reduce complexity. Nevertheless, as shown in the table, the MSE-based logic is 9.3× larger and consumes 10× more power. Notably, the MSE-based logic fails to meet timing at 100MHz (synthesized at 83.3MHz), whereas our logic meets timing constraints even at 250MHz, underscoring the efficiency of our approach and the impracticality of online MSE-based selection.

|Format selection method|Synthesized freq. ($MHz$)|Latency ($clock\ cycle$)|Power ($mW$)|Area ($\mu m^2$)|
|:-:|:-:|:-:|:-:|:-:|
|2-stage based|100|5|0.7|42833.6|
|MSE-based|83.3|8|6.9|399409.3|

**Scalability of BlockDialect**
* The model size upper bound is due to our GPU resource constraints, not BlockDialect limitations. A key advantage of block-wise quantization is its independent block processing, which ensures efficiency and accuracy remain consistent regardless of model size. While there is a potential concern regarding the linear increase in per-block metadata as the model scales, our results in the paper show that we can keep this overhead below 0.3 bits per data element. Additionally, as discussed in the following response, dynamic block sizing (e.g., larger blocks for quantization-insensitive layers) can help mitigate this overhead.
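To illustrate the cost asymmetry quantified in the synthesis table above in software terms, here is a minimal sketch of the two selection styles. The dialect value tables, the threshold, and the one-counter rule are simplified stand-ins of our own, not the actual DialectFP4 selection logic:

```python
# Two toy "dialects": sets of representable magnitudes on a 0.5 grid.
DIALECTS = [
    [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0],  # FP4-E2M1-like
    [0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 5.5, 7.5],  # wider-range variant
]

def select_mse(block):
    # MSE-style selection: one full quantization pass plus a squared-error
    # accumulation per dialect (16 rounds in the real design)
    best, best_err = 0, float("inf")
    for i, dialect in enumerate(DIALECTS):
        err = sum(min((abs(x) - v) ** 2 for v in dialect) for x in block)
        if err < best_err:
            best, best_err = i, err
    return best

def select_by_count(block, threshold=5.0):
    # counting-style selection: a single pass using only comparisons and a
    # counter, picking the wide-range dialect when large magnitudes occur
    large = sum(1 for x in block if abs(x) > threshold)
    return 1 if large >= 1 else 0
```

On blocks like `[7.0, 0.3]` versus `[3.0, 1.5, 0.2]` the two functions agree, but the counting version needs no multipliers or error accumulators, which is the asymmetry the synthesis results reflect.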
**Dynamic block size strategy**
* That's a great question. In Table 3 (page 8), we analyze the impact of the dynamic block size strategy and find that the sensitivity of each sub-layer varies. Additionally, using larger blocks for most layers, while reserving smaller blocks for quantization-sensitive sub-layers, improves accuracy in certain cases. Based on this observation, we expect that selectively applying small blocks to quantization-sensitive layers (e.g., those with more outliers or greater influence on the final output) could enhance overall performance.

**Combining with other strategies**
* In Appendix G, we present the experimental results and a discussion on combining BlockDialect with SmoothQuant. In summary, the combination leads to overall improvement, but with limited gains, indicating that the two methods are not entirely orthogonal. A more refined approach, such as applying smoothing exclusively to 'extreme' outliers before applying BlockDialect, may be beneficial. Additionally, we further explored combining BlockDialect with the Hadamard transformation-based rotation method and observed similar trends (limited gains) as in the BlockDialect-SmoothQuant hybrid quantization.

**Impact on the inference latency**
* BlockDialect offers clear latency benefits: (1) higher throughput with full-path low-precision MACs, unlike methods relying on FP16 MACs after dequantization or for some matrix multiplications, and (2) reduced data-movement latency from 4-bit weight/activation quantization (including the KV cache), both crucial for inference latency. To ensure these gains are not offset by additional logic overhead, we assess this in our response to Reviewer 4 (ID: 8c5N) (see the “overhead of quantization/dequantization” section). While BlockDialect is deployable on GPUs, further optimizations, such as specialized MACs for our formatbook and efficient quantization/dequantization logic, will reduce latency.
To achieve this, we are also taping out a hardware accelerator to incorporate these optimizations.

**Workload sensitivity of BlockDialect**
* We conducted additional experiments with different networks and downstream tasks (GLUE benchmark). Please see our response to Reviewer 4 (ID: 8c5N), section "Performance comparison of BlockDialect, NVFP4, and MXFP4 across various model sizes and architectures". BlockDialect uses a power-of-two shared exponent and smaller block sizes compared to conventional scale-factor methods, preserving accuracy through optimal dialect assignment. This allows BlockDialect to effectively localize outliers, making it particularly beneficial for tasks with high magnitude variation and unstructured outliers.

**Sink in Figure 2**
* Thanks for your valuable feedback. We will reorganize the layout and clarify any ambiguities in the next revision. If the term "sink" refers to the range between 0 and 4 in Figure 2(b), this occurs because we normalize the values using $2^{MaxExponent-2}$, so the maximum magnitude becomes $2^2\cdot mantissa$, with the mantissa in $[1,2)$, i.e., a value in $[4,8)$.
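The normalization behind the "sink" explanation above can be verified with a few lines. This is our own worked example; the helper name and the sample block are assumptions:

```python
import math

def normalize_block(block):
    """Normalize a block by 2**(MaxExponent - 2), as described above: the
    largest magnitude lands in [4, 8), while the 'sink' between 0 and 4 in
    Figure 2(b) is simply where the block's smaller elements fall."""
    max_exp = math.floor(math.log2(max(abs(x) for x in block)))
    scale = 2.0 ** (max_exp - 2)
    return [x / scale for x in block]

vals = normalize_block([0.7, 0.12, 0.031])
assert 4.0 <= max(abs(v) for v in vals) < 8.0  # max magnitude in [4, 8)
```

Here the maximum 0.7 maps to 5.6, inside $[4,8)$, while 0.12 and 0.031 land below 4, populating the sink region.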
Summary: The paper introduces a block-wise fine-grained mixed-format technique (called BlockDialect) that assigns an optimal number format to each block, and an FP4-variant data format (called DialectFP4) built on a shared exponent among a group of numbers. They also propose an efficient online quantization/dequantization method and a DialectFP4 computation method. They demonstrate that the proposed approach outperforms existing methods across multiple LLMs while leveraging low-precision, energy-efficient MAC units.

Claims And Evidence: Some claims lack sufficient support. For instance: 1. The author states that "We introduce BlockDialect, a novel block-wise finegrained mixed format technique that assigns an optimal number format to each block". However, the proposed 4-bit BlockDialect may not be the optimal format; alternative methods, such as codebook-based quantization, could theoretically offer better performance. 2. Additionally, the author mentions in Section 3.1 that subtracting 2 from the shared exponent allows for a direct comparison with FP4 E2M1, but there is no explanation or compelling evidence provided to support this claim.

Methods And Evaluation Criteria: The proposed methods make sense for energy-efficient LLM inference. The evaluation criteria also make sense.

Theoretical Claims: There are no theoretical claims in this paper.

Experimental Designs Or Analyses: I carefully reviewed the authors' experimental design.

### Model and Dataset Selection

The authors utilized the LLaMA-2-7B, LLaMA-3-8B, and Mistral-7B models, evaluating them on the WikiText2 dataset while employing zero-shot commonsense reasoning tasks.

Strengths: The selection of widely used LLM models and standard datasets ensures the generalizability and comparability of the experiments.

Potential Issues: All chosen models are around 7 billion parameters in size, without consideration of networks of varying sizes. Besides, MMLU is not used.
### Data Format Comparison

The study compares BlockDialect with baseline methods such as MXFP4, LLM-FP4, and Quarot. A variety of quantization methods were selected as baselines, encompassing both hardware-supported and software-supported quantization techniques, ensuring a comprehensive comparison.

Potential Issues: The comparison does not include codebook-based methods, such as Vector Quantization. I wonder whether table-based approaches could potentially achieve similar or even higher performance through search methods, although storing the table entries might result in slightly increased storage requirements.

### Effective Bit Width Calculation

The experimental design calculated the effective bit width (Eff. bit), taking into account the overhead of scaling factors or dialect identifiers. By calculating the effective bit width, the design more accurately reflects the memory footprint and computational efficiency of the quantization methods.

Potential Issues: The specific calculation method for effective bit width is not detailed, which may lead to a lack of transparency in the results.

### Evaluation of Hardware Implementation

The author modeled MAC units at different precision levels using SystemVerilog and synthesized them with Synopsys Design Compiler to evaluate area and power consumption. The assessment of hardware implementation demonstrates the efficiency of BlockDialect in practical hardware contexts.

Potential Issues: The specific parameters for the hardware evaluation (e.g., clock frequency, process node) are not clearly stated, which could affect the comparability of the results. Additionally, the reported findings indicate that the method proposed by the author shows significantly lower overhead for the multiplier compared to INT5, raising questions about the validity of the evaluation settings.
Supplementary Material: I carefully reviewed the authors' code, mainly focusing on the W4A4Linear class, particularly regarding how the shared exponent is applied and how DialectFP4 quantization finds the format that minimizes the MSE after dequantization.

Relation To Broader Scientific Literature:

### Block-wise Quantization

Block-wise quantization is a widely adopted technique that assigns scaling factors on a per-block basis to constrain the impact of outliers. The authors use block-wise quantization in the same way as previous methods.

### Non-Uniform Quantization

Non-uniform quantization serves as an alternative to integer formats, aiming to better capture data distributions in large language models. Floating-point formats excel in handling the wide value ranges encountered in deep learning models, while lookup-based formats better align with the distributions of large language models through statistical distribution quantile functions. This paper introduces DialectFP4, a set of FP4 variants (akin to "dialects") tailored for diverse block-level data distributions, and achieves online DialectFP4 activation quantization through a practical two-stage approach. Compared to existing non-uniform quantization methods, this approach offers greater flexibility and hardware efficiency.

### Activation Quantization

Activation quantization faces challenges such as real-time execution, large dynamic ranges, and inter-channel outliers. Existing methods include mixed-precision subgrouping, migrating quantization difficulty to weights, and using Hadamard matrices to reduce outliers. This paper achieves efficient activation quantization by introducing FP4 variants and adopting a two-stage approach for online optimal format selection. Compared to existing activation quantization methods, this approach reduces reliance on high-precision operations, improving energy efficiency and inference speed.

Essential References Not Discussed: 1. Hu X, Cheng Y, Yang D, et al.
"I-LLM: Efficient Integer-Only Inference for Fully Quantized Low-Bit Large Language Models." arXiv preprint arXiv:2405.17849, 2024. (This paper mainly focuses on how to quantize all activations and weights in LLMs to enable fully integer-based computations in hardware, which is a good exploit of software-hardware co-design.) 2. Yuan Z, Shang Y, Zhou Y, et al. Llm inference unveiled: Survey and roofline model insights[J]. arXiv preprint arXiv:2402.16363, 2024. (This paper provides an overview of how different methods impact inference efficiency from the perspective of software-hardware co-design.) Other Strengths And Weaknesses: ### Strengths 1. Unlike existing methods that primarily focus on "how to scale," this paper introduces a new perspective of "how to represent" each block, removing the reliance on a single scaling strategy and thereby better capturing the data distribution within the block. 2. The hardware cost is considered in the design of data format, which is a successful explore of software-hardware co-design for efficient LLM inference. 3. Experimental results show that BlockDialect significantly outperforms the existing MXFP4 format across multiple LLM models, especially in full-path matrix multiplication quantization, with only a 5.45% (LLaMA3-8B) and 2.69% (LLaMA2-7B) accuracy drop compared to full precision, while reducing the bit usage per data. ### Weaknesses 1. One of the main drawback of this paper lies in the insufficient clarity of the methodology section. Without referring to the code, the entire methodology section is difficult to fully understand. 2. There are some questions when I execute the code (See Question for authors). Other Comments Or Suggestions: The comparison does not include codebook-based methods, such as Vector Quantization. 
I wonder whether table-based approaches could potentially achieve similar or even higher performance through search methods, although storing the table entries might result in slightly increased storage requirements. Questions For Authors: 1. Subtracting two facilitates direct comparison with FP4 E2M1 ? Why? Can you provide any proof? 2. Hardcoded DialectFP4—where do these numbers come from? Why were these specific values chosen, such as 4.5, 7.5, 5.5, and 5.0, which cannot be represented by FP4-E2M1? What is the purpose of selecting these? I find the author's description very difficult to understand. Moreover, when I ran the W4A4Linear using the author's code, I found numbers like 0.25 and 0.275, which FP4 cannot represent at all. Why is that? 3. The author mentions in Figure 5 the operation man << exp, followed by truncation, introducing an intermediate 5-bit data type. Why is this 5-bit data type used instead of other intermediate data types? How was the choice of 5 bits made? Is there proof that this is a result of a trade-off between precision and bit count? 4. When sharing the exponent, why use log2 to determine the exponent of the absolute maximum value within a block, when bit manipulation could be employed directly (since the data format is based on FP16)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Reason for Subtracting 2 from the Shared Exponent** * When normalizing by the maximum exponent, normalized values fall within [0, 2), whereas FP4 variants span [0, 7.5]. Subtracting 2 from the shared exponent extends the range to [0, 8), enabling FP4 variants to represent normalized values without additional scaling. This also enables direct comparisons and highlights FP4 E2M1's limitations, such as its inability to represent values in the (6, 8) range within a power-of-two dynamic range. **How to determine the representable values of DialectFP4?** * The representable values in DialectFP4 are designed to capture diverse block distributions that a single FP4 E2M1 format cannot express. * To account for varying dynamic ranges of fine-grained blocks, each dialect's largest value is selected from a range of 7.5 to 4.0, while second-largest values vary to adapt to large-magnitude distributions. * In the second stage of format selection, we choose the best dialect based on how many block values fall within each dialect's beneficial range. We adjust the second-largest value, which only varies between dialects with the same maximum, to ensure the widths of the beneficial ranges of the compared dialects are similar. This approach helps avoid bias toward any particular dialect. * The representable values follow a fixed granularity of 0.5, ensuring all values are scaled integers (integer x 0.5). This facilitates efficient integer MAC operations. Additionally, six values remain consistent across dialects to simplify quantization/dequantization logic. * While DialectFP4 demonstrates strong performance, we acknowledge that it is not optimally constructed. There is potential to adaptively generate DialectFP4 tailored to specific models or layers. * Unexpected numbers may result from the shared exponent applied after quantization. For example, with a shared exponent of -3, a quantized value of 3.0 reverts to 0.375 (3/8). 
If this doesn't apply, please clarify where your values appear. **Justification for the 5-bit intermediate representation** * The representable magnitudes of DialectFP4 range from 0.0 to 7.5 with a 0.5 granularity, requiring three integer bits and one fractional bit. However, to handle rounding accurately, an additional fractional bit is needed to determine whether to round up or down to the nearest representable value, resulting in a total of 5 bits. **We appreciate the feedback and will clarify the points explained above in the next revision.** --- **Limited network and workload diversity** * We conduct experiments across networks of varying sizes and additional workloads, such as MMLU. For further details, please refer to our response to Reviewer 4 (ID: 8c5N), “Performance comparison of BlockDialect, NVFP4, and MXFP4 across model sizes and architectures” section. **Comparison with table-based approaches and I-LLM** * While table-based approaches with high-precision entries may perform well, they typically rely on high-precision MACs, which conflicts with our goal of enabling low-precision MACs for better hardware efficiency. Additionally, codebook-based methods are generally restricted to weight-only quantization due to the challenges of adaptive codebook construction and online activation codebook quantization. This is why we exclude this method in our weight-activation quantization comparison. * We appreciate the mention of relevant papers. I-LLM functions as an adaptive online variant of SmoothQuant, migrating quantization difficulty to easier-to-quantize matrices, albeit with additional tracking overhead. As discussed in Appendix G, we believe such approaches can complement BlockDialect by managing outliers at different levels. In this context, I-LLM serves as a valuable reference for improving accelerators using BlockDialect. 
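The 5-bit intermediate representation justified earlier in this reply (3 integer bits, 1 representable fractional bit, and 1 extra fractional bit for rounding) can be sketched as follows. The helper name and the clamp at 7.5 are our own assumptions:

```python
def round_to_half_grid(x):
    """Round a magnitude in [0, 7.5] to the 0.5-granularity grid via a
    5-bit fixed-point intermediate, as argued above: the extra fractional
    (guard) bit decides whether to round up or down to the nearest
    representable value."""
    fx = int(x * 4)            # 5-bit fixed point, LSB = 0.25 (guard bit)
    guard = fx & 1
    return min(((fx >> 1) + guard) * 0.5, 7.5)

assert round_to_half_grid(1.3) == 1.5   # guard bit set -> round up
assert round_to_half_grid(1.2) == 1.0   # guard bit clear -> round down
```

Without the fifth (guard) bit, 1.3 and 1.2 would truncate to the same code, so both rounding directions could not be resolved; this is the precision/bit-count trade-off the reply describes.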
**Effective bitwidth calculation**
* Due to character limitations, we have integrated this response with our reply to Reviewer 3 (ID: rDoa) under the "Metadata Storage Requirements" section.

**Hardware evaluation parameters**
* As stated in the implementation section and the caption of Table 5, we use a 0.5 GHz clock speed and a 45 nm process node for our hardware evaluation.

**Comparison between proposed MAC and INT5 MAC**
* The discrepancy mainly arises from the sign-magnitude (Ours) vs. 2's complement (INT5) implementation. As recent works$^{[1,2]}$ have noted, sign-magnitude representation offers advantages over 2's complement MAC due to its much lower toggle rate and simpler sign processing.

###### [1] Wang et al. "14.3 A 28nm 17.83-to-62.84 TFLOPS/W Broadcast-Alignment Floating-Point CIM Macro with Non-Two's-Complement MAC for CNNs and Transformers." ISSCC, 2025.
###### [2] Han & Chandrakasan. "MEGA.mini: A Universal Generative AI Processor with a New Big/Little Core Architecture for NPU." ISSCC, 2025.

**Use of log2 function**
* That’s a great point. We used log2 as it seemed intuitive, but we will consider switching to an efficient bit-manipulation approach.
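For reference, the bit-manipulation alternative mentioned in the last bullet could look like the following for FP16 inputs. This is our own sketch of the idea, not the paper's code; only the log2 version mirrors what the released code reportedly does:

```python
import math
import struct

def max_exponent_log2(block):
    # log2-based version, as in the released code
    return math.floor(math.log2(max(abs(x) for x in block)))

def fp16_bits(x):
    # raw IEEE-754 half-precision bit pattern of |x|
    return struct.unpack('<H', struct.pack('<e', abs(x)))[0]

def max_exponent_bits(block):
    # bit-manipulation version: for non-negative FP16 values the bit
    # pattern is monotonic in magnitude, so the largest pattern belongs to
    # the largest magnitude; its 5-bit exponent field (bias 15) equals
    # floor(log2(max|x|)) for normal numbers
    return ((max(fp16_bits(x) for x in block) >> 10) & 0x1F) - 15

block = [0.3, -5.0, 1.2]
assert max_exponent_log2(block) == max_exponent_bits(block) == 2
```

In hardware this reduces to a max over bit patterns plus a field extraction, with no transcendental unit; subnormal inputs (raw exponent field 0) would need a small special case.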
Activation by Interval-wise Dropout: A Simple Way to Prevent Neural Networks from Plasticity Loss
Accept (poster)
Summary: This paper presents a new method for addressing loss of plasticity. The authors investigate why Dropout doesn’t help with loss of plasticity, although it helps with generalization. Through empirical work, the authors pinpoint the causes and introduce AID, an improvement over Dropout that maintains plasticity and improves generalization. The authors show their method's effectiveness in a wide range of experiments. Claims And Evidence: The paper is well-written and easy to follow. I like how the authors decided to show the problem first with Figure 1, convincing the reader before introducing the algorithm itself. The results show the effectiveness of AID on performance. The authors also complement their empirical work with some theoretical analysis of why AID is able to address loss of plasticity. Methods And Evaluation Criteria: The authors considered standard tasks for plasticity like input/output permuted MNIST, continual CIFAR10/CIFAR100/TinyImageNet. The authors also included challenging settings like incremental class learning with CIFAR100 and TinyImageNet in addition to reinforcement learning with high-replay-ratio DQN. Theoretical Claims: I didn't check the correctness of the proofs of the theoretical claims. Experimental Designs Or Analyses: The authors use standard benchmarking tasks that are widely used and accepted by the research community. I don’t see any issues in their experimental design or analysis. Supplementary Material: I didn't check the supplementary materials. Relation To Broader Scientific Literature: The paper provides a comprehensive comparison of many methods addressing loss of plasticity. In addition, they related their method to similar ideas in the literature, namely DropReLU, CRELU, and RRELU. I think that answering the question of why Dropout doesn’t address loss of plasticity would interest a significant number of researchers.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is well-written and well-motivated. The performance gain when using AID seems to be significant and superior to other methods, even though it is a simple method. The approach is easy to implement, which is a plus. All in all, I would like to see this paper published. There are a few issues that can be addressed easily by the authors. If the authors made efforts towards fixing the issues, I’m going to raise my score to reflect the changes. The main weaknesses I see in the paper: - It’s not clear how AID is sensitive to its hyperparameter. The authors need to be transparent about this information. - The comparison with dropout in the RL experiment is missing (Figure 6). Other Comments Or Suggestions: - There is room to improve the introduction. Overfitting and loss of plasticity are often correlated, thus, most algorithms addressing overfitting would also address loss of plasticity. However, dropout (a method that addresses overfitting) does not address loss of plasticity. Thus, it’s interesting to understand why. - The appendix refers to the method DIA instead of AID, which needs to be fixed. - References with missing years: - (Shin et al.) - (Delfosse et al.). Questions For Authors: > “During the testing phase of traditional dropout, each weight is scaled by the probability $1−p$, which is the chance that a node is not dropped. Similarly, in AID, as each interval has a unique dropout probability, the pre-activation values within each interval $I_j$ are scaled by $1−p_j$ for testing” > “Dropout layer with probability p scales the value by $1/(1−p)$ at training phase, so it acts as an identity function during the test phase. However, since AID functions as a non-linear activation, we scale the value at test phase” Shouldn’t the scaling be $1/(1-p)$ for dropout in the first paragraph and by $1/(1-p_j)$ for AID in the second paragraph? 
Additionally, can the authors share why they are not scaling in Algorithm 2? Also, why is the scaling $1-p_j$ in Algorithm 1 instead of $1/(1-p_j)$?

Another question: how would AID perform under the streaming supervised learning setting or the streaming reinforcement learning setting?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments and their positive assessment of the paper. We appreciate the recognition of our motivation, empirical scope, and clarity, and we respond below to the concerns raised. --- > It’s not clear how AID is sensitive to its hyperparameter. The authors need to be transparent about this information. We agree and will include a sensitivity analysis in the camera-ready version. Below, we report the mean and standard deviation of final accuracy for various AID hyperparameter settings (*p*) in the **continual full** setup, which we consider representative. We use the same seeding procedure as in the main paper. We report $p=0.6\sim 0.9$, as $p=0.5$ yields a linear network and $p=1$ gives ReLU. We exclude $p<0.5$ due to symmetry around 0.5. | *p* (Hyperparameter) | 0.6 | 0.7 | 0.8 | 0.9 | Best Baseline | | --- | --- | --- | --- | --- | --- | | CIFAR10 (CNN) | 0.601 ± 0.004 | 0.700 ± 0.004 | 0.746 ± 0.001 | **0.755 ± 0.004** | 0.717 ± 0.003 | | CIFAR100 (Resnet-18) | 0.5814 ± 0.002 | **0.640 ± 0.002** | 0.631 ± 0.003 | 0.600 ± 0.003 | 0.578 ± 0.005 | | TinyImageNet (VGG-16) | 0.438 ± 0.002 | **0.508 ± 0.0004** | 0.494 ± 0.004 | 0.454 ± 0.002 | 0.435 ± 0.001 | As shown, AID mostly outperforms the baselines across a range of hyperparameters, though some variance is expected. Additionally, as shown in Table 3, we used a very small hyperparameter search space for the **generalizability** and **RL** experiments—where AID still performs robustly. This suggests AID works well without extensive tuning. --- > The comparison with dropout in the RL experiment is missing (Figure 6). Dropout is not commonly used in deep RL due to its high variance and unstable training behavior. We also found that Dropout offered no consistent advantage over the vanilla model, which is why it was excluded from Figure 6. 
However, given that the paper discusses Dropout’s role in plasticity, we agree that including its RL results would be valuable. Below, we report the mean and standard deviation of final **human-normalized scores** for three games identified in Appendix G.3.2 as suffering from plasticity loss: | Game\Method | Vanilla | Dropout | AID | | --- | --- | --- | --- | | Asterix | 0.115 ± 0.007 | 0.180 ± 0.025 | **0.318 ± 0.054** | | BeamRider | 0.107 ± 0.021 | 0.124 ± 0.009 | **0.242 ± 0.052** | | DemonAttack | 0.122 ± 0.039 | 0.155 ± 0.052 | **0.714 ± 0.245** | While Dropout shows minor gains over vanilla in some cases, the improvements are negligible and likely reflect ensembling effects rather than plasticity preservation. In contrast, AID demonstrates consistent and substantial improvements, reinforcing its plasticity-related benefits. --- > There is room to improve the introduction… We believe this refers to the paragraph beginning *“Widely used methods…”* on page 2. We agree that including the relationship between the overfitting and plasticity loss can be made clearer, and including data augmentation as an example—as the reviewer suggests—would enhance the explanation. We will revise this in the camera-ready version. --- > The appendix refers to the method DIA instead of AID, which needs to be fixed. > > References with missing years [...] Thank you for pointing these out. We will correct these issues in the final version. --- > “Shouldn’t the scaling be 1/(1−p) for Dropout...?” We appreciate the reviewer’s attention to detail here. The confusion likely stems from different ways in implementing Dropout. To clarify: - **Standard implementation** [1]: no scaling at train time; multiply by $1 - p$ at test time. - **Inverted implementation** (e.g., PyTorch’s `nn.Dropout`): scale by $1/(1 - p)$ at train time; no scaling at test time. In our case, applying **inverted Dropout** to AID would make it act as an identity function during inference, removing non-linearity. 
Thus, we adopt the **standard Dropout** convention, which allows AID to maintain its activation function properties at test time via a modified leaky ReLU $r_p$. Algorithm 2 may appear not to scale, but the scaling is implicitly applied via $r_p$ during inference. We hope this helps in understanding how AID works.

---

> How would AID perform under streaming supervised or RL settings?

We thank the reviewer for the interesting question. Though we haven’t explicitly tested AID in streaming, our results show that AID effectively preserves plasticity in several challenging settings. This suggests it may also be helpful in streaming contexts. However, streaming learning often involves **catastrophic forgetting** in addition to plasticity loss. Since our work does not focus on forgetting mitigation, the effectiveness of AID in streaming scenarios would likely depend on the relative impact of forgetting versus plasticity loss in those settings. We believe this is a promising direction for future investigation.

---

[1] Srivastava et al., Dropout: a simple way to prevent neural networks from overfitting, JMLR 2014

---

Rebuttal Comment 1.1:

Comment: I would like to thank the authors for their response. My concerns are addressed and therefore I increased my score accordingly.
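For readers following the standard-vs-inverted Dropout distinction clarified in the rebuttal above, a minimal sketch (our own toy code, not the AID implementation):

```python
import random

def dropout_train(x, p, inverted):
    # one training-time draw with drop probability p: inverted dropout
    # scales kept units by 1/(1-p), standard dropout leaves them as-is
    kept = [0.0 if random.random() < p else v for v in x]
    return [v / (1.0 - p) for v in kept] if inverted else kept

def dropout_test(x, p, inverted):
    # standard: multiply by the keep probability (1 - p); inverted: identity
    return list(x) if inverted else [(1.0 - p) * v for v in x]

assert dropout_test([2.0, -4.0], 0.25, inverted=False) == [1.5, -3.0]
assert dropout_test([2.0, -4.0], 0.25, inverted=True) == [2.0, -4.0]
```

Because the inverted convention is the identity at test time, applying it per interval would collapse AID to an identity map at inference; the standard convention's test-time scaling by $1-p_j$ is exactly what turns AID into the non-linear $r_p$.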
Summary: The paper proposes AID, a novel activation function that generalizes Dropout to intervals, where it can be applied with different probabilities to different intervals of the activations. In its simplified version, it has 2 intervals (positive and negative) and can be interpreted as an interpolation between ReLU and a linear activation. It claims that 1) AID effectively tackles plasticity loss in continual learning and reinforcement learning, unlike Dropout. 2) This is achieved by a regularization towards a linear regime. 3) AID also improves generalization in supervised learning. ## update after rebuttal All my concerns have been addressed. It also seems that the other reviewers' concerns have been addressed (apart from apparently Reviewer AVaP, with whom we can engage in discussions if they feel otherwise). I maintain my score from the end of the rebuttal. Claims And Evidence: The claims are supported by evidence, which I do not find fully sufficient. 1.a) continual learning - The claim is supported by Figure 2 showing the benefits of AID on plasticity loss and generalizability, unlike Dropout. 1.b) Reinforcement learning - Figure 6 does not show benefits on plasticity in my understanding. I would expect to see the vanilla method collapse but AID not to, in order to isolate plasticity as a cause. Otherwise, it's not clear that the benefits of AID come from better plasticity. 2.) This is addressed in Theorem 4.1. The notation is not clear, and the interpretations of the results do not help in supporting the claim. 3.) The claim is supported by Table 1. Methods And Evaluation Criteria: The methods and evaluations in the paper are very well chosen and performed. The paper categorizes and compares AID to multiple SoTA methods proposed to mitigate plasticity loss. It uses supervised learning, continual learning, and reinforcement learning experiments. All the experiments have enough statistical significance.
Theoretical Claims: I did not check the proofs of the theoretical claims which are in the Appendix. Experimental Designs Or Analyses: Figure 4. Vanilla does not seem to collapse; how are the benefits in plasticity isolated from the benefits in generalization? Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The paper provides a good discussion of the broader scientific literature relating to plasticity and activation functions. Essential References Not Discussed: The paper references the important literature combining dropout and ReLU and other attempts at mitigating plasticity loss. Other Strengths And Weaknesses: +The explanation of why dropout does not prevent plasticity is beneficial. -Only performance is used to diagnose plasticity loss. The paper would be stronger by incorporating other metrics like feature rank, weight norm, activation norms, etc. Other Comments Or Suggestions: No other comments. Questions For Authors: No other questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback and thoughtful questions. We appreciate your recognition of the motivation, experimental setup, and analysis, and we address your concerns below. --- > 2. b) Reinforcement learning [...] I would expect to see the vanilla method collapse [...] > Reinforcement learning settings do not induce non-stationarity as severely as random-label or permuted MNIST. Indeed, prior studies with similar setups [1,2] also do not report full IQM score collapse. Instead, they demonstrate the presence of plasticity loss in reinforcement learning by showing that addressing it—through improvements in various indicators of plasticity such as dormant neurons and effective rank—leads to enhanced performance. Our experiments use the same setting—four times higher replay ratio (RR) than in typical DQN—which has been shown to accelerate plasticity loss [1,2,3], and is widely used in recent plasticity research [1,2,4]. Game-specific results provide additional insight: AID outperforms the baseline in **DemonAttack**, **SpaceInvaders**, and **Asterix**, where the vanilla model struggles—suggesting plasticity loss. In contrast, for games like **Pong** and **Qbert**, where plasticity loss is not the main bottleneck for performance, AID does not benefit from mitigating plasticity loss. While other factors may contribute to performance, the results of the trainability and high-RR experiments support our interpretation that AID mitigates plasticity loss. --- > 3. “Theorem 4.1. The notation is not clear, and the interpretations of the results do not help in supporting the claim.” > We appreciate the reviewer’s feedback. We acknowledge that the current notation and interpretation in Theorem 4.1 could be clearer, and we will revise these in the camera-ready version.
Theorem 4.1 shows that the lower bound of the loss function under AID, denoted as $L_{\mathrm{AID}_p}$, can be decomposed into the loss under a modified leaky ReLU, $L_{r_p}$, and a regularization term that encourages **linearity** in the network. As the hyperparameter p approaches 0.5, AID imposes stronger regularization toward linear behavior. Prior work [5] has shown that **linear networks do not suffer from plasticity loss**, suggesting that the linearity-inducing effect of AID (as formalized in Theorem 4.1) directly contributes to mitigating plasticity loss. In contrast, naive Dropout cannot induce the regularization term associated with linearity (Appendix C.2), which may explain its ineffectiveness at preserving plasticity. --- > Figure 4. Vanilla does not seem to collapse; how are the benefits in plasticity isolated from the benefits in generalization? > Accuracy collapse is the indicator of trainability. The experiment in Figure 4 focuses on **generalizability**, evaluated by comparing the performance of the vanilla model to that of a fully reset model. A performance gap here indicates **loss of generalizability**, a form of plasticity degradation. A method with poor generalizability but good generalization may initially perform well, but lose its advantage over time. In contrast, AID maintains consistent performance throughout training. As shown in Section 3.2, Dropout enhances generalization performance but does **not** improve generalizability. AID, by narrowing the performance gap, demonstrates its ability to preserve generalizability. --- > - Only performance is used to diagnose plasticity loss. […] > In Appendix G.1.2, we evaluate multiple **plasticity metrics**, including **dormant neuron ratio** [1], **average unit sign entropy** [5], and **effective rank** [2], comparing vanilla, Dropout, and AID. In Appendix G.1.3, we also analyze the **pre-activation distribution**, which is one of the causes of plasticity loss [6].
These analyses show that: - Dropout does not improve plasticity indicators relative to the vanilla baseline. - AID shows favorable trends across all metrics. - AID causes significantly less change to pre-activation distributions than Dropout. Thus, our claims are supported by a combination of **performance metrics** and **plasticity metrics**. --- [1] Sokar et al., *The Dormant Neuron Phenomenon in Deep Reinforcement Learning*, ICML 2023 [2] Kumar et al., *Implicit Under-Parameterization Inhibits Data-Efficient Deep RL*, ICLR 2021 [3] Nikishin et al., *The Primacy Bias in Deep Reinforcement Learning*, ICML 2022 [4] Elsayed et al., *Weight Clipping for Deep Continual and Reinforcement Learning*, RLC 2024 [5] Lewandowski et al., *Plastic Learning with Deep Fourier Features*, ICLR 2025 [6] Lyle et al., *Disentangling the Causes of Plasticity Loss in Neural Networks*, arXiv 2024 --- Rebuttal Comment 1.1: Comment: The authors addressed all my concerns. Raising my score from 2 to 4.
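For readers following the Theorem 4.1 discussion above, the verbal decomposition can be rendered schematically. This is only a sketch of the structure described in the rebuttal: the coefficient $\lambda(p)$ and the linearity regularizer $R_{\mathrm{lin}}$ are placeholder symbols, not the paper's exact notation, and the precise forms are in the paper.

```latex
\mathcal{L}_{\mathrm{AID}_p}(\theta)
  \;\ge\; \mathcal{L}_{r_p}(\theta) \;+\; \lambda(p)\, R_{\mathrm{lin}}(\theta),
\qquad \lambda(p)\ \text{increasing as}\ p \to 0.5,
```

i.e., the AID loss is lower-bounded by the loss under the modified leaky ReLU $r_p$ plus a term that penalizes deviation from linear behavior, with the penalty growing as $p$ approaches 0.5.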
Summary: The proposed method in this paper is motivated by the characteristics of dropout: poor trainability and good generalizability. By enhancing the trainability of dropout in AID, the model can have both good trainability and generalizability. The main point of AID is separating the neurons into positive and negative parts, and using different masks to utilize both positive and negative neurons. As a result, the network can obtain better trainability than with naive dropout. In the experiments, AID outperforms most of the baselines in the warm-start scenario, and is also effective on reinforcement learning. Claims And Evidence: The main claims are not quite persuasive. The motivation experiments are not related to the results in the experiment section. Methods And Evaluation Criteria: The evaluation criteria do make sense. Theoretical Claims: The theoretical results are not aligned with the empirical results. Experimental Designs Or Analyses: Experiment designs are valid. Supplementary Material: I checked only the experiment details in the supplementary materials. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths 1. In the warm-start scenario experiment, AID achieves much higher accuracy than other baselines. Furthermore, AID is also effective on reinforcement learning. Weaknesses 1. As mentioned in this paper, the main disadvantage of using dropout is its low trainability in non-stationary target scenarios, and the authors show the degradation of trainability in the permuted MNIST and random-label MNIST experiments. However, in the experiment section, the authors only carried out experiments on the warm-start scenario, and it is hard to figure out whether trainability issues really occur in this scenario. If they do not, does the performance increase come from enhanced trainability, or just from the generalization capability of AID?
Since the initial test accuracy of AID is much higher than that of naive dropout, it is hard to know whether AID can actually resolve the trainability issue. 2. It may not be a fair comparison between AID and other baselines in the warm-start scenario, since the initial test accuracy of AID is much higher than that of other baselines. To make the comparison fair, all methods should have the same generalizability at the initial learning phase, with additional treatments applied only when the number of training data increases. 3. If AID is applied only when a task transition occurs, similar to DASH [1] (i.e., apply AID when the number of training data increases), does AID still achieve better performance than others? If it does not increase the performance, we can conclude that applying AID merely provides generalization capability, similar to other generalization techniques (e.g., data augmentation or batch normalization) which are always effective regardless of the learning scenario, such as warm-start or non-stationary target experiments. 4. As in Theorem 4.1, the authors say that applying AID can inject linearity, which can mitigate plasticity loss (i.e., improve trainability). However, I wonder whether this theorem is actually aligned with the experiment results in the warm-start scenario. [1] Shin et al., DASH: Warm-Starting Neural Network Training in Stationary Settings without Loss of Plasticity, NeurIPS, 2024 Other Comments Or Suggestions: Already mentioned in the above section. Questions For Authors: Already mentioned in the Strengths And Weaknesses section. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. First, we clarify a potential misconception: **Dropout does not significantly improve generalizability.** > “The proposed method in this paper is motivated by the characteristic of dropout, poor trainability and **good generalizability**.” > In the literature, generalization performance in deep learning and generalizability [1] in plasticity have different meanings. This is addressed in Section 3.2 and illustrated in Figure 2 (right). While Dropout slightly improves performance, it does not reduce the performance gap between cold-start and warm-start, which signals loss of generalizability. In contrast, AID narrows this gap, indicating better generalizability. Thus, we show that Dropout is limited in both trainability and generalizability, whereas AID improves both. We appreciate the opportunity to clarify this important distinction and will revise the wording in the camera-ready version to avoid confusion. --- We address the reviewer’s concerns below. > 1. [...] authors only carried out the experiment on warm-start scenario [...] it is hard to figure out whether the trainability issues are really occur in this scenario. [...] > While the reviewer notes that our experiments focus on the warm-start scenario, our paper evaluates both **trainability** and **generalizability**, as shown in Section 5.1.1 and Section 5.1.2, respectively. Additional trainability results are included in Appendix G.1. Importantly, trainability issues typically arise in **non-stationary settings** (e.g., permuted or random-label MNIST) [5], while generalizability issues occur in **stationary continual settings** such as warm-start and class-incremental learning [2,3]. Prior work [1,4,5] has established these tasks as **standard benchmarks** for studying each form of plasticity loss, and we follow the same practice in our experiments. > 1. [...] 
initial test accuracy of AID is much higher than naive Dropout [...] To make the comparison be fair, all the methods have same generalizability at the learning phase [...] > We understand the reviewer’s concern about fairness in comparison. The initial accuracy difference stems from hyperparameter choice. All methods were tuned based on final accuracy, reflecting typical practice. For instance, we found a Dropout setting with initial accuracy similar to AID’s, but its final accuracy ends up lower (e.g., continual full, ResNet-18, table below). This highlights AID’s strength in consistently retaining generalization throughout training.

| Epoch | 100th | 1000th |
| --- | --- | --- |
| Dropout p=0.1 | 0.255 ± 0.005 | 0.5395 ± 0.003 |
| Dropout p=0.3 | 0.136 ± 0.023 | 0.5398 ± 0.005 |
| AID p=0.7 | 0.289 ± 0.005 | 0.638 ± 0.001 |

> 2. [...] and only **additional treatment are applied when the number of training data increases.**

This is a valuable point. We distinguish two classes of methods for generalizability in warm-start:

- Applied only as data comes in: S&P [3], head reset, DASH [2]
- Applied continuously: H&T [1], Fourier activation [4], AID

AID belongs to the second category. These methods are advantageous as they do not require task boundary information [6], making them more practical—especially in settings like RL where task boundaries are implicit or unknown. They can also complement transition-based ones, making AID broadly applicable. > 3. If AID is applied only when the experiment transition occurs [...] does AID still achieve better performance than others? [...] Since AID is implemented as an activation function, it is structurally unsuitable for being applied only at task transitions. > 4. I wonder this theorem is actually aligned with the experiment results in warm-start scenario. We agree this is a subtle point. Theorem 4.1 supports AID’s ability to improve trainability by regularizing toward linear behavior.
It does not directly explain the generalizability improvements observed in warm-start settings. However, prior work [4] has empirically shown that encouraging linearity can also lead to improved generalizability in several continual learning scenarios. Similarly, AID’s ability to inject linearity appears to yield benefits not only for trainability but also for generalizability in our experiments. This contrasts with other methods for trainability, which often fail to improve generalizability [1]—further supporting this interpretation. --- [1] Lee et al., Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks, ICML 2024 [2] Shin et al., DASH: Warm-Starting Neural Network Training in Stationary Settings without Loss of Plasticity, NeurIPS 2024 [3] Ash et al., On Warm-Starting Neural Network Training, NeurIPS 2020 [4] Lewandowski et al., Plastic Learning with Deep Fourier Features, ICLR 2025 [5] Lyle et al., Disentangling the Causes of Plasticity Loss in Neural Networks, arXiv 2024 [6] Aljundi et al., Task-Free Continual Learning, CVPR 2019 --- Rebuttal Comment 1.1: Comment: Thank you for the authors' effort on the rebuttal. After carefully reading the comments, I still wonder whether comparing AID with other baselines is fair. Since applying AID can boost performance on a single task, it is hard to figure out whether the extra test accuracy comes from resolving the over-fitting problem in the warm-starting scenario or just from improving single-task performance via a proper regularization technique. Though the authors give additional experiments, as we can see in the bottom figure of Figure 4, only using dropout outperforms other baselines, and I don't think this result shows that dropout can resolve the over-fitting problem in the warm-start scenario. Based on the rebuttal comments, I will keep my score. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer’s continued engagement and thoughtful remarks following our rebuttal.
We understand the reviewer’s concern that AID may appear to alleviate plasticity loss (PL) in the warm-start scenario simply due to its improved generalization performance. First, we would like to clarify that there is strong evidence in Figure 2 (right) to support the claim that the improved performance of AID comes from addressing generalizability loss, not from improved generalization. The obvious way to determine whether a model suffers generalizability loss is to compare its performance to that of a model with all parameters reset when new data is added, since a freshly initialized model has perfect plasticity. Using this approach, previous studies of plasticity loss [1,2,3] have shown that plasticity loss exists in their baseline models and that their proposed methods resolve it. Accordingly, the greater the gap between the performance of a method with full reset (resetting all parameters whenever new data comes in, also termed cold start) and the performance of the same method alone, the greater the loss of generalizability. Figure 2 (right) shows that the accuracy gap between AID+full reset (cold start) and AID is very low (3.3%p), while the accuracy gap between Dropout+full reset (cold start) and Dropout is high (10.1%p). In particular, the accuracy gap of Dropout is similar to that of the vanilla model (10.9%p), indicating that Dropout does not address generalizability loss at all. On the other hand, AID has an accuracy gap of 3.3%p, which is very low compared to the vanilla model (10.9%p) and Dropout (10.1%p). These results show that AID’s performance improvement comes from addressing generalizability loss, while Dropout’s does not. To provide further evidence, we extended the experiment in Figure 2 (right), where data is added only once, to compare performance differences in the continual full setting, where new data is added multiple times.
We analyzed the difference in final accuracy for each chunk, computed as the performance gap between models trained with and without full reset. We adopted three methods—Vanilla, Dropout, and AID—in the **continual full** setting. A positive difference indicates that full reset improves performance, suggesting the presence of generalizability loss. For each method, hyperparameters were independently tuned to maximize final performance both with and without reset. The number of seeds follows our original setup, and we omit standard deviations here for readability.

### CIFAR10 (CNN)

| Acc (%p) \ Chunk | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (Vanilla+Reset) - Vanilla | 0.11 | -0.31 | 0.76 | 1.39 | 2.25 | 3.06 | 3.22 | 3.68 | 4.23 | 4.96 |
| (Dropout+Reset) - Dropout | 2.49 | 2.42 | 1.40 | 1.55 | 1.04 | 1.07 | 1.22 | 1.48 | 1.17 | 1.10 |
| (AID+Reset) - AID | 1.47 | -0.39 | -0.67 | -0.45 | -0.43 | -1.37 | -0.93 | -0.74 | -0.77 | -1.32 |

### CIFAR100 (ResNet-18)

| Acc (%p) \ Chunk | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (Vanilla+Reset) - Vanilla | 0.57 | 4.42 | 7.58 | 8.93 | 8.56 | 10.29 | 12.60 | 11.69 | 11.61 | 12.08 |
| (Dropout+Reset) - Dropout | 10.80 | 4.98 | 4.69 | 5.83 | 6.37 | 6.82 | 7.66 | 7.65 | 7.56 | 8.29 |
| (AID+Reset) - AID | -1.36 | 0.48 | 0.97 | 1.91 | 1.97 | 1.83 | 2.19 | 2.15 | 1.85 | 2.43 |

### TinyImageNet (VGG-16)

| Acc (%p) \ Chunk | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| (Vanilla+Reset) - Vanilla | -0.27 | 1.54 | 5.01 | 4.98 | 9.06 | 10.12 | 11.60 | 13.65 | 13.50 | 14.65 |
| (Dropout+Reset) - Dropout | -5.22 | 1.37 | 1.86 | 5.17 | 6.51 | 8.48 | 3.00 | 8.62 | 9.71 | 9.17 |
| (AID+Reset) - AID | -0.07 | -1.53 | -1.30 | -2.20 | -1.57 | -1.46 | -1.59 | -1.37 | -0.98 | -0.43 |

From these results, we observe that while Dropout provides
modest gains in generalizability over Vanilla in smaller models, it fails to close the gap with full reset in larger models. In contrast, AID outperforms AID+full reset on both the CNN and VGG-16, and substantially narrows the gap on ResNet-18. This indicates that AID not only enhances generalization capacity (as discussed in Section 5.3), but also preserves generalizability, which is crucial in warm-start and other stationary continual learning settings. We hope these additional results address the concern that AID’s improved performance in warm-start scenarios results solely from enhanced generalization capacity. --- [1] Ash et al., On Warm-Starting Neural Network Training, NeurIPS 2020 [2] Nikishin et al., The Primacy Bias in Deep Reinforcement Learning, ICML 2022 [3] Nikishin et al., Deep Reinforcement Learning with Plasticity Injection, NeurIPS 2023
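The per-chunk gap analysis in the tables above reduces to a simple computation: count how many chunks show a positive (reset minus no-reset) gap. A minimal sketch using the CIFAR10 (CNN) rows from the tables (the helper name `generalizability_loss` is illustrative, not from the paper):

```python
import numpy as np

# Final accuracy gaps (%p): (method + full reset) minus method alone,
# per data chunk -- values copied from the CIFAR10 (CNN) table above.
gap_vanilla = np.array([0.11, -0.31, 0.76, 1.39, 2.25, 3.06, 3.22, 3.68, 4.23, 4.96])
gap_aid = np.array([1.47, -0.39, -0.67, -0.45, -0.43, -1.37, -0.93, -0.74, -0.77, -1.32])

def generalizability_loss(gaps, tol=0.0):
    """A positive gap means a freshly reset model beats the warm-started
    one, i.e. the method lost generalizability on that chunk."""
    return int(np.sum(gaps > tol))

print(generalizability_loss(gap_vanilla))  # -> 9: most chunks show a positive gap
print(generalizability_loss(gap_aid))      # -> 1: almost none do
```

The same count applied to the Dropout rows sits between these two extremes, matching the qualitative reading in the rebuttal.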
From Jack of All Trades to Master of One: Specializing LLM-based Autoraters to a Test Set
Accept (poster)
Summary: While traditional LLM evaluators (Autoraters) are generally trained for generalization to a broad set of tasks, this paper studies specializing existing models into autoraters for particular known test sets. In particular, the proposed new prompting strategy leverages in-context learning examples obtained from historical ratings of each individual test set example. Empirically, the authors focus on the machine translation task, and show substantial improvements when meta-evaluating their metric against previously considered autorater methods. Claims And Evidence: The claims are supported by solid empirical evidence (see Experiments Section below). Methods And Evaluation Criteria: The proposed improvement is simple and very intuitive. I find that the practical value of the proposed method is hindered by its inherent requirements and costs of having full access to the evaluation set of examples. I think this would make it inappropriate to rate specialized models and applications where successful translations can be hard to obtain or costly (e.g., underrepresented languages, long passages). The scalability is also a potential concern, as with more comprehensive and larger test sets, there will be linearly scaling costs. Theoretical Claims: The paper does not make theoretical claims. Experimental Designs Or Analyses: The experimental breadth is sound. The authors apply their method prompting a powerful Gemini LLM as the autorater. They evaluate their method against relevant recent baselines across 5 different target/source language combinations. On top of convincing results, the authors also provide interesting parameter studies to highlight the sensitivities of their method (e.g., performance as a function of the base LLM autorater/number of in-context samples) and evaluate potential failure modes (e.g., comparing performance against a 'Parrot' baseline that copies the in-context examples' errors).
Supplementary Material: I reviewed the full Appendix, which contains specific details about the employed autorater prompts, more granular benchmark results and ablations, and selected qualitative examples. Relation To Broader Scientific Literature: The work is a direct extension of prior prompt-based LLM autorater systems for machine translation. The main difference with prior work such as [1, 2] is the use of in-context examples that are specific to each individual sample in the test set. [1] Fernandes, Patrick, et al. "The devil is in the errors: Leveraging large language models for fine-grained machine translation evaluation." arXiv preprint arXiv:2308.07286 (2023) [2] Kocmi, Tom, and Christian Federmann. "GEMBA-MQM: Detecting translation quality error spans with GPT-4." arXiv preprint arXiv:2310.13988 (2023). Essential References Not Discussed: I believe the paper discusses most of the relevant prior work. Other Strengths And Weaknesses: The paper is clear and well-written. The novelty of the methodology is relatively limited due to the incremental nature of the work. Other Comments Or Suggestions: No additional comments. Questions For Authors: No additional questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
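The per-example specialization summarized in this review can be sketched as follows. This is a hypothetical reconstruction, not the paper's code: all function and field names (`build_specialist_prompt`, `history`, `k`) are illustrative. The key idea is that historical human ratings are indexed by source segment, and the ratings of other systems' outputs for the *same* source become the ICL block of the evaluation prompt.

```python
from collections import defaultdict

# Hypothetical historical record: (source, candidate translation, human rating).
history = [
    ("Der Hund bellt.", "The dog barks.", 0),
    ("Der Hund bellt.", "The dog is barking loud.", -1),
    ("Es regnet.", "It rains.", -1),
]

def build_specialist_prompt(source, candidate, history, k=2):
    """Assemble an evaluation prompt whose ICL examples are prior rated
    translations of the SAME source segment -- the specialization that
    ties the Autorater to a fixed test set."""
    by_source = defaultdict(list)
    for src, hyp, score in history:
        by_source[src].append((hyp, score))
    lines = [f"Source: {source}"]
    for hyp, score in by_source[source][:k]:
        lines.append(f"Example translation: {hyp}\nRating: {score}")
    lines.append(f"Translation to rate: {candidate}\nRating:")
    return "\n".join(lines)

prompt = build_specialist_prompt("Der Hund bellt.", "The dog barked.", history)
print(prompt)
```

Note that examples for other sources ("Es regnet.") never appear in the prompt; this same-source constraint is exactly what the "Shuffled sources" and "Fixed, different source" ablations mentioned in the review remove.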
Rebuttal 1: Rebuttal: Dear Reviewer, thank you very much for your careful and comprehensive review of our paper. We appreciate that you recognize our sound experimental setup and clear presentation, as well as convincing results and parameter studies. We would like to address your concerns about the practicality of the Specialist method. > I find the practical value of the proposed method is hindered by its inherent requirements and costs of having full access to the evaluation set of examples. It is true that the Specialist method specializes to a fixed test set by design. By removing the constraint that the metric generalize to new test sets, we achieve large gains in the metric's ability to generalize to new systems, and this is often a tradeoff that is worth making. It is common to hillclimb for many years on a fixed test set (e.g., WMT for machine translation, XLSum for Summarization, SQuAD for question answering). Eventually these benchmarks saturate and must be refreshed, but benchmarks for core NLG tasks are often carefully curated to measure certain capabilities of interest and are repeatedly used to allow for fair comparison against previous work (including during the model development process). Thus, the Specialist method provides a paradigm for joint development of Automatic Metrics and Test Sets (which are typically developed independently, but don't need to be, and we get quality gains if they're developed jointly). > I think this would make it inappropriate to rate specialized models and applications where successful translations can be hard to obtain or costly (e.g., underrepresented languages, long passages). We interpret this concern to relate to the cost of collecting outputs from multiple models on the given test set, for the sake of then collecting human annotations to construct Specialist ICL examples, but please correct if this is a misinterpretation. 
If "successful translations" are hard to obtain, this would apply equally to the collection of outputs for the creation of ICL examples as it would at test time. Thus, this begs the question of whether this task should be used for evaluation in the first place, or would be a bottleneck at every evaluation. The above point aside, it is true that for some NLG tasks, there are not that many different models/systems which can perform the given task (and from which to collect ICL examples for the Specialist Autorater). However, as we show in the ICL example scaling experiments in Section 5.3, only 3 ICL examples (i.e., ratings of outputs from 3 other systems) are needed to outperform the state-of-the-art. For many NLG tasks, including those tracked most closely on LLM leaderboards, we do indeed have easy access to outputs from multiple systems. Moreover, as discussed in the "Specialist Algorithm" paragraph in Section 3, while "the different translations for each input example come from different translation systems [in this work], they could in principle also be sampled from a single model (e.g., using a diversity promoting sampling algorithm)." This is a promising direction for future work, especially for tasks which are not well represented in available models. > The scalability is also a potential concern, as with more comprehensive and larger test sets, there will be linearly scaling costs. This work explores how to use human annotations to enhance the quality of automatic metrics and, while the cost of collecting Specialist annotations for a test set does scale linearly with the size, evaluation using automatic metrics is much cheaper than directly depending on human annotations for evaluation, without sacrificing quality (as shown in the comparison against inter-annotator agreement in Section 5.5). Moreover, with larger test sets, the imperative to depend on automatic metrics (rather than human evaluation) increases too. 
One-off collection of human ratings on a large(r) test set to build an automatic metric is much more tractable than repeated human evaluations on this test set. Our hope is that the results in our paper showing the effectiveness of the Specialist method will provide motivation to the broader NLG community to acquire human annotations and build Specialist metrics for other tasks and datasets. We hope our responses and clarifications have allayed your concerns about our method's practicality, and that you will consider increasing our score. Please let us know if you have any further questions or if we can provide any additional clarifications to help finalize your assessment of our paper. --- Rebuttal Comment 1.1: Comment: I am not entirely convinced by authors' arguments, but I understand their points. I think the paper provides value, as detailed in my review. Thus, I will keep my positive rating.
Summary: This paper presents a novel approach to enhancing automatic evaluation metrics based on LLMs, focusing on Machine Translation (MT) evaluation. The authors propose a Specialist method that leverages historical human-generated ratings on a fixed test set to construct ICL demonstrations. This specialization allows an LLM-based Autorater, termed "Specialist AutoMQM," to outperform existing state-of-the-art metrics on fine-grained MT evaluation tasks. Claims And Evidence: The authors' main claim includes a framework for effectively conducting automatic evaluations of NLG tasks on a fixed set that remains largely unchanged. However, it seems this work includes several limitations. The authors broadly claim the generalizability of the Specialist method across various NLG tasks. However, explicit experiments were limited to two closely related MT tasks (AutoMQM and Direct Assessment scoring). The method’s effectiveness on tasks with fundamentally different characteristics (e.g., open-ended generation, dialogue evaluation, summarization quality assessment) remains uncertain. Without explicit validation, these claims are somewhat overstated. Methods And Evaluation Criteria: The proposed Specialist method relies heavily on the availability of high-quality historical ratings from human annotators for each test set. This requirement is a major practical limitation in contexts where obtaining such annotations might be prohibitively expensive, labor-intensive, or challenging (e.g., low-resource languages or less common test sets). In particular, the proposed method lacks any significant technical novelty. The approach of carefully adjusting ICL demonstrations to enhance or analyze the performance of specific tasks has already been extensively studied. 
Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design is sound, involving rigorous ablations (e.g., varying ICL examples, "Shuffled sources," and "Fixed, different source" baselines), ensuring thorough validation of the effectiveness of same-source ICL specialization. However, all experiments rely on WMT test sets, which, despite being standard and widely used, may not adequately reflect real-world translation scenarios or fully capture the variability found in practical applications. The performance improvements reported might therefore not fully generalize outside WMT benchmarks. Supplementary Material: Yes, it's the actual format of the evaluation prompt and qualitative examples. Relation To Broader Scientific Literature: I could not find any broader-impact discussion for this paper. Essential References Not Discussed: I think Other Strengths And Weaknesses: The Specialist method depends on prompting large models like Gemini 1.5 Pro, GPT-4o, or Claude. In practice, large-scale deployment of prompted LLM-based metrics can be computationally costly. Moreover, evaluation latency, cost implications, and practicality for real-time evaluations during rapid model iterations have not been discussed. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Dear Reviewer, thank you very much for your feedback, which has helped us to improve our paper. In response to your primary concern, namely that the Specialist method's "effectiveness on tasks with fundamentally different characteristics (e.g., open-ended generation, [...]) remains uncertain", we have added a new experiment showing the effectiveness of the Specialist method on the HANNA benchmark (https://arxiv.org/pdf/2208.11646), which evaluates story generation. For additional details of our experimental setup and results, please see our response to Reviewer inJi. As shown in the table below (which reports story-level Pearson and Spearman correlations), our Specialist method demonstrated substantial gains on HANNA, outperforming the shuffled baseline across both tasks and all meta-evaluation metrics.

| Metric | REL (excl. human) | | REL (incl. human) | | AVG (excl. human) | | AVG (incl. human) | |
|:-----------|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
| | r_p | r_s | r_p | r_s | r_p | r_s | r_p | r_s |
| Shuffled baseline | 43.71 | 41.51 | 59.58 | 52.88 | 36.74 | 32.35 | 54.32 | 44.40 |
| Specialist | 49.01 | 46.99 | 63.76 | 57.90 | 46.34 | 42.36 | 66.16 | 54.78 |

We also appreciate that, despite your concerns, you recognize the soundness of our experimental design and the rigorous ablations which "ensure thorough validation of the effectiveness of same-source ICL specialization". We would also like to address your concerns about our method's novelty and practicality: - Novelty: Your review states that "adjusting ICL demonstrations to enhance or analyze the performance of specific tasks has already been extensively studied". This is a broad field of study in its own right, and no supporting citations were provided. The "Essential References Not Discussed" section is also empty. If you believe essential references are missing, we would like to include them, but cannot act on this incomplete comment.
Moreover, our contribution is not limited to showing that our proposed method for constructing ICL examples enhances performance of LLM-as-a-Judge metrics. More broadly, our work proposes a novel technique for using human evaluation to improve automatic metrics (rather than using human evaluation as the de facto evaluation method itself) via test set specialization, and includes an extensive study of how rater behavior affects performance of LLM-as-a-Judge metrics prompted with human ratings. Research on how to use human annotations to enhance the quality of automatic metrics is an important research area in its own right. To the best of our knowledge, our Specialist method has never been proposed, and yields enormous gains over the existing SOTA in MT evaluation (54% and 119%, respectively, on WMT'23 and WMT'24 test sets). While we acknowledge the simplicity of the method, its dramatic effectiveness is evident from the results presented in the paper. - Generalization to real-world use cases: We would like to address your concern that the WMT benchmark, despite being "standard and widely used", may not "adequately reflect real-world translation scenarios." The submissions to WMT (submitted by researchers from institutions across both academia and industry) include a diverse collection of systems of varying quality, including both LLM-based and non-LLM-based (e.g., small encoder-decoder) systems. As shown in Figure 2 in our paper (and discussed in Section 5.2), the Specialist metric's outperformance is consistent for translation systems across the quality spectrum, and cannot be explained by gains only for a certain quality tier. These results imply that the Specialist metric has *better* generalization to new systems, when removing the constraint that it must also generalize to new test sets. - Cost: The review states that "evaluation latency, cost implications, and practicality [...] have not been discussed." 
Please note that *these considerations are in fact discussed in Section 3 of the paper, in the paragraph entitled "Specialist Method in Practice"*. The review also states that "large-scale deployment of prompted LLM-based metrics can be computationally costly". Once deployed, our proposed Specialist metric does not have any additional computational overhead relative to standard LLM-based metrics. Moreover, large-scale deployment of LLM-based metrics has been widely adopted and, for many real-world applications, is more practical, efficient, and cost-effective than running human evaluations. In fact, "rapid model iteration" is virtually impossible without relying on automatic metrics. We hope the additions we have made to our paper (especially the new results on the HANNA benchmark for story generation evaluation) allay your concerns about our method's generalizability to other NLG evaluation tasks, and that you will consider increasing our score. Please let us know if you have any further questions or if we can provide any additional clarifications to help finalize your assessment of our paper.
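As a side note on the mechanics of the same-source ICL specialization defended in this rebuttal, the construction can be sketched in a few lines. This is a minimal illustration under an assumed record schema (`source`, `system`, `rater`, `output`, `rating`); it is not the authors' actual implementation.

```python
# Sketch of same-source Specialist ICL construction: for a test item
# (source segment + system under evaluation), collect one rater's historical
# ratings of *other* systems' outputs on the *same* source, holding the
# evaluated system out. Record fields here are illustrative assumptions.

def specialist_icl(records, source, eval_system, rater, max_examples=None):
    examples = [
        (r["output"], r["rating"])
        for r in records
        if r["source"] == source
        and r["rater"] == rater
        and r["system"] != eval_system  # hold-one-system-out
    ]
    return examples[:max_examples] if max_examples is not None else examples
```

The resulting (output, rating) pairs would then be formatted into the evaluation prompt as in-context examples, so the only extrapolation needed is from past system outputs to a new system's output on the same source.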
Summary: This paper introduces an LLM-based automatic evaluation method, called "Specialist". At a high level, Specialist closely imitates a human rater's behavior by making ICL examples from the ratings (1) of the same rater (2) on the same example (3) on different model outputs. In other words, the only place where extrapolation from past data points happens is from annotations on past model outputs to new model outputs. Results show that when measured by character-level F1 of the annotated error spans, Specialist outperforms the state-of-the-art machine translation metric XCOMET by 54% and 119% respectively on WMT 23 and WMT 24 test sets. Claims And Evidence: All claims made in the submission are clearly and convincingly supported by their experimental results. Methods And Evaluation Criteria: The method makes sense to me, but the authors could make it clearer how this is useful in real-world applications. From my perspective, this is only useful when one has existing human evaluation on a dataset and one is using "Specialist" to obtain accurate, explainable evaluations for future systems developed on this test set. It can't be (1) used on an arbitrary new dataset where no prior human evaluation is available (2) used for alternative QE metric use cases such as data filtering. The evaluation criteria make sense, but are constrained to finer granularities such as span-level and segment-level. I have two small doubts, mostly on Section 5.6: - Why did you use an alternative prompt, instead of directly converting the annotated MQM errors into MQM scores? - Current evaluation is only constrained to segment-level. I'd like to see how this stacks up to other WMT metrics on system-level on the acc23 dataset. (Specifically, where it stands in Table 1 in the [WMT 24 metrics shared task paper](https://aclanthology.org/2024.wmt-1.2.pdf).) Theoretical Claims: N/A Experimental Designs Or Analyses: I carefully checked through all the details in Section 4 and haven't spotted any issues.
Supplementary Material: I only glanced through supplemental materials whenever there are tables referred from the main paper. Relation To Broader Scientific Literature: The proposed Specialist method is a nice addition to the existing LLM-based machine translation evaluation methods. While there are both methods that directly predict scores (GEMBA) as well as ones that predict MQM annotations (AutoMQM), the contribution of this paper is orthogonal to existing work and concerns the usage of ICL examples. It can serve as a nice addition to both of these methods in the intended applications. Essential References Not Discussed: Not that I'm aware of. Other Strengths And Weaknesses: In general, this is a nicely written paper with good novelty and broad applicability to machine translation development, with potential to extend to evaluations of other generative tasks. The only reservation I have is that I'm having trouble understanding Section 5.4.1. The logic in the section is hard to follow, and it looks a little duplicative of 5.4.2 in that both mention "if the model is simply copying errors in the ICL examples". Can the authors explain to me what they are trying to achieve in 5.4.1 and what the results mean? Other Comments Or Suggestions: - I think this paper shares some resemblance to the spirit of "test-time training" (https://arxiv.org/pdf/1909.13231). The authors may consider drawing an analogy to justify their usage of existing information in the test examples. - In general, the evaluation is somewhat skewed to error span annotation, while being rather lenient to error classification and severity -- both are actually critical when converting MQM annotations to scores. I wonder why this skew exists. (My comment in "Methods And Evaluation Criteria" is also along that line.) Questions For Authors: The most crucial question is regarding Section 5.4.1 (see "Other Strengths And Weaknesses"). Please make sure you address that one. Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer, thank you very much for your positive and thorough review of our paper. We will address your outstanding concerns below, starting with the question which you stated as most important. > Can the authors explain to me what they are trying to achieve in 5.4.1 and what the results mean? Sections 5.4.1 and 5.4.2 both address the question of whether the Specialist model is simply copying errors in the ICL examples, but from two different angles, to provide further evidence for the claim that the success of the Specialist cannot be attributed to naive copying behavior. In Section 5.4.1, we compare copying behavior of prompted Autoraters using Specialist vs non-Specialist (and fewer Specialist) ICL examples, while in Section 5.4.2, we compare the Specialist prompted Autorater with a non-model-based baseline, which makes predictions by directly copying ICL example errors. While the gap in precision between the Specialist Autorater and the Parrot baseline in Section 5.4.2 shows that the Specialist metric is not copying all of the errors that it could from the ICL examples, this alone does not establish that the Specialist ICL examples are actually *teaching the model* which errors to *abstain from predicting*. This latter claim is substantiated in Section 5.4.1, where we contextualize the Specialist's copying behavior with respect to that of other prompted Autoraters, and show that providing *more* Specialist ICL examples results in a *decrease* in copying behavior with respect to these examples. We hope this has clarified the difference between Sections 5.4.1 and 5.4.2, but if anything remains unclear, please let us know. > Section 5.6: Why did you use an alternative prompt, instead of directly converting the annotated MQM errors into MQM scores? In Table 5, we are actually directly comparing these two settings. That is, to compute Acc23 for the *AutoMQM Specialist*, we are indeed converting the predicted MQM errors into scores. 
As shown in the table, this outperforms the *Score Prediction Specialist*, which directly prompts the LLM to generate a float quality score on a scale from 0-100 (aligning with standard direct assessment prompting techniques traditionally used for LLM-as-a-Judge evaluation). As for the meta-evaluation metrics used, and in particular your concern regarding leniency to error classification and severity, note that the character-level F1 does indeed penalize incorrect error severities. See Section 4.5 of the paper: "Partial credit of 0.5 is given if the predicted rating correctly marks a character as an error but predicts the incorrect severity." To address your concern about leniency towards error classification, we report the "binary error classification F1" meta-evaluation metric below, which computes the F1 of the rating-level binary error vs no-error decision. This complements Table 1 in the paper, and we see that the Specialist metric outperforms all others according to this evaluation too.

|Binary error class. F1|WMT'23|WMT'24|
|:----------------------|:-----:|:-----:|
|XCOMET|83.35|52.36|
|GEMBA|73.43|-|
|Shuffled sources|80.25|49.82|
|Fixed, different source|70.36|49.92|
|Specialist|89.74|65.23|

> [...] this is only useful when one has existing human evaluation on a dataset and one is using "Specialist" to obtain accurate, explainable evaluations for future systems developed on this test set.

It is true that the Specialist method specializes to a fixed test set by design. By removing the constraint that the Autorater generalize to new test sets, we achieve large gains in the metric's ability to generalize to new systems, and this is often a tradeoff that is worth making. Many benchmarks (e.g., WMT for machine translation, XLSum for Summarization, SQuAD for question answering) are carefully curated and repeatedly used to allow for fair comparison against previous work (including during the model development process).
Moreover, the Specialist method provides a paradigm for joint development of *automatic metrics* and *test sets* (which are typically developed independently, but don't need to be, and we get quality gains if they're developed jointly). Thus, when seeking to evaluate on "an arbitrary new dataset where no prior human evaluation is available", the metric development process which accompanies development of this new test set would involve collection of a small set of human annotations of system outputs on this test set. > [...] resemblance to the spirit of "test-time training" (https://arxiv.org/pdf/1909.13231) We agree that Test-Time Training has some similarity in spirit with our work, in that we both seek to specialize a model to a particular test set example using data related to that example. While they use related examples from a simpler auxiliary task with automatic labels (image rotation), we use related examples from the same task with ground-truth labels. Additionally, our method does not involve parameter updates. Thank you!
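For concreteness, the character-level F1 with severity partial credit cited in this rebuttal (Section 4.5 of the paper) might be computed roughly as follows. This is a hedged reconstruction from the quoted description, not the paper's actual scoring code; the `(start, end, severity)` span format is an assumption.

```python
# Sketch of character-level error-span F1 with severity partial credit:
# a character marked as an error by both prediction and reference earns
# credit 1.0 if the severities match, else 0.5, per the quoted Section 4.5.

def char_labels(spans, text_len):
    """Map (start, end, severity) error spans to per-character labels (None = no error)."""
    labels = [None] * text_len
    for start, end, severity in spans:
        for i in range(start, min(end, text_len)):
            labels[i] = severity
    return labels

def char_f1(pred_spans, ref_spans, text_len):
    pred = char_labels(pred_spans, text_len)
    ref = char_labels(ref_spans, text_len)
    # Per-character credit: 1.0 for matching severity, 0.5 for wrong severity.
    credit = sum(
        1.0 if p == r else 0.5
        for p, r in zip(pred, ref)
        if p is not None and r is not None
    )
    n_pred = sum(p is not None for p in pred)
    n_ref = sum(r is not None for r in ref)
    if n_pred == 0 or n_ref == 0:
        return 0.0
    precision, recall = credit / n_pred, credit / n_ref
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

Under this reading, a prediction that finds the right span but the wrong severity is penalized on both precision and recall, which is why the metric is not lenient to severity errors.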
Summary: This paper introduces the "Specialist" method, a novel approach to specialize LLM-based Autoraters to specific test sets by leveraging historical ratings through in-context learning (ICL) examples. The method is applied to fine-grained machine translation (MT) evaluation (AutoMQM), achieving significant performance gains—54% and 119% relative F1 improvements on the WMT'23 and WMT'24 test sets respectively, outperforming the state-of-the-art XCOMET metric. The method is robust across different LLM backbones and evaluation tasks, demonstrates sensitivity to rater variability, and suggests potential for broader NLG evaluation applications, though further experimental validation is required. Claims And Evidence: The claims of superior performance and robustness are convincingly supported by extensive experimentation and thorough comparisons against strong baselines (XCOMET, GEMBA-MQM, MetricX-24). However, the claim regarding the broader applicability to other NLG evaluation tasks lacks empirical validation beyond MT evaluation. Methods And Evaluation Criteria: The proposed method—constructing ICL examples from historical ratings—is highly suitable for the MT evaluation problem, leveraging existing human annotations effectively. The chosen evaluation criteria (character-level F1 for AutoMQM and Acc23 for scoring tasks) are appropriate and align well with the MT evaluation standards used in the literature. However, additional evaluation across a broader range of NLG tasks (e.g., Topical-Chat, FED, HANNA, QAGS) would strengthen the method's claimed generalization. Theoretical Claims: No explicit theoretical proofs or claims are provided or necessary for this empirical study. The focus is empirical validation rather than theoretical contributions. Experimental Designs Or Analyses: The experimental design is thorough and sound within the context of MT evaluation. 
The authors employ strong baselines, rigorous ablations, and analyses to validate the impact of the method, including investigating the effect of ICL example counts, error copying behaviors, robustness across translation systems, and inter-rater variability. Yet, the lack of experimentation on additional NLG tasks limits the overall generalizability claim. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: The paper builds effectively on the existing literature around automatic MT evaluation, specifically extending the LLM-as-a-Judge paradigm. It directly relates to and advances beyond GEMBA-MQM and XCOMET methods by introducing specialization via ICL examples from historical evaluations. However, it lacks discussion of the relation, similarities, and differences to the closely related BatchEval method (BatchEval: Towards Human-like Text Evaluation), which also employs a comparative evaluation framework. Essential References Not Discussed: The essential reference "BatchEval: Towards Human-like Text Evaluation," which presents a related comparative-based evaluation method, should be discussed to contextualize this paper's contributions further. Other Strengths And Weaknesses: Strengths: 1. The paper is well written. 2. Innovative and simple approach with practical significance. 3. Extensive empirical validation and robustness analyses. 4. Clearly articulated results and experimental methodology. Weaknesses: 1. The method's reliance on historical annotations might limit applicability in settings without existing annotation resources. 2. Generalization to broader NLG evaluation tasks remains experimentally unverified. 3. Absence of a detailed comparative discussion with the BatchEval method, which uses similar comparative-based ideas. Other Comments Or Suggestions: None. 
Questions For Authors: Q1: Could you clarify how their method relates to, differs from, or improves upon BatchEval: Towards Human-like Text Evaluation? Q2: Could you validate the method across other widely-used NLG evaluation tasks such as Topical-Chat, FED, HANNA, or QAGS? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer, thank you very much for your thoughtful and detailed review of our paper. We will address your concerns below. >*Q1*: Could you clarify how their method relates to, differs from, or improves upon BatchEval: Towards Human-like Text Evaluation? (same as *Weakness 3*) BatchEval is only related to our work through our use of in-context learning (ICL), as both approaches involve exposing the model to predictions from either itself (BatchEval) or the ground truth (Specialist) on different examples. BatchEval exposes the model to its own predictions on a batch of unrelated other samples. In this work, we expose the model to ground-truth predictions on a batch of related (same-source) samples. The approaches are not in competition with each other: one could provide multiple same-source historical annotations as ICL examples (our work) while asking the evaluator model to evaluate a batch of different outputs from the same source in one or more stages (BatchEval). We will add this BatchEval reference to the Related Work section in the final version of our paper. > *Weakness 1*: The method's reliance on historical annotations might limit applicability in settings without existing annotation resources. It is true that access to (an offline collection of) human annotations is inherent to our work, which proposes a novel technique for using human evaluation to improve automatic metrics (rather than using human evaluation as the de facto evaluation method itself) via test set specialization. This work contributes to research on how to use human annotations to enhance the quality of automatic metrics, which we believe is an important and under-explored field, and our hope is that the results in our paper showing the effectiveness of the Specialist method will provide motivation to the broader NLG community to acquire human annotations and build Specialist metrics for other tasks and datasets. 
> *Q2*: Could you validate the method across other widely-used NLG evaluation tasks such as Topical-Chat, FED, HANNA, or QAGS? (same as *Weakness 1*) Thank you for pointing us to a suitable non-MT NLG evaluation dataset, which has enabled us to improve our paper by validating the generalization of our method. We have added a new experiment showing the effectiveness of the Specialist method on the HANNA benchmark (https://arxiv.org/pdf/2208.11646; one of your recommendations), which evaluates story generation. Some additional details of our experimental setup are below: - We evaluated both with and without human stories included. The HANNA paper excluded human stories, as these were considered outliers, while some follow-up papers included them. - The HANNA dataset includes 6 criteria: relevance, coherence, empathy, surprise, engagement, and complexity. Due to time constraints, we report results on the first criterion (Relevance) and on the average over all criteria (where the averaged score represents an overall indication of the story's quality). - We follow the same experimental methodology as described in Section 4.4 of our paper, and construct Specialist ICL examples per each test example via hold-one-system-out prompting. This simulates the real-world use case of evaluating a new system. As shown in the table below (which reports story-level Pearson and Spearman correlations), our Specialist method demonstrated substantial gains on HANNA, outperforming the shuffled baseline across both tasks and all meta-evaluation metrics. On the Relevance task (REL), the Specialist achieved a Pearson of 49.0 (excl. human stories), compared to 43.7 for the shuffled baseline, which already outperformed the winning metric (BARTScore; Pearson=42.6) reported in the [HANNA paper](https://arxiv.org/pdf/2208.11646). The gap in performance between the Specialist and shuffled baseline is even wider when evaluating on the average score, with Specialist Pearson of 46.3 (66.2 incl. 
human stories), compared to 36.7 (54.3 incl. human stories) for the shuffled baseline.

|Metric|REL (excl. human)||REL (incl. human)||AVG (excl. human)||AVG (incl. human)||
|:-----------|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
||r_p|r_s|r_p|r_s|r_p|r_s|r_p|r_s|
|Shuffled baseline|43.71|41.51|59.58|52.88|36.74|32.35|54.32|44.40|
|Specialist|49.01|46.99|63.76|57.90|46.34|42.36|66.16|54.78|

We believe this new experiment has strengthened our paper substantially by showing that the Specialist method generalizes to a long-form text generation evaluation task (with very different characteristics than machine translation evaluation). We hope the additions we have made to our paper (especially the results on the HANNA benchmark and the comparison with the BatchEval method) allay your concerns and will be taken into account when assigning your final score. Please let us know if you have any further questions or if we can provide any additional clarifications to help finalize your assessment of our paper.
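For reference, the story-level r_p (Pearson) and r_s (Spearman) meta-evaluation metrics reported in this thread can be computed with a few lines of plain Python. This simplified Spearman assigns integer ranks without averaging ties, a simplification of the standard definition.

```python
# Minimal Pearson and (tie-free) Spearman correlation helpers, as used for
# story-level meta-evaluation of metric scores against human judgments.
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def spearman(x, y):
    # Spearman = Pearson over ranks; this sketch ignores tie averaging.
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))
```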
LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding
Accept (poster)
Summary: This paper introduces LongVU, a video-LLM designed for Long Video-Language Understanding. LongVU is built on three main components: (1) DINOv2 Features, used to remove redundant frames that exhibit high similarity, (2) Text-Guided Cross-Modal Query, which enables selective frame feature reduction and (3) Spatial Token Reduction, which allows spatial token reduction via temporal dependencies. Experiments mainly focus on video understanding tasks, testing the model’s Spatial-temporal Orientation Awareness, Video Detailed Description, Action Counting, and Hour-long Video Understanding, demonstrating the superiority of LongVU for long video understanding. Claims And Evidence: Claims supported by clear and convincing evidence: Claim: The proposed LongVU significantly improves the performance of existing visual-language models (VLMs) on video understanding tasks. Evidence: The paper provides extensive experimental results in Table 1 and Table 2, demonstrating performance gains on multiple benchmarks such as EgoSchema, MVBench, MLVU and VideoMME. The comparison with baseline models clearly shows the superiority of LongVU. Methods And Evaluation Criteria: Proposed Methods: The proposed method, LongVU, aims to enhance video understanding by: Injecting temporal information into large vision-language models (VLMs). Minimizing additional computational overhead while improving performance. Generalizing across various video understanding tasks. Video understanding inherently requires capturing both spatial and temporal features, which LongVU addresses. Reducing the cost of adaptation without compromising performance is crucial for large-scale VLMs. Theoretical Claims: N/A Experimental Designs Or Analyses: **Strengths** The paper provides a reasonably well-structured experimental design, with the following positive aspects: **Baseline Comparisons:** The paper compares LongVU with several state-of-the-art large vision-language models (VLMs).
**Multiple Datasets:** The experiments are conducted on well-known datasets like EgoSchema, MVBench, and VideoMME. These datasets are widely accepted benchmarks for video understanding tasks, making the evaluation relevant and convincing. **Evaluation Metrics:** The primary metric used is Accuracy (Acc), which is standard for VideoQA tasks. Although limited, the choice of this metric is reasonable for the problem setting. **Ablation Study:** The paper conducts ablation studies to isolate the contribution of LongVU. This strengthens the validity of their claims. **Weaknesses** **No Clear Analysis of STC’s Token Reduction** The paper claims that STC helps compress video tokens, but it does not provide quantitative details on how many tokens are actually reduced. Table 3 shows that DINO + Query and DINO + Query + STC have very similar performance (only 0.3% - 0.5% difference), raising the question of whether STC meaningfully contributes to efficiency or accuracy. Without token reduction statistics, it is unclear whether STC is significantly improving computational efficiency or if it has a negligible effect. **Limited Task Scope** The paper claims LongVU is generally applicable to various video understanding tasks. However, all experiments focus solely on VideoQA tasks, without evaluating performance on other video understanding tasks like: Video captioning (describing video content). Action recognition (classifying actions from videos). This narrow task scope weakens its claim of generalizability. Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths** The paper presents a novel model LongVU, which effectively reduces video tokens while preserving crucial visual details. This is an innovative approach compared to existing uniform or dense sampling strategies. 
The use of cross-modal query-based compression is a unique way to adaptively reduce spatial and temporal redundancy in videos, an aspect that has not been fully explored in prior works. **Weaknesses** Lack of a Clearly Stated Contribution Section The paper does not explicitly list its main contributions in a separate section or clearly structured paragraph in the introduction. While the methodology is described in detail, the lack of a concise summary of contributions makes it difficult to quickly understand what is novel and significant about this work. Other Comments Or Suggestions: N/A Questions For Authors: Would the compression strategy require re-tuning for different types of videos (e.g., static surveillance footage vs. fast-moving sports videos)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1. The paper claims that STC helps compress video tokens, but it does not provide quantitative details on how many tokens are actually reduced.** Please refer to Figure 4(b), where we illustrate the reduction in the number of tokens after applying STC across different video durations. On average, we observe a 40.4% reduction in tokens during STC. The primary goal of STC is to reduce tokens while preserving visual information, rather than achieving a significant performance improvement. The table below shows the retained token ratio before and after STC, demonstrating that while the number of tokens is significantly reduced, the model’s performance is effectively preserved.

| Methods | Retained Ratio | EgoSchema | VideoMME | MLVU |
| -------- | ------- | ------- | ------- | ------- |
| w/o STC | 23.9% | 67.3 | 60.1 | 65.0 |
| w/ STC | 14.2% | 67.6 | 60.6 | 65.4 |

**2. Limited Task Scope. Other video understanding tasks like: Video captioning.** Our video training data, sourced from VideoChat2, includes extensive captioning annotations, as shown in Table 6. Our model is capable of generating detailed video captions, as demonstrated in the qualitative results presented in Figure 3. Recent long video benchmarks adopt the VideoQA format for easier and more stable evaluation, since automatic caption evaluation is challenging, and GPT-4 based evaluation is not cost friendly. MLVU offers subsets like Video Summarization and Sub-Scene Captioning for captioning tasks. Below, we report MLVU video captioning results, with G-Avg representing the average performance across generation tasks.

| Models | MLVU (G-Avg) ↑ |
| -------- | ------- |
| ShareGPT4Video | 3.7 |
| VideoLLaMA2 | 3.9 |
| VideoChat2 | 3.9 |
| LongVU (Ours) | 4.1 |

**3. Lack of a list of main contributions.** We believe our contributions are notable and have been thoughtfully developed and carefully integrated to achieve strong performance.
We are clarifying the contributions here. + **Vision-centric visual encoders for video understanding.** We identify a key insight: vision-centric encoders trained with feature similarity objectives, e.g., DINOv2, excel at frame reduction in the visual space, while CLIP-based features, optimized for vision-language alignment, are suboptimal for this task. To our knowledge, this has not been explored before for video token reduction. + **Context-aware dynamic token compression.** Our method adaptively compresses the video both spatially and temporally by dynamically adjusting the number of tokens based on the video's inherent visual complexity and redundancy, considering both spatiotemporal nature of video and semantic relevance to the user’s query. + We conducted extensive experiments across several video understanding benchmarks, including EgoSchema, MVBench, VideoMME, and MLVU. Our LongVU method significantly outperforms multiple recent open-source video LLM models and also demonstrates strong effectiveness in smaller models (3B).
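The similarity-based frame reduction described in the first contribution could be sketched as a greedy filter over per-frame DINOv2-style embeddings. The threshold value and the compare-to-last-kept-frame rule are illustrative assumptions, not LongVU's exact procedure.

```python
import numpy as np

# Sketch of similarity-based temporal frame reduction: a frame is kept only
# if its (normalized) feature vector is sufficiently dissimilar from the
# last kept frame. Threshold and greedy rule are illustrative assumptions.

def reduce_frames(features, threshold=0.9):
    """features: (T, D) array of per-frame embeddings. Returns kept frame indices."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    kept = [0]  # always keep the first frame
    for t in range(1, len(feats)):
        # Cosine similarity to the last kept frame; skip near-duplicates.
        if float(feats[t] @ feats[kept[-1]]) < threshold:
            kept.append(t)
    return kept
```

A vision-centric encoder trained with a feature-similarity objective makes the cosine comparison above meaningful in visual space, which is the insight behind preferring DINOv2 features over CLIP features for this step.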
Summary: This paper proposes to reduce long video context by: 1) temporal frame reduction based on DINO feature similarity to extract keyframes (DINO module), 2) a cross-modal query (Query module) to capture important tokens, 3) Spatial Token Compression (STC module) to further reduce tokens for excessively long videos. The technical novelty of the Query and STC modules is limited. The ablation studies demonstrate that the improvements of the STC module are weak and that it sacrifices fine-grained abilities. Given that the improvements of the DINO module and Query module are significant, I lean towards weak accept if the concerns about the STC module and advantages against FastV are resolved. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical claims in this paper. Experimental Designs Or Analyses: Yes Supplementary Material: I reviewed the code in the supplementary material. Relation To Broader Scientific Literature: In terms of video understanding. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strength** - For long video understanding, the authors developed 1) temporal frame reduction based on DINO feature similarity to extract keyframes (DINO feature), 2) a cross-modal query (Query module) to capture important tokens, 3) Spatial Token Compression (STC module) to further reduce tokens for excessively long videos. - The effectiveness of all the above modules is verified by ablation studies. **Weakness** - The core idea of Query and STC lacks substantial novelty. Cross-modal queries have long been known to reduce sequence length [1], and sparsity in MLLMs has already been explored by FastV [2]. This paper differs from FastV in that it compresses tokens before feeding them into the LLM, but the authors did not demonstrate its advantages either. - Improvements of the STC module are not significant. From Table 3, the performance gain on EgoSchema, VideoMME, and MLVU ranges from 0.3%~0.5%.
In MLVU, simply changing the seed, resolution, or number of sampled frames brings much larger variation. - STC harms fine-grained temporal understanding abilities. In Table 4, performance on Needle QA and Plot QA drops significantly when introducing STC. Both subtasks concern fine-grained temporal understanding, especially Needle QA. In other words, the paper sacrifices fine-grained abilities since they account for only a small portion of the benchmark. [1] X-Pool: Cross-Modal Language-Video Attention for Text-Video Retrieval. CVPR 2022. [2] An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models. ECCV 2024. Other Comments Or Suggestions: N/A Questions For Authors: 1) Since dropping / compressing tokens in MLLMs is not a new idea, the authors should demonstrate their advantages against previous methods like FastV. I think efficiency might be a point, as LongVU compresses tokens before feeding the LLM. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1. The core idea of Query and STC lacks substantial novelty. Cross-modal query are known to reduce sequence length long time ago[1], and sparsity in MLLM has already been explored by FastV [2]. This paper differs from FastV in that it compresses tokens before feeding into the LLM, but the authors did not demonstrate its advantages either.** FastV is a token reduction method designed for image understanding in MLLMs, operating after the second layer of MLLMs by reranking visual tokens based on attention scores from remaining tokens. In DyCoke [3], FastV was compared as a baseline for long video understanding tasks. The results show that while FastV effectively compresses tokens to speed up inference, it sacrifices performance in long-video comprehension compared to the base model. As a plug-and-play approach, we applied FastV to our LongVU model to compare its compression with our method.

| Models | TFLOPs | VideoMME |
| -------- | ------- | ------- |
| LongVU w/o reduction + FastV | 34.4 | 55.1 |
| LongVU w/ reduction (default) | 18.5 | 60.6 |

For our case, the limitation of FastV becomes more pronounced. Our approach performs dense sampling and reduces tokens before input to the LLM, ensuring they fit within the LLM’s context length with lower complexity. In contrast, FastV still requires processing all video tokens (100K) in the first two layers, far exceeding the LLM's context size (8K). This discrepancy affects attention reliability for extremely long contexts that extend beyond the model’s training distribution. In addition, its attention-based token selection fails to account for the temporal nature of video. FastV is an effective token compression approach for image understanding, but it is not optimally designed for long video compression.
In contrast, our method effectively compresses video tokens by considering the spatiotemporal dependencies inherent in video data, which not only reduces the number of tokens but also improves performance on long-video understanding tasks. [3] DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models, CVPR 2025 **2. Improvements of the STC module are not significant. STC harms fine-grained temporal understanding abilities.** We emphasize that STC is not primarily aimed at improving performance but at reducing video redundancy to minimize the number of tokens input into the LLM.

| Methods | Retained Ratio | EgoSchema | VideoMME | MLVU |
| -------- | ------- | ------- | ------- | ------- |
| w/o STC | 23.9% | 67.3 | 60.1 | 65.0 |
| w/ STC | 14.2% | 67.6 | 60.6 | 65.4 |

By applying STC, we achieve a 40.4% reduction in tokens, ultimately retaining only 14.2% of the original tokens. This demonstrates that while the number of tokens is significantly reduced, the model's performance is effectively preserved.
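A minimal sketch of the attention-score-based token selection that FastV is described as performing in point 1 above; the function name, scores, and keep ratio are illustrative assumptions, not FastV's actual implementation:

```python
# Rough sketch of FastV-style pruning: rank visual tokens by the attention
# they receive and keep only a top fraction, then restore temporal order.
# All names and values here are illustrative, not the real FastV code.
def prune_tokens(tokens, attn_scores, keep_ratio=0.5):
    k = max(1, int(len(tokens) * keep_ratio))
    ranked = sorted(range(len(tokens)), key=lambda i: attn_scores[i], reverse=True)
    kept = sorted(ranked[:k])  # keep original (temporal) order of survivors
    return [tokens[i] for i in kept]

print(prune_tokens(["t0", "t1", "t2", "t3"], [0.10, 0.40, 0.05, 0.30]))  # ['t1', 't3']
```

Note that such score-based selection ignores where the kept tokens sit in time, which is the temporal limitation the rebuttal points out.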
Summary: This paper introduces a novel spatiotemporal token compression method named LongVU, designed for long video understanding. Specifically, LongVU divides the video token compression process into three stages: Temporal Reduction, Selective Feature Reduction, and Spatial Token Compression. In the Temporal Reduction stage, an additional DINOv2 visual encoder is introduced to eliminate frame-level redundancy based on similarity. The Selective Feature Reduction stage incorporates the user's query to identify the most relevant frames, preserving their resolution while reducing the resolution of less relevant frames. Finally, the Spatial Token Compression stage partitions the video into temporal windows and performs spatial compression at corresponding positions within each window. The experimental evaluation in this paper assesses the performance of the proposed LongVU on four public long video understanding benchmarks, comparing it with other state-of-the-art methods. ## Update after rebuttal: I appreciate the authors' detailed response; most of my concerns are settled. I will keep the original rating of "Accept". Claims And Evidence: The paper claims that existing sampling schemes have drawbacks, namely that uniform sampling may cause some frames to be missed, and dense sampling may lead to the number of tokens reaching the limit prematurely, resulting in the video being truncated early. While this claim makes intuitive sense, it lacks further validation both qualitatively and quantitatively. Methods And Evaluation Criteria: Yes. The evaluation benchmarks utilized in this paper include most common video understanding benchmarks, including EgoSchema, MVBench, MLVU, and Video-MME. Theoretical Claims: This paper does not make theoretical claims. Experimental Designs Or Analyses: I am satisfied with most of the experiments in this paper except for the visual encoder part. 1.
Since this paper introduces an additional DINOv2 encoder, it highlights the importance of the visual encoder. However, the paper lacks a thorough discussion of the visual encoders, including why DINOv2 and SigLIP were selected (beyond the brief description in Sec. 3.1; a more comprehensive analysis, such as quantitative ablation studies on different visual projectors, would be better). 2. Furthermore, the discussion of DINOv2 and SigLIP in Tab. 3 is limited to their role in feature extraction during the temporal reduction stage. However, prior to the Selective Feature Reduction stage, the SVA is still employed for feature fusion (I hope I understand it correctly). Thus, a quantitative analysis of relying solely on DINOv2/SigLIP for feature extraction throughout the pipeline is missing. Supplementary Material: Yes. The supplementary material provides additional information on the training dataset, a comparison between SigLIP and DINOv2, ablation experiments, and the NIAVH benchmark. It also introduces a new positional encoding method to enhance performance. Furthermore, the limitations of the approach are discussed. Relation To Broader Scientific Literature: This paper proposes several novel solutions for feature extraction in long videos, which could be beneficial not only to the field of VLMs but also to the broader, more general domain of video processing. Essential References Not Discussed: Did the authors consider comparing LongVU with some training-free token compression methods [1-4]? [1] VisionZip: Longer is Better but Not Necessary in Vision Language Models (2024) [2] [CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster (2024) [3] DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models (2024) [4] An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Acceleration for VLLM Inference (2024) Other Strengths And Weaknesses: ## Strength 1.
The ideas behind the spatiotemporal adaptive compression are intuitive and useful. 2. In general, the paper is easy to follow. ## Weakness 1. I understand that the tokens filtered out are considered less important than those retained. However, the filtered tokens may still contain some useful information. Have the authors considered providing a discussion on the potential implications of discarding these tokens? 2. LongVU requires retraining the model, which may limit its portability and practical value. Other Comments Or Suggestions: 1. Although the authors conducted ablation experiments on the threshold and sliding window in the supplementary material, it would be more convincing to provide quantitative statistics on the number of tokens and the compression ratio after each stage of compression based on these ablation experiments. 2. The description of Table 3 is somewhat unclear, as the order of experiments within the table does not align with the sequence in the paper. Also, the unclear description of the specific experimental setup makes me somewhat confused. 3. The impact statement is missing. It is recommended that the authors add it. Questions For Authors: I have no further questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
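The temporal reduction stage summarized in this review drops frames whose features are nearly identical to the last kept frame. A rough sketch assuming cosine-similarity thresholding (the threshold and the toy 2-D features are illustrative, not LongVU's actual values):

```python
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

# Keep a frame only if its feature differs enough from the last kept frame.
def reduce_frames(features, sim_threshold=0.95):
    kept = [0]
    for i in range(1, len(features)):
        if cosine(features[i], features[kept[-1]]) < sim_threshold:
            kept.append(i)
    return kept

feats = [[1.0, 0.0], [0.99, 0.01], [0.0, 1.0]]
print(reduce_frames(feats))  # [0, 2]: the near-duplicate middle frame is dropped
```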
Rebuttal 1: Rebuttal: **1. The drawbacks of uniform sampling and dense sampling.** Numerous studies, including LLaVA-OneVision, LongVA, and SlowFast-LLaVA, have explored the trade-offs between these approaches. Below, we present results of the baseline LongVA using either uniform sampling or dense sampling (1 fps, with truncation at the end). LongVA extends the context length to 224K, yet it still cannot fully process long videos at 1 fps, requiring truncation. Performance peaks around 128-frame sampling and then decreases.

| #Frames | VideoMME (Long) | VideoMME (Avg) |
| --- | --- | --- |
| 32 (uniform) | 45.4 | 51.8 |
| 64 (uniform) | 45.0 | 52.4 |
| 128 (uniform) | 46.2 | 52.6 |
| 384 (uniform) | 46.1 | 51.8 |
| 1fps (dense) | 44.9 | 51.5 |

**2. Lacks a thorough discussion on the visual encoders, and analysis on DINOv2/SigLIP solely for feature extraction.** We appreciate the reviewer's concern regarding the discussion of visual encoders. However, we emphasize that our approach primarily focuses on adaptive video token reduction, rather than an exhaustive exploration of visual encoders. In the initial stage, we explored various vision encoders, including SAM, DINOv2, and SigLIP. Due to space constraints, we present the ablation studies conducted during the image pretraining phase, where we compare the performance of SigLIP alone with that of a SigLIP and DINOv2 combination.

| Method | GQA | MMVP | POPE | RealWorldQA |
| --- | --- | --- | --- | --- |
| SigLIP | 61.9 | 31.3 | 85.6 | 59.5 |
| SigLIP + DINOv2 | 62.3 | 51.3 | 86.7 | 61.1 |

The results clearly demonstrate that combining both vision encoders leads to superior performance, resulting in a more robust image understanding model. SigLIP alone ranked as the runner-up. In contrast, DINOv2 and SAM, trained without language supervision, consistently showed an average accuracy more than 5% lower than that of SigLIP. **3.
Comparing with training-free token compression methods.** Thank you for sharing these methods. We report the results in the table below and will incorporate them into our revision.

| Model | MVBench | MLVU | VideoMME |
| -------- | ------- | ------- | ------- |
| FastV | 56.1 | 62.6 | 57.3 |
| VisionZip | 56.9 | 62.5 | 57.8 |
| DyCoke | 58.2 | 63.8 | 60.4 |
| LongVU (Ours) | 66.9 | 65.4 | 60.6 |

**4. Quantitative statistics on the number of tokens and the compression ratio.** We already illustrated the number of tokens before and after reduction for different video durations in Figure 4. Below we provide the average per-stage retention ratio on the VideoMME dataset for each compression step.

| Temporal Frame Reduction | Query-based Token Reduction | STC |
| -------- | ------- | ------- |
| 45.9% | 52.1% (23.9%) | 59.6% (14.2%) |

At the temporal frame reduction stage, 45.9% of tokens remain. Query-based token reduction then retains 52.1% of those (23.9% of the original), and the STC stage retains 59.6% of the remainder, yielding the final retained ratio of 14.2%. This breakdown shows the adaptive nature of our approach and the significant compression achieved throughout the process. **5. The order of experiments within the table does not align with the sequence in the paper.** The last three rows of Table 3 outline the sequence of our method: first, DINOv2 for temporal reduction, followed by query-based selection, and finally, STC reduction. The table also includes ablation studies on context length, tokens per frame, and the impact of using SigLIP or DINOv2 for temporal reduction. We apologize for any confusion caused by the table's condensed format. To enhance clarity, we will reorganize it in the revision: grouping "Uniform" together, renaming "DINO" to "Temporal Reduction (DINO)," and "SigLIP" to "Temporal Reduction (SigLIP)." **6.
The impact statement is missing.** Our work introduces a spatiotemporal adaptive compression mechanism that reduces the number of video tokens while preserving visual details of long videos. This innovation paves the way for future research in video compression tailored for MLLM-based applications, enabling more effective long-video, media, and streaming video understanding. Additionally, we contribute to the open-source community by providing datasets, benchmarks, and models to support the advancement of AI-driven video analysis. We envision a future where MLLMs can directly process compressed video formats aligned with VLLMs that account for spatiotemporal redundancy. We will add an impact statement section in our revision. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal, most of my concerns have been settled. I would keep my original rating as "Accept". --- Reply to Comment 1.1.1: Comment: Dear Reviewer ow6N, Thank you very much for your positive recognition of our work. We sincerely appreciate your thoughtful and constructive feedback. We will incorporate all of your suggestions in our revision. Best, Paper 9623 Authors
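The stage-wise retention ratios reported in point 4 of the rebuttal above compound multiplicatively. As a quick sanity check (a sketch using the rebuttal's own numbers):

```python
# Per-stage fractions of tokens kept, per the rebuttal: temporal frame
# reduction keeps 45.9%, query-based reduction keeps 52.1% of those, and
# STC keeps 59.6% of the remainder.
def cumulative_retention(stage_ratios):
    keep = 1.0
    for r in stage_ratios:
        keep *= r
    return keep

overall = cumulative_retention([0.459, 0.521, 0.596])
print(round(overall, 4))  # 0.1425, matching the reported ~14.2% retained ratio
```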
Summary: The paper proposes LongVU, a method that addresses the challenge of processing long videos within multimodal language models' limited context by implementing three compression approaches: reducing temporal redundancy through inter-frame similarity, leveraging cross-modal dependencies, and eliminating spatial token redundancy. This framework successfully processes hour-long videos within standard 8k context lengths, outperforming competing methods across multiple benchmarks while maintaining high-quality video understanding. Claims And Evidence: The major claims about the method's performance on long videos are supported by results across multiple benchmark datasets. Additionally, the authors have provided detailed ablations on all their components to provide more insight into their method. Methods And Evaluation Criteria: Yes, all major benchmark datasets are covered. However, I am concerned about the inference results reported in the supplementary material. The authors report that LLaMA-VID runs out of memory (OOM) on 20-minute videos, but the LLaMA-VID paper reports performance on hour-long videos. Is there an error here? Theoretical Claims: The mathematical representations are fine. No major theoretical proofs in the paper. Experimental Designs Or Analyses: The experimental design and analysis are all technically sound and thorough. All major datasets concerning long video evaluations are covered and experimental analysis is provided in detail. Supplementary Material: The supplemental material contains code that I did not run myself. Relation To Broader Scientific Literature: This is my major point of concern. The main contribution of the paper revolves around token reduction based on DINO features and adaptive reduction based on the text query. These components have already been proposed in prior work for either image VLMs or other video VLMs. For example, both [1] and [2] use DINO features alongside the CLIP features.
Similarly, the idea of using a text query to select optimal tokens is also covered in previously published work on videos [3, 4] that the authors do not seem to discuss. This reduces the proposed work to more of an engineering solution that combines ideas from previous image- or video-based work. There is no doubt that the results are outstanding, but the core contributions leading to strong performance have already been proposed in a similar form in previous work, or adapted from similar work in image-based models. [1] Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs [2] Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs [3] Text-Conditioned Resampler For Long Form Video Understanding [4] Goldfish: Vision-Language Understanding of Arbitrarily Long Videos Essential References Not Discussed: I believe that there is a missing discussion of two works that also propose a similar text-conditioned token reduction for videos. [1] Text-Conditioned Resampler For Long Form Video Understanding [2] Goldfish: Vision-Language Understanding of Arbitrarily Long Videos Other Strengths And Weaknesses: Strengths: The paper is well-motivated and well-written. The overall method achieves strong performance on all major benchmark datasets. The method is also supported by detailed ablations regarding each component of the method. Weaknesses: My major concern is only regarding the technical novelty of the method. As of right now, it seems that the method is a strong engineering solution to the problem of long video understanding in VLMs, but the major components have all been proposed in previously published work, whether it is the concept of combining DINO and CLIP features [1, 2], the idea of selecting tokens based on text queries [3, 4], or even the concept of reducing temporal resolution before processing the video [5]. Therefore, I believe that the novelty of the method is quite limited, with all components proposed in some form in previous work.
[1] Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs [2] Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs [3] Text-Conditioned Resampler For Long Form Video Understanding [4] Goldfish: Vision-Language Understanding of Arbitrarily Long Videos [5] 3D CNNs with Adaptive Temporal Feature Resolutions Other Comments Or Suggestions: N/A Questions For Authors: Can the authors discuss each component of their method in light of previous work on similar components? What has already been done, and what is the unique contribution that differs from previous work? Is the unique contribution really a major research contribution, or incremental in nature? This is my only major concern right now. Other than that, the results are good and the work is well presented. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **1. LLaMA-VID OOM issue.** Thanks for your concern. We encountered an OOM issue while running on an 80GB A100 GPU for long videos, using the same settings as the other comparison models. To address this, we need to precompute the video features in advance and then load the entire model for inference, as shown in LLaMA-VID ([long-video-inference](https://github.com/dvlab-research/LLaMA-VID?tab=readme-ov-file#long-video-inference)). The inference time for 20-minute videos is 55.3 seconds, which will be updated in Table 11. **2. The main contribution has already been proposed in prior work for either image VLMs or other Video VLMs.** **2.1. For example, both [1] and [2] use DINO features alongside the CLIP features.** While MoF [1] and Cambrian-1 [2] utilize both DINO and CLIP features, neither explores their combination for video training and video token compression, where careful design is crucial for practical training and good performance. Simply applying both vision features to all video frames is computationally demanding, making long-video training impractical. Our work goes beyond merely using both encoders for video understanding: we introduce a novel strategy to reduce computational demand and conduct adaptive video token compression for MLLMs. We identify a key insight: vision-centric encoders trained with feature similarity objectives, e.g., DINOv2, excel at frame reduction in the visual space, while CLIP-based features, optimized for vision-language alignment, are suboptimal for this task. To our knowledge, this has not been explored before for video token reduction. In addition, by leveraging DINOv2 to selectively filter redundant frames before extracting SigLIP features, we significantly reduce computational costs and make long video training feasible. **2.2.
Using a text query to select optimal tokens is also covered in [3, 4], which the authors do not seem to discuss.** The claim that Goldfish uses a text query to select optimal tokens is inaccurate, as its selection mechanism does not operate on visual representations. Instead, it retrieves text descriptions based on query similarity. **As we have already discussed in lines 151-154**, Goldfish chunks long videos into short clips, generates descriptions with a VideoLLM, and encodes them using OpenAI's text-embedding-3-small. It then retrieves the most relevant descriptions based on the similarity between the query's embedding and the clip embeddings for the final answer. Our method, in contrast, selects video tokens directly, making Goldfish's approach fundamentally different from ours. In [3], the Text-Conditioned Resampler (TCR) employs learnable queries to resample a long sequence of video frame features into a fixed-length sequence, which is then fed into a language model. This approach is quite similar to QFormer [5] but is applied in the video domain. However, TCR forces long video compression into a limited and fixed token space, limiting its ability to represent complex visual information and ignoring temporal variations, resulting in redundancy in static scenes and loss of detail in complex ones. In contrast to [3], our method dynamically compresses video tokens based on intrinsic dynamics, adapting to each video's complexity. We will integrate the above discussion in our revision. [5] Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models **3. The unique contribution that is different from previous work?** **(1) Vision-centric visual encoders for video understanding.** To the best of our knowledge, no previous work has explored leveraging a vision-centric visual encoder like DINOv2 for frame reduction. While methods like MovieChat and Chat-UniVi reduce redundancy by calculating frame-level feature similarities, they rely on CLIP features.
We argue that language-supervised methods like CLIP fail to capture the nuanced differences between frames. **(2) Context-aware dynamic token compression.** Many token compression methods, like TCR, VideoChat, and VideoChat2, rely on QFormer-based modules, which limit video representation by transforming it into a fixed number of trainable queries. These approaches overlook the video's dynamics and informational richness, regardless of its complexity. In contrast, our method adaptively compresses the video both spatially and temporally, dynamically adjusting the number of tokens based on the video's inherent visual complexity and redundancy. **(3) Stronger performance with lower cost.** Compared to Chat-UniVi, which extends ToMe-based dynamic token merging to visual features in MLLMs, our method is both more efficient and more effective. As shown in Table 11, it processes over 1K video frames in 32.96 seconds, compared to Chat-UniVi's 49.06 seconds, while achieving a substantial 17.7% accuracy gain on VideoMME (long). --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I believe most of my concerns have been addressed. I will be moving my rating up to Accept. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 7Nad, We sincerely thank you for taking the time to carefully read our rebuttal and for acknowledging our clarifications. We are grateful for your constructive feedback throughout the review process and are pleased to hear that your concerns have been addressed. We will integrate all of your valuable suggestions in our revision. Best, Paper 9623 Authors
The Choice of Normalization Influences Shrinkage in Regularized Regression
Reject
Summary: The paper proposes to study the nature and impact of feature normalization schemes with respect to linear models under the L1, L2, and Elastic-Net penalties. This is done only for regression, and is focused particularly on binary features. The results are primarily theoretical in nature, with a limited number of datasets evaluated. ---- Post Rebuttal Update ---- Having read the other reviews, I do not believe their concerns are well founded. There are stylistic concerns that are noisy factors; I obviously found the paper well written enough, with some suggestions that the authors seem to have taken earnestly - a difference in perception on subjective factors is no reason to reject a work. Similar concerns about ImageNet show a lack of familiarity with the breadth of ML research and what solves problems in the real world. I have used $L_1$ penalized models over decades to make more effective, faster, and cheaper solutions to real-world problems deployed across the globe. Deep learning is not the beginning or the end of AI/ML. This paper takes a refreshingly new perspective to provide theory and insights on an often overlooked matter in feature normalization. We should reward such creativity, which can have a real-world impact and spawn new avenues for research. Claims And Evidence: Though the claims do have evidence, the abstract set my expectations considerably beyond what the article contains. I had anticipated: 1. Logistic models to be considered as well 2. A larger collection of datasets 3. An empirical evaluation of "our recommendations" vs the readily available tools Methods And Evaluation Criteria: The theoretical approach is sound as far as I can tell. Many figures would be clearer, especially the ones with simulated results, by instead centering the plots on the deviation of the estimated effect $\hat{\beta}_j$ from the known true effect $\beta_j$, as it is otherwise difficult to tell at a glance what the results mean and how to interpret them.
Indeed, while well written, every result is presented in a way that presumes the reader is intimately familiar with the larger statistical literature. I think the paper is suffering a bit from "I wrote it and I know it", and could use a friendly pass from a colleague previously unaware of the work. Theoretical Claims: I have not manually checked the proofs. Experimental Designs Or Analyses: I have no issues with the content of what was done, but find that the most obvious experiments seem to be missing. 1. Use each of the listed normalization approaches and the current recommendation in a table for each of L1, L2, and Elastic-Net regression. Show the final total difference in predictive performance achieved using the new theoretically derived insights. If a positive improvement is shown, it also validates the acceptability of the theoretical model, assuming Gaussian errors. 2. Consider more datasets; it is easy to binarize other datasets to match the scope of the model. In this way, a statistical test via the Wilcoxon Signed Rank Test can be performed to show conclusively that the approach is an improvement. See: Should We Really Use Post-Hoc Tests Based on Mean-Ranks? https://jmlr.org/papers/v17/benavoli16a.html Supplementary Material: I have reviewed the empirical results in the appendix, not the theory. Relation To Broader Scientific Literature: Linear models make the real world go round; the impact could be massive. Most literature in this space considers the algorithm independent of the data normalization (or just picks something). Even in other areas like Differential Privacy, it is standard to assume a specific L1 normalization of the data for theoretical reasons, and this work may evidence some understanding of an implicit difference between DP and standard regression beyond the effect of randomized noise.
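The Wilcoxon signed-rank comparison suggested above pairs per-dataset scores from two methods and ranks the absolute differences. A minimal sketch of the statistic, ignoring tie- and zero-handling conventions (real analyses should use scipy.stats.wilcoxon):

```python
# Minimal Wilcoxon signed-rank statistics for paired per-dataset scores.
# Assumes no tied absolute differences; not a replacement for a full
# implementation such as scipy.stats.wilcoxon, which also gives p-values.
def wilcoxon_w(diffs):
    nz = sorted((d for d in diffs if d != 0), key=abs)  # drop zero differences
    w_plus = sum(rank for rank, d in enumerate(nz, start=1) if d > 0)
    w_minus = sum(rank for rank, d in enumerate(nz, start=1) if d < 0)
    return w_plus, w_minus

# Hypothetical AUC differences between two normalizations on four datasets:
print(wilcoxon_w([0.02, -0.01, 0.05, 0.03]))  # (9, 1): positives dominate
```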
Essential References Not Discussed: No critical missing references, but it would be nice to give pointers to the many libraries that have these models to highlight impact (Scikit, DLIB, JSAT, Celer, and so many more). Other Strengths And Weaknesses: I was very excited about this paper initially. I've solved many real-world problems with L1 penalized models, followed by XGBoost, and that gets you 90% of the world. Bridging the empirical gap to show that the results are practical in a straight-up "what is the Accuracy/AUC" sense, as mentioned in the above sections, would massively elevate the quality and impact of the paper. The paper's readability would be greatly improved by including an Algorithm block for each of L1, L2, and Elastic-Net describing the authors' proposed approach to normalization; each is currently buried in the text and hard to separate from the content. Other Comments Or Suggestions: The article would be improved with some more guidance to the reader of "where we are going and why"; each section currently "jumps in" and it is not clear what point is necessarily being made until the end. That isn't to say the paper is poorly written, but it was my excitement that carried me through - it would have been more pleasant if I had a map! Considering or discussing robust standardization methods would also be appreciated, but this is a weakness I could accept if other items were addressed. Questions For Authors: See the above content; if you could provide: 1. Larger experimental evaluations 2. Algorithm blocks for each case I will raise my score to accept and strongly champion the work, though I would still encourage improvements in writing. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your extensive review of our paper. We appreciate the time and effort you have put into providing feedback and hope that our responses will address your concerns. We will start by addressing your comments regarding the experimental design and request for additional experiments. ## Extended Experiments and Comparisons If we understand correctly, you ask for a new experiment where we compare the different normalization methods on an extended suite of real data sets. We have run such an experiment, which now also includes results with logistic loss, and present the results here: <https://imgur.com/a/hRAw7lf>. They show that our method (tuning over the normalization parameter $\delta$) performs best among all the methods evaluated here. We would like to make the following caveats regarding this experiment, however: - Our paper is not primarily focused on predictive performance, but rather on estimation accuracy and feature selection. As we have shown, normalization has a significant impact on prediction as well, but this is not the main focus of our work. - As we state in our paper, the elastic net requires special consideration and would necessitate an altogether different approach. While not impossible, we would be happy to do so, but we will not have time to finish this during the review period. - We are not sure that the Wilcoxon Signed Rank Test is the best choice for this kind of comparison. The symmetry assumption will not be met in our case, and the test would ignore the magnitudes of the differences, which we believe to be important in our case. ## Binarization Experiment We have conducted an experiment on the effect of normalization when binarizing continuous features.
To dichotomize in a meaningful way, we used contextual information from the data set, for example dichotomizing - the crime rate variable by the national average (for 1970-1971), - the NOx variable by the EPA standard (at the 1971 level), and - the presence/absence of large lot zoning. We fit the lasso path to each of the binarized data sets as well as OLS. Then we compared the ranks of the OLS estimates to the order in which the features appear in the lasso path and computed correlation measures. The results are shown here: <https://imgur.com/a/FNugwMx>. As you can see, using $\delta=1$ (scaling with variance) yields the best correspondence between the OLS estimates and the feature selection order. We will include these results in the final version of the paper. ## On Methods and Evaluation Criteria ### Plot Presentation Thank you for suggesting centering plots on deviation from estimated effects. We've considered this carefully but believe the current presentation better serves reader comprehension since: - We show regularized estimates, not true coefficients (which are 1 in the experiment). - Centering would therefore center most effects around -0.5, making shrinkage interpretation less intuitive. - When estimates shrink to 0, a centered plot would show -1, requiring readers to know the true value is 1. We experimented with your suggestion (see: <https://imgur.com/a/bGJ1eNQ>) but find the original approach clearer. ### Accessibility Improvements We've made substantial revisions to make our paper more accessible to readers less familiar with the statistical literature, including clearer terminology, additional context, and improved explanations of key concepts. ## On Broader Scientific Context We appreciate your recognition that our work could significantly impact how practitioners approach regularized modeling. Linear models remain fundamental across many domains, and understanding normalization effects addresses a critical gap in the literature.
## On Essential References not Discussed Following your suggestion, we've added a new paragraph discussing popular libraries that implement these models, highlighting that our findings may have major consequences for a broad group of people. ## Practical Guidelines While our paper doesn't introduce new algorithms per se, we recognize the need for clear recommendations. We've added a structured section presenting actionable guidelines for practitioners on: - How to select appropriate normalization methods for different regularization techniques - When particular normalization approaches may be preferred or avoided - Considerations for data with imbalanced binary features - Practical impact on coefficient interpretation ## Paper Structure Improvements We've enhanced the paper's flow and readability by: - Adding clearer "signposting" at section beginnings to orient readers - Improving transitions between theoretical concepts and practical implications - Providing better context for technical derivations - Including summary points that connect individual findings to our broader argument ## Additional Topics We've incorporated a brief discussion of robust standardization methods in the discussion section as suggested. --- Rebuttal Comment 1.1: Comment: While there isn't a dramatic "algorithm" per se, it would still be useful to have pseudo-code in a LaTeX algorithm environment for "if you want to code this yourself, these are the relevant equations/steps in the correct order". Please include it; it will help with the "Practical guidelines" and I think elevate this work. --- Reply to Comment 1.1.1: Comment: Thank you for your clarification. We recognize that this could be helpful for the reader and have added an algorithm to clearly state how we normalize binary vs. continuous features in the paper.
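Based on the rebuttal's description of a normalization parameter $\delta$, with $\delta=1$ corresponding to scaling by the variance, one plausible sketch of the binary-feature normalization is the following. This is our reading of the rebuttal, not the paper's verified algorithm:

```python
# Hedged sketch of class-balance-dependent normalization for a binary
# feature: center by the class balance q and scale by (q*(1-q))**delta.
# For a Bernoulli feature, delta=0.5 divides by the standard deviation and
# delta=1 by the variance (the setting the rebuttal reports works best).
def normalize_binary(x, delta):
    q = sum(x) / len(x)
    scale = (q * (1 - q)) ** delta
    return [(xi - q) / scale for xi in x]

x = [1, 0, 0, 0]                    # q = 0.25, so q*(1-q) = 0.1875
print(normalize_binary(x, 1.0)[0])  # (1 - 0.25) / 0.1875 = 4.0
```

An imbalanced feature (small $q(1-q)$) is rescaled more aggressively as $\delta$ grows, which illustrates how class balance can interact with the effective penalty on that coefficient.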
Summary: This paper investigates how different normalization strategies affect the shrinkage in regularized regression models such as Lasso, Ridge, and Elastic Net. The authors analyze the impact of normalization on binary and continuous features, noting that class balance directly affects regression coefficients and that different normalization methods may introduce a trade-off between bias and variance. The paper proposes the possibility of replacing feature normalization with a weighted Elastic Net approach. Experimental results are validated using both synthetic and real data.

Claims And Evidence: The claims are overall supported by theoretical or experimental evidence.

Methods And Evaluation Criteria: (1) The evaluation is based on synthetic and small-scale data, which is far from the real scenario. I understand that this paper is only for simple (linear) models. However, the sparsity property is also somewhat important for deep neural networks. I think this paper should tailor the method and analyses for deep neural networks. Otherwise, I cannot find its value in practice. (I cannot judge whether the proposed method/analyses are novel and significant enough for the lasso problem, since I am not familiar with the topic of Lasso.)

(2) The paper analyzes traditional regression models, but current predominant models also include CNNs, Transformers, etc., which use normalization techniques such as Batch Normalization and Layer Normalization. It would be valuable to provide some discussion and experimental results on the applicability of the conclusions to these models and normalization methods.

Theoretical Claims: I do not find remarkable errors in the theoretical claims. My main concern is the assumption of feature orthogonality. The theoretical analysis assumes feature orthogonality, which is often not the case in real data. It would be beneficial to include analysis and experimental results for these scenarios.
Experimental Designs Or Analyses: The experiments provided metrics for various parameters; it would be helpful to include accuracy comparison results. This paper should conduct more experiments on large-scale classification tasks.

Supplementary Material: This paper only provides the code in supplementary materials, and I did not check it.

Relation To Broader Scientific Literature: I cannot judge whether the proposed method/analyses are novel and significant for the lasso problem, since I am not familiar with the topic of Lasso.

Essential References Not Discussed: I cannot judge whether the proposed method/analyses are novel and significant for the lasso problem, since I am not familiar with the topic of Lasso.

Other Strengths And Weaknesses: The paper provides mathematical derivations analyzing the effects of normalization on coefficient estimation, bias, and variance in Lasso, Ridge, and Elastic Net models.

Other Comments Or Suggestions: Formula Derivation Confirmation: In Equation 12, how is $x_j^T \varepsilon$ derived? Should it be $\tilde{x}_j^T \varepsilon$ instead? A detailed derivation process would be appreciated.

Questions For Authors: NA.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your review of our paper. We appreciate the time and effort you have put into providing feedback and hope that our responses to your comments will address your concerns.

> ## Claims And Evidence
>
> (1) The evaluation is based on synthetic and small-scale data, which is far from
> the real scenario. I understand that this paper is only for simple (linear)
> models. However, the sparsity property is also somewhat important for deep
> neural networks. I think this paper should tailor the method and analyses for
> deep neural networks. Otherwise, I cannot find its value in practice. (I cannot
> judge whether the proposed method/analyses are novel and significant enough for
> the lasso problem, since I am not familiar with the topic of Lasso.)

We respectfully disagree that the results need to be tailored to deep neural networks. While we appreciate the suggestion to explore connections to deep learning, our focus on regularized linear models addresses an important and foundational issue in statistical learning that has been overlooked in the literature. As we demonstrate throughout the paper, feature normalization has significant impacts on parameter estimates and model selection even in these simpler settings, which are still widely used in many practical applications including interpretable AI, biostatistics, and economics. Furthermore, understanding these fundamental behaviors in linear models provides essential groundwork for extending similar analyses to more complex models in the future.

> (2) The paper analyzes traditional regression models, but current predominant
> models also include CNNs, Transformers, etc., which use normalization
> techniques such as Batch Normalization and Layer Normalization. It would be
> valuable to provide some discussion and experimental results on the
> applicability of the conclusions to these models and normalization methods.
We agree that it would be interesting to investigate batch and layer normalization for neural networks and the connection to regularization and are happy to add a remark regarding this in the paper's discussion. Tackling this issue directly in the paper, however, would require a completely different analysis and empirical evaluation, which makes us think that this is better suited for future work.

> ## Theoretical Claims
>
> I do not find remarkable errors in theoretical claims. My main concern is the
> assumption of feature orthogonality.

Please see the response to reviewer nR8u regarding the assumption of orthogonality.

> ## Experimental Designs Or Analyses
>
> The experiments provided metrics for various parameters; it would be helpful to
> include accuracy comparison results.
>
> This paper should conduct more experiments on large-scale classification tasks.

Thank you for this suggestion. By "accuracy," we assume you mean predictive performance. We have included results on additional data sets and also included results on classification, which we have added to the supplementary material. See the response to reviewer dUhp for details and plots of the new results. We have also updated the dimensions of the experiments in section D.2.1 to include more features as well as observations, but this makes no difference for our results.

> ## Other Comments or Suggestions
>
> Formula Derivation Confirmation: In Equation 12, how is $x_j^T \varepsilon$
> derived? Should it be $\tilde{x}_j^T \varepsilon$ instead? A detailed
> derivation process would be appreciated.

Thank you for pointing this out. There is indeed a mistake in the equation, although it should not in fact be $\tilde{\boldsymbol{x}}_j$. Instead, the term $c_j \mathbf{1}^\intercal \boldsymbol{\varepsilon}$ should be present in the numerator as well, which we have now corrected. This correction doesn't affect the result since the expectation of this term is 0.
---

Rebuttal Comment 1.1:

Comment: Thanks for the response to my comments. I still have concerns on the assumption of feature orthogonality. Can the authors conduct experiments on real datasets with the assumption of feature orthogonality?

Besides, I still cannot find the value of this paper in practice. I know "**feature normalization** has significant impacts on parameter estimates and model selection even in these simpler settings" and is widely used in deep learning (normalizing the activations, e.g., batch normalization, layer normalization). But my concern is "the evaluation is based on synthetic and small-scale data, which is far from the real scenario". In another way, can the experiments be conducted on ImageNet-1000 classification? Or object detection tasks on COCO? Or maybe, providing an experiment on the "interpretable AI, biostatistics, and economics" as you mentioned in the rebuttal?

---

Reply to Comment 1.1.1:

Comment:

> I still have concerns on the assumption of feature orthogonality. Can the authors conduct experiments on real datasets with the assumption of feature orthogonality?

Please see the response to reviewer nR8u regarding the assumption of feature orthogonality, where we discuss the assumption in detail as well as provide a new experiment that shows that this assumption has, at least empirically, no effect on our results.

We are not certain if you meant to write "with" or "without" regarding the assumption of feature orthogonality. If you meant "without", and are asking us to present results on data sets without the assumption of orthogonality, then please note that we already have several experiments on real, non-orthogonal data and have shown that the effect of normalization is strong even in these cases as well.
Also see the response to reviewer dUhp, where we present new results on additional data sets and show that hyper-parameterizing over normalization has real effects for predictive performance as well as feature selection and estimation. If you are asking for data sets with the assumption of orthogonality, however, then note that our theoretical results cover this scenario exactly.

> But my concern is "the evaluation is based on synthetic and small-scale data, which is far from the real scenario".

We would respectfully like to point out that this is not correct. Our evaluation is not only based on synthetic data. Neither is it far from the real scenario. Our real data examples span a wide range of different tasks and types of data structures, including mixed binary and continuous features, only binary features, and sparse and dense data. They also span several different domains, such as socio-economics, bioinformatics, web-browsing, electricity consumption, credit ratings, and more. We have also, during the review, expanded our results to both classification and regression.

> In another way, can the experiments be conducted on ImageNet-1000 classification? Or object detection tasks on COCO? Or maybe, providing an experiment on the "interpretable AI, biostatistics, and economics" as you mentioned in the rebuttal?

While ImageNet and COCO would be useful data sets to examine in the context of image recognition, they would not be suitable for the models we are considering here: the lasso, ridge, and elastic net. As we pointed out earlier, the results in our paper are robust to increasing dimensions in our experiments. That being said, however, we do recognize that it would be useful to include additional experiments on real data sets. Therefore, we have now run experiments on additional, larger data sets, and show the results in the table below, which shows the combination of $\lambda$ and $\delta$ with the lowest test set error.
As you can see, the best value for $\delta$ differs depending on the data set. The setup of the experiment is the same as in Section 4.1.2 in our paper, but we have used a grid of 11 $\delta$ values instead of 100.

| Dataset | $n$ | $p$ | $\delta$ | $\lambda$ | NMSE |
|---------|-------------|----------|-------|--------|-------|
| YearPredictionMSD | 463,715 | 90 | 0.0 | 0.000251189 | 0.761951 |
| covtype.binary | 581,012 | 54 | 0.3 | 0.0001 | 0.701364 |
| rcv1_train.binary | 20,242 | 47,236 | 0.0 | 0.01 | 0.986731 |
| real-sim | 72,309 | 20,958 | 0.9 | 0.158489 | 0.999964 |

We would be happy to expand this list with additional, and even larger, data sets for the final version but will unfortunately not have time to do this before the review period ends.
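As a sketch of the protocol behind this table — a joint grid over the normalization exponent $\delta$ and the penalty $\lambda$, selecting the pair with the lowest test-set NMSE — the following uses synthetic data and scikit-learn's `Lasso`. The variance-scaling convention, grid sizes, and data are illustrative assumptions, not the authors' experimental code; note that the scaling is fit on the training split only:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.standard_normal((400, 20))
y = 2.0 * X[:, 0] + rng.standard_normal(400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for delta in np.linspace(0.0, 1.0, 11):            # 11 delta values, as above
    mu, var = X_tr.mean(axis=0), X_tr.var(axis=0)
    scale = np.where(var > 0, var, 1.0) ** delta   # normalization fit on train
    Z_tr, Z_te = (X_tr - mu) / scale, (X_te - mu) / scale
    for lam in np.logspace(-4, 0, 9):
        pred = Lasso(alpha=lam, max_iter=50_000).fit(Z_tr, y_tr).predict(Z_te)
        results[(delta, lam)] = np.mean((y_te - pred) ** 2) / np.var(y_te)

best_delta, best_lam = min(results, key=results.get)
print(best_delta, best_lam, results[(best_delta, best_lam)])
```

NMSE below 1 means the model beats predicting the test mean; the table above reports exactly this kind of winning $(\delta, \lambda)$ pair per data set.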
Summary: This paper studies the effects of input normalization for LASSO, Ridge, and ElasticNet regression, focusing on normal and binary features. See below for more detailed discussions on the settings and contributions of this paper.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: Yes. Under strong assumptions, the authors make the following claims:

1. In Theorem 3.1, when the class balance becomes very extreme ($q_j \rightarrow 1^-$; there is a problem in the draft, where the authors write $q_j \rightarrow 1^+$), the coefficients of the estimators depend heavily on the normalization parameters.
2. In Theorem 3.2, when the class balance becomes very extreme, the variance of the coefficients of the estimators also strongly depends on the normalization parameter.
3. In Theorem 3.3, they have analogous results to Theorems 3.1 and 3.2, but for the weights of regularizations.

Experimental Designs Or Analyses: Yes. In Section 4.1.1, the authors explain how class balance affects coefficients in LASSO with synthetic data, showing how the estimation varies with different normalization (no scaling vs. variance scaling). In Section 4.1.2, the authors investigate how normalization affects the predictive performance with three real datasets. In Section 4.1.3, the authors demonstrate how to normalize when we have both binary and continuous features. In Section 4.2, the authors present an alternative way to normalize: weighting the regularization terms instead of normalizing data.

Supplementary Material: No.

Relation To Broader Scientific Literature: N/A.

Essential References Not Discussed: N/A.

Other Strengths And Weaknesses:

## Strength

I like the problem considered here, and I agree that this problem is somewhat under-investigated and worth pursuing.

## Weaknesses

1. This paper is not well-polished. The flow is very monotone, making the readers have a hard time reading this paper. See below for suggestions.
2.
The assumption on the orthogonality of the normalized design matrix $\tilde{\boldsymbol{X}}$ is super strong and rarely holds in practice. This is even more unrealistic for the two cases that this paper considers: binary features and mixed features. As a result, it is very hard to tell if the findings in this paper have an interesting implication in the general cases and in practice.
3. As a consequence, I expect that it is the technical challenge on the theoretical side that makes the paper more interesting. However, this is not the case here.

Other Comments Or Suggestions:

### General comment

Overall, I find the problem being considered interesting. However, I find the presentation of this paper a bit underwhelming, and I feel uncomfortable reading this paper. I believe that the presentation can be greatly improved. However, in this form, I believe this draft is not ready to be published, and I cannot recommend acceptance for this paper. But don't worry, I will consider updating my score after the authors make changes to improve the presentation, clarity, and flow of this paper. Here are some suggestions:

### Minor changes: Potential typos/clarification suggestions

1. In Equation 2, $\tilde{\boldsymbol{X}}$ is used before being defined in Definition 2.1. Consider adding a small clarification right after Equation 2, even if it is repetitive.
2. In the first paragraph of Section 3, there is no definition for the noise $\varepsilon_i$ (it is defined later). Consider adding this clarification right after using it.
3. In the first paragraph of Section 3, maybe explicitly mentioning that $\boldsymbol{x}_j$ is the $j$-th column of $\boldsymbol{X}$ could improve clarity.
4. The same goes for $\tilde{\boldsymbol{x}}_j$.
5.
Please discuss the (strong) assumption on the orthogonality of the normalized design matrix $\tilde{\boldsymbol{X}}$: (1) how it rarely holds in practice, (2) how it is necessary in this framework, (3) whether any prior works also have this assumption (to see that it is not too weird a one).
6. In the first paragraph of Section 3.1, what is $\overline{\boldsymbol{x}}$? I know that it is defined in Table 1, but please refer to it once again to help the readers keep track of the notations. Besides, the wording of this paragraph is not good; consider rewriting this paragraph.
7. What is $\Phi(\cdot)$ in Equation 8? I know this one is the CDF of the normal distribution, but please define it clearly here.
8. In Section 4.1.2, the authors should at least describe what those datasets (a1a, rhee2006, and w1a) are about. I know that the statistics of those datasets are mentioned in Appendix E, Table 2, but at least the authors should refer to that table in the main body.
9. In Theorems 3.1, 3.2, and 3.3, since $q_j \in [0, 1]$, why do the authors write the limit $q_j \rightarrow 1^+$? Do you mean $1^-$ instead?

### Major changes: paragraph/section suggestions

1. It is better if the authors write a whole paragraph/section on the notations used in this paper. The way the authors use notations here is not good: it makes the readers have a hard time finding relevant notions.
2. Consider adding a contributions paragraph at the end of Section 1. List the main contributions clearly and where they are located in the main body. I know that the authors discuss those points in Section 5, but I think it is better if the authors have some punch lines right from the start so that the readers can keep track of the paper more easily. Consider reorganizing Section 5 accordingly, making it more condensed, and putting the punch lines right at the start.
3. Section 3 is very monotone; please reorganize this section.
For example, in Section 3.1, the authors could make each part of Section 3.1 more distinguishable, for example, by writing: (1) "\paragraph{Class balance}", which contains the rigorous definition and its intuition; (2) "\paragraph{The effects on the estimators' scale ...}", which discusses how the class balance directly affects the estimator — please make the three paragraphs after Equation 7 stand out more and elaborate on how the finding is interesting and counterintuitive; (3) and so on.

The above are just some suggestions that I have. I recommend that the authors do multiple passes to improve the presentation of this draft.

Questions For Authors: 1. Is the assumption of the orthogonality of the normalized design matrix too strong and unnatural, especially for the case of binary features and mixed features? Please give me a couple of examples where this assumption holds and a couple of prior works that deal with this assumption. My intuition is that this assumption is extremely weird in the cases that this paper considers.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your detailed review of our paper. We appreciate the time and effort you have put into providing feedback. We will start by addressing your comments regarding the assumption of orthogonality.

## Assumption of Orthogonality

We agree that the assumption is strong and unrealistic. Nevertheless, it is not uncommon in the literature, particularly in the context of research on regularized methods, where theoretical analysis is often difficult. Please see the following references for examples where the authors make similar (and sometimes even stronger) assumptions:

- Efron, B., Hastie, T., Johnstone, I. M., & Tibshirani, R. (2004). Least angle regression. Annals of Statistics, 32(2), 407–499.
- Fan, J., & Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456), 1348–1360.
- Bogdan, M., van den Berg, E., Sabatti, C., Su, W., & Candès, E. J. (2015). SLOPE – adaptive variable selection via convex optimization. The Annals of Applied Statistics, 9(3), 1103–1140.
- Yuan, M., & Lin, Y. (2005). Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 68(1), 49–67.
- Bu, Z., Klusowski, J., Rush, C., & Su, W. (2019). Algorithmic analysis and statistical estimation of SLOPE via approximate message passing. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 32 (pp. 9361–9371). Curran Associates, Inc.

We would also like to argue that it is not unreasonable to begin with a simple setting, especially given that this is the first work to investigate this issue and that, even in this simple setting, the effect of normalization is strong.
More importantly, however, we believe that it is reasonable to assume that the assumption of orthogonality is not in fact restrictive for our results. See, for instance, the experiment presented in figures 4 and 15, where we have introduced correlation between the features and where the effect of class-imbalance is _stronger_ when the features are correlated.

We have also for this review conducted a new experiment to investigate this assumption further, which we present in the figure in the link below. Here, we have studied the case of two binary features with varying levels of correlation and let the class balance of the second feature tend to 1. The results show that the effect of class imbalance is unaffected by correlation.

<https://i.imgur.com/RzijpY6>

We realize, however, that the paper would benefit from a more thorough discussion of this assumption, and we will include this experiment along with a detailed discussion of the assumption in the final version of the paper. We will also add a new theorem that shows that correlation between two features shrinks to zero in the limit if one of the features is binary and its class balance tends to 1 (or 0).

## Impact

> I like the problem considered here, and I agree that this problem is somewhat
> under-investigated and worth pursuing.

Thank you for your appreciation regarding the impact and value of the problem considered in the paper. We agree that it is under-investigated and worth pursuing.

## Notational Remarks and Other Minor Suggestions

Thank you for your detailed comments on the notation and suggestions on clarifications. We have made the changes you suggested and believe that the paper is now clearer and more accessible to a wider audience.

## Writing and Presentation

We are sorry to hear that you found the paper hard to read.
We have now restructured the paper along your recommendations, including adding a section on notation and a summary of main contributions, and believe that the paper now flows better and is easier to read.

## Details on Data Sets

> 8. In section 4.1.2, the author should at least describe what those datasets
> (a1a, rhee2006, and w1a) are about.

We have added a reference to the appendix and detailed descriptions of these data sets in the section in the appendix.
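The limiting behavior claimed in the rebuttal above — correlation involving a binary feature vanishing as its class balance tends to 1 (or 0) — is easy to illustrate numerically. The latent-Gaussian thresholding construction below is our own assumption for generating correlated binary features, not the rebuttal's exact experiment:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, rho = 200_000, 0.8                       # strong latent correlation
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)

x1 = (z[:, 0] > 0).astype(float)            # balanced binary feature (q = 0.5)
corrs = []
for q in [0.5, 0.9, 0.99, 0.999]:
    x2 = (z[:, 1] > norm.ppf(q)).astype(float)  # class balance q for the zeros
    corrs.append(abs(np.corrcoef(x1, x2)[0, 1]))
print(corrs)  # sample correlation shrinks toward 0 as q tends to 1
```

Even though the latent correlation stays fixed at 0.8, the phi coefficient between the two binary features collapses as one of them becomes nearly constant, which is the mechanism behind the claimed theorem.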
Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence
Accept (poster)
Summary: This paper extends the toy model and task setup of [Reddy 2023] to the multi-task case, which requires models to infer tasks from in-context learning to label the last token. Their main findings show that in this setup there are three phases in learning, where each phase is associated with the transition of an attention pattern/circuit in each of the two layers. This contrasts with the single phase for a single task as in [Reddy 2023]. They further investigate these findings and demonstrate:

- The three phases of learning persist across varying numbers of tasks, other than T=1 which results in only the first and last phase as in [Reddy 2023].
- The second phase circuit is unique to the T>1 IC-ML setup, and is associated with models' improved accuracy by integrating label information into predictions. This circuit's accuracy coincides with the accuracy of a model on tasks with random labels, suggesting that the reason model performance does not degrade fully on random labels is due to circuits such as this one.
- The phases are not pronounced with multi-head attention (as opposed to the single head setup), as the phased circuits form in parallel rather than in sequence.

## Update after rebuttal

Post rebuttal, raising my score to weak accept.

Claims And Evidence: Yes, they justify their experimental claims.

Methods And Evaluation Criteria: If the focus is answering questions about their fixed setup, then I think the experiments/metrics/evaluations make sense. However, the largest issue with the paper is that this toy task and model setup, although reworked from prior art with [Reddy 2023], does not necessarily shed much light on in-context meta learning in usual LLMs and transformers.

- The two-layer, attention-only models, and the last-token-only classification loss, require some leap of faith to infer meaning outside the toy setup to usual LLMs trained with next token prediction.
If the goal was to gain insight about LLMs outside this particular setup, there should have been other experiments carried out:

- Experiment training with data of varying context lengths and next token prediction, or at least label prediction at all $x$ tokens. Do the same circuits and phases form?
- Experiments with more layers than two, and in a usual (single-head) architecture. Do the phases exist? What circuits arise?
- Experiment with pre-trained foundation models' performance on such a task, and perhaps ablate some of these circuits from all layers/attention heads.

Theoretical Claims: There was just one calculation, which determined the probability in the "Deeper Look at the Semi-Context Circuit" section. I checked it and do not find any issues.

Experimental Designs Or Analyses: Yes, all checked. The numerical metrics used for the particular attention circuits are self-explanatory and appropriate.

Supplementary Material: All supplementary material reviewed.

Relation To Broader Scientific Literature: This would be most related to their referenced papers like [Reddy 2023] and [Min et al 2022].

Essential References Not Discussed: Not to my knowledge.

Other Strengths And Weaknesses: In addition to the weaknesses outlined in my answer to "Methods And Evaluation Criteria", the paper extends the results of Reddy et al to the multi-task setup, which detracts from the overall novelty and significance. The strength of the paper is that the results they claim for their setup are experimentally justified and well investigated. Other than some under-specified aspects of the setup, the clarity of the paper is good.

Other Comments Or Suggestions: Post rebuttal, raising my score to weak accept.

Questions For Authors:

- What are the 3 Tasks in T? I assumed it is a reassignment of labels and mean vectors for each class, but it could also be just a reassignment of labels; it is unclear.
- Appendix C does not explain the pruning/masking of circuits used.
- The setup is also not very clear for the "Deeper Look at the Semi-Context Circuit" section. E.g., "In Phase 1, since there are two tasks, the model has a 50% chance of predicting correctly by random guessing. In other words, the model's prediction reduces to a binary choice for each input query $(x_q)$." Does this rely on the model having memorized the two possible labels that can apply to $x_q$, or does the input context contain all other classes so that the 2 possible labels for $x_q$ can be decided by the two labels not used in the context? Can you clarify this section?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed review. We added experiments (notably Figures 21, 23, 24, 25, 26 and 27), in the following website: https://sites.google.com/view/in-context-meta-learning. **> W1** > Experiment training with data of varying context lengths and next token prediction We conducted additional experiments to address the concerns regarding generalizability. - Context Length: In **Figure 23**, we tested varied context lengths (N=2 to 64). While longer contexts accelerate convergence to the FCC, the phase transitions remain observable up to N=8. In **Figure 24**, we show the circuit learned for N=16, which structurally align with the FCC reported in the main text. - All-Label Prediction: In **Figure 25**, we experimented with predicting "all" labels in context. Phase transitions were still observed, and the final learned circuit again resembles the main FCC. These results demonstrate that the key phenomena—phase transitions and FCC formation—are robust to variations in context length and prediction objective. **> W2** > Experiments with more layers than two **Figure 26** shows experiments extending from 2-layer models to 3, 4, and 5-layer models. For the 2-layer and 3-layer models, we observe a phase transition in accuracy; however, when increasing to 4 or 5 layers, any phase transition in accuracy becomes less pronounced. Additionally, **Figure 27** visualizes the final circuits in the 3, 4, and 5-layer models. In all cases, the first two layers form a circuit resembling an FCC-like structure. Our results show that even with 3, 4, and 5 layers, the key circuit still forms, though phase transitions in accuracy become less distinct. This demonstrates that our findings extend beyond a mere two-layer “toy” setup, indicating a robust mechanism likely relevant to larger-scale models. 
**> W3** > Experiment with pre-trained foundation models Please see **W7** from Reviewer kgDv, where we show that a pretrained GPT2-XL exhibits chunk-example attention in its early layers and label-attention patterns in its middle-to-late layers on natural language data (see **Figure 21**). These findings align with our two-layer attention-only model and suggest that these circuits generalize to LLMs. **> W4** > the paper extends the results of Reddy et al to the multi-task setup, which detracts from the overall novelty and significance. We believe analyzing how Transformers operate in a more realistic few-shot in-context learning scenario is crucial, as few-shot performance is central to many practical LLM applications. While much of the existing circuit-focused research on in-context learning (e.g., induction heads) remains limited to copy tasks that are not seen in a practical LLM usage, our work explores circuit formation under a broader few-shot setup, offering a novel perspective. Furthermore, unlike the single-head approach in Reddy et al., we investigate multi-head architectures, shedding new light on how circuits differ in this setting. Finally, we also provide additional experiments with standard Transformers (see **W3** from Reviewer kgDv) and pretrained-LLMs (see **W3**), underscoring the broader relevance and novelty of our analysis. **> W5** > What are the 3 Tasks in T? As explained in Section 3.1, each task $\tau$ defines a unique assignment of labels $\ell$ to items $x$. While the underlying classes and mean vectors remain the same, the labels are randomly reassigned from one task to another. Thus, the 3 tasks in $T$ are simply 3 different ways of pairing each class (and its mean vector) with a label, giving rise to three distinct label assignments under the same class/feature space. **> W6** > Appendix C does not explain the pruning/masking of circuits used. 
We believe the pruning/masking procedure is described in Appendix C, where we state: ```all components except for the circuit corresponding to a specific phase were pruned at initialization, thereby isolating the contribution of each circuit.``` In Figure 12, the legends Phase1 Circuit, Phase2 Circuit, and Phase3 Circuit each refer to an experiment where only the circuit from the respective phase was retained from the initial weights, ensuring that only that circuit contributed to the predictions. Due to the limited space in the response, we will provide additional details on the exact pruning process for each circuit in the revised paper. **> W7** > Does this rely on the model having memorized the two possible labels that can apply to xq It is clearly the case that the model memorizes the two possible labels corresponding to xq. As explained in Section 4.1: ```Phase 1 (Non-Context Circuit; NCC): Both layers use bigram attention, ignoring the context and relying solely on the model’s weights. ``` During Phase 1, the model does not utilize any contextual information, indicating that the model’s weights have memorized the two possible labels for xq. --- Rebuttal Comment 1.1: Comment: The additional experiments clear up most of my concerns; my only remaining lack of conviction comes from the utility of exclusively analyzing their generalized Reddy et. al. type setup. The added figure 21, analyzing the pretrained gpt-2xl is particularly interesting. Within that setup, their empirical results with the new additions are sufficiently convincing. I am changing my score to weak accept from weak reject.
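To make the W5 answer concrete, the task construction — shared classes and mean vectors, with each task randomly reassigning class labels — can be sketched as below. The Gaussian-mixture sampling, dimensions, and function names are illustrative assumptions on our part; the exact generative process (e.g., burstiness) is specified in Section 3.1 of the paper and in Reddy (2023):

```python
import numpy as np

rng = np.random.default_rng(0)
K, D, T, N = 8, 16, 3, 4            # classes, item dim, tasks, context pairs
means = rng.standard_normal((K, D))                       # one mean per class, shared by all tasks
tasks = np.stack([rng.permutation(K) for _ in range(T)])  # each task reassigns labels

def sample_sequence(task_id, eps=0.1):
    """Sample N in-context (item, label) pairs plus a query item.

    Items are noisy copies of the class means; only the class -> label
    assignment changes with the task, so the label of the query is
    ambiguous until the task is inferred from the context.
    """
    classes = rng.integers(0, K, size=N + 1)
    items = means[classes] + eps * rng.standard_normal((N + 1, D))
    labels = tasks[task_id][classes]
    return items[:N], labels[:N], items[N], labels[N]

ctx_x, ctx_y, query, target = sample_sequence(task_id=0)
```

Under this construction the same query vector receives different target labels under different `task_id` values, which is exactly what forces the model to use the context rather than its weights alone.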
Summary: The paper proposes a problem setting named In-Context Meta-Learning (ICML) with multiple tasks. For the same query, the answer would differ from task to task, so the model needs to infer the task to make a prediction. Trained in this setting, the paper finds that model training has multiple phases: (i) predict simply based on the query, (ii) predict based on the labels of the examples and the query, (iii) predict based on all. The paper further shows multi-head attention may make this abrupt transition in the loss less apparent, but such abrupt transitions can still happen internally. Claims And Evidence: Claim: The model has multiple phase transitions: (i) predict simply based on the query, (ii) predict based on the labels of the examples and the query, (iii) predict based on all. Evidence: Almost all the experiments in the paper strongly support the claim. Question: I believe Fig. 2 is used to support this claim. But the pattern is not obvious. The red blocks potentially make it even harder to observe. I feel that label attention and bigram give the same visualization. (The case might be "this is just the truth". In that case, the authors could consider drawing the difference between the two visualizations of phase 1 and phase 2, or figure out another setting.) Methods And Evaluation Criteria: Yes. The experiments make sense except for Fig. 2, where the attention visualization does not well support the claim. Theoretical Claims: There is no theory. Experimental Designs Or Analyses: Yes, I read the full paper so I checked the experimental design. I believe the paper should provide better writing to introduce the setting of the experiments. Based on my experience, I believe there is no issue in the experimental design and experimental results. Supplementary Material: I went to Appendix A trying to find information about $K$ since $K$ is not clear to me based on Sec. 3.1 and Appendix A is the closest mentioned appendix.
I went to Appendix B trying to find information on how many attention heads are used in Fig. 2. I went to Appendix G to check the MultiHeads Experiments. Relation To Broader Scientific Literature: The paper provides a deeper understanding of ICL beyond the prior finding of induction heads, and a deeper understanding of the training dynamics of ICL beyond [I cannot provide more detail here since I'm not sure about what is the SOTA understanding of ICL's training dynamics]. Essential References Not Discussed: Missing a very related work: Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning. That paper empirically shows a very similar information flow: the labels in the shallow layers collect information from the (x, y) pairs, and then the deep layers process that summarized information. Other Strengths And Weaknesses: Strength: (1) The experiment on the random-label robustness of SCC is a good connection to existing work, which is a plus to me. This is strong evidence of the difference between phases 2 and 3. (2) The experiment design is good to me, starting from single-head attention and extending to multi-head attention. (3) Overall the experiments make the training dynamics and interpretation very clear. Weakness: (0) The authors could show multiple runs of the experiments with multiple attention heads in Fig. 8. My own training of multi-head models on ICL (with different tasks) encounters diverse transitions: multiple plateaus may or may not occur. So I'm curious whether the authors consistently observe that no transition happens on the accuracy curve. (1) There are some minor issues in the flow of the introduction: (a) P1 col2 line051 "we train a simplified transformer", P2 col1 line101 "we also examine the case of multi-head model". I got confused here since the word "multi-head" suddenly jumped out at me.
I guessed that the conclusions before "multi-head" are based on a single-head model, and then wondered whether the authors mean "single-head" by "simplified transformer"; (b) P2 col1 line101 "The existence .... nature of LLMs": The sentence is a statement, but I think the statement may not be true, since the paper found that in the second phase the Transformer will predict based on the examples' labels (and the query), while we do not know which phase real-world LLMs are in. (c) P2 col2 line075 "phase transition can ... practical scenarios": this sentence says "bridging the gap". What is the gap between toy and practical settings? (2) There is a redundancy in the related work: (a) The first and second sentences aim to introduce the same thing, "ICL". I think the authors forgot to delete one of them. (These duplicated sentences were potentially generated by an LLM; the authors may need to do more proofreading.) (3) There are some writing issues in the experimental setup: (a) The notation $K$ is used in P3 col2 line 136 but not introduced before use. This makes it much harder to understand the paper. What is $K$? Why do the authors set $L<K$? Is $K=64$ the number of classes? Maybe not, since we only have $L=32$ labels? This confusion cost me a lot of reading effort. The authors should clarify this in Sec. 3.1 rather than requiring the reader to remember that the paper defines $K$ in Fig. 1. (b) $\mu$ is used in P3 col2 line130. It is used again in P3 col2 line149 for another meaning. The authors could consider using \bm{\mu} for P3 col2 line130 since that is a vector. (4) The experimental setting is not clear: (a) Sec. 3.2 says there are $m$ heads, with more details in Appendix B. I went to Appendix B but it does not say how many heads are used. I have this question because I want to know whether the results in Fig. 2 come from one attention head or multiple attention heads. But I cannot find it. The question remained in my mind until I read the title of Sec. 5.
(b) P5 Col1 line253, "µ"-th layer. I would rate this 3.75 based purely on the experiments and ideas. The score goes to 3.25 because of the writing issues. (The quality of the ideas and experiments is above the acceptance line, but the quality of the writing is below it.) Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed review. We revised the figures and added experiments (notably Figures 17 and 22) in the following website: https://sites.google.com/view/in-context-meta-learning. **> W1** > The red block potentially make it even more hard to observe We replaced the red squares with red arrows in **Figure 17** and highlighted the updated caption in purple. Strong attention stands for entries that exhibit relatively high intensity in the row-wise softmax-normalized attention map. Note that the last query token often shows strong attention because it’s crucial for prediction. Figure 2 is intended to illustrate the three distinct training phases (based on accuracy and loss) "qualitatively". This intuition from attention patterns motivates deeper analyses in the later sections. A more quantitative and rigorous analysis of attention maps appears in Figure 4 of the main paper. **> W2** > K is not clear to me based on Sec. 3.1. We will include the following table, which outlines the key hyperparameters and their default values, as well as a brief description of each parameter. We believe this format clarifies how the data is generated and represented in our experiments. | **Parameter** | **Description**| **Default Value** | |---|---|---| | $T$ | Number of tasks (types of $\tau$). | 3 | | $K$ | Number of classes (types of $k$). | 64 | | $L$ | Number of labels (types of $\ell$). | 32 | | $N$ | Number of $(x, \ell)$ context pairs. | 4 | | $\epsilon$ | Noise magnitude controlling intra-class variation. | 0.1 | | $p_B$ | Probability that the query item $x_q$ appears identically within the in-context examples. | 0 | | $\alpha$ | Exponent for the power-law (Zipf) rank-frequency distribution over classes. | 0 | | $\beta$ | Exponent for the power-law (Zipf) rank-frequency distribution over tasks. | 0 | **> W3** > Missing a very related work Thank you for noting [1]. 
Their results in pretrained models align with ours, especially regarding how label tokens gather context for deeper layers. Similar chunk-example attentions have also been noted in LLMs [2]. Unlike [1] and [2], we run controlled experiments to observe circuit formation during training. We will clarify this connection and highlight how our dynamic analysis complements their findings. [1] Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning. [2] Revisiting In-context Learning Inference Circuit in Large Language Models. **> W4** > the author constantly observe no transition happens on the accuracy curve. In **Figure 22**, we show four training runs (seeds 0, 1, 2, and 3) of a 2-head attention model. Consistently, across all seeds, the accuracy curve does not exhibit clear, discrete phase transitions, supporting our observation that multi-head configurations tend to smooth out the accuracy curve. **> W5** > I got confused here since the word "multi-head" suddenly jumped to me. > I went to appendix B but it does not tell me how many heads are used. We will clarify that our default setting uses a single-head attention model. We then introduce the multi-head extension later to examine how additional heads affect phase transitions and circuit formation. **> W6** > we do not know which phase real-world LLMs are in We agree that it is difficult to conclusively determine which "phase" might dominate within a real-world LLM. However, our point is that the label-attention circuit seen in Phase 2 of our toy model—one that yields higher accuracy on random labels—may also be present (at least in part) in large-scale models. We do not claim LLMs remain in Phase 2, only that the Phase 2 circuit likely contributes to their observed behavior on random-label prompts. **> W7** > What is the gap between toy and practical settings?
In our toy setup, we observe abrupt phase transitions (e.g., a sudden drop in loss), whereas large-scale LLM training typically does not exhibit such discrete transitions in practice. When we introduce multiple heads, these abrupt transitions become smoothed out, making our toy model's behavior more akin to real-world LLMs—thus "bridging the gap" between the simplified setting and practical scenarios. **> W8** > Why the author sets L<K? Please see **W2** for our detailed revisions to Section 3.1. We set $L<K$ following a prior study [1], which introduces a notion of synonymous classes: multiple tokens can represent the same label. For example, the words "Happy" and "Glad" might both map to the same sentiment label. This design reflects more realistic linguistic variability. [1] Data Distributional Properties Drive Emergent In-Context Learning in Transformers **> W9** > The author could consider use \bm{\mu} for P3 col2 line130 since that's a vector. We will modify the latter usage to indicate it is a vector, ensuring clarity in the notation. --- Rebuttal Comment 1.1: Comment: As in "Thank you for your rebuttal." --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our paper and for acknowledging our rebuttal. If you have any further questions or concerns, we would be happy to address them.
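[Editor's note] To make the hyperparameter table (W2 above) and the $L<K$ synonymous-label design (W8 above) concrete, here is a minimal sketch of how such a dataset could be generated. This is an illustrative reconstruction, not the authors' code: the embedding dimension `D`, the Gaussian class means, and the omission of $p_B$ (query repetition in the context) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Default hyperparameters from the table in W2
T, K, L, N = 3, 64, 32, 4
D = 64            # embedding dimension (assumed; not given in the table)
eps = 0.1         # intra-class noise magnitude
alpha = beta = 0  # Zipf exponents over classes/tasks (0 = uniform)

# Fixed class mean vectors, shared across all tasks
mu = rng.standard_normal((K, D)) / np.sqrt(D)

# Each task is a random assignment of the L labels to the K classes;
# with L < K, several classes necessarily share a label ("synonyms")
task_labels = np.stack([rng.integers(0, L, size=K) for _ in range(T)])

def zipf(n, a):
    """Rank-frequency distribution over n items with exponent a."""
    p = np.arange(1, n + 1, dtype=float) ** -float(a) if a > 0 else np.ones(n)
    return p / p.sum()

def sample_sequence():
    """Sample one ICML sequence: N (x, label) context pairs plus a query."""
    tau = rng.choice(T, p=zipf(T, beta))                   # active task
    classes = rng.choice(K, size=N + 1, p=zipf(K, alpha))  # N pairs + query
    items = mu[classes] + eps * rng.standard_normal((N + 1, D))
    labels = task_labels[tau][classes]
    context = list(zip(items[:N], labels[:N]))
    return context, items[N], labels[N]  # context, query item, target label

context, x_q, y_q = sample_sequence()
```

Raising `alpha` or `beta` above 0 would skew sampling toward low-rank classes or tasks, matching the Zipf exponents listed in the table.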
Summary: The main contributions of this paper are the following. 1. A novel synthetic in-context sequence modelling data set, based on identifying which of a number of classification rules are active and using it to predict the label of the query. 2. Demonstrating through analysis of the loss and attention mechanisms that when a small transformer is trained on this task, it progressively implements three different solutions of increasing accuracy over training. Further experiments contribute additional findings, namely: * One of the three solutions is a novel label-based prediction method; the paper studies this solution in more detail. * The paper experiments with varying some data-distributional properties, cataloguing their effect on the sequence of solutions. * The paper experiments with multi-head attention, showing that different heads specialise to different solutions at the same time in this case, and the loss trends are smoothed out. **Summary of my review:** Overall, I think this is a strong paper that makes a valuable contribution. However, I have recommended 'weak reject' based on some missing discussion of important related work. If the authors can amend this I would be pleased to change my recommendation to 'accept'. **Update:** The rebuttal shows an adequate summary of the relationship to prior work. I am changing my recommendation to 'accept' in the understanding that these comparisons would be included in the final version. See my follow-up comment for some minor further discussion. Claims And Evidence: The empirical claims are supported by clear and convincing evidence for the most part. I didn't quite follow figure 6's discussion of random label accuracy. In particular, RLA is as high in the SCC phase as in the FCC phase. In the latter, the transformer is meant to be using a circuit that depends on the pairs rather than just the labels. It seems to me that the RLA should be higher during SCC than during FCC.
Methods And Evaluation Criteria: The synthetic data setting is well-designed and captures the intended features of a 'meta-learning' style task. Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: I read the appendices. Relation To Broader Scientific Literature: Understanding the emergence of in-context learning abilities in large-scale transformer models is an important objective of the science of deep learning. Many recent works have studied simplified synthetic sequence modelling settings where the emergence of in-context learning can be elicited and understood in detail (in terms of both the emergent mechanistic structure and also the training-dynamical mechanisms governing its emergence). In this context, this paper's main contributions represent a well-designed and novel synthetic in-context classification task that captures a meta-learning challenge, exhibits a third, intermediary 'semi-contextual' solution, and leads to a particularly crisp example of a progression of emergent learning mechanisms. This setting would be a good basis for future work studying the emergence of these transitions in detail. Essential References Not Discussed: **Synthetic in-context learning tasks that capture meta-learning.** The paper acknowledges and discusses prior work on what they call 'copy tasks' and the emergence of induction heads. They adequately compare their setting and findings to this part of the literature on synthetic examples of the emergence of in-context learning. However, the 'copy task' setting is only a small part of a much broader literature that has developed over the last couple of years, which is not adequately discussed in the paper.
There are by now dozens of different synthetic in-context learning settings, many of which do not have the same limitations as a simple 'copy task,' and instead impose the more general requirement of identifying a 'task vector,' as discussed by the authors in the "implications for LLMs" paragraph (starting line 424(right)). A non-exhaustive list of other settings includes: * In-context linear regression, which involves identifying a latent task vector, rather than just copying previous labels. The paper cites some work in this space but does not discuss it as an example of moving beyond a 'copy task.' * In-context Markovian sequence modelling [1]. * In-context modular arithmetic [2]. * Many other similar settings. In-context linear regression and Markovian sequence modelling involve 'implicitly' identifying a task vector. However, within these settings there are also variants more similar to the paper's setting where the transformer must select from a finite set of tasks, namely [3, 4]. To be clear, the specific multi-task classification setting considered in this paper is novel to my knowledge. However, since the paper frames this setting as a core contribution and motivates it as an attempt to capture the challenge of meta-learning, in my opinion these related settings clearly represent related work that should definitely be cited and ideally should be discussed in greater detail. **Multi-phase emergence.** The other core empirical finding of the paper is that solutions arise in a sequence of multiple phase transitions. While the analysis of this setting is original (to my knowledge), a qualitatively similar finding has already been published in the settings of Markovian sequence modelling [1], in-context linear regression [5], language modelling [5], and plausibly other settings. I believe this is highly related work that should be cited and discussed in the paper.
--- * [1] https://arxiv.org/abs/2402.11004 * [2] https://arxiv.org/abs/2406.02550 * [3] https://arxiv.org/abs/2306.15063 * [4] https://arxiv.org/abs/2412.01003 * [5] https://arxiv.org/abs/2402.02364 Other Strengths And Weaknesses: **Strengths.** In my opinion, this is a strong paper overall. The results are very clear. The presentation is very clear. I believe the paper makes a valuable contribution to the literature, as discussed. **Weaknesses.** The main weakness is incomplete discussion of prior work, as outlined in the previous section. To my knowledge, the setting proposed in the paper and the specific results within this setting are novel, but they serve as an alternative demonstration of the same kinds of findings that I am already aware of from other work in different settings. I still think the paper is valuable and well-done, but this prior work needs to be acknowledged and discussed. Other Comments Or Suggestions: Small typos I noticed: * Line 335: "nunber" * Line 825: "Birstiness" Other notes: * The title of section 5, "multi-head enhances circuit discovery", was confusing while I read it. "Circuit discovery" to me suggests the mechanistic interpretability problem of finding circuits in a learned model. Importantly, it's mechanistic interpretability practitioners that are doing the discovering. However, this section appears to talk about the discovery of the circuits *by the model, during learning.* That is, multi-head attention makes it easier for different circuits to emerge. I invite the authors to consider a different title for this section to avoid possible confusion. * On page 1, the paper uses a country-to-capital task to motivate moving past copy tasks and induction heads. While I generally agree with the framing that meta-learning is an important element of ICL, I didn't find this example compelling. Arguably, in a large-scale example, an induction head could be an important part of a mechanistic explanation of an LLM's ability to perform such a task.
If by some intermediate layer the tokens `France`, `Spain`, and `Japan` have a representation involving that they are countries, and `Paris` and `Madrid` have a representation involving that they are capitals of the preceding country, then in this dimension the task becomes a copy task. The original induction heads paper already identifies and discusses similar 'abstract' induction heads in their small-scale language modelling experiments (see the section 'Argument 4' in Olsson et al.). Questions For Authors: My overall assessment of 'weak reject' is based on the incomplete discussion of prior work. Do the authors agree that this work is related? If so, I invite the authors to clarify the distinction from this prior work in their revision, in which case I would be pleased to increase my recommendation to 'accept.' Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewers' insights and address each point below. **> W1** > It seems to me that the RLA should be higher during SCC than during FCC. Random Label Accuracy (RLA) can increase whenever label information is incorporated into the final token's prediction. Even if the model uses a chunked example circuit (FCC), it can still leverage label information for that final prediction, which can maintain RLA. We do not claim that RLA must necessarily drop in the FCC phase or that RLA is definitively higher in SCC than in FCC. **> W2** > The main weakness is incomplete discussion of prior work We acknowledge that our study is related to multiple lines of research on in-context learning and multi-phase emergence. Below, we outline the connections and highlight where our approach differs. --- 1. ICL Literature Beyond Copy Tasks - **Linear Regression Approaches** Prior studies on in-context learning often adopt linear regression [1,2], which provides a tractable theoretical framework. Although they set a common task across context examples, they typically rely on MSE loss, which differs from real-world in-context use. Our setup more closely aligns with practical LLM applications. - **Markov Chain–Based Tasks** Other works [3,4,5] focus on in-context learning with Markov chain tasks. While the meta-learning aspect is similar to our setting, these tasks differ from the few-shot pairs format that is standard in LLM applications. - **Modular Arithmetic** Research on in-context modular arithmetic [6] studies out-of-distribution behavior and final attention maps. Although it extends beyond simple copy tasks, it does not track the circuit acquisition dynamics during training or directly link them to phenomena like random-label accuracy or multi-head smoothing—key aspects of our work. --- 2.
Multi-Phase Emergence Literature - **Phase Transitions** Prior work [4] shows transformers acquiring functionalities in discrete “phase transitions,” which aligns with our observations. However, the task there (Markov chains) is different from our few-shot setup, which is more akin to real-world LLM usage. Also, while [4] studies a progression from `uniform prediction → unigram → bigram`, we analyze more complex `Bigram → Semi-Context → Full-Context` circuits. - **Developmental Interpretability** Studies like [7,8] also note multi-phase transitions in in-context learning. However, they mainly focus on the loss landscape’s geometry (e.g., local learning coefficient, or LLC) rather than the mechanistic circuits we analyze. Bridging LLC-based approaches with an internal-circuit perspective could be an exciting future direction. We will include those discussions and citations in the revision. [1] Transformers Learn In-Context by Gradient Descent. [2] Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression. [3] Selective induction Heads: How Transformers Select Causal Structures in Context. [4] The Evolution of Statistical Induction Heads: In-Context Learning Markov Chains. [5] Competition Dynamics Shape Algorithmic Phases of In-Context Learning. [6] Learning to grok: Emergence of in-context learning and skill composition in modular arithmetic tasks. [7] The Developmental Landscape of In-Context Learning. [8] Loss Landscape Degeneracy Drives Stagewise Development in Transformers. **> W3** > I invite the authors to consider a different title for this section to avoid possible confusion. We will rename that section to “Multi-Head Attention Facilitates the Emergence of Distinct Circuits” to avoid conflating our focus on circuit formation dynamics with mechanistic-interpretability efforts on circuit discovery. 
**> W4** > While I generally agree with the framing that meta-learning is an important element of ICL, I didn't find this example compelling. We agree that induction heads can perform pattern matching at a more abstract level, as described in the original induction heads paper (Argument 4 in Olsson et al.). However, consider a scenario where the token “Tokyo” does not appear in the context: if the model only performs a (possibly abstract) copy operation, it cannot generate “Tokyo” from the context itself. In other words, merely copying from the examples—whether literally or via a higher-level “abstract match”—is insufficient for a task like “Find the capital of Japan” unless “Tokyo” is explicitly present in the context. Thus, our central motivation remains unchanged: while induction heads can extract answers that are already present in the context, they do not fully explain the ability of LLMs to infer a shared task (like a country-to-capital mapping) and then apply it to a query whose answer is not directly provided in the context. Numerous studies (including research on task vectors) suggest that LLMs can learn and apply such tasks in-context beyond the scope of mere copy-based or induction-head operations. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttals. I changed my recommendation to 'accept' in the understanding that the proposed revisions (esp. comparison to related work) would be included in the final version. I briefly read the other reviews and rebuttals, and I am satisfied with most of the responses. I did not check the linked website with additional figures in any detail (I am not sure if sharing this extra information is allowed). **W4** One minor follow-up point regarding abstract induction circuits. Perhaps my example was unclear. I still believe an abstract induction circuit could implement this task even though "Tokyo" does not appear anywhere in the context. 
After an induction circuit in the intermediate layers of a language model literally copies the abstract continuation "capital of preceding country", subsequent layers could easily instantiate this as "Tokyo". I believe a similar circuit could explain many instances of meta-learning with no apparent copying. Anyway, I have no evidence for or against this proposal, and your claim in the introduction (that a *mere* induction circuit (without pre-/post-processing) cannot explain the ability) is technically true, so I am happy to leave it at that. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our paper, and we truly appreciate your updated score. Regarding the external link: as clarified in the [ICML 2025 Peer Review FAQ](https://icml.cc/Conferences/2025/PeerReviewFAQ), > While links are allowed, reviewers are not required to follow them, and links may only be used for figures (including tables), proofs, or code (no additional text). All links must be anonymous to preserve double-blind review, both in the URL and the destination. Our linked site contains only figures and follows the anonymity guidelines, so it complies with the ICML policy. **> Re: W4** Thank you for the additional clarification. We now understand your point better—if the abstract pattern “capital of the preceding country” is represented in the intermediate layers and copied forward, it is plausible that an induction head could be part of such a mechanism. It’s a very interesting direction, and we appreciate you raising it.
Summary: The paper investigates how transformers acquire in-context learning (ICL) abilities by extending a simple copy task into an In-Context Meta Learning (ICML) setting that requires task inference rather than simply copying from the context. The authors train a two‐layer, attention-only transformer on a synthetic task where the model must infer the underlying task from (input, label) pairs and predict the answer for a query. A key contribution is the identification of three distinct learning phases—named the Non-Context Circuit (NCC), Semi-Context Circuit (SCC), and Full-Context Circuit (FCC)—each characterized by unique attention patterns (measured by metrics such as Bigram, Label Attention, and Chunk Example). The work further extends the analysis to multi-head settings and controlled pruning experiments, linking the observed circuit transitions to abrupt improvements in accuracy. Overall, the paper suggests that these emergent circuits underpin the meta-learning abilities observed in transformer-based language models. Claims And Evidence: **Claims** - In-context meta-learning emerges via multi-phase transitions in the transformer’s internal circuitry. - Each phase corresponds to a different circuit (NCC, SCC,FCC) which can be quantitatively tracked using specific attention-based metrics. - These circuit transitions explain phenomena such as the model’s robustness to random label shuffling and the smoother improvement observed in multi-head settings. **Evidence** - Several experiments are performed on a simple transformer architecture which indeed exhibits the aforementioned transitions (fig 2, fig 4). - The simple theoretical derivations (4.3), which yield predictions in line with empirical curves (fig. 5), support their interpretation of the circuits. 
Some aspects (for example, the extension to practical large-scale LLMs) might benefit from further empirical validation beyond the synthetic setup, as the considered architecture is a drastic simplification compared to modern LLMs. Methods And Evaluation Criteria: The authors use mainly 4 metrics for their experiments: 1) accuracy and differential accuracy (as defined in 4.1), 2) bigram, 3) label attention, 4) chunk example. The first is standard, and its differential version makes sense to better illustrate phase transitions in the dynamics; the others are specifically tailored to highlight the emergence of the identified circuits. Theoretical Claims: The paper includes a derivation of a theoretical accuracy formula for the SCC based on the probability that certain labels appear in the context (proof in Appendix D). Fig. 5 empirically validates the theoretical result. Experimental Designs Or Analyses: The experiments are extensive and aimed at showing that the identified circuits do emerge during training. I have a couple of questions: 1. Looking at the attention maps in Figure 2, it is not simple to identify the circuits. According to which criteria did the authors draw the red squares in the attention maps? For example, it is not clear that in Phase 2 such squares actually highlight the entries of the matrix with high intensity. Also, the figures are hard to read and, in my opinion, should be made clearer. 2. Is the accuracy calculated on test data? 3. The considered architecture is peculiar. Why did the authors choose to use 2 consecutive attention blocks followed by an MLP layer, rather than the classical transformer architecture interleaving attention and MLP blocks? 4. Figure 4 shows the evolution of the proposed metrics. These metrics follow to some extent the interpretation given by the authors. However, some metrics present some overlap which is not discussed in depth.
For example, Chunk Example seems to be pretty high even in phases other than the last one. Similarly, Bigram (layer 1) remains quite high in the last phase. Do the authors have an explanation for that? 5. While the number of tasks is ablated in Figure 7, it would be interesting to see what happens when T grows larger, a setting more in line with real-world settings. Supplementary Material: Section B (Model details) and D (Derivation of Th. Accuracy). Relation To Broader Scientific Literature: The paper builds upon prior work in in-context learning, notably studies on induction heads and mechanistic interpretability in transformers (e.g., Olsson et al., 2022; Reddy, 2023). Essential References Not Discussed: To the best of the reviewer's knowledge, the related literature is adequately covered. Other Strengths And Weaknesses: **Strengths** - Overall, the paper presents an interesting empirical analysis to investigate the phenomenon of in-context learning in transformer-based architectures. - The learning dynamics revealed by the experiments are peculiar and novel, to the best of the reviewer's knowledge. **Weaknesses** - Some experimental results should be more carefully discussed (see Experimental Designs Or Analyses). - Some parts of the paper could be written more clearly; for example, Section 3.1 would benefit from a clearer explanation of the way the data is represented. - It would be very interesting to see if the circuits found in the paper manifest themselves even in more standard architectures and in larger models. Other Comments Or Suggestions: Post rebuttal, raising my score to weak accept. Questions For Authors: See Strengths And Weaknesses and Experimental Designs Or Analyses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed review. We revised the figures and added experiments (notably Figures 17, 18, 19, 20 and 21) on the following website: https://sites.google.com/view/in-context-meta-learning. **> W1** > attention maps in Figure 2, it is not simple to identify the circuits. Please see **Figure 17**, and refer to W1 from Reviewer bYju for more details. **> W2** > Is the accuracy calculated on test data? We focus on training dynamics and internal circuits in a practical task rather than OOD behavior as in [1]. Hence, we do not strictly separate train and test sets, although Section 4.3 examines OOD behavior via random labels. For completeness, **Figure 18** presents test accuracy, confirming that phase transitions also manifest there. [1] “The mechanistic basis of data dependence and abrupt learning in an in-context classification task” **> W3** > The considered architecture is peculiar. Using a standard two-layer Transformer that interleaves attention and MLP blocks, we still see phase transitions in accuracy and observe similar circuits (SCC, FCC) in **Figure 19**. However, the transitions are less distinct. We adopted a two-layer, attention-only Transformer to focus on self-attention, an architecture widely used in prior work analysing in-context learning [1,2,3]. These minimal models clearly simulate in-context behaviors such as induction heads [4] and allow for clearer phase-transition and circuit-level analysis. [2] “What needs to go right for an induction head? A mechanistic study of in-context learning circuits and their formation” [3] “Differential learning kinetics govern the transition from memorization to generalization during in-context learning” [4] “In-context Learning and Induction Heads” **> W4** > some metrics present some overlap which are not discussed in depth. We address why Chunk Example and Bigram remain high outside their respective phases: --- 1. 
Chunk Example. Using the chunk-example metric (Table 2), the initial value is $\frac{1}{4}(\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+\frac{1}{8})\approx 0.26$, which we treat as the baseline. - In **Phase 1**, both Layers 1 and 2 stay near this baseline. - In **Phase 2**, the metric increases in Layer 2 but does not affect the final output because the loss is only computed on the last token. - In **Phase 3**, the metric in Layer 1 exceeds the baseline and influences the final output, showing that the chunk-example circuit is forming and impacting the last token’s prediction. --- 2. Bigram. - In **Phase 2**, due to row-wise softmax normalization, strong attention to label tokens in Layer 1 diminishes Bigram attention. - In **Phase 3**, in contrast, Layer 1’s chunk-example pattern does not compete with Bigram, allowing the Bigram metric to recover relative to its Phase 2 level. We will add this explanation in the revised version. **> W5** > what happens when T grows larger We tested $T=3,6,9,12,15,18$ (see **Figure 20**) and found that phase transitions still occur for larger $T$. Increasing $T$ reduces Phase 1 accuracy (around $1/T$) and delays the transition to the next phase. **> W6** > section 3.1 would benefit from a clearer explanation Please see **W2** from Reviewer bYju. **> W7** > more standard architectures and in larger models. For standard Transformer results, please see **W3**. We tested whether our identified circuits appear in LLMs by evaluating a pretrained GPT2-XL (48 layers) on [SST2](https://huggingface.co/datasets/stanfordnlp/sst2) (872 samples). We used a 2-shot prompt: two labeled examples (`Review: {text}\nSentiment: {label}`) and a third, unlabeled query. Since the model is already trained, we observe each layer’s behavior on these prompts. We measure 3 metrics by averaging attention $p(i,j)$ across all heads in each layer, where $p(i,j)$ denotes the attention from the $j$-th token to the $i$-th token: 1. 
**Bigram** $p(\text{query}, \text{query})$ Here, query refers to the final token, and this metric measures a query token’s attention to itself. 2. **Label Attention** $\frac{1}{K}\sum_{k=1}^{K} p(\text{query}, \text{label}_k)$ Where $K$ is the number of shots, and $\text{label}_k$ is the label token in the $k$-th example. This captures how strongly the query token attends to label tokens. 3. **Chunk Example** $\frac{1}{K}\sum_{k=1}^{K} \frac{\sum_i p(\mathrm{label}_k, \mathrm{text}_k^i)}{n_k}$ Here, $n_k$ is the number of text (non-label) tokens in the $k$-th example, and $\mathrm{text}_k^i$ represents the $i$-th such token. This metric reflects how strongly each label token attends to the text tokens in its corresponding example. **Figure 21** shows that the Chunk Example metric is higher in early layers, while Label Attention dominates later layers—mirroring our two-layer model’s progression from chunk example to label focus. This pattern aligns with our earlier findings in small Transformers, suggesting these circuits generalize to LLMs. --- Rebuttal Comment 1.1: Comment: Thank you for your revisions. I am generally satisfied by the authors' rebuttal. I will raise my score to weak accept.
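The three attention metrics defined in the rebuttal above can be computed directly from a head-averaged attention matrix. The sketch below is an illustrative reconstruction, not the authors' code; the index convention (row j = attending token, column i = attended token) and the token-position bookkeeping are assumptions made for the example.

```python
import numpy as np

def attention_metrics(attn, query_idx, label_idx, text_idx):
    """Compute the Bigram, Label Attention, and Chunk Example metrics.

    attn: (T, T) attention matrix averaged over heads;
          attn[j, i] = attention that token j pays to token i (assumed convention).
    query_idx: position of the final (query) token.
    label_idx: list of K label-token positions, one per shot.
    text_idx:  list of K lists of text-token positions, one list per shot.
    """
    # Bigram: the query token's attention to itself.
    bigram = attn[query_idx, query_idx]
    # Label Attention: mean attention from the query to each label token.
    label_att = np.mean([attn[query_idx, l] for l in label_idx])
    # Chunk Example: for each shot, the label token's mean attention over
    # the text tokens of its own example, then averaged over shots.
    chunk = np.mean([
        np.mean([attn[l, t] for t in text])
        for l, text in zip(label_idx, text_idx)
    ])
    return bigram, label_att, chunk

# Tiny example: 6 tokens = [text, text, label, text, label, query] (2 shots).
rng = np.random.default_rng(0)
A = rng.random((6, 6))
A = A / A.sum(axis=1, keepdims=True)   # make each row sum to 1, like softmax
b, la, ce = attention_metrics(A, query_idx=5,
                              label_idx=[2, 4],
                              text_idx=[[0, 1], [3]])
```

On GPT2-XL one would average `p(i,j)` over all heads in a layer before applying these functions, per the rebuttal's description.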
Test-time Adapted Reinforcement Learning with Action Entropy Regularization
Accept (poster)
Summary: This submission proposes Test-time Adapted Reinforcement Learning (TARL) to address the transfer gap between offline learning and online testing. TARL has two main components: minimizing the entropy of action probabilities by filtering actions with high entropy and efficiently updating only the layer normalization parameters, along with a KL divergence constraint between the fine-tuned policy and the original policy. The authors empirically demonstrate the effectiveness of the proposed TARL. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and the corresponding evaluation criteria are appropriate for addressing the problem. Theoretical Claims: This submission does not provide theoretical claims. Experimental Designs Or Analyses: The experimental design and analysis exhibit soundness and validity. Supplementary Material: This submission does not provide supplementary material. Relation To Broader Scientific Literature: This work primarily focuses on the field of offline reinforcement learning. Essential References Not Discussed: This submission includes sufficient related references. Other Strengths And Weaknesses: **Strengths** The main issue discussed in this paper, the transfer gap between offline learning and online testing, is crucial in the field of reinforcement learning. The proposed TARL method is articulated clearly, offering a well-structured approach. **Weaknesses** The improvements reported on most Atari and D4RL tasks are relatively marginal. This raises questions about the practical significance and robustness of the proposed method under different conditions and task complexities. Other Comments Or Suggestions: No further comments and suggestions. Please see the following questions. Questions For Authors: 1. The motivation for only fine-tuning the layer normalization parameters is not entirely clear. 
Why was this particular choice made? Could you provide more insight into the rationale behind this decision? 2. In the comparison results of Table 1 and Table 2, are the baselines also fine-tuned with test data? 3. This work addresses the issue of the transfer gap between offline learning and online environments. There are several offline-to-online algorithms [1][2] that seem very related to this work. Shouldn't these algorithms be compared within the study? 4. While CQL is a classic offline RL algorithm, it would be valuable to apply the proposed method to more powerful algorithms, such as DT [3] and IQL [4]. 5. I am curious about the sensitivity of the proposed TARL to the hyperparameters. The detailed hyperparameter settings for different tasks should be included for clarity. [1] Xie T, Jiang N, Wang H, et al. Policy finetuning: Bridging sample-efficient offline and online reinforcement learning[J]. Advances in Neural Information Processing Systems, 2021, 34: 27395-27407. [2] Lee S, Seo Y, Lee K, et al. Offline-to-online reinforcement learning via balanced replay and pessimistic q-ensemble[C]//Conference on Robot Learning. PMLR, 2022: 1702-1712. [3] Chen L, Lu K, Rajeswaran A, et al. Decision transformer: Reinforcement learning via sequence modeling[J]. arXiv preprint arXiv:2106.01345, 2021. [4] Kostrikov I, Nair A, Levine S. Offline reinforcement learning with implicit q-learning[J]. arXiv preprint arXiv:2110.06169, 2021. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: >Q1. The improvements reported on most tasks are relatively marginal. This raises questions about the practical significance and robustness under different conditions and task complexities. A1. Thanks for your valuable feedback. The marginal improvements should be interpreted through four aspects. 1. Our performance reflects a trade-off between efficiency and performance. TARL updates only LN parameters during testing, requiring no gradient updates to the full network. 2. Unlike online RL, TARL operates purely on test samples without reward signals, making it well-suited for safety-critical applications where exploration is prohibited. TARL preserves the stability and safety of conservative offline RL. 3. TARL shows improvements across discrete (Atari) and continuous (D4RL) control tasks, which indicates its robustness. 4. We conducted more extensive studies across multiple datasets to validate effectiveness. As Table D shows, TARL achieves significant improvements. Table D. Comparison of TARL against the baseline. |Task|CQL|CQL(TARL)| |-|-|-| |Walker2d-fully-replay-v2|83.38|**97.99**| |Hopper-expert-v2|99.72|**113.34**| |Antmaze-medium-play-v0|6|**7**| |Antmaze-medium-diverse-v0|2|**5**| >Q2. The motivation for only fine-tuning the layer normalization parameters is not entirely clear. Why was this particular choice made? A2. The test-time phase requires efficient model parameter updates to ensure the model can adapt to the new environment promptly. Thus, TARL only updates the parameters of the layer normalization layers of the model, ensuring computational efficiency without full-model gradient updates. This allows for rapid adaptation while maintaining model stability. >Q3. In Tables 1 and 2, are the baselines also fine-tuned with test data? A3. The baselines are not fine-tuned with test data. >Q4. This work addresses the issue of the transfer gap between offline learning and online environments. 
There are several offline-to-online algorithms [1][2] that seem very related to this work. Shouldn't these algorithms be compared within the study? A4. Comparing TARL directly with online RL methods is not appropriate, as TARL is an offline RL method. TARL and offline-to-online RL operate at different levels. When offline RL methods run in online environments, they suffer from OOD issues. The core motivation of TTA is that environmental dynamics and distribution shifts cause performance drops in offline-trained models. TARL aims to maintain stability amidst data distribution changes. TARL fine-tunes the policy during the test-time phase through entropy minimization, without needing environment feedback. It effectively adapts to changes in the test data distribution and ensures efficient updates. In contrast, offline-to-online RL algorithms [1][2] depend on environment feedback for policy updates. However, in some environments, acquiring rewards can be time-consuming or even infeasible, in which case offline-to-online RL cannot update its policies. We will include and discuss the relevant literature in the revised manuscript. [1] Policy finetuning: Bridging sample-efficient offline and online reinforcement learning. NeurIPS, 2021. [2] Offline-to-online reinforcement learning via balanced replay and pessimistic q-ensemble. PMLR, 2022. >Q5. While CQL is a classic offline RL algorithm, it would be valuable to apply the proposed method to more powerful algorithms, such as DT [3] and IQL [4]. A5. Following your suggestion, we present a comparative analysis of IQL [4] performance in Table E. Even on the expert dataset, we still achieve an improvement. Table E. Effectiveness of TARL applied to IQL. |Task|IQL|IQL(TARL)| |-|-|-| |Walker2d-medium-v2|79.92|**82.43**| |Walker2d-expert-v2|110.31|**110.49**| However, applying TARL to DT [3] would require a higher workload due to the inherent differences in the Transformer architecture. 
Given the time constraints of the rebuttal period, we are unable to complete the experiments. We will explore this direction in future work. [3] Decision transformer: Reinforcement learning via sequence modeling. NeurIPS 2021. [4] Offline reinforcement learning with implicit q-learning. ICLR 2022. >Q6. Curious about the sensitivity of the proposed TARL to the hyperparameters. The detailed hyperparameter settings for different tasks should be included for clarity. A6. We use two distinct sets of hyperparameters for discrete control and continuous control tasks, respectively. All environments of the same type of task share the same hyperparameters. This further highlights the strong generalization capability of TARL. We have shown the hyperparameter settings in Section 4.4 and will make them more detailed and clear. For the important hyperparameters, the entropy threshold $E_0$ and the KL divergence weight $\lambda$, we have conducted ablation studies in Section 4.7. TARL is sensitive to $E_0$ because a small $E_0$ is crucial for selecting high-confidence samples. --- We sincerely hope our clarifications above have addressed your questions. --- Rebuttal Comment 1.1: Comment: Thank you for your thoughtful response. I still have some follow-up questions: 1. Regarding the experimental settings for Tables 1 & 2: Did you first run the original baseline on the offline dataset and then apply the proposed TARL method to the baseline for conducting test-time adaptation? 2. Regarding the fine-tuning of the LN layer: I noticed that it is more efficient to update only the LN layer during test-time. What I would truly like the authors to discuss is the rationale behind fine-tuning the LN layer specifically, as opposed to other parameters in the model. Additionally, for the continuous tasks, could you elaborate on how the LN layer was integrated into the network? --- Reply to Comment 1.1.1: Comment: >Q1. 
Regarding the experimental settings for Tables 1 & 2: Did you first run the original baseline on the offline dataset and then apply the proposed TARL method to the baseline for conducting test-time adaptation? A1. Thanks for your valuable feedback. To directly address your question: Yes, we first run the original baseline on the offline dataset and then apply our TARL method to the baseline for conducting test-time adaptation. We begin with a pre-trained offline RL policy that has undergone conventional offline training on the dataset. This policy serves as the initialization for deployment. During the testing phase, we apply TARL's unsupervised optimization to *enhance the pre-trained policy* without modifying its original training process. This design ensures TARL functions as a universal test-time enhancement approach, improving pre-trained policies' robustness to distribution shifts while preserving their original training integrity. The performance gains in Tables 1-2 stem solely from this test-time adaptation, not from retraining baselines. >Q2. Regarding the fine-tuning of the LN layer: I noticed that it is more efficient to update only the LN layer during test-time. What I would truly like the authors to discuss is the rationale behind fine-tuning the LN layer specifically, as opposed to other parameters in the model. A2. The rationale behind fine-tuning the LN layer specifically is as follows: **1. LN's Unique Role in Distribution Calibration** LayerNorm layers (h' = γ ⊙ (h - μ)/σ + β) directly interact with input data statistics (mean, variance) and are inherently sensitive to distribution shifts. Tuning LN layers allows the learnable affine transformations (γ, β) to absorb data distribution shifts. This makes LN parameters the optimal adaptation knobs for distribution mismatches. **2. 
Parameter Isolation in Normalization Layers** The parameters of LN layers are inherently localized and independent, unlike the tightly coupled parameters of other layers (e.g., convolutional or linear layers). **Other Layers (Strong Coupling):** Parameters in convolutional or fully connected layers are globally interdependent. Adjusting even a single layer’s weights can cascade through the network, disrupting hierarchical feature representations. **LN Layers (Weak Coupling):** LN’s scaling (γ) and shifting (β) parameters act as local calibrators for feature distributions. They normalize activations per sample and do not depend on batch statistics or cross-layer interactions. This design ensures: • **Localized Impact:** Adjusting γ/β affects only the current layer’s output scale and offset, preserving the pre-trained model’s core feature extraction. • **Statistical Independence:** LN’s per-sample normalization avoids batch-size dependencies, making it robust to dynamic test environments (e.g., single-sample batches or mixed distribution shifts). >Q3. For the continuous tasks, could you elaborate on how the LN layer was integrated into the network? A3. In our method, LN layers are integrated into the hidden layers of the actor and critic network backbones for both discrete and continuous tasks. For continuous control tasks, the final action outputs (e.g., mean and variance for Gaussian policies) are generated by a separate linear output layer. Notably, this final linear output layer remains devoid of LN integration. --- We sincerely hope our clarifications above have addressed your questions.
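As a concrete illustration of the LN-only update discussed in this thread, the NumPy sketch below freezes a backbone/head weight matrix and takes a finite-difference gradient step on the LayerNorm affine parameters γ and β alone, using an entropy objective. This is a minimal toy reconstruction under assumed shapes and hyperparameters, not the authors' implementation (which would presumably use autograd rather than finite differences).

```python
import numpy as np

def layer_norm(h, gamma, beta, eps=1e-5):
    # Per-sample normalization: h' = gamma * (h - mu) / sigma + beta.
    mu = h.mean(axis=-1, keepdims=True)
    sigma = h.std(axis=-1, keepdims=True)
    return gamma * (h - mu) / (sigma + eps) + beta

def entropy_loss(logits):
    # Mean action entropy of the softmax policy (the quantity minimized at test time).
    z = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1).mean()

def ln_only_step(h, W, gamma, beta, lr=0.01, eps_fd=1e-4):
    """One test-time step: central-difference gradients on gamma/beta only;
    the backbone/head weights W stay frozen."""
    def loss(g, b):
        return entropy_loss(layer_norm(h, g, b) @ W)
    for params in (gamma, beta):
        grad = np.zeros_like(params)
        for i in range(params.size):
            d = np.zeros_like(params)
            d[i] = eps_fd
            if params is gamma:
                grad[i] = (loss(gamma + d, beta) - loss(gamma - d, beta)) / (2 * eps_fd)
            else:
                grad[i] = (loss(gamma, beta + d) - loss(gamma, beta - d)) / (2 * eps_fd)
        params -= lr * grad          # in-place update of gamma or beta only
    return loss(gamma, beta)

rng = np.random.default_rng(0)
h = rng.standard_normal((8, 4))      # batch of hidden states (assumed shapes)
W = rng.standard_normal((4, 3))      # frozen head mapping to 3 discrete actions
gamma, beta = np.ones(4), np.zeros(4)
before = entropy_loss(layer_norm(h, gamma, beta) @ W)
after = ln_only_step(h, W, gamma, beta)
```

Only `gamma` and `beta` change between `before` and `after`; `W` is never touched, mirroring the "localized impact" argument above.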
Summary: This paper presents Test-time Adapted Reinforcement Learning (TARL) for the distribution shift issue of offline RL. TARL creates unsupervised test-time objectives for different control tasks. Moreover, it uses a KL divergence constraint to avoid bias. TARL only updates layer normalization parameters during testing for efficiency. Experiments on Atari and D4RL datasets show its superiority over baselines. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: TARL enables offline RL to establish an interface with the environment during the testing stage. It uses test data to fine-tune the model without the need for environmental rewards, simply by using the inherent information within the test data itself. The proposed methods and evaluation criteria make sense for the problem, but I still have some questions: 1. In Line 70, the authors claim that they propose a novel offline reinforcement learning paradigm. However, the method belongs to a test-time adaptation method. Is there a conflict between offline reinforcement learning and test-time adaptation? I suggest further clarification on this. 2. Continuous control tasks assume that the action obeys a normal distribution. However, the actions in real applications may not be strictly normal. Can the TARL method based on the normal distribution assumption still work? Theoretical Claims: N/A Experimental Designs Or Analyses: TARL shows significant advantages in experiments across discrete and continuous control tasks. The experimental results support the claim that TARL can be applied to various existing offline RL methods, effectively enhancing their performance in online environments. The experimental designs and analyses presented in the paper are sound and robust, but I still have some questions: 1. This paper mentions that there is a distribution bias between offline training and online testing in the abstract. 
In the experiments, where is this bias specifically reflected? 2. The details of the Atari and D4RL benchmarks are not clear. For example, the environment and the agent setting are unknown. It would be better to add more descriptions of the two benchmarks. Supplementary Material: The paper does not provide separate supplementary material. Relation To Broader Scientific Literature: TARL uses test-time data to fine-tune the model without environmental rewards. It helps the policy adapt to real-world test environments. TARL can be applied to existing offline reinforcement learning methods. By integrating TARL, these methods can overcome the limitations imposed by static offline datasets. It helps them better handle the uncertain environment. This indicates a broad application prospect for the proposed method. Essential References Not Discussed: The authors clearly discussed the key methodologies in the field. Other Strengths And Weaknesses: **Strengths** 1. During test time, TARL updates only a few parameters of the LN layers. Compared to updating the entire network, adjusting only the LN layer parameters requires far fewer computations. This speeds up adaptation and enhances overall efficiency. 2. TARL uses the KL divergence constraint as a debiasing term. This effectively restricts the update of model parameters, reducing the bias within the policy network and preventing overfitting to the test-time data. **Weakness** The performance of TARL depends on selecting appropriate entropy thresholds ($E_0$) and KL divergence weight ($\lambda$). Other Comments Or Suggestions: In Line 359, the period is missing. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: >Q1. In Line 70, the authors claim that they propose a novel offline reinforcement learning paradigm. However, the method belongs to the category of test-time adaptation methods. Is there a conflict between offline reinforcement learning and test-time adaptation? I suggest further clarification on this. A1. Thank you for your valuable feedback. Offline reinforcement learning and test-time adaptation are not conflicting concepts. TARL is a framework designed for test-time adaptation of offline reinforcement learning methods. Offline RL learns from pre-collected data without interacting with the real-time environment. TARL fine-tunes the offline-trained policy model using test data to better fit real-world conditions. It is important to note that TARL requires no environment interaction or feedback during adaptation. As a result, offline RL algorithms optimized by TARL with test-time adaptation retain their offline characteristic, maintaining their efficiency and safety while improving real-world performance. >Q2. Continuous control tasks assume that the action obeys a normal distribution. However, the actions in real applications may not be strictly normal. Can the TARL method based on the normal distribution assumption still work? A2. In practice, TARL remains effective even when the action distribution is not strictly normal. The core principle of TARL is to reduce policy uncertainty during action selection, thereby improving stability and adaptability. This process does not depend on any specific state-action distribution. For non-normal distributions, entropy minimization still encourages the policy to favor more confident actions, leading to better performance. >Q3. This paper mentions that there is a distribution bias between offline training and online testing in the abstract. In the experiments, where is this bias specifically reflected? A3. 
In our experiments, the distributional shift primarily manifests as a mismatch between the state-action distributions of the offline training datasets and the evaluation environments. This OOD issue stems from the sampling-incompleteness problem: while the state-action space is theoretically infinite, offline datasets can only capture a finite subset through sampling. Consequently, the policy inevitably encounters unobserved state-action combinations during real-world deployment, highlighting the critical need for test-time adaptation mechanisms like TARL. >Q4. The details of the Atari and D4RL benchmarks are not clear. For example, the environment and the agent setting are unknown. It would be better to add more descriptions about the two benchmarks. A4. We provide additional details regarding the benchmarks used in our experiments. **Atari Benchmark:** For **discrete control tasks**, we evaluate policy performance on five representative Atari environments: Qbert, Seaquest, Asterix, Pong, and Breakout. These games provide diverse challenges through distinct mechanics, such as spatial reasoning in Qbert and reactive control in Pong. **D4RL Benchmark:** For **continuous control tasks**, we employ three standardized D4RL dataset configurations: Medium Policy (suboptimal policy-generated data), Medium Replay Buffer (mixed-quality trajectories), and Medium-Expert (hybrid expert-novice demonstrations). We evaluate TARL on three locomotion tasks: bipedal locomotion (HalfCheetah), monopedal jumping stability (Hopper), and dynamic balance maintenance (Walker2D). **Consistent Experimental Settings** Our implementation strictly follows the original offline RL methodologies for fair comparison. For example, CQL(TARL) maintains identical network architectures and training hyperparameters to vanilla CQL. >Q5. The performance of TARL depends on selecting appropriate entropy thresholds ($E_0$) and KL divergence weight ($\lambda$). A5. 
The selection of appropriate entropy thresholds $E_0$ and KL divergence weights $\lambda$ is critical to performance. This underscores the necessity of the data-filtering mechanism and debiasing regularization introduced in our method. A suitable $E_0$ ensures that only high-confidence samples are retained, thereby preventing the introduction of noise from out-of-distribution data and preserving the stability and reliability of policy updates. Similarly, the selection of an optimal KL divergence weight $\lambda$ plays a key role in facilitating a smooth transition of the offline policy to the online environment. --- We sincerely hope our clarifications above have addressed your questions. --- Rebuttal Comment 1.1: Comment: After carefully reviewing the authors' thorough rebuttal, I am convinced that they have effectively addressed my key questions and criticisms raised during the review process. Additionally, I also examined the other reviewers' comments. The expanded results rigorously validate the robustness and practical advantages of TARL across diverse benchmarks. Based on their comprehensive explanations and enhanced experimental validation, I have decided to raise my score.
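The test-time objective discussed in this thread (entropy minimization on samples that pass the entropy threshold $E_0$, plus a KL debiasing term against the frozen original policy weighted by $\lambda$) can be written out as a short sketch for the discrete-action case. This is an illustrative reconstruction based on the paper's description, not the authors' code; function names and shapes are assumptions.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    # Per-sample action entropy of a policy distribution.
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def tarl_objective(adapted_logits, original_logits, e0, lam):
    """Illustrative TARL-style test-time loss for discrete actions:
    entropy is minimized only on confident (low-entropy) samples,
    with a KL(adapted || original) penalty as the debiasing term."""
    p = softmax(adapted_logits)
    q = softmax(original_logits)
    ent = entropy(p)
    mask = ent < e0                      # keep only low-entropy samples
    if not mask.any():
        return 0.0                       # nothing confident enough to adapt on
    kl = (p * np.log((p + 1e-12) / (q + 1e-12))).sum(axis=-1)
    return float((ent[mask] + lam * kl[mask]).mean())

# Example: 4 states, 3 actions; the adapted policy starts at the original.
rng = np.random.default_rng(1)
logits = rng.standard_normal((4, 3))
loss_same = tarl_objective(logits, logits, e0=np.log(3), lam=0.1)
```

When the adapted policy equals the original, the KL term vanishes and the loss reduces to the mean entropy of the retained samples; shrinking `e0` retains fewer samples, matching the ablation behavior described in the rebuttal.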
Summary: This paper introduces Test-Time Adapted Reinforcement Learning (TARL), a method designed to help offline RL policies adapt to distribution shift during deployment by leveraging test data—without needing additional reward signals. The core idea involves (1) learning objectives that minimize policy entropy for newly encountered states to reduce uncertainty (discrete tasks) or output variance (continuous tasks), (2) filtering out high-uncertainty (out-of-distribution) states so only lower-uncertainty states are used for adaptation, and (3) a KL divergence regularization term that keeps the adapted policy close to the original offline policy to prevent overfitting or degenerate solutions. Empirical results on Atari (discrete) and D4RL (continuous) benchmarks show that incorporating TARL on top of standard offline RL algorithms (e.g., CQL, QR-DQN, REM) improves final performance on out-of-distribution evaluation settings. The paper argues that TARL can be integrated with multiple offline RL methods by only updating the parameters in layer normalization layers, thus making test-time adaptation both computationally cheap and stable. Claims And Evidence: The core hypothesis underlying the TARL approach appears to be that selective training on low-entropy states facilitates information transfer to other similar states in a way that cannot be achieved through simpler entropy minimization strategies. However, this fundamental hypothesis is neither explicitly articulated nor well tested in the paper. The proposed state-selective entropy minimization approach lacks critical comparisons to two obvious alternatives during runtime: 1. Simply decreasing the temperature globally in a Boltzmann policy 2. 
Using argmax action selection on states with low entropy Without these comparisons, it's impossible to determine whether the reported improvements stem from the claimed state-selective mechanism or could be achieved through much simpler approaches that also reduce policy entropy. The paper presents performance improvements without convincingly demonstrating that these improvements are causally linked to the proposed state-selective training mechanism rather than to a simpler entropy reduction effect that could be achieved with less complex methods. Methods And Evaluation Criteria: While the paper's method of state-selective entropy minimization is coherent, the evaluation is fundamentally incomplete because it lacks the necessary experiments to validate the core hypothesis. To properly evaluate whether selective training on low-entropy states provides unique benefits, the following comparisons are essential: 1. Direct performance comparison with global temperature reduction in Boltzmann policies 2. Comparison with a policy that simply uses argmax action selection for states below an entropy threshold 3. State-specific analysis showing exactly where and how state-selective training performs differently than these simpler alternatives The paper should provide: - Controlled experiments in simplified environments where the hypothesized information transfer from low-entropy to related states can be explicitly tracked - Concrete examples identifying specific state types or scenarios where state-selective training provides advantages that global approaches cannot - Ablation studies comparing selective entropy minimization against global entropy minimization Without these critical evaluations, it remains unclear whether the additional complexity of TARL is justified over simpler alternatives that achieve similar entropy reduction. 
Theoretical Claims: N/A Experimental Designs Or Analyses: As discussed in the "Claims And Evidence" and "Methods And Evaluation Criteria" sections, the experimental design cannot validate the central hypothesis because it lacks baselines for comparison. The experiments show that the method works in practice but fail to establish why it works or whether simpler alternatives might work equally well. The improvements shown could result from effective hyperparameter tuning of the entropy threshold rather than from the claimed state-selective mechanism. Refer to the "Methods And Evaluation Criteria" section for the specific comparisons and controlled experiments that would be needed to properly validate the paper's core hypothesis. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper appropriately relates its contributions to existing offline RL approaches such as CQL, REM, and QR-DQN, as well as to existing test-time adaptation methods (e.g., TTT, TTT++, batchnorm adaptation). Essential References Not Discussed: See "Claims And Evidence" section. Other Strengths And Weaknesses: Strengths: - The implementation approach (only updating layer normalization parameters) is computationally efficient - The method demonstrates compatibility with multiple offline RL algorithms - The paper shows empirical improvements on standard benchmarks The paper's primary weakness is its failure to validate the core hypothesis that training on low-entropy states transfers beneficial knowledge to other similar states in the environment. Without controlled experiments that explicitly track this knowledge transfer mechanism, it remains unclear whether the observed improvements stem from the claimed selective learning transfer or from simpler entropy reduction approaches. Other Comments Or Suggestions: L426 test-time data instead of test-time date L41 migrate -> mitigate L346: Impletement -> Implementation Questions For Authors: 1. 
Your paper seems to rest on the hypothesis that selective training on low-entropy states enables beneficial knowledge transfer to other similar states. Could you explicitly articulate this hypothesis and provide direct evidence for this specific transfer mechanism rather than just overall performance improvements?

2. Have you conducted experiments comparing TARL to simpler alternatives that also reduce policy entropy:
a) A policy that simply uses argmax action selection for states with entropy below a threshold?
b) A policy that globally reduces temperature in the Boltzmann distribution?
If so, what were the results? If not, why were these fundamental comparisons omitted?

3. Can you provide controlled experiments in simpler environments that explicitly demonstrate the hypothesized learning transfer from low-entropy states to related states? Ideally, this would include visualizations or state-by-state performance metrics showing how the selective training influences performance on states that weren't directly trained on. Providing such evidence would significantly strengthen the paper and my rating.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: >Q1. Your paper seems to rest on the hypothesis that selective training on low-entropy states enables beneficial knowledge transfer to other similar states. The paper should provide ablation studies comparing selective entropy minimization against global entropy minimization.

A1. Thank you for your valuable feedback. The rationale for focusing on low-entropy states operates on two levels:

**1. Theoretical Motivation.** Low entropy directly reflects reduced action uncertainty. States with low entropy correspond to more deterministic and reliable decision regions in the policy space. By prioritizing these low-entropy states during test-time adaptation, the policy can establish stable decision boundaries while avoiding OOD extrapolation.

**2. More study on the advantage of selective training on low-entropy states.** The experiments were conducted on the walker2d-medium-v2 task to evaluate the hypothesis that selective training on low-entropy states improves TTA performance. The experimental setup included three conditions:
* **Global Entropy Minimization**, where all available data were used for test-time adaptation.
* **Low-entropy Selective Training**, where only samples with entropy below a predefined threshold were used for training.
* **High-entropy Selective Training**, where only samples with entropy above a predefined threshold were used for training.

Our results demonstrate that selective training on low-entropy states improves TTA performance more effectively than global entropy minimization. Meanwhile, if we only select high-entropy samples for TTA, the performance actually becomes worse. This further indicates that selective training on low-entropy states enables beneficial knowledge transfer.

Table C. Effectiveness of low-entropy selective training.

|Global Entropy Minimization|Low-entropy Selective Training|High-entropy Selective Training|
|-|-|-|
|77.74|**82.95**|72.91|

>Q2.
Have you conducted experiments comparing TARL to simpler alternatives that also reduce policy entropy: a) argmax action selection, b) globally reducing temperature in the Boltzmann distribution? If so, what were the results? If not, why were these fundamental comparisons omitted?

A2. Our study does not include comparative experiments between TARL and simpler entropy-reduction approaches like Boltzmann or argmax action selection, primarily due to their incompatibility with test-time adaptation requirements.

**1. Characteristics of Test-Time Adaptation.** During the test-time phase, the model has already been trained. The primary goal is fine-tuning the model using test data to adapt to the actual environment. The key characteristic of this phase is that the model cannot access environmental reward signals or other forms of feedback. This means the model cannot rely on environment feedback to guide policy updates.

**2. Limitations of Boltzmann and Argmax Policies.** The Boltzmann policy is an action selection method that uses the softmax function to convert action values into a probability distribution. During training, the Boltzmann policy requires environment feedback to adjust the temperature parameter and action value function. The argmax policy is an action selection method that directly selects the action with the highest value. During training, the argmax policy needs environment feedback to evaluate and update the action value function. In the test-time phase, the lack of environment feedback makes it impossible for them to effectively update these parameters.

**3. Advantages of TARL.** TARL has no reliance on environment feedback. It uses entropy minimization as the objective function for policy updates during the test-time phase, effectively updating the policy. By minimizing entropy, TARL enhances the selection of high-confidence actions and reduces reliance on uncertain actions, better adapting to changes in the distribution of test data.

>Q3.
Can you provide controlled experiments in simpler environments that explicitly demonstrate the hypothesized learning transfer from low-entropy states to related states?

A3. We have conducted ablation studies on the effect of low-entropy state selection in Section 4.7. $E_0$ controls the selection of samples for updating. Specifically, samples with entropy below this threshold are selected for updating. As $E_0$ decreases, the selected samples have lower entropy and are fewer in number. When the entropy threshold increases, the performance of test-time adapted RL declines, which shows that it is necessary to filter for samples with relatively high confidence when updating the offline policy to adapt to the online environment.

---

We sincerely hope our clarifications above have addressed your questions.
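The threshold-based selection described in A1 and A3 can be sketched in a few lines (a minimal illustration with hypothetical names; TARL's actual implementation operates on policy networks and batches of test-time states):

```python
import math

def policy_entropy(probs):
    """Shannon entropy (in nats) of one action distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_low_entropy(batch_probs, e0):
    """Indices of test-time states whose policy entropy is below E_0;
    only these states would be used for the adaptation update."""
    return [i for i, probs in enumerate(batch_probs) if policy_entropy(probs) < e0]

batch = [
    [0.97, 0.01, 0.01, 0.01],  # near-deterministic: entropy ~0.17 nats
    [0.25, 0.25, 0.25, 0.25],  # uniform: entropy ln(4) ~1.39 nats
    [0.60, 0.20, 0.10, 0.10],  # mildly peaked: entropy ~1.09 nats
]
selected = select_low_entropy(batch, e0=1.0)  # keeps only the first state
```

Lowering `e0` corresponds to the ablation in A3: fewer, more confident samples are kept for the update.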
Summary: This paper proposes TARL, a framework that minimizes action uncertainty at test time to mitigate distribution shift issues. Claims And Evidence: The authors conduct experiments on the D4RL and Atari benchmarks to validate the effectiveness of their framework. Methods And Evaluation Criteria: Yes, the evaluation criteria adopted in this paper are standard in this field. Theoretical Claims: Yes, although the paper does not present major theoretical results, I have verified the correctness of the proposed methods. Experimental Designs Or Analyses: I suggest adding more experiments to further evaluate the effectiveness of the proposed framework. Specifically, implementing the TARL framework on top of other offline RL algorithms, such as IQL, could provide additional insights. Additionally, I recommend conducting experiments on other datasets included in the D4RL benchmark, such as the AntMaze environments and the replay and expert datasets in Gym environments, as this is a common practice in the field. Supplementary Material: No supplementary materials were provided with this paper. Relation To Broader Scientific Literature: This work falls within the area of offline-to-online RL algorithms, which aim to mitigate distributional shift issues when encountering out-of-distribution state-action pairs. Essential References Not Discussed: I suggest adding more references to recent progress in this area. Other Strengths And Weaknesses: Please refer to the Experimental Designs Or Analyses part. Other Comments Or Suggestions: No further comments. Questions For Authors: No further questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q1. Implementing the TARL framework on top of other offline RL algorithms, such as IQL.

A1. Thank you for your valuable feedback. The state-action out-of-distribution (OOD) issues stem from data distribution shifts between training and testing phases, rather than being specific to a particular offline RL algorithm. Our proposed TARL method is specifically designed to mitigate OOD interference in offline RL algorithms, thereby significantly enhancing stability during the testing phase. As a foundational algorithm in offline reinforcement learning, IQL [1] also suffers from this fundamental challenge during the testing phase. In Table A, we present a comparative analysis of IQL performance on the Walker2d-medium-v2 and Walker2d-expert-v2 environments before and after applying TARL. Our approach achieves improvement in terms of average episode return.

Table A. Effectiveness of our TARL applied to IQL.

| Task Name | IQL | IQL (TARL) |
| -------- | -------- | -------- |
| walker2d-medium-v2 | 79.92 | **82.43** |
| walker2d-expert-v2 | 110.31 | **110.49** |

[1] Offline reinforcement learning with implicit q-learning. ICLR 2022.

> Q2. Conducting experiments on other datasets included in the D4RL benchmark, such as the AntMaze environments and the replay and expert datasets in Gym environments.

A2. According to your suggestion, we study more extensively across multiple benchmark datasets, including the AntMaze environments and the replay and expert datasets in Gym environments, to rigorously validate the effectiveness of our method. From Table B, our TARL achieves better performance than the original RL algorithms, with particularly notable improvements in the complex environment Antmaze-medium-diverse-v0 and the expert-level task Hopper-expert-v2. These results further demonstrate the advantages of our method in handling data noise, distribution shift, and high-complexity tasks.

Table B.
Average episode return comparison of TARL against baseline methods on D4RL benchmarks over the 10 evaluations.

| Environment | Task Name | CQL | CQL (TARL) |
| -------- | -------- | -------- | -------- |
| Mujoco | walker2d-expert-v2 | 113.25 | **113.57** |
| Mujoco | walker2d-fully-replay-v2 | 95.31 | **97.99** |
| Mujoco | hopper-expert-v2 | 99.72 | **113.34** |
| Antmaze | medium-play-v0 | 6 | **7** |
| Antmaze | medium-diverse-v0 | 2 | **5** |

---

We sincerely hope our clarifications above have addressed your concerns.
A Memory Efficient Randomized Subspace Optimization Method for Training Large Language Models
Accept (poster)
Summary: The paper highlights a critical limitation in existing methods: while prior approaches have focused on reducing the memory burden of optimizer states, they have largely overlooked the substantial memory consumption imposed by activations, especially in scenarios with long context sequences or large mini-batches. To address this challenge, the authors propose a novel randomized subspace optimization framework that decomposes the high-dimensional training problem into a series of lower-dimensional subproblems. This innovative approach significantly reduces memory requirements for both activations and optimizer states while maintaining competitive performance.

Claims And Evidence: The main claim of this paper is that RSO significantly reduces memory usage for optimizer states, gradients, and activations. To support this, the authors conduct a comprehensive theoretical analysis of memory consumption in Table 1, as well as an empirical study in Table 3 and Figure 2.

Methods And Evaluation Criteria: The proposed method uses a randomized subspace projection to transform the full-scale optimization problem into a series of lower-dimensional subproblems, which is well aligned with the problem of reducing training overhead in LLMs.

Theoretical Claims: Yes, I have checked the proof of memory complexity for activations and optimizer states in Appendix A and the convergence analysis in Appendix B.

Experimental Designs Or Analyses: Yes. The experiments are designed to validate the memory and communication efficiency of RSO as well as its training performance. The main experiments include: 1) pretraining LLaMA with RSO, showing that RSO reduces memory consumption; 2) fine-tuning RoBERTa on the GLUE benchmark, showing that RSO achieves competitive performance.

Supplementary Material: Yes, I have checked the proof of memory complexity for activations and optimizer states in Appendix A and the convergence analysis in Appendix B.
Relation To Broader Scientific Literature: The paper is well situated in the context of existing research on memory-efficient LLM training. It builds upon prior works such as GaLore and LoRA by addressing a key gap: the memory overhead associated with activations.

Essential References Not Discussed: I believe that this paper has cited most related works.

Other Strengths And Weaknesses:
Strengths:
* The paper offers a robust combination of theoretical convergence guarantees and practical memory efficiency analyses.
* Extensive experiments on both pre-training and fine-tuning verify the method's effectiveness.

Weaknesses:
* More detailed ablation studies on the choice of the distribution of the random subspaces could offer deeper insights into the method's sensitivity and robustness.

Other Comments Or Suggestions: No.

Questions For Authors:
1. For each subproblem (4a), each $\eta^k$ should be chosen such that $\eta^k \le 1/\hat{L}$. How did you set this value in experiments?
2. The convergence analysis relies on specific properties of the random projection matrices. How robust is RSO when these matrices are sampled from simpler distributions (e.g., Gaussian) rather than the theoretical Haar distribution?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your time and thoughtful feedback on our manuscript. Below are our detailed responses to the weaknesses and questions you raised:

- Questions:
1. We would like to clarify that, as stated in our manuscript, $\eta_k$ is required to satisfy $\eta_k \le 1/\hat{L}$ to guarantee theoretical convergence. This condition is explicitly imposed to facilitate the convergence analysis. In our experiments, we set $\eta_k$ to a relatively large constant (e.g., 10) in order to accelerate convergence in practice. The results demonstrate that RSO still converges reliably under this setting.
2. In all of our experiments, we actually use Gaussian distributions instead of Haar distributions to achieve better computational efficiency. We apologize for any confusion caused by the previous version of the manuscript, and we will explicitly clarify this choice in the next revision. As discussed in Remark 5.4, when the rank $r$ is much smaller than the embedding dimensions $m$ or $n$, Gaussian projections closely approximate the behavior of Haar projections. For completeness, we additionally provide experimental results using Haar-distributed projection matrices for training LLaMA-60M and LLaMA-130M, as shown in the following table.

| Projection Type | LLaMA-60M | LLaMA-130M |
|------------------|-------------|---------------|
| Gaussian | 34.55 | 25.34 |
| Haar | 34.53 | 25.06 |

- Weaknesses: As noted above, we believe this concern has already been addressed in our previous response.

We hope these responses address your concerns. Please feel free to reach out with any further questions or feedback.

---

Rebuttal Comment 1.1: Comment: I thank the authors for the responses which have addressed my concerns. Overall, the proposed algorithm is meaningful for memory-efficient fine-tuning.

---

Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our manuscript and for providing valuable feedback once again.
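The Gaussian-versus-Haar point above can be made concrete. A Haar-distributed orthogonal matrix is commonly sampled via QR decomposition of a Gaussian matrix with a sign correction (a generic numpy sketch of the standard construction, not the authors' code; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_projection(m, r, rng):
    """Plain Gaussian projection, scaled so entries have variance 1/m."""
    return rng.standard_normal((m, r)) / np.sqrt(m)

def haar_projection(m, r, rng):
    """First r columns of a Haar-distributed orthogonal matrix."""
    A = rng.standard_normal((m, m))
    Q, R = np.linalg.qr(A)
    Q = Q * np.sign(np.diag(R))  # sign fix makes the distribution Haar
    return Q[:, :r]

P_haar = haar_projection(64, 8, rng)
P_gauss = gaussian_projection(64, 8, rng)

# Haar columns are exactly orthonormal; Gaussian columns are only so in expectation.
orth_err = float(np.abs(P_haar.T @ P_haar - np.eye(8)).max())
shape_ok = P_gauss.shape == (64, 8)
```

For $r \ll m$, the Gram matrix of the Gaussian projection concentrates near the identity, which is the sense in which it approximates the Haar case referenced in Remark 5.4.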
Summary: The paper introduces a Randomized Subspace Optimization (RSO) framework for LLM training, breaking the problem into lower-dimensional subproblems to reduce memory and communication overhead. It also offers comprehensive convergence guarantees for various optimizers, with refined results for Adam. Claims And Evidence: Yes, it's great. Methods And Evaluation Criteria: Yes. Theoretical Claims: The paper highlights that prior methods, such as GaLore, lack comprehensive convergence guarantees—often only analyzing fixed projection matrices. In contrast, it provides a complete convergence analysis for the RSO framework across various subproblem solvers, including zeroth-, first-, and second-order methods, and optimizer variants like gradient descent and Adam. Experimental Designs Or Analyses: The proposed RSO framework significantly improves memory efficiency and speeds up training by reducing communication overhead compared to state-of-the-art methods such as GaLore, LoRA, and Adam, while maintaining competitive performance. These results underscore the practical value of the approach. Supplementary Material: This paper provides sufficient convergence analysis, experiment details and Memory Complexity Analysis in the supplementary material. Relation To Broader Scientific Literature: Their work extends current literature on deep learning optimization by decomposing high-dimensional training into manageable subproblems using random subspaces. This approach not only reduces memory and communication overhead but also aligns with recent theoretical advances by providing comprehensive convergence guarantees and competitive empirical performance relative to methods like GaLore and Adam. Essential References Not Discussed: None Other Strengths And Weaknesses: The theoretical analysis and experimental validations presented in the paper are robust and well-supported. Other Comments Or Suggestions: See in next part. 
Questions For Authors: [1] How do you address the subproblem in (4a)? Could you provide the detailed code implementation of how to use Adam to optimize the parameter B (how to make B get the gradient)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your thorough review and valuable feedback on our manuscript. Below, we provide our detailed responses to the questions you raised. - Questions: In our convergence analysis, we allow the use of different optimizers to solve each subproblem. In our experiments, we employ the Adam optimizer with a fixed number of iterations to solve each subproblem. In our implementation of RSO, we adopt a LoRA-style design. For a linear transformation with a frozen weight matrix $W$, we introduce a trainable low-rank matrix $B$ and a projection matrix $P$. Both $W$ and $P$ are treated as fixed parameters with `requires_grad = False`, while $B$ is defined as a learnable parameter. During each forward pass, we compute $XW$ and $XPB$ separately and sum the results to obtain the final output. Since the optimizer (e.g., Adam) updates only $B$, gradients are neither computed nor stored for $W$ or $P$. As a result, PyTorch automatically avoids storing intermediate activations associated with their gradients, thereby improving memory efficiency without requiring manual intervention. After each subproblem is solved, the update is merged into the model by setting $W \leftarrow W + P B$, followed by resampling a new random projection matrix $P$ for the next subproblem. We hope these responses address your concerns. Please feel free to reach out with any further questions or feedback.
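To illustrate the mechanics described in this rebuttal, here is a toy numpy instantiation of the RSO outer-inner loop on a least-squares objective (a sketch with made-up dimensions, standing in for the authors' LoRA-style PyTorch implementation; plain gradient steps replace Adam for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)
s, m, n, r = 32, 16, 8, 4             # samples, input dim, output dim, subspace rank
X = rng.standard_normal((s, m))
Y = rng.standard_normal((s, n))

def f(W):
    """Toy objective standing in for the training loss."""
    return 0.5 * np.linalg.norm(X @ W - Y) ** 2

W = np.zeros((m, n))
losses = [f(W)]
for k in range(20):                   # outer loop: one random subspace per subproblem
    P = rng.standard_normal((m, r)) / np.sqrt(m)  # fresh random projection
    B = np.zeros((r, n))              # B = 0 recovers W, so f cannot increase
    M = X @ P
    eta = 1.0 / np.linalg.norm(M, 2) ** 2         # step <= 1/L for this subproblem
    for _ in range(50):               # inner loop: gradient steps on B only
        B -= eta * (M.T @ (X @ W + M @ B - Y))    # grad of f(W + P B) w.r.t. B
    W = W + P @ B                     # merge the subspace update into the weights
    losses.append(f(W))

monotone = all(b <= a + 1e-8 for a, b in zip(losses, losses[1:]))
improved = losses[-1] < 0.9 * losses[0]
```

Initializing `B` at zero makes each subproblem start at the previous loss value, so the outer sequence is non-increasing, mirroring the descent argument in the rebuttal.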
Summary: The paper introduces a new optimization method designed to reduce communication costs in distributed training of large language models (LLMs). The method proposed by the authors is based on breaking up the problem into smaller low-rank subproblems that we can optimize using arbitrary optimizers, by introducing a new update equation to the weights. The paper proposes optimizing over a low rank matrix passed through a random projection, which reduces the memory requirement for the activations and implicitly the gradients. The authors also provide guarantees for the number of steps required to find an approximate stationary point under mild assumptions, as well as an empirical evaluation of their method on models at various scales and across datasets, and a comparison with existing methods such as GaLore and LoRA. Claims And Evidence: The paper provides sufficient evidence that their proposed method can achieve improvements in optimization speed and memory reduction. Concretely, while the performance of RSO is similar to that of GaLore - as shown in Table 3 - for pretraining, or marginally better compared to GaLore - as shown in Table 4 - for finetuning, GaLore does not target reducing communication costs as it still requires all-gathering the whole gradient across devices. RSO on the other hand, in Table 5, shows that it can achieve much faster iteration time than Adam or GaLore during optimization. Methods And Evaluation Criteria: The authors sweep over hyperparameters across all baseline methods as shown in Appendix D. Due to computational constraints the authors provide training runs up to 1B, but they run fine-tuning on several datasets and larger scales than pre-training. Theoretical Claims: While I have not checked the proofs of their theorems regarding guarantees, I do have several questions regarding the method and the claims that I will post in the Questions for Authors section. Experimental Designs Or Analyses: See methods and evaluation criteria. 
Supplementary Material: I have reviewed Appendix C and D.

Relation To Broader Scientific Literature: The method is in general related to other low-rank training works such as [1], [2], [3], [4]. [1] uses a similar low-rank approximation of the weight updates as RSO in order to minimize memory cost. [2] uses only the top subspace of the gradients in the optimization procedure, which is also a memory reduction technique. [3] uses a similar idea for the momentum term. [4] does not base its compression on rank, but on the top frequencies of the momentum term.

[1]: Hu, Edward J., et al. "LoRA: Low-rank adaptation of large language models." ICLR 2022.
[2]: Zhao, Jiawei, et al. "GaLore: Memory-efficient LLM training by gradient low-rank projection." arXiv preprint arXiv:2403.03507 (2024).
[3]: Vyas, Nikhil, Depen Morwani, and Sham M. Kakade. "AdaMeM: Memory efficient momentum for Adafactor." 2nd Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@ICML 2024).
[4]: Peng, Bowen, Jeffrey Quesnelle, and Diederik P. Kingma. "Decoupled Momentum Optimization." arXiv preprint arXiv:2411.19870 (2024).

Essential References Not Discussed: None, see above.
Other Strengths And Weaknesses:

Strengths:
* The paper proposes a method to reduce memory usage and in turn communication cost in distributed settings. RSO achieves faster iteration time as it does not need to communicate the full gradient at each step, without sacrificing performance (when compared to existing works such as GaLore).
* The authors perform a thorough empirical evaluation and show convergence guarantees for their method.

Weaknesses:
* The method adds another layer of complexity by solving an internal suboptimization problem, which leads to additional hyperparameters to be tuned, such as the optimizer choice for this problem.
* The paper does not provide a FLOPs-controlled experiment; see questions.

Other Comments Or Suggestions: In general I think the paper could benefit from a rewriting. Currently, the most important finding regarding RSO, which is that it achieves similar performance to previous works in the literature while drastically reducing the time per iteration, appears very low in the paper (Table 5). Moving this section up would greatly benefit this work.

Questions For Authors:
* When controlling for compute, how does RSO stand when compared to the other methods? Concretely, it would seemingly appear that because this method requires optimizing a subproblem for each of the K steps, this would incur a lot more FLOPs when compared to Adam/GaLore etc. Could the authors provide an experiment in this direction?
* The authors mention that "when $\eta_k$ is properly chosen" and "with suitable choice of $\eta_k$", but it is not clear to me what this choice of $\eta_k$ should be and how it should be chosen in practice. Could the authors be more specific here?
* If it is not a computational constraint, could the authors also try optimizing the subproblems with a nondiagonal preconditioner such as SOAP/Shampoo?
As this requires quite a few extra runs, I will not base my decision on this experiment; it is more of a scientific curiosity.
* Why does optimizing in a random subspace given by a random rotation matrix not accumulate errors over time? More concretely, why is it the case that at each time step k we can optimize via a random projection matrix and this does not break the progress we have made so far in optimization in the previous steps?
* Could the authors provide plots of their training losses? How stable is the proposed method to hyperparameter choices? Does the learning rate change over time for the subproblem optimization?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for taking the time to review our manuscript. We greatly appreciate your valuable feedback. Below, we provide our responses to your comments:

- Weaknesses:
  - In the RSO method, a series of subproblems need to be solved. However, this does not introduce additional complexity in tuning hyperparameters. In our experiments, for each task, we employ the Adam optimizer with the same hyperparameters across all subproblems, and apply a unified learning rate schedule rather than assigning separate schedules to individual subproblems.
  - We will address the FLOPs-related discussion in the response to the questions section.

- Other Comments or Suggestions: Thank you for the suggestion. As discussed in Sections 4.3 and 6.3, RSO can also improve iteration time by reducing communication overhead when training across multiple devices, as shown in Table 5. We will move this discussion earlier in the manuscript and place greater emphasis on this point in the next revision.

- Questions:
1. In our experiments, we use the Adam optimizer to perform multiple steps to solve each subproblem in RSO, resulting in an outer-inner iteration structure: outer iterations correspond to different subproblems, while inner iterations correspond to Adam steps. For fair comparison, we count the total number of inner iterations as the number of RSO steps; that is, solving one subproblem with $\tau$ Adam steps is counted as $\tau$ steps. Under this definition, the per-step computational cost of RSO is comparable to that of Adam and lower than GaLore, which incurs additional overhead due to costly SVD operations. Since all methods are evaluated under the same number of steps, we believe the computational complexity is well controlled and the comparison is fair. In addition, RSO significantly reduces communication overhead (see Section 4.3). Table 5 compares per-step iteration time (for RSO, per inner iteration), further supporting our claim.
We apologize for any confusion and will clarify this point in the next revision.
2. The choice of $\eta_k$ is discussed in Appendix B; see Lemma B.1 and Theorem B.2 for more details, where we restrict the range of $\eta_k$ to facilitate the convergence analysis. In our experiments, we choose a relatively large constant for $\eta_k$ (e.g., 10) to accelerate convergence in practice. The results show that RSO still converges well under this setting.
3. We conduct additional experiments on LLaMA-60M, where RSO is evaluated using the Shampoo optimizer for solving each subproblem. The results are presented in the following table.

|Method|Perplexity|
|-|-|
|RSO-Adam|34.55|
|RSO-Shampoo|36.45|

4. In the $k$-th outer iteration, we solve the subproblem $$\min_B f(W^k + P^k B),$$ using a given optimizer to obtain an approximate solution $B^k$, rather than performing a single gradient step, where $P^k$ is a random projection matrix. Since choosing $B = 0$ recovers $W^k$, it follows that $f(W^k + P^k B^k) \leq f(W^k)$. Updating the weight as $W^{k+1} = W^k + P^k B^k$ thus ensures $$f(W^{k+1}) \leq f(W^k).$$ This demonstrates that solving each subproblem in a new random subspace does not compromise the progress made in previous iterations. On the contrary, each update builds upon the last, despite the randomness in the projection. As a result, errors do not accumulate across outer iterations; instead, the optimization process consistently advances in a descent direction in expectation.
5. We track both training and evaluation loss during the training of LLaMA-60M and LLaMA-130M, under configurations using 200 and 400 Adam steps per subproblem. Results are available at: https://anonymous.4open.science/r/RSO-1CC6/ As shown, the evaluation loss decreases consistently. While the training loss may briefly fluctuate at the start of each subproblem, it also follows a downward trend overall.
We further perform an ablation study on LLaMA-60M by varying the projection rank and the number of inner Adam steps per subproblem. Results are summarized below. Increasing the rank generally improves performance, though at the cost of higher memory usage. To balance performance and efficiency, we choose rank 128, matching GaLore, for fair comparison. For the number of inner steps, moderate values (e.g., 200) perform well, whereas excessively large values may degrade performance.

|Rank|64|128|192|
|-|-|-|-|
|Perplexity|37.60|34.55|33.25|

|Inner Steps|100|200|400|800|
|-|-|-|-|-|
|Perplexity|34.54|34.55|34.80|35.30|

Finally, we adopt a unified learning rate schedule across the entire training process, rather than resetting it for each subproblem. This schedule decays adaptively, requiring no manual tuning between subproblems.

We hope these responses adequately address your concerns. If you have further questions or need additional clarifications, we would be happy to provide them.
Summary: The paper introduces a memory-efficient optimization method named RSO, which optimizes subspace variables rather than the original weights. Different from conventional subspace methods such as GaLore, which project the original weight's gradient into a subspace, they propose to directly optimize the subspace variable. Hence, they can reduce the activation memory by a large margin. Such a method also avoids the heavy computation of SVD as in GaLore. Experiments on pre-training LLaMA show that RSO achieves similar convergence speed as GaLore with reduced memory and time cost.

Claims And Evidence: Yes. The memory cost and convergence are carefully analyzed in this paper.

Methods And Evaluation Criteria: Yes, C4 pre-training is a standard benchmark for the considered setting.

Theoretical Claims: Yes. I went through the key proof steps of the main theorem.

Experimental Designs Or Analyses: Yes. The experimental design is fair for comparing the memory/time cost of RSO and the baseline methods. However, the RSO rank for the Table 5 experiments is missing, and the strategy for constructing the P matrices is not explicitly discussed in the paper.

Supplementary Material: Yes, I have reviewed the main experimental design and convergence proof.

Relation To Broader Scientific Literature: The key contribution of reducing the activation memory cost is of general interest to the LLM research community as well as practitioners.

Essential References Not Discussed: For the literature review of optimizer-efficient methods, it is suggested to discuss block coordinate descent methods, e.g. [1, 2], which share a similar time efficiency feature as RSO by reducing gradient computational cost.
[1] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models
[2] BlockLLM: Memory-Efficient Adaptation of LLMs by Selecting and Optimizing the Right Coordinate Blocks

Other Strengths And Weaknesses:

**Strengths**
* The proposed method reduces the memory cost of storing activations through algorithmic-level design, which is novel and may inspire the development of new activation-efficient methods. Given that activation memory is usually the main bottleneck for long-sequence training, this contribution is critical.
* Since the algorithm only computes the gradient of the subspace variable, the gradient computation is much cheaper than a full backward pass. Hence, the time efficiency of RSO is much better than Adam and other baseline approaches.
* The algorithm exhibits competitive performance on C4 pre-training and the GLUE benchmark.
* The sample complexity result is established for RSO under strong convexity and smoothness assumptions.

**Weaknesses**
* The method seems not very memory-efficient under the mixed-precision training setting compared to LoRA, whose master fp32 copy, gradient, and optimizer states are all low-rank matrices. RSO cannot avoid storing the full master copy, as the update will be merged into the weight after (approximately) solving each subproblem.

Other Comments Or Suggestions: N/A

Questions For Authors:
* What is the initialization of the B matrix? The initialization may have a huge influence on the convergence.
* The construction of the P matrix is not explicitly discussed. In the memory analysis, P is shared for query, key, and value. Is there any reason for doing this, or is it mainly for memory efficiency?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time to review our manuscript. We greatly appreciate your valuable feedback. Below, we provide our responses to your comments: - Experimental Designs Or Analyses: In Table 5, we set the rank to one-fourth of the embedding dimension for both the LLaMA-1B and LLaMA-7B models, consistent with the perplexity comparison experiments and the setting used in the GaLore paper. Each matrix $P$ is constructed as a random Gaussian matrix in our experiments. We will include these implementation details in the next revision of the manuscript. - Essential References Not Discussed: Thank you for the suggestion. These two works introduce block coordinate descent methods into LLM training. BAdam [1] divides the model parameters into several blocks and updates them sequentially, with one block optimized per epoch using Adam. BlockLLM [2] partitions the parameters by layers and updates those with larger gradient norms in each outer loop. It also incorporates a pruning mechanism to further reduce the number of parameters being updated. By limiting the number of parameters optimized in each epoch, both methods effectively reduce the memory required for storing optimizer states, as well as the computational cost associated with backpropagation. We will incorporate these two works and discuss them in the related work section in the next revision of our manuscript. - Weaknesses: In mixed-precision training, activations are stored in FP16, while model parameters and optimizer states are stored in FP32. Compared to LoRA, RSO incurs slightly lower memory usage for optimizer states, as LoRA trains two matrices per layer instead of one. On the other hand, RSO introduces additional memory overhead due to storing the full model in FP32. However, as illustrated in Figure 1 of the manuscript, when training large models with practical batch sizes, activation memory dominates the overall memory consumption, accounting for nearly all of the total usage. 
As a result, even with the extra memory required for model storage, the activation efficiency of RSO still leads to overall memory savings compared to LoRA. Moreover, although LoRA avoids storing the full model in FP32, it significantly reduces the number of trainable parameters, which may lead to performance degradation in pre-training tasks. We compare the actual memory consumption of RSO and LoRA during the pre-training of LLaMA-1B and LLaMA-7B under mixed-precision training. To avoid out-of-memory (OOM) errors, we set the rank-to-embedding-dimension ratio to $1/4$ for LLaMA-1B and $1/8$ for LLaMA-7B. As shown in the following table, the results confirm the memory efficiency of our RSO method even in the mixed-precision setting. | Method | LLaMA-1B (GB) | LLaMA-7B (GB) | |--------|---------------|---------------| | RSO | 54.73 | 68.39 | | LoRA | 69.93 | 78.82 | - Questions: 1. In our experiments, we use the Adam optimizer to solve each subproblem, and the matrix $B$ is initialized as a zero matrix. This ensures that the objective value of the next subproblem is initialized at the function value evaluated at the solution of the previous subproblem, which helps maintain a more stable training trajectory. 2. Each matrix $P$ is constructed as a random Gaussian matrix in our experiments. In the memory analysis, we assume a shared $P$ for the query, key, and value weight matrices to achieve optimal memory efficiency. Without sharing $P$, one would need to store $XP_q$, $XP_k$, and $XP_v$ separately instead of a single $XP$, resulting in an additional memory cost of $2sr$, which remains relatively minor. Thank you again for your valuable feedback, and we hope our responses address your concerns. If you have further questions, we would be happy to provide additional clarification. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response, which resolves most of my concerns. 
Additionally, I missed a related work [1] during my initial review, which also reduces the memory cost of activations by utilizing low-rank structure. I suggest that the authors include more discussion of [1] in the next version of the paper. In summary, the proposed algorithm may serve as a competitive approach for long-sequence training of relatively small models (e.g. within 8B). [1] LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning --- Reply to Comment 1.1.1: Comment: Thank you for the suggestion. LoRA-FA [1] modifies the original LoRA method by training only one matrix per layer while keeping the other fixed. This reduces the number of trainable parameters and also saves memory by lowering activation storage costs. However, this approach is primarily designed for fine-tuning tasks and may not be directly applicable to pre-training. We will include a discussion of this work under "activation-efficient methods" in the related work section in our next revision.
Bayesian Neural Scaling Law Extrapolation with Prior-Data Fitted Networks
Accept (poster)
Summary: The paper proposes using a tailored prior-data fitted network to extrapolate neural scaling laws. A new prior over the set of potential curves is designed, capturing both simple power law behavior and more complex double-descent curves. The method is evaluated extensively on several data sets and shows improved performance over existing baselines. ## update after rebuttal The rebuttal resolved some issues that I had and I raised my score accordingly. Claims And Evidence: The authors evaluate their new model extensively against relevant competitors and provide some ablation studies. However, I was left somewhat unconvinced. The main issue is that the researchers have so many degrees of freedom in building their model (and even in implementing some competitor methods) that it should be easy to craft a method that works better on an already available benchmark suite. Since the benchmark suite is relatively comprehensive and realistic, this may still be valuable. Methods And Evaluation Criteria: Yes. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: I have some issues with how the MCMC alternatives were implemented. * The models $\mathcal{M}_j$ and BNSL appear to be specified without noise. That is, we expect the observed curves to follow *exactly* the idealized curves. This makes the Bayesian inference problem extremely misspecified, so it's no wonder it doesn't perform well. A more reasonable approach would be to do MCMC inference on the model $y = f(x) + \epsilon(x)$ where $\epsilon(x)$ is, e.g., a mean-zero Gaussian process. * The number of samples drawn (150) is tiny. Typical numbers for running MCMC chains are in the thousands. Supplementary Material: I read all of the appendix. Relation To Broader Scientific Literature: The new model improves on several prior works for estimating the parameters in neural scaling laws. PFNs were already previously used for this task by [1]. 
The main innovation is to design priors tailored explicitly to the task and train a PFN accordingly. [1] Adriaensen, S., Rakotoarison, H., Müller, S., & Hutter, F. (2023). Efficient bayesian learning curve extrapolation using prior-data fitted networks. Advances in Neural Information Processing Systems, 36, 19858-19886. Essential References Not Discussed: None. Other Strengths And Weaknesses: The paper is really well written. Other Comments Or Suggestions: * Eq (10): * The expectation is over $\mathcal T$ which doesn't appear in the expression at all. Do you mean that $(Y^*, X^*)$ is one instance from $\mathcal T$? * I have no idea what $(Y, X)$ and "the autoregressive objective" are. Questions For Authors: To convince me about the scientific progress made in the papers, I strongly suggest the authors extend their empirical evaluation. 1. The MCMC baselines should be implemented in a more reasonable way, esp. regarding noise model and length of the chains; see my comments above. 2. The priors in Table 1 seem a bit arbitrary. How were they chosen? How sensitive are the results to their choice? If these points are addressed convincingly I am inclined to increase my score. (Update: score raised to 3.) Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your time and thoughtful comments. We address each of your comments below. If you have further questions, we’d be happy to discuss them during the author-reviewer interaction phase. --- > **[Q1]** The MCMC models appear to be specified without noise... This makes the Bayesian inference problem extremely misspecified. - Thank you for pointing this out, and sorry for the confusion. Actually, we **did account for the observational noise** in the MCMC baselines. Otherwise, we would not be able to compute the log-likelihood. - For example, $\mathcal{M}_1: y=ax^{-b}$ model has two parameters $a$ and $b$, and we define the likelihood as $\mathcal{N}(y; ax^{-b}, \sigma^2)$, where $\sigma^2$ is the observational noise. - MCMC is performed over the joint posterior: $p(a, b, \sigma \mid \lbrace x_i, y_i\rbrace_{i=1}^M) \propto \prod_{i=1}^M \mathcal{N}(y_i; ax_i^{-b}, \sigma^2) \cdot p(a) \cdot p(b) \cdot p(\sigma)$, where $p(a)$, $p(b)$, and $p(\sigma)$ are predefined priors. - We will clarify this in the revision to avoid further confusion. Thank you again for raising this point. --- > **[Q2]** The number of samples drawn (150) is tiny. Typical numbers for running MCMC chains are in the thousands. - Thank you for pointing this out. We acknowledge that 150 MCMC samples are small compared to the typical choice. We chose such a small number because there are too many scaling curves to evaluate and each MCMC inference takes too much time. For instance, applying 3000 MCMC samples to all those curves takes around **4 weeks** in total! - In [this external link](https://github.com/nslpfn-anonymous/nslpfn-anonymous/blob/rebuttal/rebuttal/LL_nsamples_and_inference_time.pdf), we compare our method against MCMC baselines with varying numbers of samples: 150, 300, 750, 1500, and 3000. To make the experiments feasible, we evaluated only on a subset of 30 scaling laws randomly sampled from the collection of all neural scaling law datasets we considered. 
- As shown in the results, increasing the number of MCMC samples sometimes leads to slightly improved performance as expected, but the differences are marginal (actually, we had already verified it before submission). Also, the increased performances still fall short of our method, while the inference time increases substantially. This supports one of our key claims: **NSL-PFN achieves strong performance with significantly better efficiency**. After meta-training, PFN-based methods provide **near-instantaneous inference**, which is the power of amortized inference. - We will discuss this further and provide the full evaluations in the revision. --- > **[Q3]** Eq (10): The expectation is over T which doesn't appear in the expression at all. Do you mean that (Y*, X*) is one instance from T? I have no idea what (X,Y) and "the autoregressive objective" are. - As defined in L123–124 (the right column) in the main paper, the context and target set are denoted as $\mathcal{C}=(X,Y)$ and $\mathcal{T} = (X^*, Y^*)$, respectively. - In Eq. (10), $\mathcal{C}, \mathcal{T}\sim p(\mathcal{D})$ is thus equivalent to $(X,Y),(X^*, Y^*)\sim p(\mathcal{D})$. We call the training objective $\mathbb{E}_{\mathcal{C}, \mathcal{T} \sim p(\mathcal{D})} [\log q (Y \mid X, \mathcal{C})]$ "autoregressive" because it is regressing $\mathcal{C}=(X,Y)$ conditioned on the same context $\mathcal{C}$. - To improve clarity, we will revise the terminology to **"context regression objective"**, or any terminology you have in mind. Thank you for bringing this to our attention. --- > **[Q4]** The priors in Table 1 seem a bit arbitrary. How were they chosen? How sensitive are the results to their choice? - Thank you for pointing it out. For the design of our functional prior, we followed the same procedure used in LC-PFN. 
Specifically, we collected some of the well-known scaling law curves from various domains, and tried to match by eye the shape of our functional prior to the actual curves by manually controlling the parameters of the prior. - Importantly, when controlling the prior parameters, we assumed that the actual values of the scaling law curves are **not** accessible. Also, for the ColPret dataset (whose size exceeds the sum of all the other datasets), we did not access the shape of the scaling law curves, but just tested our model on the dataset directly. Nevertheless, the model still achieved strong performance on ColPret, indicating that our prior generalizes well across tasks. - Further, during the rebuttal period, we conducted a simple Bayesian optimization (BO) on the prior parameters to minimize the average RMSLE over 60 BO steps. As shown in [this external link](https://github.com/nslpfn-anonymous/nslpfn-anonymous/blob/rebuttal/rebuttal/hpo_error.pdf), the BO led to slight improvements on the performance, indicating that our prior tuning was not aggressive and that there remains room for further improvements. - We will discuss the above in the revision.
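The MCMC setup described in [Q1] above — a Gaussian likelihood $\mathcal{N}(y; ax^{-b}, \sigma^2)$ with priors over $(a, b, \sigma)$ — can be sketched with a plain random-walk Metropolis sampler. This is a generic illustration rather than the authors' implementation; the standard-normal priors on the log-parameters, the step size, and the synthetic data are all arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scaling-law observations: y = a * x^(-b) + Gaussian noise
a_true, b_true, sigma_true = 2.0, 0.5, 0.05
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
y = a_true * x ** (-b_true) + rng.normal(0.0, sigma_true, size=x.shape)

def log_post(theta):
    """Unnormalized log posterior over theta = (log a, log b, log sigma)."""
    a, b, sigma = np.exp(theta)
    loglik = (-0.5 * np.sum(((y - a * x ** (-b)) / sigma) ** 2)
              - len(x) * np.log(sigma))
    logprior = -0.5 * np.sum(theta ** 2)   # weak N(0, 1) priors on the logs
    return loglik + logprior

# Random-walk Metropolis over the joint posterior p(a, b, sigma | data)
theta, lp = np.zeros(3), log_post(np.zeros(3))
samples = []
for _ in range(3000):
    prop = theta + 0.1 * rng.normal(size=3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(np.exp(theta))
samples = np.array(samples[1000:])         # discard burn-in

a_hat, b_hat = samples[:, 0].mean(), samples[:, 1].mean()
```

Even this toy chain needs a few thousand iterations per curve, which makes concrete why running long chains over every scaling curve in the benchmark is expensive compared to a single amortized forward pass.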
Summary: The authors propose a method to infer scaling laws using a Bayesian approach which allows them to model predictive uncertainty. To accomplish this, they rely on prior-data fitted networks (PFNs) introduced by Müller et al., (2022) and their ability to meta-learn from large amounts of synthetic data. The method is evaluated both quantitatively and qualitatively on a range of data sets showing competitive and improved performances against several baselines. Claims And Evidence: All claims (080 to 092) are provided with clear evidence throughout the paper. Methods And Evaluation Criteria: The proposed method makes sense for the proposed task. PFNs and other meta-learning methods that learn from a large amount of synthetic data are an obvious fit for tasks such as neural scaling law inference, due to the ease of generating such synthetic data while at the same time having very limited real-world data. The evaluation criteria are also broad and valid, both quantitatively, as well as qualitative visualizations. Theoretical Claims: There are no theoretical claims that require proof. Experimental Designs Or Analyses: The experimental design is well-specified and well-motivated. The analysis is sound. Supplementary Material: I skimmed the additional ablations, visualizations, and implementation details. Relation To Broader Scientific Literature: The relation to the broader scientific literature is in the application of PFNs for the task of scaling law inference. The proposal does not contribute anything to the theory of PFNs but takes it as a tool to improve current neural scaling law inference techniques. Essential References Not Discussed: All essential references are discussed properly. Other Strengths And Weaknesses: ### Strengths - The paper is very well written and can easily be followed. - The motivation is clear and justified. The evaluation is convincing. 
### Weaknesses - One omission from the related work discussion in Appendix A is the model family of neural processes originally introduced by Garnelo et al. in 2018. These follow a similar structure and motivation to PFNs and have been adopted in a wide area of applications. Other Comments Or Suggestions: - The reference section needs to be updated, as several references point to arxiv preprints despite the respective papers being published work. An example is Müller et al. (2021) which has been published at ICLR 2022. Questions For Authors: - Q1: The PFN has been trained with a rather large set of synthetic examples (1.6 million). How sensitive is the performance of the proposed approach to this size? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your time and thoughtful comments. We address each of your comments below. If you have any further questions or comments, we would be happy to discuss them during the author-reviewer interaction phase. --- >**[Q1]** Missing related work - Neural Processes. - Thank you for the suggestion. We agree that the Neural Process (NP) family, originally introduced by Garnelo et al. [1], is closely related to our approach, as both aim to meta-learn an amortized posterior inference machine over a distribution of tasks. We will include the following discussion in the revision: - *Neural Processes (NPs) [1] typically learn a latent variable model where context data are summarized into a global latent representation, and target predictions are made by conditioning on this latent variable. This structure allows for uncertainty modeling and task generalization. The Conditional Neural Process (CNP) [2] simplifies this by removing the latent variable and directly conditioning target predictions on the aggregated context set. The Attentive Neural Process (ANP) [3] improves upon this by incorporating attention mechanisms to better capture context-target relationships. These models have been widely applied to few-shot learning, time series modeling, spatial inference, and so on. Further extensions include Transformer Neural Processes (TNPs) [4], which replace context aggregation with attention-based sequence modeling using Transformer architectures. While conceptually related, PFNs differ from NPs in an important way: PFNs do not require real datasets to train the inference machine. Instead, PFNs directly meta-learn the posterior predictive distribution with synthetic data generated from a manually-defined prior distribution, which allows Bayesian inference instead of maximum likelihood estimation. 
On the other hand, NPs are basically MLE and can be seen as implicitly learning the data-driven prior.* --- >**[Q2]** The reference section needs to be updated, as several references point to arxiv preprints despite the respective papers being published work. - Thank you for the suggestion. We will clarify the publication status of all references and replace arXiv citations with their corresponding published versions wherever applicable. We appreciate your attention to detail and will ensure the revision reflects the accurate and up-to-date citations as follows: - *Exploring the Limits of Large Scale Pre-Training* → ICLR 2022 - *An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale* → ICLR 2021 - *Language Models Are Few-Shot Learners* → NeurIPS 2020 - *Broken Neural Scaling Laws* → ICLR 2023 - *BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding* → NAACL 2019 - *A Survey on In-Context Learning* → EMNLP 2024 - *Probabilistic Rollouts for Learning Curve Extrapolation Across Hyperparameter Settings* → ICML Workshop 2019 - *Scaling Laws for Neural Machine Translation* → ICLR 2022 - *Mamba: Linear-Time Sequence Modeling with Selective State Spaces* → COLM 2024 - *Training Compute-Optimal Large Language Models* → NeurIPS 2022 - *TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second* → ICLR 2023 - *Bayesian Active Learning for Classification and Preference Learning* → CoRR 2011 - *Adam: A Method for Stochastic Optimization* → ICLR 2015 - *Transformers Can Do Bayesian Inference* → ICLR 2022 - *In-Context Freeze-Thaw Bayesian Optimization for Hyperparameter Optimization* → ICML 2024 - ... --- >**[Q3]** The PFN has been trained with a rather large set of synthetic examples (1.6 million). How sensitive is the performance of the proposed approach to this size? - Thank you for the question. 
First of all, it is well-known that PFNs benefit from training on a large amount of synthetic data, as shown in LC-PFN and other related works. Since PFNs are trained to approximate the posterior predictive over a distribution of tasks, a large number of sampled functions is typically required for effective generalization. - In [this external link](https://github.com/nslpfn-anonymous/nslpfn-anonymous/blob/rebuttal/rebuttal/learning_curves.pdf), we provide the training losses and test performances over training iterations. We find that they continue to improve as the model consumes more synthetic data. Note that for each iteration, we randomly sample a batch of synthetic functions (which can be generated cheaply on-the-fly from our functional prior), hence the amount of synthetic training data used is proportional to the number of training iterations. --- **Reference** [1] Garnelo et al. *Neural Processes. ICML workshop,* 2018. [2] Garnelo et al. *Conditional Neural Processes, ICML* 2018. [3] Kim et al. *Attentive Neural Processes, ICML* 2019. [4] Nguyen et al. *Transformer Neural Processes: Uncertainty-Aware Meta Learning via Sequence Modeling. ICML* 2022. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal and the answers, which resolved my remaining questions/requests. --- Reply to Comment 1.1.1: Comment: We are glad to hear that your concerns have been resolved through this rebuttal. We appreciate your time and thoughtful feedback once again. Please let us know if you have any further questions, and we will be pleased to address them. Best regards, The Authors
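The on-the-fly synthetic data generation discussed in the rebuttal above can be sketched as follows. The coefficient prior $\log a \sim \mathcal{N}(-1, 0.6)$ is the one quoted from the paper's Table 1 elsewhere in these reviews; every other parameter choice here is a hypothetical stand-in, since the paper's actual functional prior is far richer (it also covers, e.g., double-descent behavior).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scaling_curve(n_points=20):
    """Draw one synthetic scaling-law curve from a simple functional prior.

    Illustrative prior: a saturating power law y = c + a * x^(-b) with
    log-normal coefficients and Gaussian observation noise.
    """
    a = np.exp(rng.normal(-1.0, 0.6))   # log a ~ N(-1, 0.6), as quoted from Table 1
    b = np.exp(rng.normal(-1.0, 0.5))   # illustrative choice
    c = np.abs(rng.normal(0.0, 0.1))    # illustrative irreducible-loss floor
    sigma = np.exp(rng.normal(-4.0, 0.5))
    x = np.logspace(0, 4, n_points)     # e.g. model or data sizes
    y = c + a * x ** (-b) + rng.normal(0.0, sigma, size=n_points)
    return x, y

# A meta-training batch for the PFN, generated cheaply on the fly.
batch = [sample_scaling_curve() for _ in range(32)]
```

Because fresh curves are sampled every iteration, the PFN never revisits the same synthetic function, which is why the amount of training data scales with the number of iterations.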
Summary: This paper proposes the use of Bayesian Neural Networks (BNNs) to model deep learning scaling laws. Scaling laws in deep learning have gained a considerable amount of interest recently, as they provide quantitative ways to estimate, for example, the amount of computational resources, model size, or dataset size needed to achieve certain performance. Traditional approaches estimate these neural scaling laws using simple parametric assumptions (e.g., power law, exponential), and by carrying out "point estimation", that is without accounting for any uncertainty in predictions/extrapolations. BNNs allow one to tackle these two limitations by employing flexible over-parameterized neural network models and by treating these in a Bayesian way. The added benefit of this approach is the possibility to carry out active learning. One of the main contributions is the use of Prior Fitted Networks (PFNs) to establish sensible neural scaling laws that can be generated from the model prior to conditioning on any data. Claims And Evidence: The claims are that the proposed approach gives flexible representation for a prior over neural scaling laws and that the proposed model and Bayesian treatment yield accurate extrapolations. Another claim is that uncertainty in scaling laws can be used effectively to carry out active learning leading to computational savings. These claims are generally supported by the experiments provided in the paper. However, given that this paper is mostly experimental in nature, I would have liked to see a much more comprehensive validation, including more active learning scenarios and more competitors. Also, I think it would have been useful to study the calibration of the predictive uncertainties; at the moment, it is not clear how well the uncertainties produced by the model are calibrated. 
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are generally ok, but again I think it would be important to evaluate calibration, which is currently missing. Theoretical Claims: There are no theoretical developments in this paper. Experimental Designs Or Analyses: As far as I can tell, the experimental design and analyses in the paper are conducted properly. Supplementary Material: I've skimmed through the appendices to see if some of my concerns were already addressed there. Relation To Broader Scientific Literature: I think that the literature review on neural scaling laws is nicely captured in the paper, although I am not an expert on this. Maybe I would have put a bit more emphasis on alternative approaches on BNNs beyond PFNs. Essential References Not Discussed: I don't think that there are other essential references to be added. Other Strengths And Weaknesses: The main weakness is the formulation of a rather detailed PFN for a task that is essentially 1D regression/extrapolation. My immediate reaction after reading this paper was that 1D regression can be done in closed form with Bayesian linear regression, and it can be made quite flexible through the use of basis functions. This would give access to the marginal likelihood for model selection, and it would enable all the nice things that this paper promises to do quite cheaply. I believe that this would be a good alternative to the approach proposed in this paper, and I would encourage the Authors to think about this type of competitor. Section 3.1 provides a long text specifying the assumptions on the functions that BNNs can represent, whereas there is no effort in trying to encode this information in some basis functions which might ultimately yield competitive performance. Other Comments Or Suggestions: The paper is generally well written, so I don't have any particular comments on this. Questions For Authors: Overall I think this paper has a good aim. 
I find the realization overly complicated and I think that the paper is missing some simple Bayesian competitors. Due to the paper being experimental in nature, I would encourage the Authors to focus their efforts on a wider experimental campaign, testing as many competitors as possible, and reporting calibration. Code Of Conduct: Affirmed. Overall Recommendation: 3
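The "simple Bayesian competitor" this review has in mind — Bayesian linear regression with fixed basis functions — does admit a fully closed-form posterior and predictive distribution. A minimal sketch, assuming an illustrative log-polynomial basis and noiseless toy data; the basis and the precisions `alpha` and `beta` are arbitrary choices, not anything from the paper:

```python
import numpy as np

def blr_fit_predict(x, y, x_star, alpha=0.01, beta=1e4):
    """Bayesian linear regression with fixed basis functions.

    Prior w ~ N(0, alpha^-1 I), likelihood y ~ N(Phi w, beta^-1 I).
    Returns the closed-form predictive mean and variance at x_star.
    """
    def phi(t):  # illustrative basis: [1, log t, (log t)^2]
        lt = np.log(t)
        return np.stack([np.ones_like(lt), lt, lt ** 2], axis=-1)

    Phi = phi(x)
    S = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
    m = beta * S @ Phi.T @ y                  # posterior mean of the weights
    Ps = phi(x_star)
    mean = Ps @ m
    var = 1.0 / beta + np.sum((Ps @ S) * Ps, axis=-1)  # predictive variance
    return mean, var

x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = 0.5 + 0.3 * np.log(x)                     # noiseless toy "scaling curve"
mean, var = blr_fit_predict(x, y, np.array([32.0, 64.0]))
```

The crux of the disagreement with the authors is visible here: the predictive quality hinges entirely on whether the fixed basis encodes the right inductive bias, which is straightforward for a smooth log-linear curve but much harder for chaotic behaviors such as double descent.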
Rebuttal 1: Rebuttal: Thank you for your time and thoughtful comments. We address each of your comments below. If you have any further questions or comments, we would be happy to discuss them during the author-reviewer interaction phase. --- >**[Q1]** Additional experiments - calibration of the predictive uncertainties. - Thank you for the suggestion. In the following, we use **mean square calibration error (MSCE)**, the calibration metric used in [1] to assess predictive uncertainty. As shown in [this external link](https://github.com/nslpfn-anonymous/nslpfn-anonymous/blob/rebuttal/rebuttal/main_table_with_calibration_error.md), **our method demonstrates notably better calibration performances than the baselines**. - The gap is more prominent especially on the Double Descent (DD) which exhibits **highly chaotic behavior**. In such regimes, well-calibrated uncertainty is particularly important for reliable extrapolation. Furthermore, our method performs especially well on ColPret, the most recent and largest collection of **realistic neural scaling laws**, including models like GPT-3. - These results show our model’s strength in handling challenging and real-world scaling behaviors. We will include those findings in the revision. --- >**[Q2]** Additional baselines - Bayesian linear regression with basis functions. - Thank you for the suggestion. We agree that Bayesian linear regression (BLR) [2] with appropriate basis functions is a natural and lightweight approach for 1D regression and extrapolation. - To evaluate, we conducted additional experiments with the following baselines: 1) BLR with neural network basis functions, 2) BLR with polynomial basis, 3) BLR with RBF basis, 4) BLR with Fourier basis, 5) BLR with sigmoid basis, 6) BLR with spline basis, and 7) Deep Kernel GP (DKGP) [3]. We use log marginal likelihood for tuning the parameters, such as the parameters of the NN as a BLR basis function. 
We provide qualitative examples of the BLR with NN baseline and DKGP at [this external link](https://github.com/nslpfn-anonymous/nslpfn-anonymous/tree/rebuttal/rebuttal/BLR_visualization) and [this external link](https://github.com/nslpfn-anonymous/nslpfn-anonymous/tree/rebuttal/rebuttal/DKGP_visualization), respectively. - As shown in [this external link](https://github.com/nslpfn-anonymous/nslpfn-anonymous/blob/rebuttal/rebuttal/table_with_dkgp_and_blr.md), **those baselines are not competitive** in the context of neural scaling laws. This is because, while they offer interpretability and closed-form inference, to our knowledge **it is not straightforward to encode a set of inductive biases about neural scaling laws into the basis or kernel functions**. This is why we originally did not consider those baselines. - In contrast, our PFN-based method provides a flexible framework to encode our prior belief on the shape of the functions. We can use any functional prior as long as we can efficiently sample from it, which is one of the biggest advantages of using PFNs. This is precisely why we propose to use PFNs for the problem of extrapolating neural scaling laws. - We appreciate your further suggestions as to how to effectively encode the prior belief on the scaling law behaviors into the corresponding basis or kernel functions. As long as time allows, we will try our best to do the additional experiments following your suggestions. 
- Across all of these settings, our method consistently shows **strong performance**, demonstrating its general applicability. - Also, during the rebuttal period, we conducted the two additional experiments following your suggestion. 1) We additionally consider Bayesian linear regression and deep kernel GP baselines and find that they largely underperform our method. 2) We demonstrate that our method shows superior calibration performance than the baselines. - Our paper also includes Bayesian active learning experiments to demonstrate the usefulness of uncertainty, which has not been discussed in the LC-PFN paper. If you find it insufficient, please feel free to suggest any additional experiments in detail, and we would be happy to try our best to do the experiments as long as time allows. --- **References** [1] Kuleshov et al., *Accurate Uncertainties for Deep Learning Using Calibrated Regression. ICML* 2018. [2] Bishop et al., *Pattern Recognition and Machine Learning*, 2006. [3] Wilson et al., *Deep Kernel Learning, AISTATS* 2016
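The calibration metric mentioned in [Q1] can be computed as follows for Gaussian predictive distributions — a sketch in the spirit of Kuleshov et al. (2018): compare each nominal confidence level with the empirical fraction of targets whose predicted CDF value falls at or below it. The number and placement of levels, and the synthetic data, are arbitrary choices for illustration.

```python
import math
import numpy as np

def msce(y, mu, sigma, n_levels=10):
    """Mean squared calibration error for Gaussian predictives N(mu, sigma^2)."""
    z = (np.asarray(y) - np.asarray(mu)) / np.asarray(sigma)
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    levels = np.linspace(0.05, 0.95, n_levels)
    emp = np.array([(cdf <= p).mean() for p in levels])  # empirical frequencies
    return float(np.mean((levels - emp) ** 2))

rng = np.random.default_rng(0)
mu = rng.normal(size=5000)
y = mu + rng.normal(size=5000)              # targets with true noise std 1

err_good = msce(y, mu, np.ones(5000))       # correctly specified sigma
err_bad = msce(y, mu, 0.2 * np.ones(5000))  # overconfident: sigma too small
```

A well-calibrated model drives `err_good` toward zero, while the overconfident predictive yields a clearly larger error, which is the behavior the calibration comparison in the rebuttal is measuring.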
Summary: In this paper, the authors investigate the use of Prior-Data Fitted Networks (PFNs), a Bayesian framework for estimating neural scaling law curves. Existing methods typically produce point estimates, which fail to capture the true scaling law dynamics across different scenarios (e.g., double descent, flat cut-offs) and do not provide uncertainty estimates. These uncertainties are crucial for designing experiments based on early observations of the scaling curve. To address this, the authors introduce a family of priors that effectively represent scaling laws in large model training, synthesizing new samples from these priors to train PFNs, which are based on a standard transformer architecture. The authors demonstrate that their approach, referred to as NSL-PFNs, not only quantifies Bayesian uncertainty but also yields strong performance in terms of RMSLE and likelihood metrics. Extensive evaluations across image and text domain benchmarks, along with ablation studies, highlight the contributions of each component in their framework. Claims And Evidence: Yes, the claims are supported by relevant evidence. Methods And Evaluation Criteria: Yes, the authors have considered relevant baselines and benchmarks datasets. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes, the experimental setup and design is sound. Supplementary Material: Yes, I reviewed the appendix provided in the main paper. Relation To Broader Scientific Literature: In my humble opinion, the contributions are significant within the broader context of the scientific literature. However, there are some questions I have raised that require further clarification. Essential References Not Discussed: N/A Other Strengths And Weaknesses: In my view, the primary contribution of this paper lies in the development of a novel family of priors that enhance the training of PFNs, which constitutes the core strength of the work. 
Other aspects of the paper, as cited by the authors, have been explored in prior research. The emphasis on Bayesian uncertainty in the context of scaling law estimation, given only a set of observations, is particularly valuable for practitioners. It can eventually provide credible intervals and practical insights for conducting experiments at varying scales. Additionally, the paper is exceptionally well-written, clear, and almost free of typos. Other Comments Or Suggestions: N/A Questions For Authors: Can the authors clarify the main novelty of this work compared to Adriaensen et al.? Is it primarily the definition of the functional priors? It would be beneficial to explicitly highlight both high- and low-level differences. In Table 1, can the authors provide a reference that demonstrates the priors used for the coefficients (e.g., $\log a \sim \mathcal{N}(-1, 0.6)$)? How do these choices impact the overall performance? While the authors follow Muller et al. for implementing the PFN, can they clarify why positional encoding has been omitted? Can the authors update Figure 4 to better highlight the uncertainties for each method in the extrapolation regime? Currently, it is difficult to interpret and extract insights from the figure. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for your time and thoughtful comments. We address each of your comments below. If you have any further questions or comments, we would be happy to discuss them during the author-reviewer interaction phase.

---

>**[Q1]** Main novelty of this work compared to LC-PFN at both high- and low-level?

Thank you for the question. The main novelty of our work compared to LC-PFN [1] lies in both the high-level approach and low-level modeling choices:

High-level differences:
- Our work is the first to apply a **Bayesian** framework to the problem of **neural scaling law extrapolation**, whose functional distribution and characteristics are quite different from those of the learning curves that LC-PFN focused on.
- We further leverage uncertainty estimates to actively guide data acquisition, demonstrating the **practical utility of uncertainty in reducing the cost of collecting high-quality scaling data** — something not explored in LC-PFN.

Low-level differences:
- As you mentioned, we carefully design **a novel functional prior tailored to neural scaling laws**, informed by limitations of previous empirical approaches. Our prior can capture complex and chaotic behaviors such as double descent, enabling accurate extrapolation and reliable uncertainty estimates even under challenging real-world scenarios. In contrast, the prior of LC-PFN does not cover such complex patterns.
- We further propose a simple **context-regressive learning objective** that not only improves extrapolation performance but also enables Bayesian active learning — something that LC-PFN lacks.

We will make these distinctions more explicit in the revision.

---

>**[Q2]** In Table 1, can the authors provide a reference that demonstrates the priors used for the coefficients? How do these choices impact the overall performance?

- Thank you for pointing it out. For the design of our functional prior, we followed the same procedure used in LC-PFN [1].
Specifically, we collected some well-known scaling law curves from various domains and tried to match the shape of our functional prior to the actual curves by eye, manually controlling the parameters of the prior.
- Importantly, when controlling the prior parameters, we assumed that the actual values of the scaling law curves are **not** accessible. Also, for the ColPret dataset (whose size exceeds the sum of all the other datasets), we did not even access the shape of the scaling law curves, but simply tested our model on the dataset directly. Nevertheless, the model still achieved strong performance on ColPret, indicating that our prior generalizes well across tasks.
- Further, during the rebuttal period, we conducted a simple Bayesian optimization (BO) on the prior parameters to minimize the average RMSLE over 60 BO steps. As shown in [this external link](https://github.com/nslpfn-anonymous/nslpfn-anonymous/blob/rebuttal/rebuttal/hpo_error.pdf), the BO led to slight improvements in performance, which indicates that our procedure for tuning the prior parameters was not aggressive and there remains some room for further improvement of our prior.
- We will discuss the above in the revision.

---

>**[Q3]** While the authors follow Muller et al. for implementing the PFN, can they clarify why positional encoding has been omitted?

- The original PFN paper (Müller et al. [2]) also omits the positional encoding, for the following two reasons.
  - In PFNs, when encoding the context points $\mathcal{C}=\lbrace(x_i,y_i)\rbrace_{i=1}^M$, the input feature $x_i$ itself already contains the "positional information" in the input space.
  - Further, PFNs approximate the posterior predictive distribution $p(y^*|x^*,\mathcal{C})$, which is permutation-invariant w.r.t. the ordering of the context points $\mathcal{C}=\lbrace(x_i,y_i)\rbrace_{i=1}^M$, by definition. The same applies to our model as well as LC-PFN [1].
- We will clarify this in the revision.
---

>**[Q4]** Update Figure 4 to better highlight the uncertainties for each method?

- Thank you for the suggestion. To improve clarity, we have updated those figures by separating the uncertainty plots for each method. [This external link](https://github.com/nslpfn-anonymous/nslpfn-anonymous/tree/rebuttal/rebuttal/DD_visualization) includes all those visualizations. We will include them in the appendix of the revision.

---

**References**

[1] Adriaensen et al., *Efficient Learning Curve Extrapolation via Prior-Data Fitted Networks*, NeurIPS 2023.

[2] Müller et al., *Transformers Can Do Bayesian Inference*, ICLR 2022.
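The coefficient priors discussed in [Q2] can be illustrated with a minimal sampling sketch. Only $\log a \sim \mathcal{N}(-1, 0.6)$ is quoted in the review; the saturating power-law form $L(n) = c + a\,n^{-b}$ and the priors on $b$ and $c$ below are assumptions for illustration, not the paper's actual Table 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scaling_curve(n):
    """Sample one saturating power-law curve L(n) = c + a * n**(-b).

    Only log a ~ N(-1, 0.6) is taken from the review; the priors on
    b and c are illustrative assumptions.
    """
    a = np.exp(rng.normal(-1.0, 0.6))   # prior quoted in the review
    b = rng.uniform(0.2, 0.8)           # assumed decay-exponent prior
    c = rng.uniform(0.0, 0.5)           # assumed irreducible-loss prior
    return c + a * n ** (-b)

n = np.logspace(2, 6, 50)               # toy model/data sizes
curves = np.stack([sample_scaling_curve(n) for _ in range(8)])
print(curves.shape)  # (8, 50); each row is one prior draw of L(n)
```

Drawing many such curves and training on them is, at a high level, how prior-data fitted training proceeds; the paper's actual prior is richer (e.g., it covers double descent).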
Monte Carlo Tree Diffusion for System 2 Planning
Accept (spotlight poster)
Summary: The paper introduces Monte Carlo Tree Diffusion (MCTD) for long-horizon planning problems. MCTD combines the benefits of generative Diffusion models with tree-based planning in Monte Carlo Tree Search, without the requirement of a forward dynamics model. The primary aim is to utilise Diffusion models for improved long-horizon planning. Additionally, it allows MCTD to scale at test time with additional compute. Claims And Evidence: The authors make two claims. First, that MCTD outperforms standard Diffusion-based methods for long-horizon trajectory planning, which they demonstrate on three experiments (Maze Nav, Cube Manipulation, and Visual Pointmaze) and compare against various Diffusion-based baselines such as Diffuser (with various sampling regimes) and Diffusion Forcing. Second, that their application of MCTS with Diffusion models allows their Diffusion models to improve with additional test-time scaling, which is demonstrated with one experiment on the Point Maze task. Methods And Evaluation Criteria: The methods and evaluation criteria make sense for the problem introduced. Theoretical Claims: None Experimental Designs Or Analyses: The experimental design and analysis appear sound and valid. Supplementary Material: No Relation To Broader Scientific Literature: The paper situates itself relative to the Diffusion literature, and blends concepts from MCTS and tree-based planning to improve the behaviour of standard Diffusion models specifically for long-horizon planning. Essential References Not Discussed: The paper focuses on the Diffusion literature. It does not cite traditional planning literature that attempts to solve long-horizon planning problems via sub-plans [1], or Options [2]. Two suggestions would be: [1] Jurgenson, T., Avner, O., Groshev, E. and Tamar, A., 2020, November. Sub-goal trees: a framework for goal-based reinforcement learning. In International Conference on Machine Learning (pp. 5020-5030). PMLR. [2] M. de Waard, D. M. Roijers and S. C. J.
Bakkes, "Monte Carlo Tree Search with options for general video game playing," 2016 IEEE Conference on Computational Intelligence and Games (CIG), Santorini, Greece, 2016, pp. 1-8. Other Strengths And Weaknesses: **Clarity**: A major strength of the work is the extremely well-written manuscript. The Experiments and Results look promising, and the paper presents a new way of combining the benefits of Diffusion with long-horizon planning. One weakness for me is the lack of detail provided in the Experimental Setup. I don’t believe I could re-implement the proposed method from the manuscript alone. Whilst I believe most of the relevant information is provided in the Appendix (and code will be open-sourced), as a reader, I would benefit from a section in the main paper introducing topics such as: the Diffusion model architecture, the value networks, and training hyperparameters. Plus more details on the experiments themselves, like the action space and the training and compute budgets. This is all lacking in the main manuscript. Adding these details would improve the paper. **Novelty and Significance** To the best of my knowledge, this is the first application of Diffusion models to a tree-search framework such as MCTS, and the results are promising. The paper could be of interest to many researchers in the field. Other Comments Or Suggestions: Typos: page 2, “aiming to balencely”; page 4 should define DDIM. Questions For Authors: 1) When partitioning trajectories into x_1, …, x_S, is S predetermined? How do you select it for each environment? And what effect does altering S have on the performance? 2) Since the Diffusion Forcing paper introduces a causal denoising schedule, is it feasible to suggest that the additional performance gains are due to the tree-based planning? A more in-depth discussion on this would be interesting. 3) What is C in the fast denoising process? How does this affect the backpropagated values? Code Of Conduct: Affirmed. Overall Recommendation: 3
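For context on question 1, splitting a length-N trajectory into S contiguous subplans can be sketched generically. This is an illustrative decomposition only, not the authors' implementation; `np.array_split` is just one way to handle N not divisible by S:

```python
import numpy as np

def partition_trajectory(traj, S):
    """Split a trajectory of N steps into S contiguous subplans.

    Generic illustration of the x = (x_1, ..., x_S) decomposition
    asked about in question 1; np.array_split tolerates uneven splits.
    """
    return np.array_split(traj, S)

traj = np.arange(20).reshape(20, 1)   # toy trajectory: N=20 states
subplans = partition_trajectory(traj, 5)
print([len(s) for s in subplans])     # [4, 4, 4, 4, 4]
```

Concatenating the subplans recovers the original trajectory, so the decomposition only changes the granularity at which a planner reasons, not the content.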
Rebuttal 1:

Rebuttal: **Essential References Not Discussed**

We thank you for recommending the traditional long-horizon planning literature. We will incorporate these valuable references in our revised version.

> ... the lack of details provided in the Experimental Setup. ... lacking in the main manuscript.

Thank you for suggesting more experimental details in the main manuscript. In our revised version, we will include the experimental setup in the main text.

> Typos: page 2, “aiming to balencely”; page 4 should define DDIM.

Thank you for identifying these issues! We will correct them.

> When partitioning trajectories into x_1, …, x_S, is S predetermined? How do you select it for each environment? And what effect does **altering S have on the performance**?

The subplan count $S$ is indeed a predetermined hyperparameter that we adapt to each environment's episode length. Following your insightful question, we conducted comprehensive ablation studies on PointMaze to analyze its effect. **The results demonstrate that a suitable choice of $S$ yields the best effectiveness and efficiency.**

**Subplan length ablation study**

S=1 / 3 / 5 (Default) / 10 / 20

Medium:
- Performance: 80 ± 13 / 96 ± 8 / 100 ± 0 / 96 ± 8 / 98 ± 6
- Run Time: 11 ± 1 / 14 ± 3 / 31 ± 4 / 103 ± 5 / 91 ± 3
- Search#: 81 ± 63 / 29 ± 40 / 74 ± 9 / 475 ± 28 / 500 ± 0

Large:
- Performance: 80 ± 0 / 90 ± 10 / 100 ± 0 / 100 ± 0 / 92 ± 10
- Run Time: 11 ± 0 / 19 ± 4 / 65 ± 11 / 118 ± 6 / 131 ± 10
- Search#: 111 ± 31 / 64 ± 50 / 190 ± 37 / 500 ± 0 / 500 ± 0

S=1 / 2 / 4 / 8 (Default) / 15 / 30

Giant:
- Performance: 72 ± 19 / 88 ± 16 / 100 ± 0 / 100 ± 0 / 100 ± 0 / 100 ± 0
- Run Time: 13 ± 1 / 18 ± 4 / 40 ± 22 / 215 ± 23 / 213 ± 9 / 178 ± 7
- Search#: 181 ± 78 / 75 ± 104 / 46 ± 41 / 374 ± 48 / 500 ± 0 / 500 ± 0

These results reveal an important trade-off: with longer subplans (smaller S), the search space is reduced, leading to faster running times but potentially lower performance.
Conversely, very short subplans (e.g., S=20) can require more search iterations, potentially exhausting the computational budget before finding optimal solutions.

> Since the Diffusion Forcing paper introduces a causal denoising schedule, is it feasible to suggest that **the additional performance gains are due to the tree-based planning?** A more in-depth discussion on this would be interesting.

This is an excellent question about disentangling the contributions of tree search (TS) and causal denoising (CD). To address this directly, we conducted an ablation study comparing four configurations: Diffusion Forcing (DF) without CD, standard DF, MCTD without CD, and full MCTD. **The results demonstrate substantial performance gains from TS.**

**Causal Denoising and Tree Search ablation studies**

DF wo CD / DF / MCTD wo CD / MCTD

Performances:
- Medium: 44 ± 22 / 40 ± 21 / 87 ± 16 / 100 ± 0
- Large: 50 ± 14 / 55 ± 20 / 78 ± 6 / 98 ± 6
- Giant: 8 ± 10 / 34 ± 16 / 18 ± 11 / 100 ± 0

The results reveal that both components contribute substantially to MCTD's performance. TS consistently improves performance across all maze sizes, while CD provides especially dramatic gains for long-horizon planning in the giant maze.

> What is C in the fast denoising process? How does **this affect the backpropagated values**?

The parameter $C$ in fast denoising determines how many levels are skipped during the jumpy denoising process. Larger $C$ values mean fewer denoising steps are performed, trading accuracy for speed. We investigated how different jumpiness scales affect performance and efficiency, and also compared against a version that denoises the trajectory in one shot.
**The results demonstrate that a proper choice of $C$ affects both model performance and efficiency.**

**Jumpiness scale ablation study**

Jump=1 / 5 / 10 (Default) / 20 / 50 / One-shot

Medium:
- Performance: 100 ± 0 / 100 ± 0 / 98 ± 6 / 100 ± 0 / 100 ± 0 / 100 ± 0
- Run time: 84 ± 26 / 39 ± 16 / 34 ± 13 / 29 ± 12 / 27 ± 13 / 42 ± 12
- Search#: 86 ± 30 / 74 ± 32 / 77 ± 29 / 73 ± 30 / 65 ± 29 / 143 ± 46

Large:
- Performance: 100 ± 0 / 100 ± 0 / 100 ± 0 / 98 ± 6 / 94 ± 13 / 63 ± 39
- Run time: 160 ± 69 / 91 ± 35 / 74 ± 34 / 65 ± 32 / 68 ± 49 / 355 ± 183
- Search#: 176 ± 85 / 188 ± 81 / 174 ± 90 / 167 ± 89 / 174 ± 112 / 301 ± 134

Giant:
- Performance: 100 ± 0 / 98 ± 6 / 100 ± 0 / 100 ± 0 / 88 ± 16 / 10 ± 21
- Run time: 632 ± 239 / 258 ± 115 / 231 ± 103 / 185 ± 88 / 174 ± 76 / 164 ± 2
- Search#: 390 ± 148 / 379 ± 158 / 394 ± 152 / 363 ± 154 / 387 ± 149 / 500 ± 0

For relatively simple tasks, even aggressive (large $C$) jumpiness or one-shot denoising maintains high performance. However, as task complexity increases, these settings significantly degrade performance, as the value function estimation becomes less accurate under highly jumpy denoising, leading to suboptimal tree search decisions. The default (Jump=10) achieves an excellent balance, maintaining high success rates while reducing computational time compared to smaller jumps.
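The jumpy-denoising schedule described in this rebuttal can be sketched as follows. Only the skip parameter `C` and the idea of skipping noise levels come from the text; the total level count `T` and the schedule construction are illustrative assumptions, and the DDIM-style denoising update itself is elided:

```python
def jumpy_schedule(T, C):
    """Noise levels visited when skipping C levels per step (T -> 0).

    Illustrative sketch only: T is the assumed total number of noise
    levels; the final level 0 is always included so the sample is
    fully denoised.
    """
    levels = list(range(T, 0, -C))
    if levels[-1] != 0:
        levels.append(0)
    return levels

# With T = 100 total levels:
print(len(jumpy_schedule(100, 1)))    # 101 levels visited (no skipping)
print(jumpy_schedule(100, 10))        # jumpy: only 11 denoising steps
```

This makes the accuracy/speed trade-off concrete: larger `C` shortens the schedule (fewer denoiser calls per rollout) at the cost of coarser intermediate trajectories, which is consistent with the ablation trend reported above.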
Summary: The paper "Monte Carlo Tree Diffusion for System 2 Planning" introduces Monte Carlo Tree Diffusion (MCTD), a novel algorithm that merges diffusion models with Monte Carlo Tree Search (MCTS) to enhance test-time compute scalability in planning with diffusion models. It proposes three key innovations: restructuring denoising as a tree-rollout process for semi-autoregressive trajectory refinement, using guidance levels as meta-actions to balance exploration and exploitation, and employing jumpy denoising for planning. MCTD adapts the four MCTS steps—Selection, Expansion, Simulation, and Backpropagation—into a diffusion context, enabling iterative plan improvement. Evaluated on the Offline Goal-conditioned RL Benchmark, MCTD outperforms baselines like Diffuser and Diffusion Forcing in long-horizon tasks such as maze navigation, cube manipulation, and visual pointmaze. The paper claims MCTD is the first to explicitly integrate MCTS with diffusion planning, offering a scalable System 2 reasoning approach. **After Rebuttal Summary** The authors provided a comprehensive ablation that eased my main concerns that this was simply performing a form of standard sampling. I think this ablation improves the strength of the paper and the takeaways regarding the role that MCTS plays in performance. Claims And Evidence: Claims made are both supported and convincing. Methods And Evaluation Criteria: Evaluation datasets are appropriate. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes, sound. Supplementary Material: Yes, full and sufficient. Relation To Broader Scientific Literature: The contributions of "Monte Carlo Tree Diffusion for System 2 Planning" bridge the diffusion-based planning and Monte Carlo Tree Search (MCTS) literature. It builds on diffusion works like Diffuser (Janner et al., 2022) and Diffusion Forcing (Chen et al., 2024a) by adding tree-structured search to improve test-time compute scalability, addressing limitations noted in Karras et al.
(2022), and extends guidance concepts from Dhariwal & Nichol (2021) into meta-actions. In the broader RL and MCTS context, it advances Coulom (2006) and applications like AlphaGo (Silver et al., 2016) by planning at a higher abstraction level with subplans as nodes, reducing reliance on error-prone forward models (Hafner et al., 2022) and aligning with hierarchical planning trends (Chen et al., 2024c), offering a novel, efficient alternative for long-horizon tasks and System 2 reasoning. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths - Novel integration of MCTS with diffusion that enables planning at a high level of abstraction, which is very promising for making test-time planning methods more efficient. Weaknesses - Lack of ablations investigating the parameters of the MCTS search, in particular the balancing of the exploration-exploitation trade-off. This would give more insight as to whether the planning is more sampling-based or really finding promising branches and exploiting them. Willing to raise my score if such ablations are added. Other Comments Or Suggestions: No Questions For Authors: - Could your method be more generally presented as planning at a higher level of abstraction and then open to a variety of planning algorithms, or is it inherently tied to an MCTS-based approach? - The readout policy used was best child in MCTS, did you explore any other readout policies? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1:

Rebuttal: **Relation To Broader Scientific Literature**

We thank you for acknowledging our work's connection to the broader scientific literature. We will ensure these connections are clearly articulated in the revised version.

> Lack of ablations investigating the parameters of the MCTS search, in particular the **balancing of the exploration-exploitation trade-off.**

We appreciate this valuable suggestion. Following it, we have conducted comprehensive ablation studies focusing on the MCTS hyperparameters that control the exploration-exploitation balance in MCTD on PointMaze. **The results demonstrate that the exploration-exploitation balance is one of the key factors for optimizing performance efficiently.** Below are key results from these studies:

**Meta-action choice ablation study**

Choice 1: [0, 0.02, 0.05, 0.07, 0.1]
Choice 2: [0, 0.05, 0.1, 0.5, 1]
Choice 3: [0, 0.1, 0.5, 1, 2]
Choice 4: [0.5, 1, 2, 3, 4]
Choice 5: [2, 3, 4, 5, 6]
Choice 6: [4, 5, 6, 7, 8]
Choice 7: [0, 0.1, 1, 10, 100]

Choices: C1 / C2 / C3 / C4 / C5 / C6 / C7

Performances:
- Medium: 88 ± 16 / 98 ± 6 / 100 ± 0 / 100 ± 0 / 98 ± 6 / 98 ± 6 / 100 ± 0
- Large: 44 ± 15 / 100 ± 0 / 98 ± 6 / 90 ± 10 / 80 ± 0 / 80 ± 0 / 100 ± 0
- Giant: 54 ± 23 / 100 ± 0 / 100 ± 0 / 100 ± 0 / 88 ± 16 / 74 ± 18 / 100 ± 0

The results reveal that when meta-actions are heavily biased toward either exploration (C1) or exploitation (C6), performance degrades significantly. Interestingly, MCTD demonstrates robust performance with balanced meta-action sets, even when some values are extreme outliers (e.g., C7). This suggests that maintaining diverse guidance options is more important than their specific values, as long as both exploration and exploitation are adequately represented.
**UCB hyperparameter ablation study**

We investigated the influence of the UCB exploration weight parameter $C$ in the formula $v_{\text{UCB}} = v_i + C\sqrt{\frac{\ln N}{n_i}}$, where $v_i$ is the node's estimated value, and $N$ and $n_i$ are the visit counts of the parent node and the node itself, respectively.

C=0 (Greedy) / 1.141 (Default) / 3 / 5 / 10

Medium:
- Performance: 88 ± 13 / 100 ± 0 / 100 ± 0 / 98 ± 6 / 98 ± 6
- Run time: 15 ± 1 / 31 ± 4 / 63 ± 10 / 76 ± 23 / 92 ± 55
- Search#: 105 ± 84 / 77 ± 29 / 94 ± 8 / 120 ± 27 / 134 ± 23

Large:
- Performance: 90 ± 10 / 98 ± 6 / 100 ± 0 / 98 ± 6 / 100 ± 0
- Run time: 16 ± 0 / 74 ± 34 / 90 ± 10 / 102 ± 14 / 104 ± 18
- Search#: 117 ± 78 / 174 ± 90 / 211 ± 26 / 257 ± 41 / 265 ± 43

Giant:
- Performance: 82 ± 14 / 100 ± 0 / 98 ± 6 / 100 ± 0 / 100 ± 0
- Run time: 25 ± 1 / 215 ± 23 / 216 ± 12 / 225 ± 22 / 230 ± 10
- Search#: 228 ± 69 / 394 ± 152 / 442 ± 14 / 464 ± 13 / 493 ± 7

With C=0 (pure greedy search), MCTD shows faster inference time and conducts fewer searches, but performance degrades. This occurs because greedy search tends to explore the tree depth-wise quickly, reducing the overhead from jumpy denoising but compromising thorough exploration. The default value (C=1.141) achieves a good balance, while higher values show diminishing returns in performance despite increased computational costs.

**Other ablation studies**

We also studied other ablations for **subplan length**, **causal denoising**, **tree search**, and **the scale of jumpiness**, which are discussed in the rebuttal for 6pZp's review.

> Could your method be more **generally presented as planning** at a higher level of abstraction and then open to a variety of planning algorithms, or is it inherently tied to an MCTS based approach?

While we built our approach on an MCTS backbone, the core idea—treating partial denoising steps as higher-level "states" and using meta-actions (guidance choices) to expand them—is not inherently tied to MCTS.
In principle, **one could substitute different search algorithms** (e.g., best-first search or A*) to explore and refine these partial trajectories. However, MCTS's ability to balance exploration and exploitation through principled tree search is uniquely suited for navigating the complex, high-dimensional trajectory spaces in diffusion planning.

> The **readout policy** used was best child in MCTS, did you explore any other readout policies?

Thank you for suggesting this interesting idea. We agree that we may consider the most-visited child as an alternative. However, **it does not seem to fit our setting well**, as it pursues a complete high-quality plan rather than inferring the next best action. In future work, we plan to explore other readout strategies and their effects on planning in different domains.

[1] Silver, David, et al. "Mastering the game of Go with deep neural networks and tree search." *Nature* 529.7587 (2016): 484-489.
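As a generic illustration of the UCB rule $v_i + C\sqrt{\ln N / n_i}$ quoted in the ablation above (the node bookkeeping here is an assumption for the sketch, not the authors' implementation):

```python
import math

def ucb_select(values, visits, C=1.141):
    """Pick the child index maximizing v_i + C * sqrt(ln N / n_i).

    `values` holds the children's estimated values v_i, `visits` their
    visit counts n_i; N is the parent's visit count, taken here as
    sum(visits). Unvisited children (n_i == 0) are selected first.
    """
    N = sum(visits)
    best, best_score = None, -math.inf
    for i, (v, n) in enumerate(zip(values, visits)):
        score = math.inf if n == 0 else v + C * math.sqrt(math.log(N) / n)
        if score > best_score:
            best, best_score = i, score
    return best

# C = 0 reduces to greedy selection on the value estimates alone,
# matching the "C=0 (Greedy)" column of the ablation.
print(ucb_select([0.5, 0.9, 0.4], [10, 10, 10]))  # exploit: index 1
print(ucb_select([0.5, 0.9, 0.4], [10, 10, 0]))   # explore: index 2
```

The exploration term grows for rarely visited children and shrinks as a child is revisited, which is why the rebuttal sees greedy search (C=0) run faster but under-explore, while large C over-explores with diminishing returns.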
Summary: Standard diffusion-based planners don't improve with additional test-time computation, which limits their effectiveness on complex long-horizon tasks. The authors introduce Monte Carlo Tree Diffusion (MCTD), which formulates the diffusion process in such a way that each denoising process can effectively "branch out" - i.e. each denoising step can theoretically happen in different goal directions and with different denoising rates. Notably, the "branching out" of diffusion steps gives rise to a tree structure when different branches are explored. This in turn allows for searching over these branches, building on the Monte Carlo tree search literature to efficiently explore different branches and find better overall plans at the cost of additional test-time computation. The authors show that MCTD significantly outperforms standard diffusion planners and reinforcement learning baselines on challenging long-horizon tasks with sparse rewards, albeit at the cost of large test-time computation. ## Post Rebuttal I acknowledge the authors' rebuttal and maintain my original assessment. Claims And Evidence: Key Claim: The MCTD formulation of diffusion allows exploiting test-time compute to improve the overall quality of long-horizon plans. Evidence: Quantitative results across maze navigation tasks show significant improvements over baselines. Moreover, the integration of MCTS principles into this diffusion framework is sound and the implementation details are detailed. Minor comment 1: The claim of "System 2 planning" is a bit of a stretch, as the system does not implicitly have a world model it can search all variations upon. At best, it is a very informed heuristic search over the data distribution and does not have any sense of planning in abstract space and grounding it to low-level actions. Minor comment 2: The claim that MCTD "overcome limitations" of both MCTS and diffusion models is also a bit of a stretch.
It is definitely the case that MCTD allows us to combine the theory of MCTS with the diffusion process. But it does so at the cost of test-time compute; it adds a different dimension to the diffusion process rather than "overcoming limitations" in a pure sense. This does not "magically" solve the limitation of MCTS being slow and needing a model, or the diffusion process being suboptimal when used in its more standard forms. Methods And Evaluation Criteria: The evaluation criteria are appropriate. - The maze navigation tasks (point-mass and ant) provide clear tests of long-horizon planning capabilities with varying difficulty levels - The multi-cube manipulation evaluates compositional planning and transfer in a physical domain. - Visual maze navigation tests performance under partial observability - The metrics (success rates and rewards) directly measure planning effectiveness It is unfortunate that the random search using repeated diffusion was not scaled to match the test-time compute used by MCTD. It would have been very useful to show that random search alone scales very poorly with more test-time compute. Theoretical Claims: While the relationship with MCTS has been established and is clearly motivated, theoretical results will be very hard to inherit, as the "fast diffusion" does not use any model whose sub-optimality can be quantified. Moreover, the action space defined in the diffusion process is technically continuous with a non-trivial dynamics model, which has not been studied for the problem. Experimental Designs Or Analyses: - The choice of OGBench's maze environments provides a good testbed for long-horizon planning with sparse rewards - Their ablation of "Diffuser with diffused action" highlights the failure mode of baseline approaches. Concerns: - Lack of direct comparison to other test-time optimization methods which can scale with compute. The random diffusion process can easily be scaled to any amount of compute for fair comparison.
How about a baseline where the tree search is simply conducted greedily at every diffusion step? It could very well be the case that one-step lookahead at every step would also have been enough. - Missing analysis of how performance varies with subplan count and meta-action choices. It would have been wonderful to see the effectiveness of the approach being highlighted for varying subplan counts and meta-action choices. Supplementary Material: The algorithmic details in the supplementary material were very welcome. Relation To Broader Scientific Literature: Understanding the nuances of the diffusion process and leveraging it for faster generation of plans has been explored by some related works where they directly predict a candidate first and then do a limited number of diffusion steps. This work explores the opposite direction, where test-time compute is being exploited. Leveraging test-time compute has always been grounded in search in one form or another. Searching all different paths the diffusion process could have taken and exploring through them has close connections with the tree-of-thought literature from other generative models like LLMs. Essential References Not Discussed: NA Other Strengths And Weaknesses: Weaknesses: - There could be simple but strong baselines that leverage test-time compute in trivial ways to match the compute used by MCTD. For example, greedy search on the MCTD tree structure, repeated runs of random diffuser-random search. - The work does not provide a detailed study on the effects of subplan length and meta-action choices in a controlled setting. - The work also does not show what is the trade-off between using the scale of "jumpiness" in the jumpy denoising and overall plan quality. Other Comments Or Suggestions: - While analogies with MCTS are clear, it is a bit hard to follow with references to "nodes" and "meta actions".
Maybe it would have been simpler to define a diffusion MDP where each state is a set of partially denoised subplans and an action can denoise one subplan in different directions. This way, it may have been easier to define and contrast directly with the MCTS formulation. - It would be very interesting to reformulate the test-time compute objective to produce a "diverse set of solutions" rather than just optimizing for the best solution. Questions For Authors: - Did the authors visualize a particular example where the UCB-style approach had clear benefits over just greedy search? Code Of Conduct: Affirmed. Overall Recommendation: 4
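The reviewer's suggested "diffusion MDP" framing could be sketched roughly as follows. This is a hypothetical illustration of the suggested state/action structure only; the denoiser itself is stubbed out, and all names here are invented for the sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DiffusionMDPState:
    """Reviewer-suggested framing: a state is a tuple of subplans,
    each tagged with how far it has been denoised (0 = fully denoised)."""
    subplans: tuple          # S partially denoised subplans
    noise_levels: tuple      # remaining noise level per subplan

def denoise_action(state, i, guidance, step=1):
    """An 'action' denoises subplan i one step under a chosen guidance
    scale. The actual denoiser is stubbed; this only tracks structure."""
    levels = list(state.noise_levels)
    levels[i] = max(0, levels[i] - step)
    plans = list(state.subplans)
    plans[i] = (plans[i], guidance)  # stub: record the guidance applied
    return DiffusionMDPState(tuple(plans), tuple(levels))

s0 = DiffusionMDPState(subplans=("x1", "x2"), noise_levels=(3, 3))
s1 = denoise_action(s0, 0, guidance=0.1)
print(s1.noise_levels)  # (2, 3)
```

Under this framing, the MCTS-specific vocabulary ("nodes", "meta-actions") maps directly onto states and actions of an ordinary MDP, which is the contrast the reviewer asks for.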
Rebuttal 1:

Rebuttal:

> The claim of "System 2 planning" is a bit of a stretch as the system does not implicitly have a world model it can search all variations upon.

We acknowledge that there may be differing perspectives on what constitutes System 2 planning in the context of ML. Drawing on Kahneman [1], we define System 2 planning as a deliberate, sequential search across multiple hypotheses at inference time, which aligns with our approach. Regarding world models, we argue that diffusion models can qualify. Although they may not strictly meet the action-conditioned forward model definition, we believe a narrow view of “world model” is unnecessary for System 2 classification. Since diffusion models implicitly encode structural knowledge, we question whether excluding them is truly warranted. Our method uses a semi-autoregressive rollout guided by meta-actions, similar to autoregressive, action-conditioned forward models but operating in a more temporally abstract space. We believe that this abstraction retains the hypothesis-driven nature of System 2 planning and supports our contribution to broader discussions on such systems.

[1] Kahneman, Daniel. *Thinking, Fast and Slow*. New York: Farrar, Straus and Giroux, 2011.

> The claim that MCTD "overcome limitations" of both MCTS and diffusion models is also a bit of a stretch.

We acknowledge that our phrasing could be more precise. Rather than "overcoming limitations," MCTD brings complementary strengths from each approach while addressing certain weaknesses. We will revise this claim more moderately in the final version.

> It is unfortunate that the random search using repeated diffusion was not scaled to **match the test time compute** used by MCTD.
The major computational overhead of a diffusion planner lies in the denoising process, and **we evaluated some baselines with a matched budget in our paper.** The Diffuser-Random Search baseline already uses the same number of denoising steps as MCTD, as discussed in Section 5.2. Additionally, Section 5.5 compares both Diffuser and Diffuser-Random Search against MCTD using equivalent computational budgets.

> theoretical results will be very hard to inherit as the "fast diffusion" does not use any model whose sub-optimality can be quantified.

We agree on this. Thank you for pointing this out!

> How about a baseline where the tree search is simply conducted **greedily at every diffusion step**?

Following your recommendation, we implemented and evaluated one-step greedy search at every diffusion step on Diffuser for the PointMaze. The approach can search diverse candidates at each step and select the best using the same reward function as MCTD. We scaled this approach to use even more children than MCTD (MCTD uses 5 children). **The results demonstrate that TTC scaling is indeed limited with this approach.** Specifically, we observed:

Child# = 5 / 10 / 15 / 20

Performances:
- Medium: 62±6 / 58±11 / 60±0 / 60±0
- Large: 40±0 / 40±0 / 40±0 / 40±0
- Giant: 0±0 / 0±0 / 0±0 / 0±0

The results are primarily due to the lack of deep searching across multiple samples and the absence of backtracking over the tree structure.

> Missing analysis of how **performance varies with subplan count** and **meta-action choices**.

Thanks to your feedback, we **additionally conducted comprehensive ablation studies for both subplan length and meta-action choices on PointMaze**. They are discussed in the rebuttals for reviewers 6pZp and TyXv due to the space limitation.

> Searching all different paths the diffusion process could have taken and exploring through them has close **connections with tree of thought** literature from other generative models like LLMs.

Thank you for pointing this out.
We agree that our work shares important connections with Tree of Thought approaches. We will strengthen these references in our revised paper. > There could be simple but strong baselines ... greedy search on the MCDS tree structure, **repeated runs of random diffuser-random search**. > We addressed this in the evaluation for one-step greedy search. > The work does not provide detailed study on the effects of sub-plan length and meta action choices in a controlled setting. > We addressed this in the performance analysis for subplan count and meta-action choices. > The work also does not show what is **the trade-off between using the scale of "jumpiness" in the jumpy denoising and overall plan quality**. > We appreciate this insightful suggestion. We **additionally investigated the effects of the "jumpiness" scale** through additional ablation study, which is discussed in the rebuttal for 6pZp's review. > Did the authors visualize a particular example where **UCB style approach had clear benefits** over just greedy search ? > We appreciate this insightful suggestion. We **additionally conducted an ablation study controlling the weight C for visit count in the UCB value formulation** which is discussed in the rebuttal for TyXv's review. --- Rebuttal Comment 1.1: Comment: I confirm that I have read the author response to my review. I thank the authors for the added experiments in the rebuttal phase. This helps make a stronger case for the approach. I stand by my decision to accept the paper.
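For readers unfamiliar with the UCB value formulation mentioned at the end of this thread, the visit-count weight C plays the standard exploration role of UCT. A minimal Python sketch (hypothetical node fields; illustrative only, not the authors' implementation):

```python
import math

def ucb_score(total_value, visits, parent_visits, c=1.0):
    """Standard UCT score: mean value plus a visit-count exploration
    bonus weighted by the constant C discussed above."""
    if visits == 0:
        return float("inf")  # always try unvisited children first
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children, parent_visits, c=1.0):
    """Pick the child with the highest UCB score; with c=0 this
    degenerates to the purely greedy selection the reviewer asks about."""
    return max(children, key=lambda ch: ucb_score(ch["value"], ch["visits"], parent_visits, c))
```

Setting `c=0` recovers greedy selection, which is exactly the contrast probed by the ablation on the weight C.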
Summary: This paper proposes a new algorithm, MCTD, which involves MCTS in the diffusion model. MCTD uses whole trajectories as its states and introduces a meta-action to generate child nodes based on whether a guidance function is used. By leveraging the MCTS process, MCTD achieves better success rates and runtime performance in most test settings compared to baselines such as Diffuser and Diffusion Forcing.

Claims And Evidence: in detailed comments

Methods And Evaluation Criteria: in detailed comments

Theoretical Claims: NA

Experimental Designs Or Analyses: in detailed comments

Supplementary Material: NA

Relation To Broader Scientific Literature: in detailed comments

Essential References Not Discussed: NA

Other Strengths And Weaknesses:

Strengths: The idea of MCTD is interesting. MCTD performance surpasses baselines in most test cases, both in success rate and runtime.

Weaknesses: The meta-action is not very convincing; I am not sure why we need a no-guidance action. The writing should be largely improved.

Other Comments Or Suggestions:

The code only has a README, and the README is empty.

Preliminaries

Diffuser equation 1: Why is there an exp, which is different from the Diffuser paper?

Monte Carlo Tree Diffusion

3.1 "Specifically, we partition the full trajectory… x_s!=empty." What is x_i in x? What is S? What is "subplan" and "denoising schedule"? I am totally lost in this paragraph. "Because S ≪ N." What is N? I assume x_s is an agent path trajectory but with noise (an intermediate result of the diffusion model). So basically, this part wants to say MCTD uses x_s as a tree node, so the tree height depends on the diffusion model iteration steps. If we only need 5 steps to recover the whole trajectory from noise, the tree height will be 5.

3.2 "While meta-actions… during the denoising process." So what are "meta-actions"? I still have no idea about them. "In MCTD, we observe… in the offline data." I don't understand, and there may be some grammar problems.

To my understanding, the meta-action is just two sampling models: one samples from p, and the other samples from p_g. According to Figure 1, it seems MCTD alternates between using two kinds of probabilities. In the tree, each node only has two child nodes. So the question is, could we have more child nodes, or how could we determine the optimal number of child nodes?

3.3 The authors should ensure the notations are consistent or explain new notations such as g in equation 4 and g_s in equation 3. My question is, if we already have a reward function and also a guidance function, why do we need a no_guidance action? I think, just like in the Diffuser model, encoding this information in the inference part should be enough.

Experiments

Since the baselines are mainly diffuser models, I think there should be a test using the same maze map and robot stacking test settings as in the Diffuser paper.

Questions For Authors:

1. In the tree, each node only has two child nodes. Could we have more child nodes, or how could we determine the optimal number of child nodes?

2. If we already have a reward function and also a guidance function, why do we need a no_guidance action? I think, just like in the Diffuser model, encoding this information in the inference part should be enough.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal:

> I am not sure why we need a **no-guidance action**.

Diffusion planners often rely on strong guidance, but under unseen goals or complex long-horizon tasks, this can be detrimental. Allowing "no guidance" (or weak guidance) provides essential exploration. We demonstrate this by **showing the performance degradation when MCTD only uses high guidance values.**

Default (guidance levels [0, 0.1, 0.5, 1, 2]) / Only High Guidance (guidance levels [2, 2, 2, 2, 2]):
- Medium: 100 ± 0 / 98 ± 6
- Large: 98 ± 6 / 80 ± 0
- Giant: 100 ± 0 / 78 ± 11

> Code only has README, and README is empty.

We are working on the code release. We will make sure to release the code after the review process.

> Why is there an exp which is different from the Diffuser paper?

This reflects a probabilistic interpretation common in RL, where optimality can be linked to exponentiating the reward, $\exp(r(\tau))$. In the original Diffuser paper, a similar concept was denoted by a heuristic term $h(\tau)$. We write $\exp(\mathcal{J}(\tau))=h(\tau)$.

> What is $\mathbf{x}_i$ in $\mathbf{x}$? What is $S$?

As written on page 3, line 136, $\mathbf{x}_i$ is a subplan of the full trajectory $\mathbf{x}$, and $\bigcap_S \mathbf{x}_S=\emptyset$ means there are no shared trajectories between subplans. $S$ is the number of subplans in the full trajectory $\mathbf{x}$.

> What is "subplan" and "denoising schedule"?

As written on page 3, line 136, a subplan is a part of the trajectory. The denoising schedule is the assignment of a different noise level to each subplan. For instance, in MCTD, near-future subplans are kept at low noise, while further-future subplans are kept at high noise, as shown in Figure 1(b).

> "Because S ≪ N." What is N?

N is the total length of the trajectory and S is the number of subplans.

> I assume x_s is an agent path trajectory but with noise (middle result of the diffusion model). ... so the tree height depends on the diffusion model iteration steps.

$\mathbf{x}_s$ denotes a **subplan** (a partial trajectory segment) at some intermediate stage of denoising, not the full trajectory. In MCTD, the **tree height is determined by the number of subplans** $S$, rather than by the total number of denoising steps.

> What are "meta-actions"?

In standard MCTS, each node expands by choosing from all possible actions. Because we want to keep the branching factor manageable (especially in continuous spaces), we introduce **meta-actions** to represent **guidance levels**.

> "In MCTD, we observe… in the offline data." I don't understand, and there may be some grammar problems.

Although we checked that its grammar is not wrong, we agree that it could be clearer with a slight change, as follows: *In MCTD, we observe that sampling from the prior distribution p(x), i.e., using the standard diffusion sampler, represents exploratory behavior as it does not attempt to achieve any goal. Instead, it only imitates the prior behavior contained in the offline data.* In this part, we discussed that sampling from the prior $p(\mathbf{x})$ represents exploratory behavior, since it does not attempt to achieve any goal.

> Could we have **more child nodes**, or how could we determine the **optimal number of child nodes**?

We show a simple illustration in Figure 1 with two children. In practice, we can use **multiple guidance levels**, and indeed we do in our experiments. Currently, the number of child nodes (meta-actions) is a **hyperparameter**. Exploring an adaptive selection of the number of meta-actions for each task is an interesting direction for future work.

> The author should ensure the notations are consistent or explain new notations such as g in equation 4 and g_s in equation 3.

$\mathbf{g}$ and $g_s$ are introduced on page 3 as the guidance schedule and the guidance, but we will revise the paper to reintroduce them right before each equation.

> Why do we need a no_guidance action?

We replied to this in the no-guidance action investigation above.

> ... I think there should be a test using **the same maze map** and robot stacking test settings as in **the Diffuser paper**.

We appreciate the suggestion. While we could not evaluate MCTD on robot stacking due to time limitations, we **did** evaluate on the D4RL multi-2D maze tasks which are evaluated in the Diffuser paper. **The results demonstrate MCTD outperforms Diffuser.**

Diffuser / MCTD:
- Umaze: 128.9 ± 1.8 / 239.7 ± 46.2
- Medium: 127.2 ± 3.4 / 430.6 ± 104.8
- Large: 132.1 ± 5.8 / 548.9 ± 195.8

We measured performance with the accumulated rewards from the environment, as done in Diffuser. The performance is averaged over 100 samples. We attribute the performance gaps to the searchability of MCTD and the different guidance types (Diffuser uses inpainting; MCTD reduces the distance between states and the goal). The inpainting guidance can lead to inefficient planning when the start and goal are nearer than the plan length.

---

Rebuttal Comment 1.1:

Comment: Thank you for the response. I believe this paper could benefit from some revisions. My score remains primarily due to concerns about the writing.
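As a concrete illustration of the meta-action idea discussed above, expanding a node could create one child per guidance level, so the branching factor equals the number of meta-actions. A minimal sketch with hypothetical node dictionaries (the default guidance levels are taken from the ablation reported in this thread; this is not the authors' code):

```python
def expand_node(node, guidance_levels=(0.0, 0.1, 0.5, 1.0, 2.0)):
    """Create one child per meta-action (guidance level). The 0.0 child
    is the exploratory 'no-guidance' rollout; higher levels pull the
    denoised subplan more strongly toward the goal."""
    node["children"] = [
        {"guidance": g, "value": 0.0, "visits": 0, "children": []}
        for g in guidance_levels
    ]
    return node["children"]
```

The "Only High Guidance" ablation above would correspond to passing `(2.0,) * 5` as the guidance levels.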
Arbitrarily-Conditioned Multi-Functional Diffusion for Multi-Physics Emulation
Accept (poster)
Summary: The authors outline a generative modeling technique to produce functional solutions to multi-function problems, with an emphasis on emulating multi-function physical systems. This is accomplished by casting the typical DDPM framework into the functional space, where the noise is framed as a Gaussian process. Then, DDPM effectively samples at sampling points along the function's domain, and this is extended to multiple functions simultaneously. Key contributions also include a tailored loss function for arbitrary conditioning, including conditioning on entire functions or on subsets of particular functional domains. They also provide an efficient scheme for sampling from the GP. They compare to extensive state-of-the-art baselines, including neural operators and a diffusion-based method, exhibiting superior performance. In an ablation-like study, they verify that the explicit training using the masked loss and 0-filling is beneficial.

## update after rebuttal:

I believe my original score holds, and that this work is a good contribution.

Claims And Evidence: Their main claim is the advantage and flexibility of arbitrary conditioning in generative models when solving diverse inverse problems. Training to solve all functions simultaneously, the authors demonstrate that their method outperforms methods trained specifically for each task individually, validating that the multi-function estimation approach is advantageous.

Methods And Evaluation Criteria: The authors rigorously evaluate their methods using L2 error on the reconstruction of functional outputs. Additionally, they assess the ability of their method to align with true physical equations, which is a particularly important and often overlooked consideration. Similarly, they assess the diversity in their outputs (exhibiting competitive performance). Finally, they fully examine the conditional capability of their approach for full-function prediction (forward models and inverse problems) and interpolation within a given function's domain.

Theoretical Claims: The authors do not make theoretical claims.

Experimental Designs Or Analyses: The experiments sufficiently assess the validity of the approach. Natural questions might include (1) are neural operators trained for specific tasks more effective, (2) are other diffusion-like approaches more effective, particularly those with different conditioning strategies, and (3) how effective is the uncertainty quantification enabled by the method? The authors clearly and effectively answer questions (1) and (2) in the experiments, evaluating on a wide array of settings with competitive baselines. However, despite mentioning uncertainty quantification earlier in the paper, it is not addressed in the experiments. It would be interesting to see how uncertainty intervals or estimates obtained from this method compare to those obtained from other capable methods, such as simformer.

Supplementary Material: The appendices provide background information regarding experiment settings/datasets, baseline methods, and hyper-parameter selection. Additionally, they provide more experiment results and visualization of the samples from the proposed approach.

Relation To Broader Scientific Literature: The authors position their work within the context of generative modeling via diffusion. Specifically, they connect to other diffusion-based methods for functional generation/generating the outputs to a function, making the distinction that their work focuses on the generation of multiple related functions simultaneously. They also connect to the neural operator literature, which aims to learn function-to-function mappings, e.g., corresponding to solutions of PDEs. However, by leveraging diffusion/generative modeling, their framework naturally incorporates uncertainty quantification in addition to prediction.

Essential References Not Discussed: While not necessary for comparison in experiments, it might be good to acknowledge other diffusion-based methods for solving inverse problems, as that is a major focus of this work. The most important work in this domain is perhaps [1], with others including [2,3]. These methods need not be compared with experimentally, but it would be interesting to discuss the advantages and differences between these approaches and the outlined approach in the context of multi-function generation and inverse problems.

[1] Chung et al. "Diffusion Posterior Sampling for General Noisy Inverse Problems." ICLR 2023.
[2] Wang et al. "Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model." ICLR 2023.
[3] Song et al. "Pseudoinverse-Guided Diffusion Models for Inverse Problems." ICLR 2023.

Other Strengths And Weaknesses: I think this is a strong work with rigorous experimental comparisons. While I was not initially convinced in the main text by the utility of the masked loss function for training a diffusion model capable of flexible conditional generation, it is made evident in the experiments that this is indeed necessary, as it is demonstrated to be more effective than prior methods with trained conditioning and an "ablated" version of the proposed approach. Moreover, the authors focus on key evaluation metrics, such as physical consistency and diversity, which are often overlooked and important components in generative modeling for surrogate modeling, solving inverse problems, and unconditional generation. The main downside, however, is that the authors did not further investigate the ability of their approach for uncertainty quantification, which, as they note, is another important consideration for modeling physical systems. With that said, I believe they have outlined an approach that will be valuable to share with the community.

Other Comments Or Suggestions: The position of Algorithm 1 in the middle of the text is a bit strange; maybe it would be better to move it to the top of the page.

Questions For Authors:

1. The arbitrary choice of h being Bernoulli-distributed with p=0.5 is not justified. Wouldn't a network trained with such an assumption always "assume" the input to have roughly half of the components masked? Have you experimented with varying p through training?

2. "We used FNO to construct our denoising network" What does this mean? You share the same architecture as FNO?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We thank the reviewer for their insightful comments and valuable suggestions.

> It would be interesting to see how uncertainty intervals or estimates obtained from this method compare to those obtained from other capable methods, such as simformer.

R1: Thank you for the great suggestion — we completely agree. We have added the evaluation and analysis of uncertainty quantification. Please see our **response R1 to Reviewer bmeB** for details.

> The most important work in this domain is perhaps [1], with others including [2,3]. These methods need not be compared with experimentally, but it would be interesting to discuss the advantages and differences between these approaches.

R2: Thank you very much for providing these excellent references! We will certainly cite and discuss them. The key difference lies in the training and inference paradigms. The referenced works [1–3] focus on training an **unconditional diffusion model**, and only at inference time do they incorporate the likelihood of observations or measurements to guide sampling from the conditional distribution. The advantage of this approach is that it only requires a standard diffusion model --- potentially even a pre-trained one --- without the need to address conditional sampling tasks during training. Indeed, our baseline **MFD-Inpaint** (see Section 5.2) follows this paradigm. However, the referenced methods may have limitations when applied to our scenarios: [1] requires a closed-form expression of the likelihood or forward model $p(y|x_0)$, or at least an efficient way to compute its gradient (or sensitivity) --- which is often infeasible in multi-physics systems governed by complex differential equations. The work [2] further restricts applicability by assuming that the forward model is linear. [3] is more flexible and can handle nonlinear forward models, but it requires a "pseudo-inverse" operator for the forward process, which might not be easy to obtain.

Our approach takes a different route: during training, we **explicitly incorporate conditionality** by randomly sampling masks and training our diffusion model to handle a wide range of conditional sampling tasks (including the unconditional case). At inference time, we **do not incorporate any likelihood scores** --- instead, we directly sample from our learned denoising model. Empirically, we found that this conditional training strategy yields **large improvements over unconditional training approaches** like MFD-Inpaint across a variety of tasks (see Table 3 and our response R1 to Reviewer BtjJ).

> The arbitrary choice of h being Bernoulli-distributed with p=0.5 is not justified... Have you experimented with varying p through training?

R3: Thank you for the insightful question. When there is no prior knowledge about whether a function value should be conditioned on or be generated, we suggested using $p = 0.5$ to avoid bias --- giving each component an equal chance of being conditioned or sampled. This is effectively a uniform prior. We do agree that the mean proportion of masked components is 0.5; but the per-component standard deviation ($\sqrt{p\times(1-p)}=0.5$) is also the maximum possible, indicating that **the sampled masks can vary widely**. In our experiments, we used $p = 0.5$ to maintain a neutral setting. However, our method places no restrictions on the choice of $p$; it can be adjusted based on domain knowledge or specific application needs.

We have now included results for alternative values of $p$: 0.2, 0.4, 0.6, and 0.8. As shown below, the performance for $p=0.4$ and $p=0.6$ is comparable to that of $p=0.5$, demonstrating a certain degree of robustness to the choice of $p$. However, when $p$ is set to more extreme values, such as 0.2 or 0.8, the performance degrades markedly. We will add these results and discussions to our paper.

**Relative $L_2$ error**

|System|Task|$p=0.2$|$p=0.4$|$p=0.5$|$p=0.6$|$p=0.8$|
|-|-|-|-|-|-|-|
|DF|$f,u$ to $a$|2.16e-2|1.6e-2|**1.32e-2**|1.34e-2|1.26e-2|
||$a,u$ to $f$|1.85e-2|1.61e-2|**1.59e-2**|1.67e-2|1.7e-2|
||$a,f$ to $u$|2.5e-2|1.95e-2|**1.75e-2**|2.05e-2|2e-2|
||$u$ to $a$|4.48e-2|4.14e-2|**3.91e-2**|3.93e-2|4.96e-2|
||$u$ to $f$|4.26e-2|4.07e-2|**3.98e-2**|4.32e-2|5.02e-2|
|CD|$s,u$ to $v$|3.24e-2|2.72e-2|**2.17e-2**|2.33e-2|2.91e-2|
||$v,u$ to $s$|7.11e-2|6.85e-2|**5.45e-2**|5.86e-2|8.35e-2|
||$v,s$ to $u$|1.81e-2|1.76e-2|**1.60e-2**|1.77e-2|3.56e-2|
||$u$ to $v$|2.91e-2|**2.41e-2**|2.66e-2|2.65e-2|5.15e-2|
||$u$ to $s$|7.69e-2|**5.62e-2**|6.06e-2|6.91e-2|9.15e-2|

> "We used FNO to construct our denoising network" What does this mean? You share the same architecture as FNO?

R4: Great question. Yes, we use the same architecture as FNO --- specifically, a sequence of Fourier layers --- to construct our denoising network. We will make this point clearer in the paper.

---

Rebuttal Comment 1.1:

Comment: Thank you for addressing my concerns. I maintain that this paper represents a good contribution to the conference, and believe my original score holds.

---

Reply to Comment 1.1.1:

Comment: Thank you for your support and feedback on our response!
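For illustration, the Bernoulli random-mask construction discussed in R3 could be sketched as below (a minimal numpy sketch with i.i.d. noise standing in for the paper's GP noise; hypothetical names, not the authors' exact training loss):

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_training_example(x0, p=0.5):
    """Build one schematic training example: each function value is
    independently chosen to be conditioned on (kept clean) with
    probability p, or noised (to be denoised by the network) otherwise."""
    h = rng.random(x0.shape) < p          # h=True: condition on this value
    eps = rng.standard_normal(x0.shape)   # i.i.d. stand-in for the GP noise
    x_in = np.where(h, x0, x0 + eps)      # conditioned entries stay clean
    return h, eps, x_in
```

With p=0.5, roughly half the entries are conditioned on average, but the individual masks vary widely from sample to sample, which is what exposes the network to the full range of conditional tasks.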
Summary: The paper presents Arbitrarily-Conditioned Multi-Functional Diffusion (ACM-FD), a novel probabilistic surrogate model for multi-physics emulation. The key contributions include: - A multi-functional diffusion framework based on DDPM that models noise as multiple Gaussian processes, enabling generation of multiple functions in multi-physics systems - An innovative denoising loss using random masks that handles all possible conditional parts within the system - An efficient training and sampling approach using Kronecker product structure in the GP covariance matrix - Comprehensive experiments across four multi-physics systems showing superior performance compared to state-of-the-art neural operators The paper demonstrates that ACM-FD can handle various tasks including forward prediction, inverse problems, completion, and data simulation within a single framework, while providing uncertainty quantification. Claims And Evidence: The paper's claims are generally well-supported by evidence, with a few areas that could benefit from additional clarification: Strong Claims with Clear Evidence: - The ability to handle multiple functions of interest - The computational efficiency gains through Kronecker product structure - The superior performance across 24 prediction tasks Areas Needing Additional Support: - The claim about "substantially reducing the training and sampling costs" would benefit from quantitative comparisons with baseline methods Methods And Evaluation Criteria: Suggestions for Improvement: - Consider adding ablation studies to demonstrate the importance of each component (random masks, Kronecker structure) - There are also some diffusion model works [1,2] that add masks during the diffusion process, and the reviewers believe that this needs to be discussed. NOTE: these two works are also using diffusion models to inverse problem. 
- Add more details about the training process and hyperparameter selection [1] High-Frequency Space Diffusion Model for Accelerated MRI [2] Measurement-conditioned denoising diffusion probabilistic model for under-sampled medical image reconstruction Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design is comprehensive but could be enhanced: Suggestions for Improvement: - Add statistical significance tests for the performance comparisons - Include more details about the training data generation process - Provide visualization of the generated functions and their uncertainty estimates Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Weaknesses: - Limited theoretical analysis of the proposed method - Lack of comparison with traditional numerical methods - Need for more detailed ablation studies - Could benefit from more extensive visualization of results Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for the many constructive comments.

> Consider adding ablation studies to demonstrate the importance of each component (random masks, Kronecker structure)

R1: Great suggestion. Actually, we have already conducted studies on **the effect of random masks**. In Section 5.3, we compared our method (ACM-FD) with a variant trained **without** random masks (denoted as MFD-Inpaint) on the function completion task. As shown in Table 3, incorporating random masks during training significantly reduces the relative $L_2$ error --- by as much as 58% to 96%. Here, we additionally include results for prediction tasks, presented below. Across various prediction settings, our model trained with random masks consistently outperforms the variant without them (denoted as MFD), achieving substantial error reductions ranging from 40.9% to 96.2%.

**Relative $L_2$ error**

|System|Task|MFD|ACM-FD|
|-|-|-|-|
|DF|$f,u$ to $a$|1.70e-1 (3.45e-3)|**1.32e-2 (2.18e-4)**|
||$a,u$ to $f$|6.98e-2 (3.09e-3)|**1.59e-2 (1.59e-4)**|
||$a,f$ to $u$|2.96e-2 (1.16e-3)|**1.75e-2 (4.16e-4)**|
||$u$ to $a$|1.70e-1 (3.56e-3)|**3.91e-2 (7.08e-4)**|
||$u$ to $f$|1.05e-1 (4.3e-3)|**3.98e-2 (6.45e-4)**|
|CD|$s,u$ to $v$|5.47e-1 (3.56e-2)|**2.17e-2 (4.53e-4)**|
||$v,u$ to $s$|3.95e-1 (4.01e-2)|**5.45e-2 (1.40e-3)**|
||$v,s$ to $u$|3.68e-2 (1.65e-3)|**1.60e-2 (2.15e-4)**|
||$u$ to $v$|6.94e-1 (3.64e-2)|**2.66e-2 (3.08e-4)**|
||$u$ to $s$|9.23e-1 (3.64e-2)|**6.06e-2 (2.54e-4)**|

We have also supplemented the ablation study on the use of the Kronecker product structure. Please **see R2 to Reviewer bmeB** for details.

> There are also some diffusion model works [1,2] that add masks during the diffusion process, and this needs to be discussed.

R2: Thank you for the excellent references. We will definitely cite and discuss them. A key distinction is that the masks used in [1,2] are **fixed** and derived from the sampling patterns inherent to the medical imaging data. As a result, these works focus on a **single** inverse task --- either recovering high-frequency regions [1] or unsampled measurements [2]. In contrast, our method **repeatedly** samples **random masks** during training, enabling it to jointly handle a wide variety of conditional sampling tasks, encompassing all kinds of forward prediction, inverse prediction, and completion tasks. Another critical difference lies in the **functional-space formulation** of our model: the noises used during both training and inference are stochastic functions sampled from Gaussian processes. Our method is designed to generate functions either unconditionally or conditioned on other function samples. This motivation leads to fundamentally different designs in model architecture, training loss, and computational techniques (e.g., the use of the Kronecker product).

> Add more details about the training process and hyperparameter selection

R3: Thank you for the great suggestion. We have actually provided detailed information in **Appendix Section B**, including the set of hyperparameters, their ranges, implementation details and libraries used for each method, as well as the validation settings. Due to space constraints, only a subset of this information is included in the main paper, with a reference to the appendix (see Line 366). In response to your feedback, we will further expand and enrich Appendix Section B to improve clarity and completeness.

> Add statistical significance tests for the performance comparisons

R4: Thank you for the excellent suggestion. Based on a $z$-test conducted on the prediction errors with associated error bars, our approach outperforms the baseline methods in the vast majority of cases at the 95% significance level. We will include the results in our paper.

> Include more details about the training data generation process

R5: We have actually included these details in **Appendix Section A**, which provides the governing equations, the numerical simulation library used, the simulation procedures, and the data collection process. We believe this information is sufficient to generate all the data for our experimental settings. Additionally, we will publicly release our experimental data.

> Provide visualization of the generated functions and their uncertainty estimates

R6: Thank you for the great suggestion. We have added the suggested visualizations and analysis --- **please see our response R1 to Reviewer bmeB for details**.

> Lack of comparison with traditional numerical methods

R7: In fact, we did compare with traditional numerical methods, as all our **test data** (as well as the training data) are generated using them. We consider the outputs of the traditional methods as the **gold standard**, and our goal is to train a surrogate model that closely approximates their results. The relative $L_2$ error reported in Tables 1–3 quantifies the difference between our method's predictions and the traditional methods.

---

Rebuttal Comment 1.1:

Comment: Thanks to the authors for their responses during the rebuttal period. I now have a deeper understanding of the details in the paper, and I will accordingly raise my score. However, I still believe that the claim about 'substantially reducing the training and sampling costs' would benefit from quantitative comparisons with baseline methods, which do not appear to have been addressed in the rebuttal.

---

Reply to Comment 1.1.1:

Comment: Thank you for your response. We truly appreciate your positive feedback and additional suggestion! To clarify regarding the training and sampling costs: we have indeed provided **quantitative comparisons** with the baselines in **our response R2 to Reviewer bmeB** (they were not included again in this rebuttal thread due to the space limit). By leveraging the Kronecker product structure, our method achieves substantial reductions in both training and inference time --- over 40% in training and 85% in inference --- as shown in the "Runtime reduction" column of the corresponding tables in R2 to Reviewer bmeB. Please see more details, discussion, and analysis in R2 to Reviewer bmeB. We will make sure to incorporate these results and the associated discussion in our paper.
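The Kronecker-based speedup referenced in this thread rests on a standard identity: if the GP covariance factors as $K_1 \otimes K_2$, its Cholesky factor is $\mathrm{chol}(K_1) \otimes \mathrm{chol}(K_2)$, so drawing a sample never requires forming or factorizing the full covariance. A minimal numpy illustration of the identity (illustrative only, not the authors' implementation):

```python
import numpy as np

def kron_gp_sample(L1, L2, E):
    """Given Cholesky factors L1, L2 of K1, K2 and a standard-normal
    matrix E of shape (n1, n2), return a sample whose row-major
    flattening is distributed as N(0, K1 kron K2). Uses the identity
    kron(L1, L2) @ vec(E) = vec(L1 @ E @ L2.T), replacing one huge
    (n1*n2)-dimensional matrix-vector product with two small matmuls."""
    return L1 @ E @ L2.T
```

The design point is that the expensive object, `kron(K1, K2)`, appears only implicitly; training and inference touch only the small factors.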
Summary: The paper introduces Arbitrarily-Conditioned Multi-Functional Diffusion (ACM-FD), a novel probabilistic surrogate model designed for multi-physics emulation. It aims to address limitations of traditional machine learning-based surrogate models, which are typically task-specific and lack uncertainty quantification. ACM-FD is based on the Denoising Diffusion Probabilistic Model (DDPM) but extends it to handle multiple functions in multi-physics systems with a unified framework. The key contributions are: 1. Multi-Functional Diffusion Framework: The paper introduces a framework based on DDPM that uses Gaussian Processes (GPs) for noise modeling, enabling the generation of multiple functions in multi-physics systems. 2. Innovative Denoising Loss: A random-mask-based, zero-regularized denoising loss is proposed to handle conditional generation tasks, improving network stability and task flexibility. 3. Efficient Training and Sampling: The model reduces computational costs by leveraging a Kronecker product structure and tensor algebra to simplify GP covariance matrix operations. 4. Experiments: ACM-FD reaches top-tier performance in 24 prediction tasks across four multi-physics systems, achieving high accuracy, adherence to governing equations, and diversity in generated data. It also excels in function completion compared to inpainting and interpolation methods. Claims And Evidence: Most of the claims made in this paper are well supported by clear evidence. However, the authors have highlighted the importance of uncertainty quantification (UQ) and the advantage of their proposed method over the baselines by diffusion model naturally supporting UQ; while there is no analysis presented for UQ capabilities (i.e. ECE scores / confidence intervals that match the stochasticity in the data if any). 
The authors also claim that the proposed method is efficient to train thanks to the Kronecker product; a training-time comparison table would help demonstrate this advantage. Since ACM-FD only needs to be trained once per dataset, it would be reasonable to compare the total time of training the other baselines on all prediction tasks of a dataset with the time ACM-FD needs. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are suitable for the problem presented. However, the uncertainty quantification performance is not evaluated, although it is presented as an advantage over the other baseline models in the paper. Theoretical Claims: In this work, the authors focus on the methodological framework with empirical validation. Experimental Designs Or Analyses: The authors compare ACM-FD against various baseline models, such as Fourier Neural Operator (FNO), Transformer-based Neural Operator (GNOT), DeepONet (DON), and POD-based DON (PODDON). These baselines are compared thoroughly across 4 physical systems: Darcy Flow, Convection-Diffusion, Diffusion-Reaction, and Torus Fluid Systems; together with 4 prediction tasks for each system: forward prediction, inverse inference, joint simulation, and function completion. ACM-FD outperforms the other baselines in most of the tasks, but FNO outperforms ACM-FD in 8 out of 10 Torus Fluid System tasks, suggesting room for further improvement in ACM-FD.
Works like PODDON have sought to improve efficiency in neural operators but still fall short in handling diverse prediction tasks. The generalization capability to different sub-tasks from one unified training stage can have great impact in reducing physics emulation costs. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. Paper is well written and easy to follow. 2. The random masking and function discretization together provide a simple but efficient way to learn the full functional space (from data) during training. 3. The proposed method is novel in the Multi-Physics Emulation field and eliminates the need to retrain models for diverse prediction tasks. 4. Empirical results show strong performance improvements. Weaknesses: 1. Although the authors have demonstrated the natural uncertainty quantification advantage of DDPM compared to the baselines, there is no analysis of UQ performance. 2. Efficiency in training the DDPM model is also presented as a contribution, which lacks a training-time comparison against baseline models. For detailed suggestions, please refer to the "Claims And Evidence" section of the review. Other Comments Or Suggestions: It would be clearer to have another figure to demonstrate how the inputs and outputs of the target function are discretized on the mesh. Questions For Authors: Please refer to weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive comments.

> no analysis of UQ performance.

R1: Great suggestion. Here we add UQ performance evaluation & analysis results. First, to evaluate UQ quality, we computed the **empirical coverage probability** [1]: $CP = \frac{1}{N} \sum_{i=1}^N I(y_i \in C_\alpha)$ where $y_i$ is the ground-truth function value, and $C_\alpha$ is the $\alpha$ confidence interval derived from 100 predictive samples generated by our method. We varied $\alpha$ over {0.9, 0.95, 0.99}, and examined our method in eight prediction tasks across Convection-Diffusion (C-D) and Darcy Flow (D-F) systems. We compared against Simformer. Note that the other baselines are deterministic and unable to perform UQ.

**C-D**

| Task | Method | $\alpha$=0.9 | 0.95 | 0.99 |
|-|-|-|-|-|
| $s,u$ to $v$ | Ours | **0.833** | **0.88** | **0.921** |
| | Simformer | 0.736 | 0.814 | 0.871 |
| $v,u$ to $s$ | Ours | **0.766** | **0.842** | **0.913** |
| | Simformer | 0.683 | 0.767 | 0.879 |
| $v,s$ to $u$ | Ours | **0.939** | **0.968** | **0.99** |
| | Simformer | 0.695 | 0.771 | 0.858 |
| $u$ to $v$ | Ours | **0.821** | **0.87** | **0.922** |
| | Simformer | 0.775 | 0.85 | 0.912 |
| $u$ to $s$ | Ours | **0.92** | **0.949** | **0.972** |
| | Simformer | 0.716 | 0.773 | 0.823 |

**D-F**

| Task | Method | $\alpha$=0.9 | 0.95 | 0.99 |
|-|-|-|-|-|
| $a, u$ to $f$ | Ours | **0.947** | **0.974** | **0.991** |
| | Simformer | 0.829 | 0.895 | 0.95 |
| $a, f$ to $u$ | Ours | **0.915** | **0.949** | **0.998** |
| | Simformer | 0.922 | 0.955 | **0.998** |
| $u$ to $f$ | Ours | 0.867 | 0.909 | 0.952 |
| | Simformer | **0.918** | **0.953** | **0.98** |

In most cases our method achieves coverage much *closer* to $\alpha$, showing superior quality in the estimated confidence intervals.
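The coverage metric used here is easy to reproduce from predictive samples; a minimal, self-contained sketch on synthetic data (a calibrated toy predictor, not the authors' model):

```python
import numpy as np

def empirical_coverage(samples, y_true, alpha):
    """Fraction of ground-truth values falling inside the central
    alpha-confidence interval of the predictive samples.
    samples: (S, N) array of S predictive draws for N targets."""
    lo = np.quantile(samples, (1 - alpha) / 2, axis=0)
    hi = np.quantile(samples, 1 - (1 - alpha) / 2, axis=0)
    return np.mean((y_true >= lo) & (y_true <= hi))

rng = np.random.default_rng(0)
N, S = 2000, 100
y_true = rng.standard_normal(N)
# A well-calibrated toy predictor: draws from the true distribution.
samples = rng.standard_normal((S, N))
cp = empirical_coverage(samples, y_true, alpha=0.9)
```

For a well-calibrated model, `cp` should land near the nominal level $\alpha$, which is exactly the "coverage close to $\alpha$" criterion used in the tables above.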
Next, we visualized prediction examples along with their uncertainties, measured by predictive standard deviation (std). As shown [here](https://github.com/wjnzgzjyb/Arbitrarily-Conditioned-Multi-Functional-Diffusion-for-Multi-Physics-Emulation/tree/main), more accurate predictions tend to have lower std, i.e., low uncertainty, while regions with larger prediction errors correspond to higher std — providing further qualitative evidence that our uncertainty estimates are well-aligned with prediction quality. We will include the results in our paper.

[1] Dodge, Y. (Ed.). (2003). The Oxford dictionary of statistical terms. Oxford University Press.

> lacks a training-time comparison against baseline models.

R2: Excellent suggestion. We have added an ablation study to assess the impact of incorporating the Kronecker product in our method, and compared both the training and inference time with other methods. For a fair and comprehensive evaluation, we first examined the per-epoch training time. For neural operator baselines, we measured their *total* per-epoch time across all the tasks.

**Training time (per-epoch) in seconds**

| System | Ours w/ Kronecker | Ours w/o Kronecker | Runtime Reduction | Simformer | FNO | GNOT | DON |
|-|-|-|-|-|-|-|-|
| C-D | 3.09 | 5.5 | 43.8% | 23.6 | 16.45 | 144 | 1.53 |
| D-F | 3.27 | 5.55 | 41.4% | 23.5 | 15.6 | 148 | 1.48 |

**Training time (total) in hours**

| System | Ours w/ Kronecker | Ours w/o Kronecker | Simformer | FNO | GNOT | DON |
|-|-|-|-|-|-|-|
| C-D | 17.2 | >48 | 45.9 | 4.57 | 40 | 4.25 |
| D-F | 18.2 | >48 | 45.7 | 4.33 | 41.1 | 4.11 |

The results show that leveraging Kronecker product properties largely improves the training efficiency of our method. Our per-epoch training time is much less than all the competing methods except DON, which achieves exceptionally fast training by using PCA bases as the trunk net. However, diffusion models typically require far more training epochs than deterministic neural operators.
For instance, FNO and GNOT converge within 1K epochs across all the cases, while our method typically needs around 20K epochs. Consequently, despite per-epoch efficiency gains, our method, as well as Simformer --- another diffusion-based method --- still takes longer overall training time.

**Inference time (per-sample) in seconds**

| System | Ours w/ Kronecker | Ours w/o Kronecker | Runtime Reduction | Simformer |
|-|-|-|-|-|
| C-D | 0.899 | 6.66 | 86.5% | 7.39 |
| D-F | 0.975 | 6.7 | 85.4% | 7.34 |

Lastly, by using the Kronecker product, our method achieves substantial acceleration in generation, with a 6.7x speed-up. During inference, the computational cost is dominated by sampling noise functions, whereas during training, a substantial portion of the cost also arises from gradient computation. Consequently, the runtime advantage of using the Kronecker product is even more pronounced during inference. We will include these results and discussions in the paper.

---

Rebuttal Comment 1.1: Comment: Thanks for addressing the concerns. I believe this paper proposes a novel and interesting idea which shows good contribution. I decide to maintain my score.

---

Reply to Comment 1.1.1: Comment: Thank you. We appreciate your feedback and positive comments!
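A side note on the mechanism behind the inference speed-up discussed in the rebuttal: sampling Kronecker-structured Gaussian noise only requires the two small Cholesky factors (the matrix-normal identity), not a Cholesky of the full covariance. A hedged numpy sketch with assumed RBF kernels and grid sizes (not the paper's implementation):

```python
import numpy as np

def rbf(x, ls=0.2):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * d ** 2 / ls ** 2)

rng = np.random.default_rng(0)
nx, nt = 32, 32
Kx = rbf(np.linspace(0, 1, nx)) + 1e-6 * np.eye(nx)
Kt = rbf(np.linspace(0, 1, nt)) + 1e-6 * np.eye(nt)

# Sampling eps ~ N(0, Kx kron Kt) naively needs a Cholesky of an
# (nx*nt) x (nx*nt) matrix. With the Kronecker structure, two small
# factors suffice: for iid standard-normal E, the row-major
# flattening of Lx @ E @ Lt.T has covariance Kx kron Kt.
Lx, Lt = np.linalg.cholesky(Kx), np.linalg.cholesky(Kt)
E = rng.standard_normal((nx, nt))
noise_fn = Lx @ E @ Lt.T  # one (nx, nt) noise-function draw

# Same identity, checked directly against the materialised factor:
big = np.kron(Lx, Lt) @ E.reshape(-1)
assert np.allclose(noise_fn.reshape(-1), big)
```

Since noise sampling dominates inference cost in diffusion models, avoiding the full-size Cholesky is where the reported generation speed-up would come from.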
Neighbour-Driven Gaussian Process Variational Autoencoders for Scalable Structured Latent Modelling
Accept (poster)
Summary: The authors propose a new approximation for Gaussian Process Variational Autoencoders to improve computational efficiency. A GP-VAE is a VAE for the structured data case (e.g. where we have both images Y and auxiliary information X, such as time or location) that replaces the independent Gaussian prior over the latents with a GP prior (with the kernel operating over X, giving functions from e.g. time or location to latents), better utilising correlations between datapoints. The authors use the nearest-neighbour kernel approximation from the GP literature, which only keeps the covariances from the set of H nearest neighbours for each point, leading to a block-sparse kernel matrix, which has similar computational complexity to an inducing-point GP. They propose and evaluate two different ways of combining nearest-neighbour GPs with the GP-VAE framework, and find that their model outperforms other approximate GP-VAE models with similar time complexity. ### Update after rebuttals I have maintained my score of "accept" - please see rebuttal comment. Claims And Evidence: - The authors' method indeed appears to improve task performance compared to other approximate GP-VAE methods. - For comparisons against non-GP VAE methods, the authors at one point state "VAE and HI-VAE perform poorly as they do not leverage spatial information"; it is not clear to me whether this means these methods were not provided X at all, which would obviously result in worse performance. Methods And Evaluation Criteria: - The proposed solution of using nearest-neighbour GPs in a GP-VAE is very sensible, given the existing literature on nearest-neighbour GPs, and makes sense for the proposed VAE task. Given that inducing point GPs have already been tried in this domain, this paper fills in an obvious gap in the literature. - The authors test their method on a good range of tasks, all of which fall under the "structured latent modelling" umbrella.
- The two proposed variants look quite similar at a high level, but sometimes result in moderately different performance. It would have been useful to explain why two different variants were proposed, and to compare and contrast the two methods after both have been presented. In particular, there are subtle differences between the ELBOs of the two methods that would be worth highlighting. Theoretical Claims: - I did not verify the authors' derivations for their method, which were consigned to the appendix. However, the resulting formulae looked sensible, and did seem to support the time complexities stated by the authors. Experimental Designs Or Analyses: - The experiments are sensible, and cover a good variety of datasets. - The authors compare against many other baseline methods, which is good. It is not always clear why some baselines are only used in certain experiments but not others; for example, in Table 1, the baselines are different between the "Corrupted Frames" experiment and the "Missing Frames" experiment, and I'm not sure why (perhaps I missed an explanation in the paper). - All experiments report means and standard deviations over 10 random trials, including the baselines (which they ran themselves), which is excellent. Additionally, they also report the wall-clock time of all the experiments, which is important given that the point of the approximation is computational efficiency. Supplementary Material: - There is a lengthy appendix containing proofs and additional tables and plots which I did not review in any depth. - There does not appear to be any code provided. It would be good for reproducibility to have some code, even if it is not very clean. Relation To Broader Scientific Literature: - Nearest neighbour GPs were already proposed in earlier papers - GP-VAEs were already proposed in earlier papers - The novelty in this paper stems from combining nearest-neighbour GPs with GP-VAEs.
Though this does not seem to have required a huge amount of novel theoretical work, it is an obvious gap in the literature, and this is a well written and executed attempt to fill this gap. Essential References Not Discussed: The references provided seem suitable. Other Strengths And Weaknesses: ### Strengths - The paper is very well written. I particularly liked the fact that the authors made it very clear how different models related to each other, with sentences such as "In contrast, setting H = 0 will cause the model to degenerate into VAEs." ### Weaknesses - The proposed methods are not always that much better than other baselines. For example, in Table 3, the HI-VAE baseline performs as well as their method, despite not utilising the spatial information. Other Comments Or Suggestions: - The table comparing time complexities in Appendix D (Table 15) is very nice and would be useful to have somewhere in the main text, if space allows. - Section 2.2 (in the background section) talks about inducing point GPVAEs, but as I understood it, we were not using this framework in this paper. If this is not necessary background to understand this paper's methods, it may be best to remove it to avoid adding irrelevant and confusing details. Questions For Authors: - Why are the baselines different between the two experiments in Table 1? - How are you performing bolding in the tables? In Table 3, you have bolded HI-VAE's training time, though this is not the lowest training time on that table. - Do the non-GP VAE baselines have access to X at all, e.g. as arguments to the encoder / decoder? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We genuinely appreciate your insightful and encouraging feedback and your highlighting of aspects of our paper that might have gone unnoticed. Below, we hope to address the points in the review.

---

### 1. Using auxiliary information $X$

We acknowledge that our non-GP baselines, VAE and HI-VAE, do *not* use the auxiliary information $X$ - yet they are widely adopted as baselines in the GPVAE literature (e.g., [1,2,3,4]) to underscore the benefits of imposing a GP prior. Actually, as demonstrated by several GPVAE works (e.g., GPVAE-Casale [6], SVGPVAE [2], SGPBAE [1]), providing low-dimensional $X$ directly as input to a standard VAE (e.g., in a *conditional VAE* fashion [5]) rarely outperforms GPVAE models on tasks similar to those we explored in the paper. As you suggested, we also implemented a Conditional VAE (CVAE) baseline for our *Missing Frames* task, following the design in [6]. Specifically, we processed the scaled timestamp $t$ into $[\sin(t), \cos(t)]$ and used an MLP to produce an additional time feature, then stacked it to the vector after the encoder and before the decoder. The missing frames were generated based on average latent representations and new time features. Below are the results comparing CVAE to our models from 10 random trials:

| Model/Metric | NLL | RMSE | Training Time (s/epoch) |
|-|-|-|-|
| CVAE | 213.891$\pm$13.121 | 10.276$\pm$0.189 | **4.736$\pm$0.033** |
| GPVAE-HPA-H10 (ours) | **71.264$\pm$1.263** | **5.521$\pm$0.046** | 9.819$\pm$0.064 |
| GPVAE-SPA-H10 (ours) | **69.538$\pm$0.664** | **5.412$\pm$0.026** | 9.358$\pm$0.083 |

As shown, although CVAE trains faster, it performs significantly worse, aligning with previous findings that *directly* injecting low-dimensional $X$ into a standard VAE generally underperforms GPVAE. These results illustrate that the GP prior provides a more effective mechanism to correlate latent variables based on $X$.

---

### 2.
Clarify baseline differences across experiments We omitted certain baselines for practical reasons: **VAE / HI-VAE in Missing Frames** In the *Corrupted Frames* setting, partially corrupted images can still be passed to the encoder at test time. However, the *Missing Frames* experiment requires generating entire frames from the latent structure with the new timestamp $x_*$. Since VAE and HI-VAE do not incorporate $X$, they cannot directly generate fully missing frames. Hence, we excluded them in the *Missing Frames* baselines. **GPVAE-Diag / GPVAE-Banded for Longer Sequences** Both GPVAE-Diag and GPVAE-Banded [3] were designed mainly for relatively *short* sequences. In our trials with 100-frame sequences, these models showed prohibitively long training times. Since the *Missing Frames* experiment was specifically designed to focus on scalable models suitable for longer and irregular sequences, we chose to exclude methods that do not scale in practice. In future revisions, we will clarify these reasons in the paper. --- ### 3. Typographical issue, code release, and paper structure We appreciate you noting the boldface inconsistency in Table 3—this was indeed a formatting mistake, and we will correct it. Additionally, as stated in our appendix, we plan to release all source code for our experiments, including our re-implementations of numerous baselines in PyTorch for fairness and ease of comparison. We hope this effort could facilitate further research on GPVAE-based methods. We agree that Table 15 in Appendix D is quite informative. We will try to include it in the main paper (subject to page limits). We also acknowledge your comments about Section 2.2 on inducing points. Although we included it for context (many large-scale GP methods rely on inducing points), we will streamline or move some details to the appendix to keep our main text more focused on neighbour-driven approaches. --- ### 4. 
Differences between HPA and SPA Concretely, HPA (Hierarchical Prior Approximation) imposes sparsity in the *covariance* matrix by “switching off” non-neighbouring latent variables through a hierarchical selection variable. This yields a sparse *covariance* structure. SPA (Sparse Precision Approximation) factorises the GP distribution into chained conditional terms, effectively resulting in a *sparse precision* matrix rather than a sparse covariance matrix. In practice, both are valid neighbour-driven approaches that reduce computational cost. We will further clarify these differences in the final manuscript. --- We hope these clarifications address the points you raised. [1] Fully Bayesian Autoencoders with Latent Sparse Gaussian Processes (ICML 2023) [2] Scalable Gaussian Process Variational Autoencoders (AISTATS 2021) [3] GP-VAE: Deep Probabilistic Time Series Imputation (AISTATS 2020) [4] The Gaussian Process Prior VAE for Interpretable Latent Dynamics from Pixels (AABI 2020) [5] Learning Structured Output Representation using Deep Conditional Generative Models (NeurIPS 2015) [6] Gaussian Process Prior Variational Autoencoders (NeurIPS 2018) --- Rebuttal Comment 1.1: Comment: Thank you for your response. You have thoroughly addressed my concerns, and after briefly considering the other reviews and responses, I'd like to maintain my recommendation as "Accept" and emphasise to the AC that I believe this work has been carried out to a high standard. --- Reply to Comment 1.1.1: Comment: Thank you very much for your encouraging follow-up. We are truly grateful for the time and effort you dedicated to reviewing our submission, and we will incorporate your suggestions in the final version.
Summary: This paper successfully reduces the computational cost of Gaussian Process Variational Autoencoder (GPVAE) by incorporating the Nearest Neighbour Gaussian Process (NNGP). The experimental results demonstrate that the proposed method achieves high performance while reducing computational costs, compared to existing approaches aimed at making GPVAE more scalable. ## update after rebuttal While I will keep my score as Weak Accept, I would like to note that my overall opinion leans toward acceptance of this paper. Claims And Evidence: The claim that "existing GPVAEs are computationally expensive" is clear and well-supported by evidence. The proposed method, which addresses this issue using NNGP, is reasonable and well-motivated. Methods And Evaluation Criteria: The paper evaluates the proposed method using various datasets and evaluation metrics. Theoretical Claims: I have reviewed the theoretical aspects of the paper, and they appear to be valid. However, I have one concern, which I will outline in the Questions Section. Experimental Designs Or Analyses: The paper presents extensive experiments on multiple datasets, with various discussions. I have a few concerns, which I will outline in the Questions Section. Supplementary Material: I have reviewed the supplementary material and found no issues. Relation To Broader Scientific Literature: The references provided appear to be sufficient. Essential References Not Discussed: I believe the necessary references are cited appropriately. Other Strengths And Weaknesses: The idea of using NNGP to reduce the computational cost of GPVAE is simple yet effective. Other Comments Or Suggestions: Including a pseudo-code for the proposed method would make the paper easier to understand. Questions For Authors: The use of NNGP to reduce the computational cost of GPVAE is a simple yet effective approach, and it appears to achieve high performance experimentally. 
I am generally in favor of accepting this paper, but I have the following concerns. 1. If I understand correctly, the nearest neighbors are selected in the data space. However, since the Gaussian Process is applied to the latent variables, I find this slightly counterintuitive. Why did you choose to find neighbors in the data space rather than the latent space? Also, is the distance in the data space preserved in the latent variable space, especially in cases where the latent space exhibits structural transformations? (For example, two points that are far apart in the data space might be close in the latent space, and vice versa.) - If computational cost were not a concern, I would expect that GPVAE would achieve the best performance. However, in Figure 2, the proposed method slightly outperforms GPVAE in reconstruction error, and in Table 1, the NLL follows the order: GPVAE-Diag < Proposed Method < GPVAE-Banded. What is the reason for this? - While the proposed method achieves consistently high performance, some results are worth discussing. In Table 1 (Corrupted Frames), MGPVAE achieves better performance and lower computational cost than the proposed method, whereas in Missing Frames, the proposed method outperforms others. In Table 3, the proposed method slightly outperforms HI-VAE in performance, but HI-VAE is faster in training time. Given this situation, to what extent can hyperparameter tuning (i.e., increasing or decreasing the number of neighbors) improve the performance? - Since the number of neighbors is a hyperparameter that trades off between performance and computational cost, do you have any guidelines for setting this value? I would appreciate your response. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful feedback. We hope to address your concerns below. --- ### 1. Selecting neighbours in the data space vs. the latent space **A natural extension from NNGPs with local correlation** Our approach follows the principle of NNGPs, which typically rely on adjacency in the *input coordinates* $X$ (spatial, temporal, etc.) for large-scale GPs. Extending this principle to GPVAE, we assume that points close in $X$ are likely to have correlated latent representations under the GP prior, which generally holds well in practice. The GP prior *encourages* points that are near in $X$ to stay close in the latent space. This regularisation enforces correlation in the latent variables such that points with neighbouring inputs are unlikely to end up arbitrarily far apart in the latent space. **Possible extensions with complex kernels** We recognise that data points far in data space can sometimes be mapped near in latent space, especially when the encoder is strongly non-linear. In those cases, one might extend the kernel to be non-stationary or incorporate periodic terms. This more flexible kernel design can learn that certain points—though far in raw input coordinates—are effectively neighbours according to the kernel values. In principle, one could then pick neighbours by kernel similarity rather than Euclidean distance. For the tasks we explored, standard kernels and data-space adjacency already perform well. In summary, using data-space neighbours offers a straightforward, well-tested approach inspired by NNGPs. --- ### 2. Explaining performance gains over full GPVAE in some cases **Proposed method vs. GPVAE-Diag** A *full* GP prior can be a stronger regulariser, but it also complicates optimisation, making converging to a good solution harder. 
By focusing on local correlations rather than forcing a single large covariance across all data points, our method “loosens” the constraints on $Z$ and may achieve better results in some tasks. **Proposed method vs. GPVAE-Banded** Both GPVAE-Diag [1] and our model share a standard encoder leading to a *diagonal* posterior (Eq(3)). By contrast, GPVAE-Banded [1] employs a *tridiagonal* encoder covariance, which introduces extra parameters (+~50%) to capture between-sample correlations more explicitly. Although this may yield slightly better performance on *short* sequences, it does not scale easily to longer sequences. Our design focuses on a diagonal encoder with a neighbour-based GP prior, achieving a more balanced trade-off between accuracy and computational feasibility. --- ### 3. Further discussion on some experimental results & hyperparameter guidelines **Short corrupted vs. long missing frames: different experimental settings** - The *Corrupted Frames* experiment (following [1]) features a large dataset (60,000 sequences) but *short* sequences (10 frames each). MGPVAE handles short sequences relatively efficiently because its sequential computations do not accumulate large overhead when the sequence length is small. Meanwhile, since the image transformations are relatively smooth from one frame to the next (no missing frames), MGPVAE’s Markov structure enjoys an advantage in capturing local frame-to-frame correlations. - By contrast, the *Missing Frames* experiment (following [2]) uses *longer* sequences (100 frames each), and each frame has a 60% chance of being dropped. This creates irregular, sometimes large gaps between observed frames, which presents two challenges for MGPVAE: (1) Given a longer sequence, the forward–backward process needs more runtime to propagate information; (2) The model may accumulate more error at large time gaps where many frames are missing in uneven patterns. 
Our neighbour-driven method does *not* rely on strictly sequential processes and can have faster training and better predictive performance in this setting. - HI-VAE can train faster as it does *not* learn inter-sample dependencies through a GP prior, but this may come at the cost of accuracy. **Guidelines for the number of neighbours $H$** The number of nearest-neighbours $H$ directly influences how local or global the GP prior is. In practice, picking $H$ in the same (or smaller) range as the number of inducing points used by sparse-GP baselines is a good starting point: smaller $H$ reduces runtime but may weaken correlation modelling, while larger $H$ may improve accuracy at the expense of more computation. We recommend beginning with a moderate $H$ based on (1) estimated correlation length scales and (2) available computational budget, then adjusting based on validation performance. --- ### 4. Request for Pseudo-Code We completely agree that providing pseudo-code would help readers reproduce our approach. We will include a succinct algorithm pseudo-code in the appendix of the final version. [1] GP-VAE: Deep Probabilistic Time Series Imputation (AISTATS 2020) [2] Markovian Gaussian Process Variational Autoencoders (ICML 2023) --- Rebuttal Comment 1.1: Comment: Thank you for your response. My concerns have been resolved. While I will keep my score as Weak Accept, I would like to note that my overall opinion leans toward acceptance of this paper. --- Reply to Comment 1.1.1: Comment: Thank you for letting us know that our clarifications have addressed your concerns. We appreciate your leaning toward acceptance, and we will incorporate your feedback in the final version. Thank you again for your time and insight.
Summary: Variational Autoencoders are deep generative models that learn low-dimensional latent representations of high-dimensional data, e.g. images in pixel space. When we have extra auxiliary information, such as images from video where we also know the timestamps of the frames, instead of inferring independent latent representations for each image, we can use the timestamps to impose structure on the series of latent representations; the Gaussian Process Prior VAE is a class of model that uses the auxiliary information to inform a correlated GP prior over the corresponding latent representations of the high-dimensional datapoints. However, GPs have $O(N^3)$ computational cost, where $N$ is the number of data points in the GP. As a result, recent work has proposed to use Sparse GPs, in which each prediction is made based on a small set of pseudo points, reducing costs compared to using the full dataset. This work proposes to use nearest-neighbour GPs, in which each prediction is made based on the nearest points in the dataset. Claims And Evidence: The authors claim their method has higher accuracy and quicker training times; both claims seem intuitive and are justified by experiments. Methods And Evaluation Criteria: ## Hierarchical Prior Approximation ELBO Derivation I believe I understand the intention of the approximation; however, it seems to me the notation may have an error or two. - The prior, Equations 6 and 7, $p(Z|\textbf{w}) = \mathcal{N}(Z|0, D_W K_{XX}D_W)$, suggests that a masking vector $\textbf{w}\in \\{0, 1\\}^N$ sets some of the prior covariance rows/cols to 0; effectively, the corresponding dimensions of $\textbf{Z}$ have a prior density that is a point mass at $z_i=0$: they become fixed constants and the prior density is infinite everywhere (note $det(D_wK_{XX}D_w)^{-1} = 0^{-1}=\infty$).
- (minor typo) Equation 8 gives the approximate posterior; assuming the masking vector $\textbf{w}$ is known, it should also have mean 0 for the masked elements: $q(Z|\textbf{w}) = \mathcal{N}(Z|D_W\mu(Y), D_W K_{XX}D_W)$ - on P3, $L_{HPA}$ contains the term $q(Z|Y)$, which is not defined, though it is stated to be the same as Equation 4, the traditional VAE, presumably the unmasked approximate posterior. I believe this to be a minor error; marginalizing $\int_\textbf{w}q(Z|\textbf{w})p(\textbf{w})d\textbf{w}$ yields a mixture of Gaussians, a point mass and the prior (see [1], page 3, left col, bottom). - on P3, $L_{HPA}$ contains the term $KL(q(Z|\textbf{w})||p(Z|\textbf{w}))$; if some dimensions of $Z$ are point masses, then both densities are infinite everywhere. - Eq 10, the final ELBO $L_{HPA}$, seems intuitive and valid: the encoder should learn a good reconstruction, and the neighbourhood should adhere to the local GP prior based on neighbouring points. I believe a similar result can be achieved via a different route, as the masking $\textbf{w}$ seems to make things a bit messy in my view, and I believe it does not have the same effect as simply removing dimensions (which is the desired goal and matches Eq 10). Presumably one could derive an ELBO f [1] [Sparse within Sparse Gaussian Processes using Neighbor Information, Tran et al.](https://arxiv.org/pdf/2011.05041) Theoretical Claims: There are no theoretical proofs. See Methods for the comment on deriving the ELBO $L_{HPA}$. Experimental Designs Or Analyses: The paper contains many experiments from the literature as well as some new ones. - page 5, section 5.1: it is stated that GP-VAE Pearce is included, but I don't see it anywhere; presumably, in such a setting, the full, expensive model of Pearce 2019 is the best-case scenario and provides an upper bound on how any GP-VAE could perform in such a small-scale use case?
I believe the Longitudinal Variational Autoencoder should be added as a baseline for time series experiments [Longitudinal Variational Autoencoder, Ramchandran et al., AISTATS 2021](https://proceedings.mlr.press/v130/ramchandran21b.html) Supplementary Material: Section B.1: the $L_{HPA}$ derivation Relation To Broader Scientific Literature: This work nicely builds upon GP-VAE, SGP-VAE and provides an intuitive step forward showing expected performance gains. Essential References Not Discussed: [Longitudinal Variational Autoencoder, Ramchandran et al., AISTATS 2021](https://proceedings.mlr.press/v130/ramchandran21b.html) Other Strengths And Weaknesses: Overall I am positive about the paper. It is intuitive, simple and clean. My main concerns are cleaning up some maths. Other Comments Or Suggestions: Not at this time Questions For Authors: - Can the derivation of Equation 10 be done by starting directly from assuming we have a minibatch, and ignoring the hierarchical approach? - Are Pearce 2019 included in the results? If not, can they be added? - Can the authors include the Longitudinal VAE where appropriate? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable feedback. We provide a detailed response below. --- ### 1. Mathematical notations and clarification Conceptually, our approach closely aligns with the NNGPs proposed by [1], where a hierarchical mechanism enforces local sparsity on inducing variables. To draw a parallel, our GPVAE-HPA can be seen as applying such an idea to latent NNGPs with each “inducing variable” placed at each data point (similar to [2]). We then incorporate a decoder network to model the "likelihood" from $Z$ to $Y$ and an encoder network for amortised inference on $Y$, thereby forming our GPVAE architecture. We acknowledge that using $q(Z|Y)$ directly in $L_{HPA}$ (page 3) may cause confusion. We will replace it with $E_{p(w)}[q(Z|w,Y)]$, thus rewriting $L_{HPA}=E_{p(w)}[E_{q(Z|w,Y)}\log p(Y|Z)-KL[q(Z|w)|p(Z|w)]]$, which leads to an ELBO similar to Eq(10) of [1]. Here, $p(w)$ is any implicit distribution from which we can sample. In our implementation, $p(w)$ reflects a nearest-neighbour selection strategy akin to Eq (11) of [1]. We will clarify this connection with $L_{HPA}$ in the main text and Appendix B.1 and correct the minor notational issues in Eq(8) as $q(Z|w)=\mathcal{N}(Z|D_w\mu(Y),D_w\sigma^2(Y)D_w)$. We believe these revisions will both enhance the clarity of our derivation and highlight how GPVAE-HPA naturally builds upon established results for NNGPs. In our setup, the prior $p(Z|w)$ and the approximate posterior $q(Z|Y,w)$ in the KL share the same degenerate dimensions, meaning those components are effectively “turned off” in both distributions. It is reasonable to define their "KL contribution" as zero. Thus, the overall KL is effectively computed only on the non-degenerate subspace, ensuring the KL remains well-defined. --- ### 2. 
Longitudinal VAE (LVAE) as baseline The LVAE is also an inducing-point-based model, originally designed for longitudinal data with discrete “instance” IDs (e.g., a patient seen over multiple visits). This model embeds the instance ID into an additive kernel. Our experiments introduce *new* sequences at test time and do not explicitly provide IDs. Nevertheless, we have run additional experiments to address your request. We adapt LVAE by manually assigning unique IDs to each new sequence during testing. We presented the results of LVAEs with different numbers of inducing points $M$ from 10 random trials. The following table shows the performance of LVAEs on the Rotated MNIST *Missing Frame* experiment: |Model/Metric|NLL|RMSE|Training time(s/epoch)| |-|-|-|-| |LVAE-M10|73.887$\pm$0.748|5.578$\pm$0.022|28.919$\pm$0.253| |LVAE-M30|73.545$\pm$1.035|5.558$\pm$0.024|29.063$\pm$0.122| |GPVAE-HPA-H10(ours)|**71.264$\pm$1.263**|**5.521$\pm$0.046**|**9.819$\pm$0.064**| |GPVAE-SPA-H10(ours)|**69.538$\pm$0.664**|**5.412$\pm$0.026**|**9.358$\pm$ 0.083**| This table demonstrates that although LVAEs can get moderate results in terms of NLL and RMSE, our models consistently outperform LVAEs with fewer parameters and faster training. The following table illustrates the performance of LVAEs on the MuJoCo experiment: |Model/Metric|NLL|RMSE|Training time(s/epoch)| |-|-|-|-| |LVAE-M10|-0.003$\pm$0.316|0.175$\pm$0.021|**35.564$\pm$0.342**| |LVAE-M30|-0.814$\pm$0.201|0.107 $\pm$ 0.023|35.951$\pm$0.443| |GPVAE-HPA-H10(ours) |**-2.335$\pm$0.032**|**0.022$\pm$0.001**|37.000$\pm$3.793| |GPVAE-SPA-H10(ours) |**-1.715$\pm$0.159**|**0.038$\pm$0.007**|39.403$\pm$2.755| The table shows LVAEs exhibit substantially lower performance than our models with comparable training time. We will clarify the model differences in the revised manuscript and include relevant results in Section 5 of our paper and the appendix. --- ### 3. 
Including GPVAE-Pearce We included GPVAE-Pearce [3] as a *full GPVAE* baseline in Section 5.1. Specifically, we plot its performance with a dashed blue line in Figure 2(b) alongside GPVAE-Casale [4] to represent another “best-case” full-GP approach. Although both GPVAE-Pearce and GPVAE-Casale use fully correlated GP priors, they differ in how they construct their variational distributions—GPVAE-Pearce’s setup is more closely aligned with SVGPVAE [5], while GPVAE-Casale adopts a diagonal-encoder approach that matches ours. In our experiment, GPVAE-Casale slightly outperformed GPVAE-Pearce, so Figure 2(a) highlights GPVAE-Casale alone for clarity, whereas Figure 2(b) includes both baselines to show the overall trend. We will clarify this point in the final revision to avoid confusion about whether GPVAE-Pearce was tested. [1] Sparse Within Sparse Gaussian Processes using Neighbor Information (ICML 2021) [2] Variational Nearest Neighbor Gaussian Process (ICML 2022) [3] The Gaussian Process Prior VAE for Interpretable Latent Dynamics from Pixels (AABI 2020) [4] Gaussian Process Prior Variational Autoencoders (NeurIPS 2018) [5] Scalable Gaussian Process Variational Autoencoders (AISTATS 2021)
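The "KL computed only on the non-degenerate subspace" described in the rebuttal's first point can be made concrete; a hedged sketch, where the function name and dense-matrix setup are illustrative rather than the paper's actual implementation:

```python
import numpy as np

def masked_gaussian_kl(mu_q, cov_q, cov_p, w):
    """KL(q || p) between N(mu_q, cov_q) and N(0, cov_p), restricted to the
    dimensions where the 0/1 mask w is 1. Masked-out (degenerate) dimensions
    contribute zero, as the rebuttal argues."""
    idx = np.flatnonzero(w)
    mu = mu_q[idx]
    Sq = cov_q[np.ix_(idx, idx)]
    Sp = cov_p[np.ix_(idx, idx)]
    Sp_inv = np.linalg.inv(Sp)
    k = len(idx)
    return 0.5 * (np.trace(Sp_inv @ Sq) + mu @ Sp_inv @ mu - k
                  + np.log(np.linalg.det(Sp) / np.linalg.det(Sq)))

# Sanity check: identical non-degenerate parts give KL = 0.
cov = np.eye(4)
w = np.array([1, 0, 1, 1])
print(masked_gaussian_kl(np.zeros(4), cov, cov, w))  # → 0.0
```

Under this convention the KL stays well-defined even though the full masked covariances $D_w \Sigma D_w$ are singular.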
Adversarial Cooperative Rationalization: The Risk of Spurious Correlations in Even Clean Datasets
Accept (poster)
Summary: This paper proposes TeaRGIB to enhance the robustness of dynamic graph representation learning methods by using the Information Bottleneck (IB) principle to reduce the redundancy and noise in dynamic graphs. TeaRGIB first employs von Neumann entropy (VNGE) to model the evolution of dynamic graphs. And it decomposes the dynamic graph learning problem into learning compact structure and representation given the previous temporal graphs. Experimental results show that the proposed method enjoys good link prediction accuracy and is robust to several types of noise. Claims And Evidence: The reviewer is confused about the motivation of the structural evolution loss. The author needs to elaborate on why von Neumann entropy (VNGE) could capture the structural evolution. And what is the motivation of aligning von Neumann entropy (VNGE) with the mutual information? Is there any theoretical justification for this alignment? Methods And Evaluation Criteria: The decomposition of the dynamic graph information bottleneck objective in Section 4.1 follows GIB [1] and utilizes the local dependency assumption, which is similar to Dynamic GIB [2]. The author needs to discuss the difference between Dynamic GIB and the proposed method to highlight the novelty. [1]. Graph Information Bottleneck. NeurIPS 2020. [2]. Dynamic Graph Information Bottleneck. WWW 2024. Theoretical Claims: The reviewer checks most of the proofs, and they seem to be good. But further theoretical justification for structural evolution in Section 4.2 is necessary. Experimental Designs Or Analyses: The proposed TeaRGIB only adopts the Graph Attention Network as the backbone network. The generalization ability of TeaRGIB to other (dynamic) graph neural network architectures is questionable. Supplementary Material: I checked the proofs and the ablation study. 
Relation To Broader Scientific Literature: The key contribution is to extend the Graph Information Bottleneck principle to the dynamic graph learning domain. The proposed method could enhance the robustness of dynamic graph learning models. However, the proposed method is similar to Dynamic GIB, as pointed out in the Methods And Evaluation Criteria part. The author is encouraged to further discuss the difference between the two works, or the contribution would be limited. Essential References Not Discussed: [1]. Dynamic Graph Information Bottleneck. WWW 2024. [2]. TempME: Towards the Explainability of Temporal Graph Neural Networks via Motif Discovery. NeurIPS 2024. Other Strengths And Weaknesses: Please see above. Other Comments Or Suggestions: The author is encouraged to address the concerns in the above parts. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review. But it seems that you may have mistakenly pasted review comments intended for another paper here. So, we will wait for your correction.
Summary: This paper examines the phenomenon of spurious correlations induced by the cooperative rationalization approach, where a generator selects informative subsets of data as rationales for predictors. It reveals that even clean datasets can develop unintended spurious correlations due to the cooperative game between generator and predictor. The paper introduces an adversarial inspection and instructional mechanism to identify and mitigate these correlations, achieving significant empirical improvements on various text and graph classification tasks. Claims And Evidence: The paper mentioned "such a cooperative game could unintentionally introduce a sampling bias during rationale extraction", which can be a very interesting finding. However, the claim seems too broad, and the authors did not present comprehensive insights into how this can generalize to a wider range of different scenarios. I would suggest narrowing down the scope of the claim of this paper to the focused areas, including text-based and graph-based ones. Methods And Evaluation Criteria: Yes Theoretical Claims: They appear to be generally correct and the intuition makes sense to me as well. Experimental Designs Or Analyses: They appear to be carefully designed and I do not have major complaints about the experimental designs and analyses. Supplementary Material: Yes, most of the appendix. Relation To Broader Scientific Literature: This paper is properly positioned as a part of the broader scientific literature with clarifications about the relationship between itself and other works. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1 - This paper highlights a previously underexplored source of spurious correlations introduced by cooperative rationalization, even in clean datasets. 2 - It also proposes a practical adversarial inspection and instructional method to mitigate identified biases effectively. 
3 - The paper demonstrates the effectiveness of the proposed approach across diverse datasets and multiple model architectures (GRUs, BERT, GCN). Weaknesses: 1 - This paper involves broad claims that may not hold in domains not discussed in this paper. 2 - I did not see much analysis of complexity and running time. How does the proposed method scale to large datasets? Extra adversarial inspections and instructions could significantly increase training time and computational resources. 3 - Although the paper provides theoretical explanations, it does not thoroughly explore conditions under which the adversarial intervention might fail or succeed universally. Other Comments Or Suggestions: Typos exist, e.g., Figure ?? in Appendix B.3. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for taking the time to carefully review our work and provide constructive feedback. **Claims&Weakness1**. The claim seems too broad. **A1**. Thanks a lot for the valuable suggestion. We will narrow down the scope to the text and graph domains to ensure rigor. We'd like to kindly clarify that we have mentioned rationalization is mainly used for NLP (the right part of L93–94). Although our experiments are conducted on text and graph data, the theoretical analysis in Sec4 is based on abstract variables and is not restricted to a specific scenario. Could you kindly clarify your concern in more detail regarding why you believe the analysis may not generalize to other scenarios? This would help us address your concern more effectively. We believe such correlations may also arise in the image domain. For example, consider a cat-vs-dog classification dataset in which every original image contains a green pixel. If the generator selects a subset of pixels as the rationale, it may (during this selection process) consistently include the green pixel as part of the rationale for all images labeled 1, while excluding it from images labeled 0. This kind of sampling bias would introduce a spurious correlation. However, our focus in this paper is solely on explaining why such situations can arise (i.e., identifying a potential risk or vulnerability). Assessing how likely this is to occur in real-world settings is somewhat beyond the scope of this paper. We appreciate your suggestion and will clarify and narrow down the scope in our revision. **Weakness2**. Complexity and running time. **A2**. Thanks for the valuable suggestion. Below is a comparison between the complexity of vanilla RNP and RNP+A2I (referred to as A2I for short). In general, A2I introduces an additional attacker module, and as a result, A2I has approximately 1.5 times the number of parameters of vanilla RNP. 
However, we would like to note that introducing extra modules is common practice in the field of rationalization. For example, our A2I model has a comparable number of parameters to Inter-RAT, a previously proposed variant of RNP. Below are the practical time and memory consumption of RNP and A2I on the Beer-Appearance dataset (we add the results of Inter-RAT as a reference). | Beer-Appearance | batch size |lr| epochs |memory (MB)| RTX3090 hours | |:---:|:---:|:--------:|:-------:|:---:|:---:| |Inter-RAT| 256 | 0.001 |20| 3660 |2.34 | |RNP | 128 | 0.0001 | 200 | 2184|0.37| | A2I| 128 | 0.0001 | 200 |2784|0.61| | A2I | 128 | 0.0001 | 100 | 2784 |0.31| |A2I | 256 | - | - | 3664 |-| Per epoch, RNP consumes only about 60% of the training time compared to RNP+A2I. However, A2I converges significantly faster than vanilla RNP and typically requires fewer training epochs (about 50% on most datasets). Since memory usage is influenced by batch size, we further assign A2I a batch size of 256 (the same as Inter-RAT). Under this setting, the memory usage is 3664 MB, which is comparable to that of Inter-RAT. Regarding the second question, "How does the proposed method scale to large datasets?", we are not entirely sure about the specific concern. Are you referring to datasets with more samples, or datasets with longer text inputs? If you mean the former, we believe the table in the Appendix addresses this concern: the largest dataset we use (Hotel-Cleanliness) contains 150k text samples. If you mean the latter, we follow one of our baseline Inter-RAT to test our A2I on the Movie dataset, which has much longer texts (the average length is about 775, and we set the max length to be 1024). 
The results are as follows (the results of other baselines are copied from Inter-RAT): | Movie| S | P| R | F1| |---|---|---|---|---| | RNP| 20.0 | 35.6 | 21.1 | 24.1| | A2R | 20.0 | 48.7 | 31.9 | 34.9| | INVRAT| 20.0 | 33.9 | 24.3 | 28.3| | Inter-RAT| 20.0 |35.7| 35.8 | 35.7| | A2I| 20.6 | 45.0 |31.9| **37.1** | Our A2I still significantly outperforms RNP in terms of F1 score on the long-text dataset. **Weakness3**. It does not thoroughly explore conditions under which the adversarial intervention might fail or succeed universally. **A3**. We are sorry, but we are not very sure about your specific concern. Could you please specify it in more detail? We would like to clarify that our analysis already covers both sides of cooperative rationalization and how our attack succeeds in each case. Specifically, in the first part of Sec 4.3 (L241–316), we discuss how our attacker can correct the predictor and generator when they select the wrong rationales. Then, from L318 to the end of Sec4.3, we discuss how, when the predictor and generator select the correct rationales, our attacker does not introduce any negative impact. We think these two cases together cover both sides of the issue. **Typos**. Thank you very much for the careful reading; it refers to Fig. 3. We will fix it in our revision.
Summary: This paper addresses a crucial issue in rationalization frameworks. They find that even if the original dataset does not have spurious correlations, the cooperative generator-predictor setup can cause spurious correlations that the predictor can exploit. This paper identifies the cause of this issue and proposes an attacker-based method A2I. A2I introduces an attacker which selects trivial patterns to fool the predictor. They empirically show that this introduction of an attacker can significantly improve the performance of rationalization, and furthermore show that the attacker successfully captures the trivial patterns recognized by the predictor and that the inspection prevents the predictor from adopting the trivial patterns. Claims And Evidence: Their claims are convincing. Methods And Evaluation Criteria: The authors’ attacker-based strategy is well motivated and practically sensible. By introducing an attacker that selects trivial patterns to flip predicted labels, the approach mitigates spurious correlations introduced by the generator–predictor interplay. This design choice is well explained in Section 4. Theoretical Claims: The paper does not present formal theorems or statements in the main part of the paper. I did not review the appendix. Experimental Designs Or Analyses: Experiment of Figure 3: I am not fully convinced that this experiment provides the evidence that the generator-predictor interaction creates spurious correlation. Here is why: Since the generator is not trained with the ground truth label information, for a given trained g, Z and Y are independent. The independence of Z and Y given g means that (in the notation of equation (5)), P[Y=1|Z=t+, g] = P[Y=1]. Therefore, the orange curve should not include the spurious correlation. This contradicts the statement in the paper. Supplementary Material: I did not review the supplementary material. 
Relation To Broader Scientific Literature: The authors shed light on the spurious correlations induced by the generator–predictor framework. This differs sharply from those embedded in the dataset, which have been discussed in prior works. Essential References Not Discussed: I am not aware of any papers which should be discussed. Other Strengths And Weaknesses: Strength: They perform an extensive empirical study to support their idea and the performance of the proposed algorithm. In particular, they examine the performance on multiple text classification datasets, multiple sparsity regimes, and different encoder architectures. Furthermore, the study of attack success rate successfully supports their ideas on the spurious correlations and attackers. Weaknesses: While they claim their algorithm’s performance is comparable to LLama3.1-8b-instruct, the LLM is relatively small, and it is better to compare it with larger models or other models. I cannot judge whether their algorithm is state of the art. Other Comments Or Suggestions: The notation “g” is used in the first section without giving a definition. In a later section, I noticed that the paper defined it as the generator. It might be better to define it earlier. Equation (8) can be stated in a more mathematically precise way. I believe the current form does not make sense though I understand what the authors want to state. Questions For Authors: In Equation (11), the meaning of [0.5,0.5] is unclear. Does this represent a random variable? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for dedicating your time and expertise to review our paper. Your insightful comments and suggestions are highly valued and appreciated. **Q1**. Experiment of Figure 3: I am not fully convinced that this experiment provides the evidence that the generator-predictor interaction creates spurious correlation. Here is why: Since the generator is not trained with the ground truth label information, for a given trained g, Z and Y are independent. The independence of g and Z on Y means that (in the notation of equation (5)), P[Y=1|Z=t+, g] = P[Y=1]. Therefore, the orange curve should not include the spurious correlation. This contradicts with the statement in the paper. **A1**. We are sorry to cause such a misunderstanding. We agree that $g$ is not trained with $Y$, but we can not say $P[Y=1|Z=t+, g] = P[Y=1]$. Here is a simple intuitive toy example. Suppose there is a box containing approximately 200 cat photos and 200 dog photos. The cat images are labeled as 1, and the dog images are labeled as 0. For both cats and dogs, half of the images have a red background and the other half have a green background. In this dataset, background color is considered a trivial pattern and is independent of the label $Y$. Now, suppose a "hand" randomly samples 100 cat images and 100 dog images from the box to form a new dataset. Although the hand is a random hand and never uses the labels during sampling, biases may still occur in the specific sample. For example, the sampled cat images might include 50 with red backgrounds, while the sampled dog images might only include 30 with red backgrounds. As a result, under this particular sampling by the hand, background color and label $Y$ are no longer independent. Returning to rationalization. We consider that in the original dataset, all text samples end with a period. 
However, one possible scenario is that in the random sample drawn by $g$, 51% of the positive samples include the period as part of $Z$, while only 49% of the negative samples include the period in $Z$. As a result, "whether $Z$ contains a period" is no longer independent of the label $Y$. This spurious correlation is caused by the sampling bias of $g$. This is because once the generator, originally a random variable, is instantiated into a specific value $g$, it inevitably contains bias. In the example above, the spurious correlation caused by bias may not seem very severe, as we only considered a single trivial pattern. However, given that text should be regarded as a high-dimensional vector, chances are that the accumulation of biases from multiple different trivial patterns can become quite significant. **Q2**. While they claim their algorithm's performance is comparable to LLama3.1-8b-instruct, the LLM is relatively small, and it is better to compare it with larger models or other models. I cannot judge whether their algorithm is state of the art. **A2**. We are sorry to say that our RTX3090 and RTX 4090 GPUs cannot afford LoRA fine-tuning for models larger than 8B. Our method is SOTA compared to previous rationalization methods. But we acknowledge that our method cannot outperform more powerful LLMs like GPT4. However, the training of those LLMs usually involves extensive human alignment, which can be very expensive. Our model is small. Besides, we do not use human-annotated rationales for training. So, our model is much cheaper and can work in places with limited resources. Despite the issue of SOTA results, our two major contributions still make sense. First, we find a new kind of spurious correlation, which may open up new avenues for future research. Second, we propose a direction to mitigate this issue with attack, which may inspire others to develop better methods. **Q3**. In Equation (11), the meaning of [0.5,0.5] is unclear. 
Does this represent a random variable? **A3**. Yes, it represents a random noise. Thanks for the reminder, and we will clarify this in the revision. **Q4**. Suggestions about $g$ and Eq.8. **A4**. Thank you for your suggestion. We will do it in our revision.
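The finite-sample point in A1 above can be checked with a short simulation mirroring the rebuttal's cat/dog toy example; all numbers here are illustrative, not from the paper:

```python
import numpy as np

# Illustration of the rebuttal's sampling-bias argument (A1): a binary
# trivial feature is exactly balanced within each class in the full pool,
# yet a finite random subsample almost always shows a nonzero empirical
# gap between P(feature | Y=1) and P(feature | Y=0).
rng = np.random.default_rng(0)
pool = np.array([0, 1] * 100)  # 200 items per class, feature balanced 50/50

gaps = []
for _ in range(1000):
    pos = rng.choice(pool, size=100, replace=False)  # sampled class-1 items
    neg = rng.choice(pool, size=100, replace=False)  # sampled class-0 items
    gaps.append(abs(pos.mean() - neg.mean()))
gaps = np.array(gaps)

print(f"mean |gap|: {gaps.mean():.3f}")
print(f"trials with a nonzero gap: {(gaps > 0).mean():.1%}")
```

Even though feature and label are independent in the pool, the overwhelming majority of random draws exhibit a nonzero empirical dependence, which is exactly the bias the rebuttal attributes to a specific instantiation of the generator $g$.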
Summary: The paper examines the unintended biases in self-rationalizing models, where a generator selects key input segments for a predictor. It reveals that cooperative training can introduce spurious correlations even in clean datasets. An adversarial inspection method is proposed to detect and mitigate these biases, improving interpretability and performance across multiple classification tasks. Claims And Evidence: The paper claims that cooperative rationalization can introduce spurious correlations even in clean datasets. It shows this through a theoretical analysis demonstrating how the generator’s selection process can create dependencies between extracted rationales and labels. Empirical evidence supports this claim by showing that predictors trained on randomly selected rationales still achieve high accuracy, indicating reliance on trivial patterns (Figure 3). The paper claims that its proposed adversarial method (A2I) can detect and mitigate spurious correlations. It shows this through an attack-based inspection method that successfully identifies trivial patterns (high attack success rate in Figure 6) and an instruction mechanism that reduces reliance on these patterns, leading to improved rationale quality across multiple datasets (Tables 1–4). Methods And Evaluation Criteria: The proposed methods effectively address spurious correlations in rationalization frameworks using adversarial inspection and instruction. Evaluation criteria, including benchmark datasets and rationale quality metrics, align well with the problem. The results are strong for both text and graph tasks. Theoretical Claims: yes, in Appendix C Experimental Designs Or Analyses: The paper compares its method (A2I) with multiple baselines, including RNP, FR, Inter-RAT, and NIR. The attack success rate is used as a measurement criterion. The experiments span different types of datasets: text (BeerAdvocate, HotelReview) and graphs (BA2Motifs, GOODMotif). 
The method is tested with both BERT (a pretrained Transformer) and non-pretrained models like GRUs and GCNs. Supplementary Material: yes, appendix Relation To Broader Scientific Literature: The paper builds on Rationalizing Neural Predictions (RNP), a cooperative framework where a generator selects rationales that a predictor then uses for classification. Essential References Not Discussed: N/A Other Strengths And Weaknesses: strengths: Unlike prior work that focuses on spurious correlations inherent in datasets, this study uncovers how rationalization frameworks themselves can introduce biases, even in clean data. The paper provides mathematical proofs (Appendix C) and extensive empirical validation across multiple benchmarks (Tables 1–4). weaknesses: there is no ablation study on how each component of the proposed algorithm contributes to the success; the effectiveness of the method seems to rely on the attacker’s ability to identify trivial patterns. Other Comments Or Suggestions: n/a Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you deeply for taking the time to thoroughly review our paper. If we understand correctly, the only weakness you mentioned is about the lack of an ablation study. **Q1**. There is no ablation study on how each component of the proposed algorithm contributes to the success; the effectiveness of the method seems to rely on the attacker's ability to identify trivial patterns. **A1**. We think this is a misunderstanding. Compared to the standard rationalization framework RNP, our A2I introduces only one extra component (the attacker). So, the results of RNP can directly serve as the ablation study of our RNP+A2I. Also, the results of FR can serve as the ablation study of our FR+A2I. And we have designed a special experiment to verify the effectiveness of the attacker. The attack success rate of RNP in Figure 6 implies that our attacker can successfully identify the trivial patterns learned by the predictor. And the low attack success rate of our RNP+A2I implies that our attacker-based instruction can effectively deter the predictor from adopting trivial patterns.
FunBO: Discovering Acquisition Functions for Bayesian Optimization with FunSearch
Accept (poster)
Summary: This paper introduces FunBO, building on the FunSearch framework, to discover novel acquisition functions for BO. FunBO iteratively modifies an initial acquisition function to improve the BO performance. The discovered AFs are provided as explicit code snippets, enhancing interpretability and facilitating direct deployment without additional overhead. The method evaluates candidate AFs based on how accurately and efficiently they identify the global optimum of a set of objective functions. The authors show that FunBO discovers AFs that generalize well both within specific function classes and across different types of functions. These AFs outperform established alternatives like EI and UCB, while achieving competitive performance against specialized AFs designed for specific function types. Claims And Evidence: The paper's primary claim is that FunBO can discover explicit, code-based acquisition functions that outperform traditional options like EI and UCB. The experimental results do support this claim across various settings - the discovered AFs generally perform better than standard acquisition functions in most of the tests. However, the evidence to me is not strong enough. While the authors show performance improvements over traditional AFs, the margins of improvement vary considerably across different test functions, and in some cases appear modest. The paper makes a strong case for interpretability - providing the actual code for all discovered AFs is compelling evidence for this aspect. But regarding performance claims, a more comprehensive comparison against state-of-the-art methods would have strengthened the paper's position. The authors acknowledge high variance in the quality of discovered AFs, which raises questions about the consistency and reliability of the approach without multiple runs. 
Methods And Evaluation Criteria: The evaluation method is generally sound and appropriate in BO: * The authors use standard benchmark functions and synthetic GP functions that are common in the BO literature. * The use of normalized average simple regret as the primary metric is standard. * The experimental design carefully controls for confounding factors by maintaining consistent GP hyperparameters, evaluation grids, and initial designs across all methods. However, I think the major limitation is the selection of baselines. The authors only compare against classical AFs (EI, UCB, PofI) and some recent meta-learning approaches (MetaBO, FSAF); they omit comparisons with several important recent advances: * More advanced AFs such as entropy-based methods (e.g., Max-Value Entropy Search), which have shown superior performance in many settings. * End-to-end methods that jointly learn surrogate models and acquisition strategies. This gap makes it difficult to assess how FunBO compares to the current state-of-the-art beyond traditional methods. While the focus on comparing against interpretable baselines is understandable given FunBO's emphasis on interpretability, including at least some of these more advanced methods would have provided a more complete picture of FunBO's competitive standing in the broader landscape of modern BO techniques. Theoretical Claims: The paper does not make formal theoretical claims requiring proof verification. Experimental Designs Or Analyses: The experimental design appears methodologically sound. Supplementary Material: I have reviewed the supplementary material, including a detailed discussion of related work, code implementations for all FunBO components, and experimental details. The only thing I note is that while the experimental setup is detailed, the authors do not explicitly mention the total number of repetitions used to calculate the statistics presented in their results. Please correct me if I missed it. 
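For context on the classical baselines discussed above, EI and UCB are short closed forms over the GP posterior, which is the kind of "AF as code" that FunBO's discovered snippets are compared against. A minimal sketch for minimization, where the GP posterior mean/std and the candidate-grid usage are assumed and the names are illustrative (this is not FunBO's own code):

```python
import math
import numpy as np

def expected_improvement(mu, sigma, best_y):
    # EI for minimization: E[max(best_y - f(x), 0)] with f(x) ~ N(mu, sigma^2).
    mu = np.asarray(mu, dtype=float)
    sigma = np.maximum(np.asarray(sigma, dtype=float), 1e-12)
    z = (best_y - mu) / sigma
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    return (best_y - mu) * cdf + sigma * pdf

def ucb(mu, sigma, beta=2.0):
    # Optimistic score for minimization: pick the argmax of -mu + beta * sigma.
    return -np.asarray(mu, dtype=float) + beta * np.asarray(sigma, dtype=float)

# Next query = argmax of the AF over a candidate grid (posterior values assumed).
mu = np.array([0.2, 0.0, 0.5])
sigma = np.array([0.1, 0.3, 0.05])
print(int(np.argmax(expected_improvement(mu, sigma, best_y=0.1))))  # → 1
```

Both baselines are a few lines each, which is why comparisons of interpretability between them and FunBO's discovered code snippets are meaningful.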
Relation To Broader Scientific Literature: FunBO potentially addresses challenges in scientific optimization domains where traditional AFs struggle. This approach could particularly benefit high-dimensional problems, multi-objective optimization scenarios, or constrained optimization settings that are prevalent in scientific applications like drug discovery, materials science, and engineering design.

Essential References Not Discussed: The literature review appears comprehensive, covering classical AFs, meta-learning approaches, and LLM-augmented BO methods. The authors also acknowledge concurrent work by Yao et al. (2024) on representing AFs in code. I would only suggest the authors include several other meta-learning BO papers [1-3] and one LLM BO paper [4].

[1] Song et al. "Reinforced in-context black-box optimization."
[2] Müller et al. "PFNs4BO: In-context learning for Bayesian optimization."
[3] Yang et al. "MONGOOSE: Path-wise smooth Bayesian optimisation via meta-learning."
[4] Song et al. "Position: Leverage foundational models for black-box optimization."

Other Strengths And Weaknesses:

* The computational cost of FunBO is prohibitively high (48 hours of training), which severely limits its practical utility. This investment only makes sense when repeatedly optimizing similar functions, but in most real applications, trying several standard AFs would be much faster than running FunBO.
* While the authors emphasize interpretability as a key advantage, they don't actually interpret any of their discovered AFs. I would have expected at least one example where they explain why a particular code structure performs better than traditional AFs on a specific task. Without such analysis, the claimed interpretability benefit remains largely theoretical.
* The current implementation seems limited to discrete evaluation points (Sobol grid).
It's unclear how the approach would extend to true continuous optimization of acquisition functions, which might be important for some applications.
* As the authors acknowledge, there's high variance in the quality of discovered AFs, requiring multiple FunBO runs to find good solutions. This inconsistency further increases the already substantial computational demands.

Other Comments Or Suggestions:

* There appears to be a duplication in the related work section: the paragraph on "LLMs and black-box optimization" is repeated almost verbatim on page 12.
* Figure 3's top plot doesn't effectively demonstrate FunBO's advantages, as the exploration trajectories for all methods overlap significantly. A more visually distinct example would better highlight the differences in exploration strategies.
* It would be helpful to see an ablation study on the impact of different components of FunBO (e.g., evaluation metric design, island model parameters).

Questions For Authors:

* How robust are the discovered AFs to changes in the surrogate model (e.g., different kernel functions or non-GP models)?
* Can you provide any interpretation of why the discovered AFs shown in the appendix perform better than traditional AFs on the corresponding tasks?

I think this work is super interesting and I really like the idea; I am willing to increase my score if some of my concerns/questions are properly addressed.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your in-depth review of our work and valuable feedback.

**Missing Baselines** Our work focuses (footnote 2) on AFs that can be evaluated in closed form given the GP posterior parameters, making them fast to evaluate and easily deployable. This is a deliberate choice to avoid the complexities of approximations (e.g., Monte-Carlo sampling for entropy/KG methods), which would make both the evaluation of candidate AFs within FunBO computationally intractable and the generation of corresponding code by the LLM significantly more difficult. As we restricted FunBO's search space to focus on discovering AFs within the class of closed-form deterministic functions, we *ensured a fairer comparison* by contrasting the discovered AFs against appropriate baselines like EI/UCB/PoI.

In terms of end-to-end methods, we view FunBO as complementary. As noted in our Related Work section, Chen et al. (2022) found that even their transformer-based model benefited from using a standard AF (EI) for action selection. This suggests that AFs discovered by FunBO could potentially be integrated into such end-to-end frameworks in the future to further boost performance.

**Computational Cost & Variance** We acknowledge that the computational cost of FunBO is significant. Note however that FunBO is primarily intended as an offline procedure to discover high-quality AFs. The computational expense is incurred during this one-time search phase. Once discovered, the resulting code-based AF is typically very cheap to evaluate during the actual online BO task. The idea is that FunBO invests computation upfront to find an AF that significantly improves sample efficiency later. While variance in the quality of the discovered AFs might further increase the computational cost, it's crucial to distinguish the variance of the search process from the value of its outcome.
Once found, a successful AF discovered by FunBO has an intrinsic value that is independent of the stochastic process that generated it. The AF represents a novel optimization strategy whose performance can be evaluated and used independently of the original search.

**Discrete AF Optimization** Within our experimental evaluation, the optimization of the AF was performed over a discrete Sobol grid to ensure fair comparison across all AFs. However, the discovered AFs are not fundamentally restricted to discrete optimization. As long as the surrogate model is differentiable (like a GP with an RBF kernel) and the generated code includes differentiable functions, the AF can be optimized using continuous methods, similarly to EI or UCB. For instance, we could rewrite the AF in Fig 2 to take the input location x, return the corresponding scalar acquisition value, and optimize it via L-BFGS.

**Robustness to Surrogate Model** Our current work focused on discovering AFs conditioned on a specific surrogate model, as stated in Sec 4, to isolate the AF's contribution. The performance of the discovered AFs might indeed depend on this choice. However, the FunBO methodology itself is flexible. By simply modifying the `run_bo` implementation to use a different surrogate model, FunBO can be used to discover AFs tailored to the specific model's properties.

**Interpretation of Discovered AFs** Note that we included an AF interpretation on line 287 and commented on the Goldstein-Price AF on line 359. As an additional example, the AF found for GPs-ID can be simplified and written as `(EI ** 2) / (1 + (z / beta)**2 * sqrt(var))**2`. This function calculates the standard EI, squares it, and then divides it by a penalty term that increases with both the standardized improvement `z = (incumbent - mean) / std` and the uncertainty `sqrt(var)`. The squaring of the EI non-linearly amplifies regions with high EI values relative to those with low or moderate EI.
It increases the "peakiness" of the acquisition function, leading to stronger exploitation of the most promising point(s). The term `(1 + (z/beta)**2 * sqrt(var))**2` penalizes points more heavily if they have a very high expected improvement (z is large and positive) and/or high uncertainty (`sqrt(var)` is large). This might act as a regularizer against over-optimism or excessive jumps into highly uncertain areas, even if they look very promising according to EI. It's a novel way of balancing exploration and exploitation discovered by FunBO.

**Number of Repetitions** The mean and standard deviation shown in the regret plots represent the performance variation across the set of test functions, not across multiple independent runs of the BO algorithm on a single function instance. For instance, for OOD-Bench, the average/std dev is over the 9 distinct test functions. We will clarify this point in the captions.
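To make the interpretation and the continuous-optimization point above concrete, here is a minimal sketch (my own illustration, not the authors' implementation): the simplified GPs-ID acquisition `EI**2 / (1 + (z/beta)**2 * sqrt(var))**2` rewritten as a function of a single input location and maximized with L-BFGS. The toy GP fit on `sin(3x)`, the value `beta = 1.0`, and the bounds are all assumptions.

```python
# Illustrative sketch only (assumed GP setup, beta, and bounds):
# the simplified GPs-ID acquisition, maximized continuously with L-BFGS.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def rbf(A, B, ls=0.5):
    """RBF kernel matrix between row-stacked inputs A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * d2 / ls**2)

# Toy noiseless GP posterior on 8 observations of sin(3x).
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(8, 1))
y = np.sin(3.0 * X[:, 0])
K_inv = np.linalg.inv(rbf(X, X) + 1e-6 * np.eye(len(X)))
incumbent = float(y.min())  # minimization convention
beta = 1.0

def acq(x):
    """Scalar acquisition value EI**2 / (1 + (z/beta)**2 * sqrt(var))**2."""
    x = np.atleast_2d(x)
    k = rbf(x, X)
    mean = float(k @ K_inv @ y)
    std = np.sqrt(max(float(1.0 - k @ K_inv @ k.T), 1e-12))
    z = (incumbent - mean) / std                 # standardized improvement
    ei = std * (z * norm.cdf(z) + norm.pdf(z))   # analytic EI (minimization)
    return ei**2 / (1.0 + (z / beta) ** 2 * std) ** 2

# Continuous maximization: minimize the negative AF from a few restarts.
results = [
    minimize(lambda x: -acq(x), x0, bounds=[(-2.0, 2.0)], method="L-BFGS-B")
    for x0 in rng.uniform(-2.0, 2.0, size=(5, 1))
]
x_next = min(results, key=lambda r: r.fun).x
```

Since the acquisition is smooth in `x` wherever the GP posterior is, the same multi-restart L-BFGS treatment used for EI or UCB applies unchanged.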
Summary: The authors propose FunBO, a novel method for designing new acquisition functions (AFs) for BO using LLMs. FunBO aims to design new AFs that perform well across a variety of objective functions. FunBO takes a target set of objective functions and some initial standard AF. Then, FunBO iteratively modifies the initial AF to improve the resultant BO performance on the set of objective functions. The result is a novel AF designed specifically to perform well across the given set of objective functions. The novel AF is output directly in code format, making the designed AF both easy to use and interpret. The authors provide extensive empirical analysis of FunBO, comparing BO performance with AFs designed by FunBO to performance with both standard general-purpose AFs and with AFs customized specifically for the given objective function type (function-specific AFs). Results show that AFs designed by FunBO outperform general-purpose AFs and perform comparably with the function-specific AFs. Additionally, results show that AFs designed by FunBO perform well even on objective functions that were outside of the FunBO training distribution, demonstrating that they generalize well to other objective functions. Empirical results include both standard benchmark objective functions and hyperparameter optimization tasks.

## Update after rebuttal

As I stated in my comment below, the authors sufficiently answered all of my questions in their rebuttal, and I maintain that this work should be accepted for all of the reasons stated in my initial review.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: There are no theoretical claims or proofs in this paper as far as I am aware.

Experimental Designs Or Analyses: Yes, I checked all experimental design setups and experimental results provided in the paper. All are valid as far as I am aware.
Supplementary Material: Yes, I reviewed all supplementary material in the appendix.

Relation To Broader Scientific Literature: Designing AFs for BO is a relevant problem because choosing or designing a good acquisition function for a specific objective function can have a significant impact on optimization performance. Additionally, different AFs can be better or worse choices for different objective functions. New methods for designing good AFs that perform well across a variety of objective functions will be of interest to the broader scientific community, and especially to the Bayesian optimization research community.

Essential References Not Discussed: There are no essential references missing as far as I am aware.

Other Strengths And Weaknesses:

Strengths:

1: Improving accessibility of BO methods for practitioners: Designing AFs to maximize BO performance is a difficult problem that often requires advanced knowledge about Bayesian optimization. FunBO presents a way for practitioners in other fields (i.e. the natural sciences), who may want to apply BO methods to their specific problems, to design AFs without this advanced knowledge. Additionally, the fact that FunBO outputs designed AFs directly in Python code makes this extremely user friendly (it's as simple as copying and pasting the designed AF into the user's .py file).

2: Compelling empirical results: The empirical results provided are compelling and clearly demonstrate the authors' claims that 1) AFs designed by FunBO outperform general-purpose AFs, 2) AFs designed by FunBO perform comparably even when compared to function-specific AFs, and 3) AFs designed by FunBO generalize well to objective functions outside of the training distribution.

Weaknesses: A minor weakness of this paper is that the novel methodological contribution is on the smaller side. This is because FunBO builds on ideas from existing work (FunSearch) and applies these ideas to the new application of designing AFs.
That being said, this is a very minor weakness because significant work was done to apply these ideas to this new application setting of designing AFs, and strong empirical results demonstrate that FunBO makes an impactful contribution to the literature. I therefore still recommend that this paper be accepted.

Other Comments Or Suggestions: Typos:

* Line 44: "In other to avoid" should be "In order to avoid"
* Line 319: "that performs well across function classes" should be "that perform well across function classes"
* Line 356: "FunBO is able reach" should be "FunBO is able to reach"

Questions For Authors:

Question 1: Have the authors considered applying FunBO to design AFs for additional real optimization problems outside of hyperparameter optimization? While hyperparameter optimization is one of the most common applications of BO, there is much recent work applying BO to various interesting problems in robotics, materials science, database tuning, drug design, nuclear reactor stability, etc. It would be cool to see if FunBO could be applied to design AFs to improve BO for some of these other kinds of practical applications.

Question 2: I wonder if additional information specific to the particular problem setting could also be provided to FunBO to further improve the AF design. For example, adding problem-descriptive information such as "this is a problem in hyperparameter tuning" or "this is a problem in robotics" could be optionally provided to the LLM to allow FunBO to incorporate this information in the design of the AF. Is this something the authors have considered? Do the authors think this would be straightforward to do, and do you think that it could further improve performance of FunBO?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your feedback and for highlighting the strengths of our work. We appreciate the constructive comments and recommendation for acceptance. Regarding the specific questions:

**Application to Other Real-World Problems** This is an excellent point. Our primary goal in this initial work was to establish the FunBO methodology itself, demonstrating its feasibility for discovering novel AFs from code and rigorously evaluating their performance and generalization capabilities using well-understood application domains (standard benchmarks and simple HPO problems). Exploring diverse real-world applications would require careful consideration of the specific problem characteristics and potentially curating relevant sets G representative of the target domain. We agree this is a significant and exciting direction, and we view it as important future work, building upon the foundation laid in this paper.

**Incorporating Problem-Specific Information** Currently, FunBO provides context to the LLM through the structure of the initial AF, the function docstrings, and the prompt structure including prior examples. Building on this, there are indeed several ways the prompt context could be further enriched:

- *Adding Performance Data*: We could go beyond the implicit ranking in the prompt and explicitly include the numerical scores attained by the included AFs. This might provide the LLM with a clearer signal about the magnitude of improvement to aim for.
- *Adding Self-Critique*: Another avenue, inspired by self-improvement techniques, could involve incorporating automatically generated or human-written critique/analysis of why the provided example AFs perform as they do directly into the prompt. This could potentially help the LLM focus on refining successful strategies or avoiding pitfalls.
- *Adding Problem Descriptions*: Incorporating explicit textual descriptions of the problem domain (e.g., "tuning hyperparameters for a binary classification model" or "optimizing a robotic grasp"), as proposed. From an implementation perspective, these can be easily incorporated into the prompt structure used by FunBO.

It is plausible that providing richer information through any of these additions could allow the LLM to better leverage its internal knowledge base, potentially leading to further improvements in the discovered AFs. The exploration of the impact of richer prompt contexts (including scores, critique, or domain descriptions) is an interesting direction for future investigation, as mentioned in our Future Work section (Section 5).

---

Rebuttal Comment 1.1: Comment: Thank you for answering both of my questions. I maintain that this work should be accepted.
Summary: The authors present FunBO, an algorithm that leverages an LLM to propose novel acquisition functions (AFs) for Bayesian optimization. The authors evaluate FunBO under three settings: in-distribution (ID), out-of-distribution (OOD), and few-shot. FunBO relies on the availability of a large set of related tasks (ID) or unrelated tasks (OOD) with which to evaluate new AF candidates.

Overall, although the method is interesting, I have some concerns about the empirical evaluation of FunBO. First, the experiments are run using fixed hyperparameters for the GP surrogate and without acquisition function maximization, which doesn't reflect the real-world application of Bayesian optimization. Second, it would be nice to see FunBO evaluated on more real-world black-box optimization problems such as molecule generation [13] or on a standard hyperparameter optimization benchmark [11]. Third, the fact that random search outperforms other AFs on standard optimization benchmarks in the authors' experiments seems to contradict the results of previous studies and raises some questions about the empirical evaluation which are not directly answerable because the authors have not provided their source code. Nonetheless, I will be willing to revise my score if the authors can address the concerns raised.

## **Post-Rebuttal Update**

Unfortunately, the authors did not have sufficient time to reply to my final comment and supply either the code or the full details of their additional experiments such that they can be reproduced. My primary concern hence remains that the empirical results are somewhat suspicious given that e.g. the performance of random search against standard AFs is at odds with prior literature. During the AC-reviewer discussion phase I will try my best to independently verify the results in BoTorch.
If the authors could supply the mathematical form of the FunBO acquisition function they evaluated as a confidential comment to the AC to be disseminated to me, that would be great.

Claims And Evidence: In terms of the claims made in the paper, the authors state:

> "Across all objective functions, αFunBO leads to a convergence performance that outperform general purpose AFs (Fig. 4)."

It seems as though MetaBO is stronger than FunBO on Hartmann?

In the experiment section, the authors claim in relation to the performance of random search that:

> "This is due to random search performing competitively on functions with numerous local optima, which are generally harder to optimize."

This does not seem to be borne out in the literature. I discuss this in more detail below.

In the Related Work section, the authors state:

> "This confirms the continued importance of AFs as crucial components in BO, even when combined with transformer-based approaches, and highlights the importance of a method such as FunBO that can be seamlessly integrated with these newer architectures, potentially leading to further improvements in performance."

Retraining a transformer architecture is likely to be very expensive. Will such an integration not fall foul of the limitations highlighted in the authors' conclusion regarding the expense of running FunBO?

Methods And Evaluation Criteria:

1. It looks as though Equation 1 is underspecified. From the authors' description in the paragraph below it, it is implied that the number of steps τ required to identify the optimum is recorded (otherwise, at a fixed budget T, the second term would always be 1), yet this isn't indicated in Equation 1.
2. The authors state that they removed the local maximization of AFs from the MetaBO baseline. Why was this done? Presumably because the authors omit AF maximization in FunBO?
3.
In their experiments the authors state:

> "To isolate the effect of using different AFs and eliminate confounding factors related to AF maximization or surrogate models, we maximized all AFs on a fixed Sobol grid (of size NSG) over each function's input space."

Surely the relevant comparison is whether FunBO can yield better acquisition functions in real-world problems where acquisition function maximization and hyperparameter tuning of the surrogate model would take place? It is colloquially accepted that the quality of the GP surrogate is a more important factor than the choice of acquisition function [10, 12] in real-world BO performance, and so it would make sense to run an empirical evaluation where GP hyperparameter optimization and AF maximization were present. One option would be to take a SOTA BO algorithm such as HEBO [10] and evaluate whether FunBO could improve its performance on the Bayesmark benchmark [11].

Theoretical Claims: Not applicable.

Experimental Designs Or Analyses:

1. The method hinges on the assumption that the related functions G are much cheaper to evaluate relative to f?
2. Why did the authors choose an RBF kernel in place of a Matern 3/2 kernel, as is commonly adopted for standard optimization problems?
3. OOD-Bench seems to be the most practically relevant benchmark. For this experiment it would be nice (but expensive) to see the contents of G_tr and G_v rotated with optimization of GP hyperparameters.
4. Although MetaBO uses fixed GP hypers and the authors may have been concerned with a direct comparison, I reiterate that I believe a set of experiments that would support the real-world utility of FunBO would feature optimization of the GP hypers and maximization of the AF.
5. The authors use two example AFs in the prompt. How was this number chosen? Was there a sensitivity analysis on the number of AFs to include? Do the authors think the number of examples affects performance?
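For concreteness, the fixed Sobol-grid AF maximization the authors describe in the quoted passage above could look like the following sketch. This is my own illustration with assumed names (`maximize_on_sobol_grid`, the toy acquisition) and is not the paper's code.

```python
# Illustrative sketch (not the paper's code): maximizing an AF on a fixed
# scrambled Sobol grid of size n_sg instead of with a continuous optimizer.
import numpy as np
from scipy.stats import qmc

def maximize_on_sobol_grid(acq_fn, bounds, n_sg=256, seed=0):
    """Return the grid point with the highest acquisition value on a
    scrambled Sobol grid spanning `bounds`."""
    sampler = qmc.Sobol(d=len(bounds), scramble=True, seed=seed)
    lower, upper = zip(*bounds)
    grid = qmc.scale(sampler.random(n_sg), lower, upper)
    values = np.array([acq_fn(x) for x in grid])
    return grid[int(np.argmax(values))]

# Toy acquisition peaked at the origin of [-1, 1]^2.
x_next = maximize_on_sobol_grid(lambda x: -np.sum(x**2),
                                [(-1.0, 1.0), (-1.0, 1.0)])
```

Holding this grid fixed across methods removes the AF-maximization confounder, but (as argued above) it also removes a component that matters in deployed BO systems.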
Supplementary Material:

1. In Appendix C, the precise sampling scheme should be provided. For example, how are AFs sampled from within an island based on their score and length? In the absence of code this information would be needed to reproduce the paper; even if it is included in the original FunSearch paper, it would help if it was repeated here.

Relation To Broader Scientific Literature:

1. It may be worth briefly discussing connections between FunSearch and OPRO [7]. While FunSearch maintains scores for each heuristic, OPRO maintains scores for each prompt.
2. How does the authors' approach differ from symbolic regression approaches using genetic programming? Did the authors consider this?

Essential References Not Discussed:

1. When introducing Bayesian optimization, it may be worth citing the originating papers for the method [2, 3] as discussed in [4].
2. Reference [5] should be cited when introducing Expected Improvement (EI) as discussed in [4].
3. For probability of improvement, the originating work is [2] and for knowledge gradient the originating work is [5] as discussed in [4].
4. In the Related Work, it would be worth the authors mentioning the references [14, 15], which both leverage LLM embeddings for black-box optimization.
5. InstructZero was accepted at ICML 2024.

**__REFERENCES__**

[1] Benjamins, C., Raponi, E., Jankovic, A., Doerr, C. and Lindauer, M., 2023, December. [Self-Adjusting Weighted Expected Improvement for Bayesian Optimization.](https://proceedings.mlr.press/v224/benjamins23a) In International Conference on Automated Machine Learning (pp. 6-1). PMLR.
[2] Kushner, H.J., [A Versatile Stochastic Model of a Function of Unknown and Time Varying Form](https://www.sciencedirect.com/science/article/pii/0022247X62900112). Journal of Mathematical Analysis and Applications 5(1):150–167. 1962.
[3] Kushner, H.J., [A New Method of Locating the Maximum Point of an Arbitrary Multipeak Curve in the Presence of Noise](https://asmedigitalcollection.asme.org/fluidsengineering/article-abstract/86/1/97/392213/A-New-Method-of-Locating-the-Maximum-Point-of-an?redirectedFrom=fulltext). Journal of Basic Engineering 86(1):97–106. 1964.
[4] Garnett, R., [Bayesian optimization](https://bayesoptbook.com/). Cambridge University Press. 2023.
[5] Šaltenis, V.R., One Method of Multiextremum Optimization. Avtomatika i Vychislitel'naya Tekhnika (Automatic Control and Computer Sciences) 5(3):33–38. 1971.
[6] Lyu, W., Yang, F., Yan, C., Zhou, D. and Zeng, X., 2018, July. [Batch Bayesian optimization via multi-objective acquisition ensemble for automated analog circuit design.](https://proceedings.mlr.press/v80/lyu18a.html?ref=https://githubhelp.com) In International Conference on Machine Learning (pp. 3306-3314). PMLR.
[7] Guo, Q., Wang, R., Guo, J., Li, B., Song, K., Tan, X., Liu, G., Bian, J. and Yang, Y., [Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers.](https://openreview.net/forum?id=ZG3RaNIsO8) In The Twelfth International Conference on Learning Representations.
[8] Daulton, S., Ament, S., Eriksson, D., Balandat, M. and Bakshy, E., 2023, December. [Unexpected improvements to expected improvement for Bayesian optimization.](https://proceedings.neurips.cc/paper/2023/hash/419f72cbd568ad62183f8132a3605a2a-Abstract-Conference.html) In Proceedings of the 37th International Conference on Neural Information Processing Systems (pp. 20577-20612).
[9] Balandat, M., Karrer, B., Jiang, D., Daulton, S., Letham, B., Wilson, A.G. and Bakshy, E., 2020. [BoTorch: A framework for efficient Monte-Carlo Bayesian optimization.](https://proceedings.neurips.cc/paper/2020/hash/f5b1b89d98b7286673128a5fb112cb9a-Abstract.html) Advances in Neural Information Processing Systems, 33, pp.21524-21538.
[10] Cowen-Rivers, A.I., Lyu, W., Tutunov, R., Wang, Z., Grosnit, A., Griffiths, R.R., Maraval, A.M., Jianye, H., Wang, J., Peters, J. and Bou-Ammar, H., 2022. [HEBO: Pushing the limits of sample-efficient hyper-parameter optimisation.](https://www.jair.org/index.php/jair/article/view/13643) Journal of Artificial Intelligence Research, 74, pp.1269-1349.
[11] Turner, R., Eriksson, D., McCourt, M., Kiili, J., Laaksonen, E., Xu, Z. and Guyon, I., 2021, August. [Bayesian optimization is superior to random search for machine learning hyperparameter tuning: Analysis of the black-box optimization challenge 2020.](https://proceedings.mlr.press/v133/turner21a.html) In NeurIPS 2020 Competition and Demonstration Track (pp. 3-26). PMLR.
[12] Shahriari, B., Swersky, K., Wang, Z., Adams, R.P. and De Freitas, N., 2015. [Taking the human out of the loop: A review of Bayesian optimization.](https://ieeexplore.ieee.org/abstract/document/7352306) Proceedings of the IEEE, 104(1), pp.148-175.
[13] Gao, W., Fu, T., Sun, J. and Coley, C., 2022. [Sample efficiency matters: a benchmark for practical molecular optimization.](https://proceedings.neurips.cc/paper_files/paper/2022/hash/8644353f7d307baaf29bc1e56fe8e0ec-Abstract-Datasets_and_Benchmarks.html) Advances in Neural Information Processing Systems, 35, pp.21342-21357.
[14] Kristiadi, A., Strieth-Kalthoff, F., Skreta, M., Poupart, P., Aspuru-Guzik, A. and Pleiss, G., 2024, July. [A Sober Look at LLMs for Material Discovery: Are They Actually Good for Bayesian Optimization Over Molecules?](https://proceedings.mlr.press/v235/kristiadi24a.html) In International Conference on Machine Learning (pp. 25603-25622). PMLR.
[15] Rankovic, B. and Schwaller, P., [BoChemian: Large language model embeddings for Bayesian optimization of chemical reactions.](https://openreview.net/forum?id=A1RVn1m3J3) In: NeurIPS 2023 Workshop on Adaptive Experimental Design and Active Learning in the Real World. 2023.

Other Strengths And Weaknesses:

1.
To my mind, the experiments performed by the authors, although extensive, do not contain enough real-world examples. A hyperparameter optimization problem with two hyperparameters, e.g. the SVM experiments, is not particularly compelling. It would be more interesting to see the method applied to a setting such as molecule generation [13], for example, or the Bayesmark benchmark [11]. If these problems are prohibitive from a cost perspective, it would be nice to see this limitation explicitly mentioned in relation to these applications (instead of abstractly, as is currently the case in the conclusion).
2. It would be great if the code could be released. If there is a delay on industrial approval for code release, this will hopefully have been addressed in the time between submission and the rebuttal phase.

Other Comments Or Suggestions:

1. There are some missing capitalizations in the references, e.g. "Bayesian".
2. There are some missing journal/conference indications for arXiv papers in the references, e.g. [1] was published at the AutoML conference.
3. Typo in the introduction: "In other to avoid".
4. When listing approaches to mitigate reliance on a single acquisition function in the introduction, it would also be worth mentioning ensemble approaches such as MACE [6], where the Pareto front over different acquisition functions is computed.
5. In the preliminaries section, it would be worth articulating the purpose of the auxiliary objective functions g_j. It would also be worth articulating any assumptions about the relation between these auxiliary functions g and the objective to be optimized f. The authors later state that, "The learned AFs are trained on the set G, whose functions are assumed to be drawn from the same distribution or function class associated to f." For clarity it would be worth including this assumption when the set G is first introduced.
6. PI would probably be a less cumbersome acronym than PofI.
7.
Typo in Figure 6 docstring: "finds its optimum".
8. Typo in Table 1 and Table 2: the likelihood noise parameter sigma (1e-5) should not have a subscript f.
9. Typo in Section 3: "an algorithm discovery problem".
10. The clarity of the description of the terms in Equation 1 could be improved, e.g. $y^{*}\_j$ is the known optimal value of the objective $g_j$ rather than the optimum, which would be the corresponding $x_j$. The authors describe $x_{j}^{t=0}$ as the optimal input value at $t=0$. What does this mean? The optimal value according to the surrogate model at $t=0$? The authors also describe the "found optimal input value". I think this would be better described as the model's estimate of the optimal input value or similar.
11. In Figure 2, it would help if scores were indicated for each acquisition function to highlight the ascending order in which they appear in the prompt.
12. In the definition of regret on page 6, it is somewhat confusing to use $y^*$ as the notation for the true optimum. Typically $f(x)$ refers to the noiseless value of the objective and $y$ refers to a noisy value of the objective.
13. On page 6, the authors state, "with less than 25 trials". This should be "with fewer than 25 trials". "Fewer" is typically used with countable nouns such as trials, oranges, and ICML submissions, whereas "less" is used with uncountable nouns such as time, money, and water.
14. In Figure 8, the variable **percentage_steps_needed** is not defined.
15. Line 746, Figure 7: is there a typo? **found_min = initial_min_y = float(np.min(model.Y))**.
16. Line 762, Figure 7: comment is broken.
17. Line 764, Figure 7: **if found_min == true_min**. Should there not be some tolerance factor here if this is real Python code?
18. Line 793: using $\mathbf{X}$ is a slightly strange way of defining the RBF kernel if $\mathbf{X}$ is the design matrix. Why not use the standard $\mathbf{x}$ notation?
19.
Can the authors explain line 948 in Figure 12, "predictive_var[(shape-10)//2] *= dim"?

Questions For Authors:

1. The authors state, "FunBO explores a significantly more complex function space where programs take as inputs multiple arrays.". Can the authors give an example of what they mean by "multiple arrays"?
2. The authors state that, "When no validation functions are used ($G = G_{Tr}$), the AF with the highest average performance on $G_{Tr}$ is returned.". When would the case arise where no validation functions are available?
3. The authors state in footnote 6, "We explored using a random selection of initial points as an alternative to EI. However, this approach did not yield good results as using a random selection was incentivizing the generation of functions with a stochastic output, for which convergence results are not reproducible.". What do the authors mean by this? Can the authors present these negative results?
4. For the plot in Figure 2, how many random seeds were taken? Why is std/2 used in place of the standard error?
5. In Figure 9, the legend labels are very difficult to read. It seems as though random search is the next best performing acquisition function relative to FunBO? I see the authors later mention the strong performance of random search, but the explanation isn't particularly convincing given results reported for the same optimization benchmarks in other papers, e.g. in [8] the authors find that random search is the worst-performing AF on the Michalewicz function in Figure 3. Additionally, in [9] Figure 6, the authors find that random search is the worst-performing AF on the Hartmann 6D function (in the parallel BO setting with a batch size q=4). Even if the experimental settings are slightly different, the overall trend is still very surprising. Additionally, how many random seeds were taken? Why is the half standard deviation used for error bars in place of the standard error?
6.
In the abstract the authors state, "we propose FunBO, an LLM-based method that can be used to learn new AFs written in computer code by leveraging access to a limited number of evaluations for a set of objective functions.". What is meant by this? The number of evaluations seems quite large when aggregated across the set of related functions G required to evaluate the efficacy of the proposed AF.
7. In Figure 5, why isn't random search shown in the first two panels?

Ethical Review Concerns: Not applicable.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: Thanks for your in-depth review of our work and valuable feedback.

**Transformer Cost** We agree that re-training large transformer architectures is expensive. Our statement was intended to mean that a pre-trained transformer surrogate model could benefit from using a cheap-to-evaluate, FunBO-discovered AF instead of standard AFs (like EI, as used in Chen et al. 2022), rather than implying FunBO would be involved in the transformer training itself.

**AF/HP optimization** We removed the local maximization step from the MetaBO baseline to ensure a consistent comparison framework where all AFs are optimized over the same Sobol grid. This was done to isolate the performance differences attributable purely to the AF's selection logic, removing confounding effects from the quality of optimization routines. For the same reason, we used fixed GP hyperparameters. We aimed for a controlled comparison of the AFs' inherent selection strategies and believe this approach leads to an evaluation that is sound and appropriate, as recognized by Reviewer FFHS (“The experimental design carefully controls for confounding factors by maintaining consistent GP hyperparameters …”). We will clarify that this might deviate from a fully realistic deployment scenario.

**G cheaper than f** FunBO can run even if functions in $G_{\text{Tr}}$ are expensive, but the offline discovery phase becomes proportionally costly. The practicality of FunBO relies on the idea that functions in $G_{\text{Tr}}$ are indeed cheaper (e.g., lower-fidelity simulations, see response to Reviewer zkmL) and the large offline cost is amortized by finding an AF that saves numerous evaluations of the truly expensive target function f during online deployment.

**Questions**:

1) This refers to the AF signature where `predictive_mean` and `predictive_var` are NumPy arrays, as opposed to simpler functions found by FunSearch that might only take scalar or tuple inputs.
2) This occurs if $|G|$ is small and/or if the user decides to use all available functions for training to maximize the training signal, e.g. in few-shot settings.

3) Using a completely random initial AF encourages the LLM to generate AF code that itself includes stochastic elements (e.g., various calls to `np.random`). This makes the resulting BO loop non-deterministic, making scoring inconsistent and reproducibility difficult.

4) The mean and standard deviation shown in the regret plots represent the performance variation across the set of test functions (e.g. 9 distinct test functions for OOD-Bench), not across multiple independent runs of the BO algorithm on a single function instance. We used std/2 for visualization purposes.

5) Thanks for pointing us to these references. Notice that the figures you mention are averaging the convergence results over multiple random initializations. In our case we are averaging the convergence results over multiple, completely different, functions. Plotting the performance on the individual functions reveals that known AFs outperform random search in most cases, performing better for lower-dimensional functions and being competitive on functions with numerous local optima.

6) This refers to the limited number of functions available in G. We will rephrase this to be “by leveraging access to a number of evaluations for a *limited* set of objective functions.”

7) We omitted them to avoid cluttering the figure but we are happy to add them back to the manuscript.

Other comments:

1) For Fig. 4 (right panel, HM3), MetaBO is indeed outperforming FunBO. However, in the statement you report we refer to “general purpose AFs” (e.g. EI), which are indeed underperforming compared to FunBO.

2) This is a typo. $T_{h_{j}^{\tau}}$ represents the number of BO trials required by the algorithm using AF $h_j^{\tau}$ to identify the true minimum value $y_j^*$. This is recorded for each AF $h_j^{\tau}$ sampled at step $\tau$.
Note also that $x^*_{j,h^\tau}$ should be $x^*_{j,h_j^\tau}$.

3) The sampling scheme details are given in Appendix C and they are kept fixed with respect to FunSearch. The code for the latter is available on [GitHub](https://github.com/google-deepmind/funsearch/blob/main/implementation/programs_database.py), making the algorithm reproducible.

4) As specified on line 312, all experiments are conducted using FunSearch with default hyperparameters, including the number of examples in the prompt.

5) Thank you for providing this helpful list of suggested references. We will incorporate them in the revised manuscript and add a related discussion. Regarding symbolic regression: these methods also search for mathematical expressions, often represented as trees and evolved using genetic operators. FunBO differs by operating directly on code representations, leveraging an LLM for generation/mutation, and using full BO performance as the fitness metric. While related in the goal of finding functional forms, the search space representation and operators are different.

---

Rebuttal Comment 1.1:

Comment:

1. **Transformer Cost**: Do the authors mean to imply that the FunBO-discovered AFs in this work could be applied out-of-the-box to transformer-based surrogates?

2. **AF/HP optimization**: The MetaBO paper uses fixed GP hyperparameters and so I appreciate that using fixed GP hyperparameters in FunBO is the fairest means of comparison. However, the practical utility of FunBO would be in improving real-world BO performance, which would involve GP hyperparameter and AF optimization. I retain the opinion that the true test of FunBO's utility would be to see it improve upon SOTA on a real-world benchmark such as molecule generation or Bayesmark (or any other suitable real-world task) by discovering a performant acquisition function.

3. **G Cheaper than f**: Many thanks to the authors for clarifying the perceived use-case of FunBO in multifidelity settings.
4. **Performance of Random Search**: Many thanks to the authors for clarifying that the averaging occurs across tasks. In the individual plots, the fact that random search is performant on Branin still contrasts with the results in the literature.

5. **Sampling Scheme in Appendix C**: I was indeed referring to the sampling scheme in Appendix C in my initial review. The full details of the scheme should be provided within the current paper.

6. **Comparison with Symbolic Regression Approaches**: It would be quite interesting to see a direct comparison of FunBO and symbolic regression in future work.

7. **Remaining Questions**: I understand that space constraints may have prevented the authors from addressing all questions in my initial review. I am more than happy to engage in discussion on the remaining points.

In summary, I will be very happy to increase my score if **a)** experiments could be added demonstrating the advantages of FunBO together with GP hyperparameter and AF optimization on just a single example problem, **b)** the discrepancy in the performance of random search vs. standard AFs on Branin can be explained, and **c)** an explanation can be provided as to why the source code has not been released.

---

Reply to Comment 1.1.1:

Comment: Thanks for your additional response.

**Transformer Cost:** Yes

**Sampling Scheme in Appendix C:** We are happy to include this

**Performance of Random Search:** See b) below

**a)** To address this point *we have tested an AF found by FunBO using the standard BO pipeline available in [BoTorch](https://botorch.org/)*.
In particular, we have taken the AF found by FunBO for OOD-Bench and tested it on the test functions for OOD-Bench using the code in BoTorch [tutorial 1](https://colab.sandbox.google.com/github/pytorch/botorch/blob/v0.13.0/tutorials/closed_loop_botorch_only/closed_loop_botorch_only.ipynb) and [tutorial 2](https://colab.sandbox.google.com/github/pytorch/botorch/blob/v0.13.0/tutorials/custom_acquisition/custom_acquisition.ipynb). Note that this evaluation uses all standard settings in terms of optimization of the AFs, optimization of the GP hyperparameters, random selection of initial points, etc. For each test function we have run the algorithm with 10 different random initial designs and plotted the results averaging over the runs (for these plots we used the plot formatting of BoTorch given in the linked Colabs; this also addresses concerns regarding plotting). Unfortunately we cannot share the code at this stage. However, **we have added all additional convergence plots at this [anonymous link](https://anonymous.4open.science/r/funbo_results-607F)**. Notice how, across all functions, FunBO performs either comparably to or better than EI and UCB. We are happy to repeat all evaluations with this BoTorch pipeline, add the convergence plots with GP hyperparameter optimization and AF optimization, and share the corresponding code.

**b)** The convergence plots given at the [anonymous link](https://anonymous.4open.science/r/funbo_results-607F) also include random search. Note how random search is, as expected, underperforming on all test functions. However, there are functions on which the performance of random search is comparable to EI (Sphere) or FunBO (Hartmann 3d). Removing the AF grid optimization, plotting the best objective value found over the number of observations rather than the regrets, averaging over random initial designs, and disaggregating the results to show performance by test function leads to the expected low performance of random search.
We are happy to repeat the analysis using these settings for the remaining experiments in the manuscript.

**c)** As previously mentioned, all code needed to run the algorithm (together with the code in the [FunSearch GitHub repository](https://github.com/google-deepmind/funsearch)) has been given in the appendix. However, if more convenient, we are happy to open-source the repo itself.
Summary: The paper proposes FunBO, a method to learn novel acquisition functions to increase the efficiency of Bayesian optimisation. In particular, the acquisition function is represented as a Python program and FunSearch is adopted to search over the space of programs. The authors evaluate their approach in both in-distribution settings, i.e., where the target function is within the same class as the given training functions, and out-of-distribution settings. It is shown that FunBO outperforms the considered baselines.

Claims And Evidence: The experiments convincingly show the advantages of the proposed FunBO method.

Methods And Evaluation Criteria: FunBO is evaluated on a diverse range of functions, covering both in-distribution and out-of-distribution settings.

Theoretical Claims: N/A

Experimental Designs Or Analyses: Seems valid.

Supplementary Material: I briefly checked the supplementary material.

Relation To Broader Scientific Literature: I think the problem of learning problem-specific acquisition functions is an important one. In general, Bayesian optimisation arises in many applications, so many fields could benefit from FunBO improving the efficiency of the optimisation.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: N/A

Questions For Authors:

1. I am curious about the intrinsic motivation for representing AFs as programs. What are the real benefits behind it? Representing them as programs requires a hard discrete search, making the overall procedure even more expensive. Overall, I understand that it comes from the fact that modern LLMs have become good priors for code generation and can generate "executable" programs, giving such a search a good starting point, but besides that, what are the benefits compared to representing the AF via a neural network?
2.
As you also mention in the limitations, FunBO might be really strong in situations where running the full BO loop is very cheap/fast at providing feedback to the LLM, so that FunBO can iterate quickly. What are the interesting practical use-cases for such a setting?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thanks for your feedback and for highlighting the strengths of our work. We hope the responses below clarify the motivation and practical considerations for FunBO.

**Benefits of Representing AFs as Programs** Representing AFs as programs offers the following advantages:

- *Interpretability*: As highlighted in our paper, code-based AFs are inherently interpretable. It is possible to directly inspect, analyze, and understand the logic behind the optimization strategy employed by the discovered AF. This contrasts with neural-network-based AFs, for which it is difficult to understand why they perform well or how they balance exploration/exploitation.
- *Deployability and Simplicity*: AFs discovered by FunBO are outputted as concise code snippets (e.g., Python functions). As mentioned in Section 1, these can be readily integrated into existing BO libraries and workflows with minimal overhead.
- *Leveraging Foundational Knowledge*: LLMs trained on vast amounts of code and text have knowledge not only about code but also about existing BO strategies. FunBO aims at leveraging this embedded knowledge to efficiently explore the program space and construct effective heuristics.

We have further clarified the benefits of the code representation by adding text elaborating on these advantages (Interpretability, Deployability, and Leveraging Foundational Knowledge) around line 92 in Section 1.

**Practical Use Cases** In the limitations section we mentioned that FunBO's evaluation cost might be high during the discovery phase, as it involves running a full BO loop for each candidate AF on functions in G. However, there are scenarios where the evaluation cost for the functions in G might be relatively cheap. For instance:

- *Simulations/Models at Lower Fidelity*: G could consist of faster, lower-fidelity simulators or simplified analytical models related to the expensive target problem f.
- *Surrogate Models*: G could include cheap surrogate models learned from previous related tasks.
- *ML Hyperparameter Optimization*: G can represent the performance of an ML model on different (potentially smaller) datasets, where training/evaluation might be faster than on the final, large target dataset.

Note that the computational cost incurred during this one-time search phase is compensated for during the online deployment phase, where the resulting discovered AF reduces the number of costly function evaluations needed. We have expanded the discussion on the computational cost within the Limitations paragraph in Section 5 and given the examples listed above as cases where the auxiliary functions used during the search phase might be relatively inexpensive.
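For concreteness, a minimal sketch of what a code-represented AF in the array-in/array-out signature discussed in this thread might look like (the function body, the `incumbent` argument, and the exploration coefficient 2.0 are illustrative assumptions, not an AF actually produced by FunBO):

```python
import numpy as np

def acquisition_function(predictive_mean, predictive_var, incumbent):
    """Hypothetical UCB-flavoured AF in the array-in/array-out signature
    described above (illustrative only, not a FunBO-discovered AF)."""
    # Favour candidates with low predicted mean (minimization setting),
    # boosted by their predictive uncertainty.
    return (incumbent - predictive_mean) + 2.0 * np.sqrt(predictive_var)

# A BO loop would score every point of a fixed (e.g. Sobol) grid
# and query the objective at the argmax of the scores.
mean = np.array([0.5, 0.1, 0.9])
var = np.array([0.01, 0.04, 0.09])
scores = acquisition_function(mean, var, incumbent=0.4)
next_index = int(np.argmax(scores))
```

Such a snippet drops directly into an existing BO library, which is the deployability point made in the rebuttal above.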
SKOLR: Structured Koopman Operator Linear RNN for Time-Series Forecasting
Accept (poster)
Summary: This paper introduces SKOLR, a structured Koopman operator-based approach to time-series forecasting that connects Koopman operator approximations with linear recurrent neural networks (RNNs). By representing dynamic states using time-delayed observations, the method approximates the Koopman operator in a structured manner through a parallel linear RNN stack. The proposed approach integrates a learnable spectral decomposition of the input signal with a multilayer perceptron (MLP) as measurement functions. SKOLR is evaluated on standard forecasting benchmarks and nonlinear dynamical systems, where it appears to achieve competitive results.

Claims And Evidence: The paper makes several key claims:

1. Establishing a Connection Between Koopman Operator Theory and Linear RNNs – The authors show an equivalence between Koopman operator approximations and linear RNN updates, but this connection is somewhat superficial. First, this is not a strictly novel result, as a very similar connection was shown in [a] (see Appendix E). Furthermore, I think the theoretical justification is not fully developed, and the assumptions required for this equivalence to hold are not explicitly stated.

2. Superior Forecasting Performance – While the reported numbers show competitive results, the claim of superiority cannot be convincingly ascertained. Many of the performance gains are in the third significant digit, and since variances are not reported, the differences between methods might be due to statistical uncertainty.

3. Efficiency and Parameter Reduction – The authors show that SKOLR achieves competitive results with reduced computational costs.

Overall, the claims lack sufficient theoretical and empirical justification, and the evidence presented does not unambiguously support SKOLR's superiority.

[a] "Resurrecting Recurrent Neural Networks for Long Sequences" by Orvieto et al.
2023

Methods And Evaluation Criteria: The evaluation criteria largely follow [b], testing SKOLR on standard time-series forecasting benchmarks. As I already mentioned, a big shortcoming of the evaluation is the lack of reported standard deviations. As many of the baselines attain similar errors, single estimates of the mean absolute error are, in my opinion, not sufficient to evaluate a method.

[b] "Koopa: Learning non-stationary time series dynamics with Koopman predictors." by Liu et al. 2023

Theoretical Claims: As mentioned above, the main theoretical claim of the paper is the equivalence between linear RNNs and Koopman operator theory. For this equivalence, only intuition is provided; there is no formal theorem or clear set of assumptions under which the approximation holds. Furthermore, a very similar equivalence was already proved in [a].

[a] "Resurrecting Recurrent Neural Networks for Long Sequences" by Orvieto et al. 2023

Experimental Designs Or Analyses: I think that the analysis of the method is far from comprehensive. Indeed, SKOLR is an assembly of many inter-dependent components:

1. Spectral encoder and spectral decoder
2. Structured RNN stack

While an "Ablation Study" is presented in Section 4.3, the Authors mainly test how different settings in the encoder and RNN stacks impact the final performance. I would classify it as a study of the dependence on hyperparameters. An ablation study, conversely, should selectively remove components (such as the spectral encoder, or the structured decomposition of the RNN) and see the impact on performance. What Section 4.3 shows, instead, is that if the hyperparameters of SKOLR are not well-tuned, the claimed performance gains are completely washed away.

Supplementary Material: I've checked every section of the Supplementary, without reading it in detail though.

Relation To Broader Scientific Literature: This paper does not sufficiently engage with prior work.
Koopman-based learning is an active research area, yet the paper misses key references on data-driven Koopman approximations and alternative learning-based Koopman methods. See the essential references not discussed, below.

Essential References Not Discussed: For linear RNNs, please check the aforementioned [a], establishing the connection between linear RNNs and Koopman operators. More generally, there is a growing literature on machine learning methods for Koopman operators, see e.g. [b].

[a] "Resurrecting Recurrent Neural Networks for Long Sequences" by Orvieto et al. 2023
[b] "Learning dynamical systems via Koopman operator regression in reproducing kernel Hilbert spaces" by Kostic et al. 2022

Other Strengths And Weaknesses:

Strengths:
- The paper provides a method for time-series forecasting connecting Koopman theory with RNNs.
- The empirical results show competitive performance, even if not definitively superior.
- The method is computationally efficient compared to transformers.

Weaknesses:
- The theoretical claims are underdeveloped and were already presented in prior works, which the Authors do not acknowledge.
- The experimental results do not convincingly demonstrate clear superiority over existing methods.

Other Comments Or Suggestions:
- More rigorous ablations should be included to demonstrate the necessity of each architectural component.
- The paper should engage more deeply with prior literature to clarify its novelty.
- More discussion on failure cases and limitations would improve transparency.

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1:

Rebuttal:

## Claim 1

We would kindly ask the reviewer for more supporting arguments for these assertions.

> "connection is somewhat superficial"

We establish a clear connection between an EDMD-style approximation of the Koopman operator for a history-augmented state and a linear RNN. This motivates a highly effective but simple architecture, which turns out to be not superficial.

> "theoretical justification is not fully developed"

We contend that we have fully developed the theoretical justification mathematically. Eq. 5 is the equation for a linear RNN with a finite history. Eq. 6-7 present the EDMD-style approximation to the Koopman operator. And Sec. 3.3 explains that if we consider an augmented state with historical observations then we can rewrite the approximate Koopman operator relation as Eq. 8. Obviously, Eq. 8 has the identical structural form as Eq. 5, implying that we can implement Eq. 8 using a linear RNN (with an MLP embedding).

> "only intuition is provided"

The development is by no means "only intuition". It is a clear and concise mathematical development. However, we are happy to provide a more formal proposition-proof structure in an appendix if this is desired. Please flag this, and we will provide it in the final response.

> "assumptions ... are not explicitly stated"

We kindly ask which assumptions are missing in this development. The key assumptions are expressed in the fact that we are applying an EDMD-style approximation (so the EDMD assumptions apply); we specify the full-rank assumption for a unique solution.

> "a very similar equivalence was proved already"

App. E of [a] contains NO equations connecting linear RNNs to Koopman theory — only a **qualitative observation ("Sounds familiar? This is exactly what a wide MLP + Linear RNN can do.")** without mathematical formulation or proof. While we acknowledge this prior insight, our contribution provides the explicit mathematical connection that was not previously established.
## Connection to [a] and [b]

We will include the suggested works in our discussion. However, we'd like to highlight key distinctions from [a]. As mentioned above, [a] only offers a qualitative discussion of the relevance of Koopman operator theory, listing equations for Koopman theory (Appx. E, Eqs. 17-22) followed by a conclusion without any mathematical development. More importantly, the architecture in [a] does not follow the qualitative observation in Appx. E: the stack of [linear recurrent units + MLPs] has strayed away from the Koopman operators. In contrast, our architecture is an exact match to our derivations in (5) and (8). Furthermore, [a] does not address time-series forecasting with any experiments on this task. In fact, the proposed architecture in [a] performs poorly in the time-series forecasting setting (see [Tab. 7](https://anonymous.4open.science/r/SKOLR-1D6F/Tab7_LRU.pdf)).

Concerning [b], we will cite it and include more discussion about the intersection of machine learning methods and Koopman operators.

## Statistical Significance

To address the reviewer's concern about statistical significance, we add [Tab. 6](https://anonymous.4open.science/r/SKOLR-1D6F/Tab6_Std.pdf) with standard deviations (std) across 3 independent runs for all datasets and forecast horizons. The low stds (<0.003 for most datasets) demonstrate the consistency and reliability of SKOLR's performance. Thus, we contend that SKOLR's improvements, even when small in magnitude, represent genuine performance differences rather than statistical fluctuations. Additionally, we will provide confidence intervals and statistical significance tests in our final response.

## Efficiency and Parameter Reduction

We will add a dedicated "Theoretical Complexity Analysis" section to formally establish SKOLR's computational benefits. Due to the word limit, please refer to our response to reviewer hUoy (point 1.2) for detailed information.
This comprehensive theoretical analysis, combined with our expanded empirical results in [Tab. 4](https://anonymous.4open.science/r/SKOLR-1D6F/Tab4_Computation.pdf), provides thorough justification for SKOLR's efficiency claims.

## Ablation Study

As suggested by the reviewer, we have also conducted a more comprehensive ablation study on the design elements of SKOLR. As shown in [Tab. 5](https://anonymous.4open.science/r/SKOLR-1D6F/Tab5_Ablation.pdf), we compare our full SKOLR model with two ablated variants: (1) "w/o Structure": no structured decomposition, using a single branch with dimension (N×D); (2) "w/o Spectral Encoder": no learnable frequency decomposition, while maintaining the multi-branch structure. The results show that both components contribute meaningfully. Removing the structured decomposition leads to performance degradation on 27/32 tasks, with notable declines on ETTh1 and ILI, while increasing computational overhead. Similarly, removing the spectral encoder impacts performance on 23/32 tasks, though with a smaller overall effect.
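The structural argument in this rebuttal (an EDMD-style operator acting on a history-augmented state has the form of a linear RNN update) can be sanity-checked numerically. The toy sketch below is illustrative only, not the paper's implementation: the sizes, the block-shift layout, and the random weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 3, 4  # embedding dimension and history length (toy sizes)

# Learned blocks forming the top block-row of the augmented operator.
W = [rng.standard_normal((d, d)) for _ in range(m)]

# Augmented operator: learned top row, block-shift (identity) underneath.
K = np.zeros((m * d, m * d))
K[:d, :] = np.hstack(W)
K[d:, :-d] = np.eye((m - 1) * d)

# History-augmented state z_k = [g_k; g_{k-1}; ...; g_{k-m+1}].
g_hist = [rng.standard_normal(d) for _ in range(m)]
z = np.concatenate(g_hist)

# One step via the augmented operator...
z_next = K @ z

# ...matches an explicit linear-RNN-style update: the new top block is a
# weighted sum over the stored history; the rest is the shifted history.
g_next = sum(Wi @ gi for Wi, gi in zip(W, g_hist))
```

Here `z_next[:d]` coincides with `g_next` and `z_next[d:]` with the shifted copy `z[:-d]`, which is exactly the Eq. 8 / Eq. 5 correspondence described above.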
Summary: The authors develop an approach to forecast time-series, via Koopman operator theory, through the use of linear RNNs. The authors demonstrate that their approach delivers state-of-the-art performance on long- and short-term forecasting benchmarks and is significantly less computationally expensive.

Claims And Evidence: There are a few claims that I believe could be strengthened with more convincing evidence or a change of language:

1. That the different branches of SKOLR "specialize in distinct frequency bands" (page 6). Looking at Fig. 3 (left column), it is not particularly clear to me that the two branches "specialize in distinct frequencies". If anything, it looks like they agree on many of the frequencies. Clarifying this (and/or providing a quantitative comparison to make the point clearer) would be good.

2. The performance of SKOLR on ILI is the best of all the methods tested. However, saying "SKOLR demonstrates remarkable forecasting capability", when the MSE (~1.5) is of a similar magnitude to the average ILI across a given flu season (e.g., the national baseline ILI for the 2018-2019 season was 2.2), is perhaps a little misleading. Toning down this remark is, in my opinion, more appropriate.

3. In the ablation study (setting the learnable weights to 1), the results appear relatively consistent with the non-ablated model (the MSEs often differ by <5%). This, combined with the first point above (#1), makes me somewhat skeptical that weighting the branches differently really affects the performance significantly. Doing the same analysis on other data sets where perhaps it makes a bigger difference, or changing the language to be more transparent that the weighting doesn't often lead to a large difference, would be good.

Methods And Evaluation Criteria: The authors did a great job evaluating their approach. They thoroughly compared against baselines and provided an evaluation of model size and computational resources used.
Theoretical Claims: The main theoretical claim made by this paper (which is perhaps too strong a word, as the authors rather argue that two things are similar) is that the Koopman representation can be equivalent to a linear RNN. This inspires their branched decomposition. While I agree that Eq. 5 and Eq. 8 are the same, I think this claim is somewhat misleading. In particular, RNNs typically take in an input at each time step (the $U v_k$ in Eq. 4), in addition to propagating forward the internal state. This makes linear RNNs a dynamical system with control (or inputs). In Eq. 8, the Koopman operator is solely propagating forward the observable functions $g$. This is more equivalent to inputting some state into a linear RNN and then evolving it forward (without providing any more inputs). So while I think the equivalence is correct, it relies on viewing RNNs differently than they are typically thought of. Making this more clear is important.

Experimental Designs Or Analyses: The experimental design and analysis all appear sound.

Supplementary Material: I looked through the entire set of Appendices.

Relation To Broader Scientific Literature: The contributions of this paper are relatively well situated within the broader scientific literature. In particular, by comparing their method to Koopa, the current SOTA, they convincingly show some of the benefits of their approach. Discussing, in more detail, the advantage of having a smaller, less computationally demanding model (in the Introduction) could further strengthen the paper.

Essential References Not Discussed: Given that this area of using Koopman + ML for forecasting has exploded over the past few years, it is reasonable that there may be relevant papers that the authors did not cite. However, there are several foundational papers that the authors did not cite. Adding these (and discussing how the authors' work relates) is important and will strengthen the paper.

1.
Mezic 2005, Nonlinear Dynamics - This work is the most relevant paper for discussing Koopman mode decomposition and the Koopman representation (and should be cited when citing Koopman 1931).
2. Rowley et al. 2009, Journal of Fluid Mechanics - This was the first work to demonstrate that DMD could approximate the Koopman mode decomposition.
3. Kutz et al. 2015, SIAM J. Appl. Dyn. Sys. - This work proposes a similar decomposition of data via FFT, and I believe it employs a similar block structure to that used in SKOLR.
4. Arbabi and Mezic 2017, SIAM J. Appl. Dyn. Sys. - This work was the first to rigorously propose using time-delays for constructing a Koopman operator (Hankel DMD).

Also, a minor point, but the authors reference Li et al. (2017) when talking about EDMD (Sec. 3.2), while the first paper to develop EDMD was Williams et al. (2015) (which the authors do cite earlier and later).

Other Strengths And Weaknesses:

**Strengths**
1. This paper achieves SOTA performance (even if the gains are at times relatively small). The comparison with other methods was convincing.
2. The improvement in decreasing model size was significant and is an exciting outcome of the work.
3. Fig. 1 was helpful for understanding the method.
4. Comparing how the use of different numbers of branches affected performance (Fig. 4, Tables 3 and 4) was interesting.

**Weaknesses**
1. As noted above, there are several claims that I think need to be clarified/strengthened.
2. As noted above, there are several important papers that need to be cited and discussed.
3. As noted above, clarity is needed when discussing the "equivalence" with the linear RNNs.

Other Comments Or Suggestions:

**Minor points**
1. (very minor) The authors write "time series" throughout the paper, but the title has "time-series".
2. (very minor) The authors write that "early methods used RNNs and CNNs" (page 8), but then cite papers from 2020/2024.

Questions For Authors: I do not have any other questions for the authors.
Code Of Conduct: Affirmed. Overall Recommendation: 3
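The driven-versus-autonomous distinction raised under Theoretical Claims can be sketched numerically. This is a minimal illustration with made-up dimensions, not code from the paper: once a driven linear RNN is seeded with a state and receives only zero inputs, it reduces exactly to autonomous propagation of the observables, which is the reading under which the reviewer accepts the equivalence.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
W = 0.5 * rng.standard_normal((d, d))   # state-transition matrix (Koopman approx.)
U = rng.standard_normal((d, d))         # input matrix of the driven linear RNN

# Driven linear RNN: h_{k+1} = W h_k + U v_k   (inputs at every step)
# Autonomous Koopman propagation: g_{k+1} = W g_k   (no inputs)
h = rng.standard_normal(d)              # initial observable vector g(x_0)
g_auto = h.copy()
for k in range(10):
    g_auto = W @ g_auto                 # Koopman: evolve observables forward
    h = W @ h + U @ np.zeros(d)         # driven RNN fed all-zero inputs

# With zero inputs the driven RNN coincides with autonomous propagation
assert np.allclose(h, g_auto)
```

With any nonzero input sequence the two trajectories diverge, which is exactly the reviewer's caveat about viewing RNNs as systems with control.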
Rebuttal 1: Rebuttal: ## Strengthen Claims with more Convincing Evidence ### Branch Decomposition - frequency band specialization We appreciate the reviewer’s insightful feedback and agree that "specialization" is too strong a claim. We consider that Fig. 3 provides evidence that different branches place more emphasis on some frequency bands (e.g., Branch 1 has a greater relative emphasis on the low-frequency component). But there is no evidence of specialization. ### ILI performance Regarding the performance of SKOLR on ILI, we recognize the concern about the phrasing of "remarkable forecasting capability." While SKOLR achieves the best MSE among the tested models, we agree that this is a mischaracterization, given that the error is still high. We will change the text to more accurately state that SKOLR outperforms the baseline methods, with significant error reduction for the shorter horizons. ### Ablation Study - learnable frequencies The observation on the ablation study is insightful. We agree with the general observation that the learnable decomposition is not a critical component and does not lead to a major improvement. On the other hand, it does lead to a relatively consistent 2-5\% improvement, and it adds few parameters to learn. We will modify the text to downplay the significance of the learnable frequency decomposition, and explain that while it does bring some benefit, SKOLR performs well without it. This suggests that the key aspect of the multiple branches is to allow the MLPs to learn different embeddings. ## Theoretical Claims - linkage between Koopman operator and RNN Our claim is not as strong as "the Koopman representation can be equivalent to a linear RNN". In the abstract and introduction, we were careful to write that we "establish a connection between Koopman operator approximation and linear RNNs". In Section 3.3, after equation 8, we write that "this structure can be implemented as a linear RNN". 
**So, our conclusion/claim is more that a linear RNN (with MLP embeddings) provides an efficient and effective mechanism for implementing a specific structured Koopman operator approximation (involving an augmented state incorporating past observations).** Our equivalence specifically refers to the scenario where we construct an extended state representation using lagged observations (Section 3.3), which effectively transforms the Koopman operator approximation into a form where Eq. 5 and Eq. 8 become structurally identical. Since your review is thorough, indicating careful attention to the paper, it seems that we haven't made it clear enough that our theoretical claim (which is indeed more of an algorithmic development) is more limited. We will modify the language to make it very clear. This is perhaps also an issue we would raise with the qualitative connection in Orvieto et al. (2023) between Koopman operator analysis and an MLP + linear RNN; when you write equations, the connection is not quite so simple, nor as general, as suggested in that work. ## References Thank you very much for highlighting these essential references. We will incorporate them to properly situate SKOLR within the Koopman literature. We'll add Mezic (2005) alongside Koopman (1931) as foundational work on Koopman mode decomposition, and Rowley et al. (2009) when discussing DMD's relationship to Koopman theory. We'll discuss connections between our spectral decomposition approach and Kutz et al. (2015), whose frequency-based decomposition shares similarities with our branch structure. Importantly, we'll acknowledge Arbabi and Mezic (2017) when discussing time-delay embeddings in Section 3.3, as their Hankel DMD provides theoretical support for our extended state construction. We'll also correctly attribute EDMD to Williams et al. (2015) in Section 3.2, while maintaining the Li et al. (2017) reference for neural network extensions. 
These additions will provide proper attribution and better contextualize our contributions within the evolving landscape of Koopman-based methods. ## Weaknesses The three weaknesses are addressed in the comments above. ## Suggestions Thank you for these careful observations. We will standardize our usage of "time series" (without hyphen) throughout the paper for consistency. We also appreciate you flagging the timeline in our literature review. We will revise this section to include pioneering works such as [1]-[3] and properly reflect the development of time series forecasting methods. [1] Hochreiter, S., \& Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780. [2] Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. (2014). Empirical evaluation of gated recurrent neural networks on sequence modeling. [3] Binkowski, M., Marti, G., & Donnat, P. (2018, July). Autoregressive convolutional neural networks for asynchronous time series. --- Rebuttal Comment 1.1: Comment: I thank the authors for their thorough and respectful responses to my review. It is clear they are willing to engage in the review process. All of my comments/questions have been sufficiently addressed and I believe the paper, with the changes the authors detail that they will make, will strengthen the paper. At the moment I will keep my score of a 3, but I feel more confident in this assessment and more confident in being willing to support this paper. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful feedback and for acknowledging our engagement with the review process. We're pleased that our responses have addressed your comments and questions satisfactorily. We appreciate your continued support of our paper. The revisions we've outlined will be implemented carefully to strengthen the paper as you've noted. 
Your constructive criticism has been invaluable in helping us improve our manuscript, and we're grateful for the time and expertise you've contributed to this process.
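The structural identity between the recurrent form (Eq. 5) and the unrolled Koopman-style form (Eq. 8) discussed in this thread can be checked with a minimal numerical sketch; `W` and the embedded measurements below are random stand-ins, not the paper's learned quantities.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k_steps = 3, 8
W = 0.6 * rng.standard_normal((d, d))       # stand-in transition matrix
g_y = rng.standard_normal((k_steps, d))     # embedded measurements g(y_1..y_k)

# Recurrent form (Eq. 5 style): h_k = W h_{k-1} + g(y_k), with h_0 = 0
h = np.zeros(d)
for s in range(k_steps):
    h = W @ h + g_y[s]

# Unrolled form (Eq. 8 style): h_k = sum_s W^{k-s} g(y_s)
h_unrolled = sum(np.linalg.matrix_power(W, k_steps - 1 - s) @ g_y[s]
                 for s in range(k_steps))

assert np.allclose(h, h_unrolled)
```

The two forms compute the same state, which is the algebraic core of the "linear RNN implements a structured Koopman operator approximation" claim.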
Summary: This paper introduces a novel approach that connects Koopman operator approximations with RNNs for efficient time-series modeling. Koopman operator theory provides a linear representation of nonlinear dynamical systems but is typically infinite-dimensional, making it challenging to apply directly. To address this, the authors propose a structured approximation of the Koopman operator and establish its equivalence to linear RNN updates. Based on this insight, they develop SKOLR, a model that integrates learnable spectral decomposition of input signals with multilayer perceptrons (MLPs) as measurement functions and a highly parallelized linear RNN stack to implement the structured Koopman operator. ## update after rebuttal Sorry for the last-minute response. The authors have addressed most of my concerns. I have updated my score to "weak accept" accordingly. Claims And Evidence: The claims in the paper are generally well-supported by clear and convincing evidence. However, there are still some problematic claims as follows: * High efficiency: the authors claim in their paper that the proposed architecture is computationally efficient. However, they only provide experiment results on the ETTm2 and Traffic datasets (in Fig. 5). The authors should include results for other datasets. Moreover, including a theoretical analysis of computational complexity could strengthen this claim. The authors also mention that the branch decomposition makes the linear RNN chains highly parallel. I suggest that the authors should also explain how they implement the parallel computing. * Exceptional performance: the authors claim that this architecture delivers exceptional performance. However, I do not see showcases in the paper. The authors should also present showcases to demonstrate that their model is effective in capturing complex temporal patterns (see line 366). Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper are well-suited for the problem. 
The authors bridge a gap between Koopman operator theory and RNNs to achieve both efficiency and performance. The evaluation criteria (ECL, Traffic and other benchmarks & metrics) are widely used in time series forecasting evaluation. Theoretical Claims: I have checked the correctness of proofs for theoretical claims. If the authors could also include a theoretical analysis of computational complexity (discussed above), it would be better. Experimental Designs Or Analyses: I have checked the soundness/validity of experimental designs and analyses. The authors provide results on widely used time series forecasting benchmarks and nonlinear dynamical systems to support the performance claims, and provide an efficiency comparison on ETTm2 and Traffic to support the efficiency claim. As discussed above, more showcases should be included and the efficiency gains should be analyzed on more datasets. Supplementary Material: I have checked the supplementary material, which includes their code implementation. Relation To Broader Scientific Literature: The key contributions of this paper are related to the application of physics-informed methods (Koopman operator theory in this paper) in time series. The proposed decomposition and RNN stack improve computational efficiency and accuracy compared to previous Koopman-based methods ([1] [2]). [1] Koopa: Learning Non-stationary Time Series Dynamics with Koopman Predictors. [2] Koopman neural forecaster for time series with temporal distribution shifts. Essential References Not Discussed: There is some research incorporating Koopman theory into RNNs and the authors should briefly discuss the difference and novelty. [1] Recurrent neural networks and Koopman-based frameworks for temporal predictions in a low-order model of turbulence Other Strengths And Weaknesses: Strengths: * Writing: This paper is well written and the figures express the authors' ideas clearly. After reading this paper, I could easily understand the authors' idea. 
The work is well written, providing sufficient relevant background knowledge on Koopman operator theory, also for readers who are unfamiliar with this theory. * Strong performance in benchmarks: SKOLR outperforms or matches state-of-the-art models on multiple standard forecasting datasets. The model also demonstrates superior performance in nonlinear dynamical system modeling (e.g., Pendulum, Duffing, Lotka-Volterra, Lorenz ’63), validating its effectiveness beyond traditional forecasting tasks. Weaknesses: * Clarity: While the learnable frequency decomposition improves performance, it is not clearly justified why this approach is superior to other feature extraction methods. The paper lacks an in-depth exploration of why the Koopman-based linear RNN structure generalizes well to different datasets beyond empirical results. * Theoretical analysis: the essence of the deep Koopman method lies in the spectrum of the Koopman operator, because the eigenvalues determine the model's behavior during long-term evolution. However, the authors did not provide any analysis or visualized results regarding the eigenvalues. Other Comments Or Suggestions: I am willing to raise my score if the authors can address my questions. Questions For Authors: 1. I noticed that in Fig. 1, the Encoder only utilizes $x_1$ to derive $z_i$ for $i=1,2,...,L$; is it a typo? 2. Will this architecture suffer from the error accumulation that usually occurs in RNNs? How can this problem be solved? Could the authors provide some showcases for long-context prediction? Code Of Conduct: Affirmed. Overall Recommendation: 3
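The reviewer's point that the operator's eigenvalues govern long-term behavior can be illustrated with a short sketch; a random matrix rescaled to spectral radius 0.9 stands in for a learned transition matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6
A = rng.standard_normal((d, d))
# Rescale so the spectral radius is 0.9: all eigenvalues inside the unit circle
W = 0.9 * A / np.max(np.abs(np.linalg.eigvals(A)))

eigvals = np.linalg.eigvals(W)
assert np.all(np.abs(eigvals) < 1.0)     # stable operator

# Under repeated application, any state decays: long-term evolution is stable,
# so perturbations (and accumulated errors) are damped rather than amplified
h = rng.standard_normal(d)
for _ in range(200):
    h = W @ h
assert np.linalg.norm(h) < 1e-3
```

An eigenvalue outside the unit circle would instead amplify any state component along its eigenvector, which is why spectrum plots like the ones later provided in the rebuttal are informative.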
Rebuttal 1: Rebuttal: ## 1. High Efficiency ### 1.1 Computational efficiency: We provide additional results for all datasets. Please see [Tab.4](https://anonymous.4open.science/r/SKOLR-1D6F/Tab4_Computation.pdf). SKOLR achieves a compelling trade-off between memory, computation time, and accuracy. ### 1.2 Theoretical Complexity Analysis SKOLR achieves computational efficiency through structured design and linear operations. For a time series (length L, patch length P, embedding dimension D, N branches): - **Time complexity**: O(N × (L/P) × D²) from spectral decomposition, encoder/decoder MLPs, and linear RNN computation - **Memory complexity**: O(N × D²) for parameters and O(N × (L/P) × D) for activations Compared to a non-structured model with dimension D' = N×D: - Non-structured approach: O((L/P) × N²D²) time and O(N²D²) memory - SKOLR provides N-fold reduction in computational requirements SKOLR avoids the quadratic scaling with sequence length seen in transformers (O((L/P)² × D + (L/P) × D²) time, O((L/P)² + (L/P) × D) memory). ### 1.3 Parallel computing The N separate branches are processed independently (in our code), reducing time complexity to O((L/P) × D²). The linear RNN computation has no activation functions, so the hidden state evolution is: $h_k = g(y_k) + \sum_{s=1}^{L/P} W^s g(y_{k-s})$. This allows efficient matrix operations, reducing time complexity to $O(D^3 \log(L/P) + (L/P)^2 D)$ per branch. For time series where $L/P \ll D$, this is a significant speedup. ## 2. Exceptional Performance Claim Our claims are based on Tables 1 & 2 and Figures 2 & 4. SKOLR requires less than half the memory of any baseline and less than half of Koopa's training time (Fig. 4). SKOLR has the (equal-)lowest MSE in 17 out of 32 tasks and ranks second in a further 7 (Tab. 1). SKOLR significantly outperforms Koopa for synthetic non-linear systems (Tab. 2). 
Given the very low memory footprint, the low training time, and the impressive accuracy, we consider that the claim of "exceptional" performance is supported, but we can use a less strong adjective. We agree that the paper would be strengthened by more examples (showcases) demonstrating the capture of complex patterns. We do have one example in Fig. 3, but we will include more. Please see [Fig. 2](https://anonymous.4open.science/r/SKOLR-1D6F/Fig2_Compare.pdf) as an example. SKOLR's predictions have much lower variance than Koopa's and track the oscillatory behaviour better. ## 3. Relevant Reference We will cite it and add discussion. The paper evaluates the existing non-linear RNNs and Koopman methods for near-wall turbulence. It does not draw connections or develop a new method. ## 4. Clarity ### 4.1 Frequency Decomposition We agree that the motivation for our learnable frequency decomposition could be explained better. Please see the response to Reviewer kXAS. Learnable frequency decomposition offers three key advantages. (1) Each branch can focus on specific frequency bands, decomposing complex dynamics. (2) Learnable components adaptively determine which frequencies are most informative for prediction. (3) This approach aligns with Koopman theory, as different frequency components often correspond to different Koopman modes. ### 4.2 Generalization capability Constructing theoretical guarantees that the approach generalizes is challenging and would be a paper all on its own. We do provide experimental results for a large variety of time series that exhibit very different characteristics, ranging from strongly seasonal temperature series to pendulums exhibiting highly non-linear dynamics. We consider that the experimental evidence in the paper is strongly supportive of a capability to generalize to a diverse range of time series. ## 5. Theoretical analysis: eigenvalues This is an excellent suggestion. 
Our focus is forecasting, so our results and analysis concentrate on that task. However, the analysis of learned Koopman operator eigenvalues can indeed reveal important characteristics. We analyzed eigenvalue plots for the Traffic dataset (see [Fig.3](https://anonymous.4open.science/r/SKOLR-1D6F/Fig3_Eigenvalue.pdf)). We see that each branch learns complementary spectral properties, with all eigenvalues within the unit circle, indicating stable dynamics. Branch 1 shows concentration at magnitude 0.4, while Branch 2 exhibits a more uniform distribution. The presence of larger magnitudes (0.7-0.9) indicates capture of longer-term patterns. ## 6. Questions ### 6.1 It is a typo. We will revise Fig. 1 to show the proper time indexing across all components. ### 6.2 Long horizon prediction To show SKOLR's capability for longer horizons, we included experiments with extended prediction horizons (T=336, 720) in App. B.2 (Tab. 9). SKOLR maintains its performance advantage at extended horizons, with lower error growth rates. Please also see the response to reviewer kXAS concerning test-time horizon extension and [Tab. 1](https://anonymous.4open.science/r/SKOLR-1D6F/Tab1_ScaleUp.pdf). 
## Showcase We add examples for ETTm2 and Electricity to showcase SKOLR's branchwise behavior. The figures show distinct frequency specializations. In Fig. 4, SKOLR decomposes the time series into complementary frequency components, with Branch 1 focusing on lower frequencies while Branch 2 captures specific dominant frequencies. In Fig. 5, Branch 1 emphasizes lower frequency components, with pronounced content around 2 μHz representing weekly cycles and 12-13 μHz capturing daily patterns. In contrast, branch 2 gives greater emphasis to higher frequency components, particularly around 37 μHz, which corresponds to shorter intraday cycles occurring every 6-8 hours. The time-domain plots demonstrate how these spectral differences translate into signal reconstruction. Branch 1 includes more longer-term cycles (daily/weekly) while Branch 2 has more emphasis on intra-day structure. ## Eigenvalues Thank you for your question about the theoretical implications of our eigenvalue analysis. The Koopman modes are derived through eigendecomposition of the RNN weight matrices $M_i$. These modes represent fundamental dynamical patterns in the data. Each mode captures specific components of the time series. The stability and oscillatory behavior of each mode is determined by the corresponding eigenvalue’s position in the complex plane. The eigenvalue plots in Fig.6 (last rebuttal) show that our Koopman operator approximation maintains stability by ensuring all eigenvalues remain within the unit circle. To expand the analysis, we include a case study by constructing the prediction from a single dominant Koopman mode. As shown in Fig. 7, the dominant mode successfully captures the primary trend and approximate shape of the future values, though with some timing and amplitude differences. While the dominant mode identifies the primary oscillatory behavior, the complete dynamics require contributions from multiple Koopman modes. 
This observation also partially addresses the next question. Error divergence is characteristic of unstable systems. Since our learned system maintains stability with eigenvalues inside the unit circle, error effects tend to be dampened over time. ## Error Accumulation Thank you for raising this important question about error accumulation. Our SKOLR model addresses this common RNN limitation through multiple complementary approaches: **1. Patch-based Processing** Time series forecasting has two main prediction methods: (a) direct prediction: forecasts the entire horizon at once but is parameter-inefficient and cannot extend the prediction horizon after training; (b) recursive prediction: iteratively uses predictions as inputs but may suffer from error accumulation over long sequences. In SKOLR, we use a patching approach (App.A.2) to create an effective middle ground between these methods. Instead of operating at the individual timestep level, we work with patches of multiple timesteps, directly predicting all values within each patch while only applying recursion between patches. This dramatically reduces the number of recursive steps (e.g., from 720 to just 5 with patch length 144), controlling error accumulation. Additionally, this method reduces complexity to $O(L/P)$ from RNN timestamp-based approaches ($O(L)$) while maintaining the core principle of Koopman operator theory. **2. Experiments on long horizons with varying iteration steps** We have conducted an experiment for a longer horizon $L=720, T=720$ on the ETTm2 dataset. We vary the patch length $P=\{16, 24, 48, 144, 240\}$ of SKOLR to see the effect of the number of recursive steps. SKOLR with P=16 requires 45 recursive steps, while P=240 needs only 3, yet they maintain comparable error profiles in Fig.8. This empirically demonstrates that SKOLR's patch-based approach effectively controls error accumulation, even with increased recursion. Our experiments for T={336,720} (Appx. 
B.2, Table 9) use P=16. Even with an increased number of recursions, SKOLR emerges as the leading performer. **3. Comparison with Non-recurrent Models** We also compared SKOLR with iTransformer, which performs direct prediction without recurrence. Both models show similar patterns of error increase with longer horizons, indicating that this modest increase is inherent to all forecasting approaches when extending the prediction range, rather than being caused by recurrent error accumulation. **4. Visual Evidence** Our visualizations on horizon $T=720$ do not show obvious differences between the first and last patch predictions, demonstrating that error growth remains well-controlled even across extended forecasting horizons. See Fig.9 as an example. **These empirical results confirm that our architecture successfully overcomes the error accumulation limitations.**
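The patch-based recursion described in this reply can be sketched as follows; `predict_patch` is a toy stand-in for the trained model, used only to show how a T-step forecast needs T/P recursive calls rather than T.

```python
import numpy as np

# Toy sketch of patch-based recursion (illustrative model, not SKOLR itself):
# a model that maps a length-L window to the next P values, applied T/P times.
L, P, T = 8, 4, 12

def predict_patch(window):
    # stand-in for the trained model: here, just repeat the last observed value
    return np.full(P, window[-1])

history = np.sin(np.arange(L, dtype=float))
preds = []
window = history.copy()
for _ in range(T // P):                            # only T/P = 3 recursive steps
    patch = predict_patch(window)                  # predict P steps at once
    preds.append(patch)
    window = np.concatenate([window, patch])[-L:]  # slide the input window
forecast = np.concatenate(preds)
assert forecast.shape == (T,)                      # 12-step forecast, 3 recursions
```

With timestep-level recursion the loop would run T = 12 times; with patches it runs T/P = 3 times, which is the mechanism the rebuttal credits for limiting error accumulation.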
Summary: This paper proposes a new linear RNN for time-series forecasting inspired by Koopman operator theory. The problem setup is, for an autonomous dynamical system $\mathbf{x}\_{k+1}=F(\mathbf{x}\_k)$ with observable $\mathbf{y}\_k=h(\mathbf{x}\_k)$ for an unknown $h(\cdot)$, to condition on a partial trajectory $\mathbf{y}\_1,...,\mathbf{y}\_L$ of a fixed length $L$ and forecast $T$ steps of the future $\mathbf{y}\_{L+1},...,\mathbf{y}\_{L+T}$. The authors start with the standard form of a linear RNN which evolves its hidden state $\mathbf{h}_k$ conditioned on an input sequence $\mathbf{v}_k$, and consider the case where $\mathbf{v}_k$ is pointwise mapped from $\mathbf{y}_k$. This reduces the linear RNN to $\mathbf{h}\_k=\sum\_{s=1}^k\mathbf{W}^{k-s} g(\mathbf{y}\_s)$ for some $g(\cdot)$. Then, the authors propose to construct an approximate state as a stack of past $L$ measurements $\tilde{\mathbf{x}}\_k \coloneqq [\mathbf{y}\_{k-L+1},...,\mathbf{y}\_k]$ and learn the dynamics $\tilde{\mathbf{x}}\_k=\tilde{F}(\tilde{\mathbf{x}}\_{k-1})$ by jointly learning $g(\cdot)$ on $\tilde{\mathbf{x}}\_k$ and a matrix $\mathbf{M}$ which parameterizes the dynamics of $g(\tilde{\mathbf{x}}\_k)$, using a similar objective to the standard EDMD, but with a structured constraint on $\mathbf{M}$ such that the learned dynamics can be always written as $g(\tilde{\mathbf{x}}\_k) = \mathbf{h}\_k=\sum\_{s=1}^k\mathbf{W}^{k-s} g(\mathbf{y}\_s)$. Basically, this constraint means that $\mathbf{M}$ is a blockwise diagonal matrix with each block given as a power of some learnable matrix $\mathbf{W}$. The authors parameterize $g(\cdot)$ as trainable gating in frequency domain (FFT -> trainable gate per frequency -> IFFT) followed by pointwise MLP, at multiple "heads". Then at each head the structured matrix $\mathbf{M}$ is separately parameterized. The decoder is simply a positionwise MLP. 
The authors report state-of-the-art performance of the proposed architecture on 8 benchmark datasets, using $L=2T$, and 4 physical dynamical systems. Further analysis and an ablation study on the role of each component and hyperparameter are provided. ## update after rebuttal The following is my revised understanding of the paper: - The paper proposes a new linear RNN for time-series forecasting. The key components proposed are a block-diagonal state-transition matrix and learnable frequency-domain gating. The former is inspired by Koopman operator theory for deterministic nonlinear dynamical systems and the EDMD algorithm. The latter is inspired by existing works in time-series modeling leveraging frequency-domain analysis components. - Empirically, the proposed method overall shows competitive performance compared to the state-of-the-art Koopa, and at the same time has good time and space computational efficiency. - A weakness is that while Koopman operator theory is stated as the inspiration of the architecture, the frequency-domain gating is not motivated from its principles. Therefore it is not clear what the final model is implementing from the perspective of Koopman theory. Considering both the practical benefits and the weaknesses in the theoretical analysis, I updated my score accordingly. Claims And Evidence: The paper is focused on demonstrating state-of-the-art performance of the proposed method, and the main results are given in Table 1 and Figure 2, which support the main claim. Also, the claim on page 5 that multi-branch parameterization reduces the parameter count is supported by Section 4.3.1. On the other hand, limitations of the presented evidence are as follows: - The authors have not shown whether it is possible to use a longer forecast horizon at test time, as in Section 5.3 of Liu et al. (2023). 
The utility of the method would be more convincing if this capability were demonstrated, especially given that both Koopa and SKOLR are based on Koopman operator theory. - In Appendix A.2, the authors claim they adopt non-overlapping patch tokenization after trainable gating in the frequency domain. It is not clear how much this impacts the model efficiency compared to not tokenizing. Also, it is unclear whether this can be applied to other baselines (e.g., Koopa) as well. For a fair comparison of efficiency one would expect patch tokenization to be consistently applied (or not applied) to all methods. - The hyperparameter selection protocol in Appendix A.2 is not very clearly described. Is the validation split used for the grid search? Also, having some analysis of the sensitivity to the hyperparameter choice (e.g., Appendix C of Liu et al. (2023)) would make the results more convincing. - In Table 8, some of the numbers are also not consistent with the reported results in Liu et al. (2023), and the highlighted entries for the MASE metric for Quarter and Others are wrong. Liu et al. Koopa: Learning Non-stationary Time Series Dynamics with Koopman Predictors (2023) Methods And Evaluation Criteria: The evaluation criteria closely follow a prior work (Liu et al., 2023), which seems sound to me. Theoretical Claims: The paper does not make theoretical claims. Experimental Designs Or Analyses: Please see the above sections. Supplementary Material: I have reviewed the supplementary material. Relation To Broader Scientific Literature: The paper is related to the fields of linear RNNs, time-series forecasting, and Koopman operator theory. Specifically, the contributions are in the application of Koopman operator theory for designing a linear RNN for time-series forecasting. Essential References Not Discussed: The observation that Koopman operator theory can be connected to linear RNNs is not new; see, for example, Orvieto et al. 
Resurrecting Recurrent Neural Networks for Long Sequences (2023), specifically Appendix E.1.2. Other Strengths And Weaknesses: Strengths - The empirical results are promising, especially considering the computational efficiency of the proposed method (but see the concerns in the Claims And Evidence section). Weaknesses - The presentation of the methodology in Section 3 could be overall improved. Especially, while the authors have invested a significant amount of text describing Koopman operator theory, linear RNNs, and EDMD, the design of the encoder which uses trainable frequency-domain gating is not motivated concretely from the background theory and seems rather arbitrary. This is a key design choice with its own ablation study, but I do not see how this is motivated. Also, the mix-up of symbols (e.g., using $\mathbf{Y}$ both before and after the frequency-domain gating) makes the description quite confusing. Other Comments Or Suggestions: For the presentation in Section 3, I suggest that the authors first describe the whole pipeline (including the encoder and decoder) for a single branch ($N=1$), and then describe the multiple-branch case. This is what is often done in sequence modeling papers using multi-head architectures. Questions For Authors: - In Section 4.2, are there particular reasons to refer to these 4 systems as non-linear systems, given that the dynamics in Table 1 are also non-linear? - In Figure 3, what parts of the figure do (a) and (b) correspond to? Code Of Conduct: Affirmed. Overall Recommendation: 3
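The encoder pipeline summarized in this review (FFT, trainable gate per frequency, IFFT, then a per-branch pointwise MLP) can be sketched with numpy; the random gates below stand in for the learned ones, and the MLP step is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
L, N = 64, 2                                  # window length, number of branches
y = rng.standard_normal(L)                    # a univariate input window

# Per-branch gate over rFFT bins (random here; learnable in the actual model)
gates = rng.uniform(0, 1, size=(N, L // 2 + 1))

Y = np.fft.rfft(y)                            # to frequency domain
branches = [np.fft.irfft(gates[n] * Y, n=L)   # gate each bin, back to time domain
            for n in range(N)]
# Each branch would then pass through its own MLP before the linear RNN (omitted)
assert all(b.shape == (L,) for b in branches)
```

Each branch thus sees the same window filtered through a different learned frequency response, which is what lets branches emphasize different frequency bands.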
Rebuttal 1: Rebuttal: ## Longer test-time forecast horizon We have now conducted experiments for increased test-time horizon. Please see [Tab. 1](https://anonymous.4open.science/r/SKOLR-1D6F/Tab1_ScaleUp.pdf). SKOLR has a recursive structure. Even if we train over a given horizon, we can recursively predict for a longer horizon. There is performance deterioration, but our results show it is not severe. SKOLR's performance compares favorably with Koopa's. ## Non-overlapping patch tokenization We apply patch processing before Linear RNN branches. This reduces complexity to O(L/P) from timestamps (O(L)). Our efficiency results *do* use this for baselines (PatchTST, iTransformer, Koopa). It slightly improves baseline performance. ## Hyperparameter selection + sensitivity We will clearly specify the hyperparameter selection protocol in App.A.2. We selected the optimal configuration based on MSE for the validation split. Our setup strictly separates training, validation, and test sets, ensuring no information leakage. Tab. 4 (paper) analyzes how branch number N and dimension D impact performance. Response [Tab. 3](https://anonymous.4open.science/r/SKOLR-1D6F/Tab3_PatchLen.pdf) provides new results of sensitivity to patch length. SKOLR exhibits little sensitivity to patch length P. We do not tune this in our experiments; we used P = L/6 for all datasets. ## Revise Tab. 8 We apologize for inconsistencies in Tab.8. We've verified all results against our original records and fixed the MASE metric issues for the Quarter and Others categories in the revised [Tab.8](https://anonymous.4open.science/r/SKOLR-1D6F/Tab8_ShortTerm.pdf). ## Relationship to [1] Orvieto et al. 2023 Thank you for highlighting this paper. We will modify the introduction: "In this work, we consider time-series forecasting, and establish a connection between Koopman operator approximation and linear RNNs, building on the observation made by [1]. 
We make a more explicit connection and devise an architecture that is a more direct match." In Related Work, we will add: "Orvieto et al. made the observation, based on a qualitative discussion, that the Koopman operator representation of a dynamical system can be implemented by combining a wide MLP and a linear RNN. We provide a more explicit connection, providing equations to show a direct analogy between a structured approximation of a Koopman operator and an architecture comprised of an MLP encoder plus a linear RNN. Although [1] observe this connection, their studied architecture stacks linear recurrent units interspersed with non-linear activations or MLPs. While excellent for long-range reasoning tasks, this departs from the architecture in their App. E. By contrast, our developed architecture does consist of (multiple branches of) an MLP encoder, a single-layer linear RNN, and an MLP decoder. It thus adheres exactly to the demonstrated analogy between Eq. (5) and (8) of our paper. Whereas our focus is time series forecasting, [1] target long-range reasoning. Although it is possible to convert their architecture to address forecasting, performance suffers because it is not the design goal." We recognize the importance of acknowledging [1], but we don't believe that its existence significantly diminishes our contribution. The connection observed by Orvieto et al. is qualitative; there are no supporting equations. In contrast, we develop an explicit connection by expressing a linear RNN in (5) and a structured Koopman operator approximation in (8). This explicit connection adds value beyond the qualitative insights. ## Presentation - Sec. 3 We apologize for notational confusion and will correct it. There are strong motivations for the frequency decomposition design choice. In classical time-frequency analysis, the value of adaptation to different frequencies has long been recognized. Wavelet analysis applies different filters at different frequency scales. 
In more recent forecasting literature, frequency decomposition has been shown to be highly effective in TimesNet (Wu 2023), Koopa (Liu 2023), and MTST (Zhang 2024). Low-frequency dynamics may be considerably different from high-frequency dynamics and are more easily learnt after disentanglement. We are motivated to allow for observable functions to be learned both in frequency and time, with explicit consideration of the frequency aspect. We will modify Sec.3 to describe the whole pipeline for a single branch, and then describe the multiple-branch case. ## Questions: Non-linear systems; Fig. label Tab. 1 signals are real system measurements, e.g., Electricity Transformer Temperature. We do not know the exact dynamics. The 4 systems in Sec.4.2 are commonly-studied, synthetic non-linear systems. These allow us to study a setting where it is important to model non-linear dynamics. We will write "Synthetic Non-linear Systems" to stress the synthetic nature. The left figure of Fig.3 corresponds to (a) and the right to (b). We will add clear labels. --- Rebuttal Comment 1.1: Comment: Thank you for providing the clarifications. The following is my revised understanding of the paper: - The paper proposes a new linear RNN for time-series forecasting. The key components proposed are block-diagonal state-transition matrix and learnable frequency-domain gating. The former is inspired by Koopman operator theory for deterministic nonlinear dynamical systems and the EDMD algorithm. The latter is inspired by existing works in time-series modeling leveraging frequency-domain analysis components. - Empirically, the proposed method overall shows a competitive performance compared to the state-of-the-art Koopa, and at the same time has a good time and space computational efficiency. I would like to see the other reviewers' comments on the author response before deciding my final score. 
--- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful feedback and accurate summary of our paper's key contributions and empirical findings. We greatly appreciate your careful consideration of our clarifications and your concise articulation of our work on the linear RNN with block-diagonal state-transition matrix and learnable frequency-domain gating. Regarding the review process: > Reviewer hUoy has expressed positive willingness to consider raising their score after we addressed their specific questions. This was mentioned in the "Other Comments or Suggestions" section. We have now provided a response to each question. > Reviewer fgpm has noted "more confidence in being willing to support this paper" following our detailed responses, which is encouraging. > Reviewer qExE has not yet responded to the clarifying questions we posed regarding several of their claims in their review. We thank you and all reviewers for your positive engagement throughout this process. We understand your desire to review other reviewers' comments before finalizing your score, and we respect this thoughtful approach to the evaluation process. We're pleased that our responses have helped address your questions, and we remain available should you need any further clarification as you review the comments from other reviewers.
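The non-overlapping patch tokenization discussed earlier in this thread can be sketched with numpy. This is an illustrative sketch only, not SKOLR's actual code: the embedding dimension and random weights are assumptions, with the patch length set to P = L/6 as stated in the rebuttal.

```python
import numpy as np

def patchify(series, P):
    """Split a length-L series into L // P non-overlapping patches.

    Feeding patches (rather than individual timestamps) to the linear
    RNN branches is what reduces the token count from O(L) to O(L/P).
    """
    L = series.shape[0]
    assert L % P == 0, "patch length must divide the series length"
    return series.reshape(L // P, P)

# Illustrative setup: L = 96 with P = L / 6, matching the stated protocol.
rng = np.random.default_rng(0)
L = 96
P = L // 6                                   # = 16
d_model = 8                                  # assumed embedding size
series = rng.standard_normal(L)
W_embed = rng.standard_normal((P, d_model))  # illustrative linear patch embedding
tokens = patchify(series, P) @ W_embed       # shape (L // P, d_model) = (6, 8)
```

A learned embedding would replace `W_embed`; the point here is only the shape bookkeeping: 96 timestamps become 6 patch tokens.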
Quadruple Attention in Many-body Systems for Accurate Molecular Property Predictions
Accept (poster)
Summary: The paper introduces MABNet, a machine-learning model designed to improve molecular property predictions by explicitly modeling four-body interactions. The model is designed to be computationally efficient and maintains E(3)-equivariance, ensuring consistency with physical symmetries. Experiments on the MD22 and SPICE datasets suggest that MABNet outperforms existing methods in predicting molecular energies and forces. Claims And Evidence: The claims are clear: 1. Introduction of an attention layer to model four-body terms in molecular conformations 2. Competitive performance on two relevant benchmarks. Given the importance of many-body interactions in determining molecular properties, the proposed architecture is sensible, and the reported experimental results are reasonable. Methods And Evaluation Criteria: The evaluation is based on well-established molecular property benchmarks (MD22 and SPICE), using metrics like Mean Absolute Error (MAE) for energy and force predictions. The comparison against multiple baselines ensures a reasonable assessment. Theoretical Claims: The paper is methodological and does not present any new theoretical claims. Experimental Designs Or Analyses: The experimental design follows a standard energy+forces regression paradigm. However, I didn't find any information concerning: 1. The model hyperparameters for the baselines. It would be good to compare models that are "close" either in terms of the number of weights or in terms of the embedding dimension. (Table 8 discusses the embedding dimension of MABNet only, right?) 2. Some timings for the other baselines. 3. Error bars are missing. They could come from multiple independent initializations of the networks; if that is too costly, it is also sufficient to report the error variance on the test set. This would help in understanding whether the superior performance of MABNet is statistically significant. 
Supplementary Material: I checked mainly the additional details on the experiments. Relation To Broader Scientific Literature: The work builds on advances in geometric deep learning, molecular property prediction, and many-body physics. It references prior methods, such as SE3Set, VisNet, or QuinNet. The "Related Work" Section provides enough context for understanding the paper. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths:** - Novel direct modeling of four-body interactions. - Strong empirical performance on molecular benchmarks. - Theoretical grounding in many-body physics and equivariance. **Weaknesses:** - Limited discussion on computational cost. - Limited discussion of the baseline hyperparameters. - While the paper is well written overall, I found the equations in Section 3 difficult to understand: the authors only explain what the q and k variables are, without mentioning what the "s", "a", or "v" variables are. Other Comments Or Suggestions: Throughout the paper, the architectures based on message-passing and attention are presented as complementary options. However, they are not, and it is not clear where MABNet stands. As far as I understand, MABNet can be classified as a specific form of graph attention, and I think that giving clearer context on the fundamental paradigms used to model molecules would improve the paper. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the detailed review and insightful comments. Below, we provide point-by-point responses.

> Limited discussion of computational cost.

R: We have added a detailed discussion of computational cost in the revised manuscript, along with timing comparisons against baselines. We have included timing information for key baselines to provide a better understanding of the computational cost of our method relative to others. The updated results will be included in the supplementary material.

| Methods | Memory Used | Training Time (mins) | Inference Time (mins) |
| --- | ----------- | ----------- | ----------- |
| ViSNet | 15962 MiB | 7.12 | 38.95 |
| MABNet (3-body) | 16542 MiB | 8.05 | 42.65 |
| MABNet (4-body) | 17400 MiB | 9.39 | 49.79 |

> Limited discussion of baseline hyper-parameters.

R: We have added a detailed description of the hyperparameters used for baselines in the revised manuscript and supplementary materials. Additionally, we have ensured that the comparisons involve models with similar embedding dimensions or comparable parameter counts.

| Parameter | MABNet | ViSNet | Equiformer |
| --- | ----------- | ----------- | ----------- |
| emb. dim. | 256 | 256 | 128 |
| layers | 9 | 9 | 6 |
| Param. | 12.3M | 9.8M | |

> Missing error bars to assess the statistical significance of MABNet’s performance.

R: We have conducted multiple independent runs of MABNet with different initializations and computed error variances on the test set. These error bars are now included in the revised manuscript to demonstrate the statistical significance of MABNet’s performance.

| Test MAE | Energy | Forces |
| --- | ----------- | ----------- |
| Ac-Ala3-NHMe | 0.05336$\pm$0.00117 | 0.07725$\pm$0.00126 |
| AT-AT | 0.06375$\pm$0.00183 | 0.07323$\pm$0.00043 |

> Equations in Section 3 are difficult to follow.

R: We apologize for the lack of clarity in our equations. 
In the revised manuscript, we have provided clear definitions of all variables, including "s," "a," and "v," along with detailed explanations of their roles in the four-body attention mechanism. This should make Section 3 more accessible to readers. We hope these revisions address your concerns and further strengthen our manuscript. Thank you again for your valuable feedback.
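The error-bar protocol described above (multiple independent initializations, then mean ± sample standard deviation of the test MAE) can be sketched generically. The per-seed numbers below are placeholders, not results from the paper:

```python
import numpy as np

def mae(pred, target):
    """Mean absolute error, the metric used for energies and forces."""
    return float(np.mean(np.abs(pred - target)))

def summarize_runs(test_maes):
    """Mean +/- sample standard deviation across independent seeds."""
    x = np.asarray(test_maes, dtype=float)
    return x.mean(), x.std(ddof=1)

# Placeholder per-seed test MAEs for one target (illustrative values only).
per_seed = [0.0541, 0.0528, 0.0532]
mean, std = summarize_runs(per_seed)
print(f"{mean:.5f} +/- {std:.5f}")
```

Reporting the sample standard deviation (`ddof=1`) over seeds is what makes the $\pm$ figures in the table above interpretable as run-to-run variability.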
Summary: This paper introduces MABNet, an attention-based framework for molecular property prediction that explicitly models four-body interactions. The authors argue that current Graph Neural Networks (GNNs) and Transformers, while promising, struggle to directly capture complex many-body interactions, often relying on approximations. MABNet addresses this by enabling direct communication between atomic quartets through a new "quadruple attention" mechanism. This mechanism incorporates geometric features and E(3) equivariance, and employs spatial sparsity and quantum-inspired pruning to manage computational cost. The key algorithmic innovation is the many-body attention module within equivariant message-passing blocks. The paper claims state-of-the-art performance on challenging molecular property prediction benchmarks, MD22 and SPICE. Claims And Evidence: The claims of state-of-the-art performance on MD22 and SPICE benchmarks are presented, but the evidence is undermined by the lack of transparency regarding the codebase's origin. If the performance is primarily achieved by leveraging and slightly modifying existing VisNet code without proper attribution, then the claim of "state-of-the-art contribution" is significantly weakened. Most of the codebase is adopted from VisNet without any proper citation. The central issue is not whether the numbers are good, but whether the claimed algorithmic contribution is genuine and properly attributed. If the performance gains are largely due to the underlying VisNet framework, which is not acknowledged or properly cited in the methodology, then the claims are misleading and unsupported in terms of originality. A convincing demonstration of independent algorithmic contribution, beyond incremental modification of VisNet, is severely lacking. Methods And Evaluation Criteria: The evaluation criteria (MD22, SPICE benchmarks, MAE metrics) are standard and appropriate if the presented method were genuinely novel. 
However, the methodology section is deeply flawed by its complete omission of any mention or citation of VisNet or any related works, despite strong indications that the MABNet codebase is substantially derived from it. This omission is a critical ethical and scientific failing. The evaluation, therefore, becomes questionable because it's unclear what is truly being evaluated: a novel method, or a minor modification of VisNet presented as a major breakthrough. In addition, the authors didn't compare with any baselines regarding the efficiency analysis, which is crucial in many cases. Lastly, it is generally recommended to use rMD17 with standard splits rather than MD17 with random splits. Since the codebase provided by the authors includes the rMD17 dataset file, the dataset choice is questionable, and it is unclear why the authors chose to use MD17. Theoretical Claims: The authors claim the network to be E(3) equivariant message-passing, but didn't provide any theoretical proof of this. Experimental Designs Or Analyses: The ablation studies in Table 4, while showing some performance changes, are insufficient to demonstrate genuine algorithmic novelty independent of the underlying VisNet architecture or different modules discussed in the methodology section. There's no ablation study removing individual modules discussed in the methodology section. The computational cost analysis is also questionable as it lacks baselines. Supplementary Material: I reviewed the appendices included in the manuscript, which contain additional details on feature embedding, datasets, baselines, hyperparameters, and computational costs. Relation To Broader Scientific Literature: The paper discusses its relation to prior work in molecular property prediction, Graph Neural Networks, Transformers, and methods for modeling many-body interactions. However, the literature review could be more comprehensive. 
The authors should provide a broader overview of molecular property prediction works, even if they aren't used for direct comparison in the experiments. This would give readers a clearer understanding of the current landscape of the domain. Essential References Not Discussed: The paper discusses its relation to prior work in molecular property prediction, Graph Neural Networks, Transformers, and methods for modeling many-body interactions. However, the literature review could be more comprehensive. The authors should provide a broader overview of molecular property prediction, equivariant GNN, or Geometric Graph Transformer works, even if they aren't used for direct comparison in the experiments. This would give readers a clearer understanding of the current landscape of the domain and where this article fits in the big picture. Review [1] gives a great overview of geometric GNNs; several references are missing in this article, especially the Equivariant GNNs and Geometric Graph Transformers listed in its Fig. 4. [1] Han, Jiaqi, et al. "A survey of geometric graph neural networks: Data structures, models and applications." arXiv preprint arXiv:2403.00485 (2024). Other Strengths And Weaknesses: - The computational cost analysis in Table 10 shows that their 4-body attention approach is significantly slower than 3-body attention (2.35 it/s vs. 3.93 it/s for training), which could limit its applicability to very large molecular systems. - The paper could benefit from a more detailed discussion of how their approach might scale to even higher-order interactions (e.g., 5-body or beyond). - The presentation of the method could be improved, with more consistent notation and clearer symbolic systems throughout the paper. - The paper narrows the use of the network to many-body systems, while VisNet offers a more general framework that could potentially be adapted for many-body systems if needed. 
Other Comments Or Suggestions: - A visualization of the attention patterns learned by their model could provide additional insight into how it captures four-body interactions. - The presentation can be further improved. It is very hard to read and understand this article because of the presentation. There are notation inconsistencies throughout the manuscript (e.g., "Sotfmax" vs. "Softmax", inconsistent bold/non-bold for learnable matrices). - For MD17 evaluations, the authors should use the standard predefined splits on rMD17 dataset rather than random splits on MD17 dataset to ensure fair comparison with existing methods. The rMD17 dataset, a revised version of MD17, addresses the issue of numerical noise present in the original data, thus is preferred over the original MD17 dataset. Questions For Authors: 1. The term "many-body" is broad. Can authors clarify if MABNet only focuses on dihedral/torsional interactions or is it intended to capture broader quantum many-body effects? 2. How does the performance of MABNet scale with molecular size, particularly for systems larger than those in the MD22 and SPICE datasets? 3. Could you clarify the relationship between your implementation and VisNet? Specifically, which modules/sections in methodology are completely novel and which modules/sections are the same as previous work 4. Your method appears to be specifically designed for many-body systems, whereas VisNet is a more general framework. Could you discuss the trade-offs of your more specialized approach versus a more general framework that could potentially be adapted for many-body interactions? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the detailed feedback and constructive suggestions to improve our manuscript. Briefly, the reviewer has two main concerns: (1) the contribution of MABNet compared to VisNet, and (2) missing details for baselines, computational efficiency, and visualization of attention patterns. Below, we summarize the reviewer’s questions and provide point-by-point responses.

> The contribution of MABNet compared to VisNet

We respectfully disagree with the reviewer’s assertion and would like to clarify the difference between MABNet and VisNet. VisNet’s primary contribution is the **runtime geometry calculation (RGC) mechanism**, which extracts angular and dihedral torsion information using vectors that connect the target node to its neighbors. This mechanism relies on pairwise interactions (message passing) to simulate higher-order (three-body, four-body) interactions. In contrast, MABNet introduces a novel **many-body attention mechanism**, which directly enables communication among multiple atoms (e.g., four atoms involved in a dihedral angle). For a dihedral angle, MABNet computes attention scores directly among the four atoms involved. This allows their features to interact directly, which results in a single joint update for the features of all four atoms. Additionally, we have included ablation studies in the revised manuscript to validate the impact of these modules. MABNet shows a significant improvement (32.9% in energy MAE) over VisNet on the MD22 dataset.

| MD22 (Ac-Ala3-NHMe) | MAE |
| --- | -------- |
| VisNet | 0.079 |
| MABNet (2-body) | 0.072 |
| MABNet (3-body) | 0.061 |
| MABNet | **0.053** |

> Q1. 
The broader quantum many-body effects While our current implementation of MABNet focuses on explicitly modeling four-body interactions (e.g., dihedral/torsional interactions), the underlying many-body attention mechanism is designed to be flexible and can be extended to capture broader many-body effects, including higher-order interactions (e.g., five-body or beyond). > Q2. The performance of the MABNet scale with molecular size The computational complexity of MABNet is O(N × N × E), where N is the number of atoms and E is the number of edges in the molecular graph. By controlling the number of combinations during many-body attention computation, MABNet can be efficiently scaled to larger molecular systems. We would include additional experiments on larger molecular systems to demonstrate the scalability of MABNet. > Q3. Module Comparison between MABNet and VisNet While MABNet shares conceptual similarities with VisNet in *graph construction, feature embedding, and message-passing techniques*, the core interaction mechanism is fundamentally different: 1. VisNet relies on pairwise interactions and **runtime geometry calculation (RGC)** to indirectly simulate higher-order interactions. MABNet introduces a novel **many-body attention mechanism** that directly models four-body interactions by enabling simultaneous communication among multiple atoms (e.g., four atoms in a dihedral angle). 2. VisNet considers pairwise interactions between **all atom pairs**. MABNet, by contrast, requires considering combinations of atomic groups. To manage computational cost, MABNet restricts **the number of many-body combinations** (e.g., N × N × E, where N is the number of atoms) while maintaining high accuracy and efficient computations. > Q4. The generalizability of MABNet MABNet is explicitly designed to model four-body interactions, but it can also capture two/three-order interactions effectively. 
Results on MD17 and QM9 show that MABNet achieves SOTA performance on small molecular systems, while its ability to handle high-order interactions gives it a significant advantage on larger molecular systems. Thus, MABNet is a general framework that is effective across molecular sizes.

> S1. The visualization of the attention patterns

We agree that visualizing the attention patterns enhances interpretability. We will include visualizations of the learned attention patterns, showing how MABNet captures meaningful four-body interactions.

> S2. The performance on the rMD17 dataset

We acknowledge the reviewer’s concern regarding the use of random splits in our original MD17 evaluation. To address this, we have re-evaluated MABNet on the rMD17 dataset. The updated results, included in the revised manuscript, show that MABNet maintains competitive performance and further validate its robustness.

| rMD17 | Azobenzene | Paracetamol |
| --- | ------- | ------- |
| VisNet | 0.0156 | 0.0258 |
| MABNet | **0.0153** | **0.0252** |

> S3. Efficiency analysis

We have included timing information for key baselines in the supplementary material.

| Methods | Training (mins) | Inference (mins) |
| --- | ----------- | ----------- |
| ViSNet | 7.12 | 38.95 |
| MABNet 3-body | 8.05 | 42.65 |
| MABNet 4-body | 9.39 | 49.79 |
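The O(N × N × E) combination count referenced in this rebuttal can be illustrated with a toy enumeration of edge-anchored four-atom groups. This is a hedged counting sketch, not MABNet's implementation; a real model would further prune these combinations, e.g. with spatial cutoffs:

```python
from itertools import product

def four_body_groups(n_atoms, edges):
    """Enumerate quartets (i, j, k, l) anchored on a bonded edge (j, k).

    With N atoms and E edges there are N choices for i and N choices
    for l per edge, giving N * N * E quartets in total -- the
    O(N^2 * E) count quoted above, before any pruning.
    """
    atoms = range(n_atoms)
    return [(i, j, k, l)
            for (j, k) in edges
            for i, l in product(atoms, atoms)]

# A 4-atom chain 0-1-2-3 has E = 3 bonds and 4 * 4 * 3 = 48 quartets.
groups = four_body_groups(4, [(0, 1), (1, 2), (2, 3)])
print(len(groups))  # 48
```

Restricting the central pair (j, k) to bonded edges is what keeps the enumeration far below the naive N⁴ count of all atomic quartets.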
Summary: This paper introduces MABNet (MAny-Body interaction Network), a novel geometric attention framework designed to explicitly model four-body interactions for accurate molecular property predictions. MABNet aims to address the limitations of existing models that often implicitly approximate these complex interactions or handle at most triplets of nodes. The paper demonstrates that MABNet achieves sota performance on challenging benchmarks such as MD22 and SPICE. Claims And Evidence: - Explicit modeling of 4-body interactions - there's convincing evidence - sota performance on MD22 and SPICE - results demonstrate this - Efficient handling of higher-order interactions -While the complexity is stated as O(|N|^2 · |E|), the ablation study in Section 5.4 and the computational cost analysis in Appendix B.2 (Table 10) indicate that the increase in computational cost compared to methods considering fewer-body interactions is manageable Methods And Evaluation Criteria: - MD22 and SPICE datasets were chosen as they have larger graphs and are known benchmarking datasets in this field - which makes sense. Theoretical Claims: N/A Experimental Designs Or Analyses: - [Table 2] Compares results with previous sota approaches on MD22 benchmark - [Table 3] Compares results with previous sota approaches on SPICE benchmark - [Table 4] Ablation study demonstrating the impact of equivariance, 4-body attn w/ different cutoffs - [Table 10] Computational cost comparison across the ablations All of these experiments make sense. Supplementary Material: Yes, I reviewed the Supplementary Material. - Table 8: The hyperparameters chosen across datasets are consistent and that's not the driving factor for accuracies. 
- Table 10: To look into the Memory, Training, Inference costs Relation To Broader Scientific Literature: - Prior works have encoded triplets for attention explicitly [TGT, Graphormer]; however, this work extends it to 4-body attention and makes it scalable with cutoffs Essential References Not Discussed: - This work does not discuss the vast literature on geometric transformers and doesn't compare with them either [1, 2, 3, 4, 5], even though none of these explicitly model 4-body attention in the way this work does. Had this work benchmarked its results on the QM9 dataset, it would be easy to compare against the vast literature in this field. 1. Wang, Yusong, et al. "Geometric transformer with interatomic positional encoding." Advances in Neural Information Processing Systems 36 (2023): 55981-55994. 2. Kwak, Bumju, et al. "Geometry-aware transformer for molecular property prediction." arXiv preprint arXiv:2106.15516 (2021). 3. Shi, Yu, et al. "Benchmarking graphormer on large-scale molecular modeling datasets." arXiv preprint arXiv:2203.04810 (2022). 4. Luo, Shengjie, et al. "One transformer can understand both 2d & 3d molecular data." arXiv preprint arXiv:2210.01765 (2022). 5. Choukroun, Yoni, and Lior Wolf. "Geometric transformer for end-to-end molecule properties prediction." arXiv preprint arXiv:2110.13721 (2021). Other Strengths And Weaknesses: - A stronger comparison against prior literature on geometric transformers would solidify the importance of this work. Other Comments Or Suggestions: - There appears to be a typographical error in Equation (3) on line 19, where "Sotfmax" should likely be "Softmax". Questions For Authors: - How does the training and inference cost compare against other geometric transformers or models? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer’s careful review of our paper, positive feedback, and recognition of our work, particularly the novelty of our approach and the meaningfulness of our experiments. Below, we address the concerns point by point:

> Comparison with Geometric Transformers

R: We have included a discussion of the geometric transformer literature [1, 2, 3, 4, 5] in the revised manuscript. Additionally, we conducted benchmarking experiments on the QM9 dataset to provide a more direct comparison. Due to time constraints, we focused on two properties. Our method achieves an MAE of 0.038, compared to Transformer-M (0.041) and Geoformer (0.040). These results demonstrate that our approach achieves state-of-the-art performance, even on smaller molecular systems where higher-order multi-body interactions are less prominent. This highlights the strength and versatility of our method.

| QM9 | $\alpha$ | $\epsilon_{HOMO}$ |
| --- | ----------- | ----------- |
| Transformer-M | 0.041 | 17.5 |
| Geoformer | 0.040 | 18.4 |
| MABNet | 0.038 | 14.4 |

> Training and Inference Costs

R: In Section B.2 of the supplementary material, we have included a detailed discussion regarding the training and inference costs of our method. Additionally, we performed a comparative analysis of inference costs with Geoformer [1] on the MD22 dataset. Our results indicate that the improvement in prediction accuracy comes at a modest cost, with no significant degradation in training speed. This demonstrates the efficiency of our approach compared to similar models.

| Methods | Memory Used | Inference Time (mins) |
| --- | ----------- | ----------- |
| Geoformer | 14362 MiB | 33.43 |
| MABNet (3-body) | 16542 MiB | 42.65 |
| MABNet (4-body) | 17400 MiB | 49.79 |

We hope these revisions address your concerns and further strengthen our manuscript. Thank you again for your valuable feedback. 
--- Rebuttal Comment 1.1: Comment: Thanks for the additional results clarifying my concern on apples to apples comparison. In light of these results, I've updated my score. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable feedback. We sincerely appreciate your thoughtful comments and the constructive discussion, which have been very helpful in improving the quality of our paper. We will incorporate these suggested results and discussions in the revised manuscript to enhance clarity and completeness.
Understanding Sharpness Dynamics in NN Training with a Minimalist Example: The Effects of Dataset Difficulty, Depth, Stochasticity, and More
Accept (poster)
Summary: The paper investigates sharpness dynamics in neural network training using a minimalist deep linear network with one neuron per layer. It theoretically and empirically shows that this model captures progressive sharpening and edge-of-stability behaviors observed in practice. Key contributions include: 1. Identifying dataset difficulty $Q$, depth, batch size, and learning rate as factors influencing sharpness. 2. Deriving theoretical bounds on sharpness at minima, dependent on $Q$ and depth. 3. Demonstrating that SGD’s stochasticity reduces progressive sharpening compared to GD via layer imbalance dynamics. ## update after rebuttal The authors have done a good job explaining their work. My concerns about how to generalize this framework to non-linear models remain. Nevertheless, I believe this paper should be accepted, and I would retain my score. Claims And Evidence: Claim i: The minimalist model replicates sharpness dynamics (progressive sharpening, edge of stability). - Evidence: Figures 3–5 show sharpness trajectories in the model matching practical networks (e.g., Transformers). Claim ii: Sharpness at minima is bounded by $\sigma_1^2 Q^{(D-1)/D}/N$ (Theorem 4.6). - Evidence: Table 3 and Figure 7 show strong correlation between predicted and empirical sharpness. Claim iii: SGD increases layer imbalance $C(\theta)$, reducing sharpness (Theorem 5.6). - Evidence: Figure 6 shows $C(\theta)$ grows faster with smaller batch sizes/larger learning rates, aligning with reduced sharpness in Figure 3. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate and well-justified for studying sharpness dynamics: - Synthetic Data: Validates theoretical bounds under controlled settings (Sec 4.1). - Real Data: 2-class subsets of CIFAR10/SVHN balance tractability and realism. - Correlation Analysis: Tests generalizability of $\hat{S}_D$ to nonlinear networks (Fig 7, Table 5). Theoretical Claims: The theoretical claims are solid. 
I checked Sharpness Bounds (Theorems 4.3, 4.6): - Derived via NTK spectral analysis (Appendix A.2–A.3). - Key Step: Expressing the NTK matrix in terms of $\sigma_i$, $d_i$, $Q$, then bounding its spectral norm. I did not carefully check Layer Imbalance Dynamics (Lemma 5.2, Theorem 5.6) but it looks reasonable to me. The analysis assumes GF converges to global minima (Assumption 5.1). While empirically validated, formal convergence guarantees are not provided. Experimental Designs Or Analyses: - GF/GD/SGD trajectories: Figures 3–6 validate sharpness dynamics and layer imbalance. The experiments look reasonable to me. In Figure 4, I am puzzled about why RK4 is used. - Figure 7 is a good way to demonstrate the upper bound. It looks reasonable to me as well. Supplementary Material: Yes, I read all parts. Relation To Broader Scientific Literature: This work connects to: - Progressive sharpening (Cohen et al., 2021) and edge of stability (Damian et al., 2023). - SGD vs GD sharpness tradeoffs (Agarwala & Pennington, 2024). - Similar minimalist examples, like Zhu et al. 2023 and Kalra et al. 2023; here the model additionally captures convergence behavior. Extends prior work by analyzing dataset difficulty $Q$ and depth $D$ in a tractable model. Essential References Not Discussed: I am not aware of any. Other Strengths And Weaknesses: Strengths: - Novel insights into dataset difficulty and depth’s role in sharpness. - Clear empirical validation across architectures/activations (Table 5, Fig 7). Weaknesses: - Lack of nonlinear dynamics analysis (e.g., ReLU, attention). - It is a good model but feels like a marginal improvement over the "convex-hull" of existing works. Other Comments Or Suggestions: It would be very interesting if the authors could try to run some experiments that are closer to real-world usage, such as GPT-2-scale small experiments or proper image classification tasks. I would like to see how far this simple model can go. Questions For Authors: 1. 
How does dataset difficulty $Q$ generalize to non-linear networks? I know technically there is no difficulty defining it, but I wonder if the intuition still holds. 2. Why does $\hat{S}_D$ correlate with empirical sharpness even under non-balanced initializations (Fig 7)? Could the authors provide some intuition? 3. Could the authors provide any intuition on why the precision matters so much? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We deeply appreciate your constructive feedback. Below are our responses to your questions and suggestions.

## Extending our work to more realistic scenarios

Thank you for the insightful suggestion. To explore the applicability of our findings in more realistic settings, we conducted additional experiments training CNNs on the CIFAR-2 subset ($N=100$). The experimental setup follows the description in Section 5.1, with the only difference being that we fixed the random seed for weight initialization across runs. Our CNN architecture of depth $D=3$ consists of two convolutional layers (32 channels) with average pooling, followed by a fully connected layer. For the predicted sharpness $\hat S_D$, we set $D=3$ to match the depth of our CNN. The Pearson correlation between actual and predicted sharpness is summarized below:

| identity | tanh | SiLU | ReLU |
| --- | --- | --- | --- |
| 0.7335 | 0.5723 | 0.3561 | 0.4314 |

While the correlation decreases with non-linear activations, the identity activation still achieves a correlation above 0.7, suggesting that the intuition behind predicted sharpness may extend to CNNs. However, the behavior under non-linear activations is less clear. A deeper investigation into broader architectures remains an interesting direction for future work.

## How does $Q$ generalize to non-linear networks & Lack of nonlinear dynamics analysis

We would like to clarify that $Q$ is a quantity that depends solely on the dataset, not on the architecture of the neural network nor on the non-linearity of the activation function. Therefore, we interpret the question on $Q$ as asking whether a nonlinear dynamics analysis is possible. If our understanding is incorrect, please let us know. We now outline our initial attempt at analyzing non-linear networks. Assuming that $XX^\top$ is a diagonal matrix, we can analyze our minimal model with $D=2$. For such $X$'s, each $e_i$ is given as the standard basis vector.
The loss function is given by $L(\theta; X) = \frac{1}{2N} \lVert h(Xu) v_1 - y \rVert^2$, and since $h$ acts coordinatewise, $e_i^\top h(Xu) = h(e_i^\top Xu) = h(\sigma_i o_i)$. We consider a non-linear activation function $h$ that is injective, continuously differentiable, and has non-vanishing derivatives. The gradient flow dynamics for $o_i$ and $v_1$ are given by:
$$ \dot o_i = - \frac{1}{N}v_1 \sigma_i \left[\left(v_1h(e_i^\top Xu) - e_i^\top y\right) h'( e_i^\top Xu) \right] = - \frac{1}{N}v_1 \sigma_i \left[(v_1h(\sigma_i o_i) - d_i)\, h'(\sigma_i o_i) \right] $$
and
$$ \dot v_1 = - \frac{1}{N} \sum_{i=1}^r h(e_i^\top Xu) \left[e_i^\top \left(v_1h(Xu) - y \right) \right] = - \frac{1}{N} \sum_{i=1}^r h(\sigma_i o_i) \left(v_1h(\sigma_i o_i) - d_i \right). $$
We can derive the following equation:
$$ v_1 \dot v_1 = \sum_{i=1}^r \frac{\dot o_i}{\sigma_i} \frac{h(\sigma_i o_i)}{h'(\sigma_i o_i)} . $$
Let $g$ be an antiderivative of $\frac{h}{h'}$. Integrating both sides gives:
$$ \frac{1}{2} v_1^2 = C + \sum_{i=1}^r \frac{1}{\sigma_i^2} g(\sigma_i o_i). $$
Assuming convergence of the loss, we have:
$$ o_i = \frac{1}{\sigma_i} h^{-1}\left( \frac{d_i}{v_1} \right). $$
Therefore, at convergence, the following equation holds:
$$ \frac{1}{2}v_1^2 = C + \sum_{i=1}^r \frac{1}{\sigma_i^2} g\left(h^{-1}\left( \frac{d_i}{v_1} \right)\right). $$
This results in a highly non-linear equation in $v_1$ that requires further investigation to fully understand its solutions $v_1^\star$. As such, we have decided to leave this analysis for future work.

## Why does $\hat S_D$ correlate with empirical sharpness under non-balanced initialization?

For the sake of theoretical tractability, our analysis employs a balanced initialization. Nevertheless, we anticipate that similar trends would be observed under a non-balanced initialization scheme.
Moreover, owing to the network's relatively large width in our non-linear network experiments, we believe that the randomness in initialization is effectively "averaged out," leading to consistent results across different random seeds.

## Our minimalist model is a marginal improvement

We agree that the minimalist model, on its own, builds upon previous works and is not the core novelty of our study. Instead, our primary contribution lies in rigorously connecting this model to the phenomenon of progressive sharpening—a relationship that has been underexplored in theoretical analyses. Through this, we derive novel insights into how problem parameters influence the degree of progressive sharpening during training. We believe these results advance the understanding of sharpening dynamics in ways that prior studies have not addressed.

## Why does precision matter so much?

Please refer to our answer to Reviewer smTr.

## RK4 in Figure 4

Please refer to our answer to Reviewer 2e3Z.
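The conserved quantity derived in this rebuttal can be checked numerically. The sketch below is an invented diagonal-$X$ instance with $h=\tanh$, for which $g(z)=\sinh^2(z)/2$ is an antiderivative of $h/h'=\sinh z\cosh z$; under gradient flow, $\frac12 v_1^2 - \sum_i g(\sigma_i o_i)/\sigma_i^2$ should stay constant, and a small-step Euler integration shows it barely drifts.

```python
import numpy as np

# Invented instance. With X diagonal, o_i = u_i and sigma_i * o_i = (Xu)_i.
# Conserved under gradient flow: (1/2) v1^2 - sum_i g(sigma_i o_i) / sigma_i^2,
# with g(z) = sinh(z)^2 / 2 for h = tanh.
rng = np.random.default_rng(0)
sigma = np.array([1.0, 0.8, 0.5])       # diagonal of X
y = np.array([0.5, -0.3, 0.2])          # targets (d_i = y_i here)
N = len(y)
u = 0.1 * rng.standard_normal(N)        # first-layer weights
v1 = 0.7                                # second-layer scalar

def invariant(u, v1):
    return 0.5 * v1**2 - np.sum(np.sinh(sigma * u)**2 / (2 * sigma**2))

C0 = invariant(u, v1)
eta = 1e-4
for _ in range(20000):                  # small-step Euler approximation of GF
    pre = np.tanh(sigma * u)            # h(Xu)
    resid = v1 * pre - y
    grad_v1 = pre @ resid / N
    grad_u = v1 * sigma * (1 - pre**2) * resid / N
    u, v1 = u - eta * grad_u, v1 - eta * grad_v1
```

Because the invariant's gradient is exactly orthogonal to the loss gradient, the Euler discretization only breaks conservation at second order in the step size.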
Summary: This paper analyzes sharpness dynamics and characterizes the effect of dataset complexity, depth, and batch size (Phenomenon 1). Furthermore, the authors show that a simplified model, a deep linear network with unit width trained on multiple examples, captures Phenomenon 1, and they analyze it in detail. By analyzing the sharpness at convergence and properties of the gradient flow, they show that the rate of progressive sharpening increases with depth and data complexity. They also analyze the effect of batch noise and learning rate in the SGD setting.

Claims And Evidence: Claims:
* (Empirical) Phenomenon 1: Sharpness increases with (1) dataset size and (2) depth, and decreases at smaller batch sizes (Figures 1 and 2).
* (Empirical) The simplified model empirically captures Phenomenon 1.
* (Theoretical) The theoretical analysis (plus the empirical analysis in the tables) of the simplified model captures the effect of dataset complexity and depth.
* The effect of batch size and learning rate in the SGD case is captured in restricted settings (depth 2).

Methods And Evaluation Criteria: The methods and evaluations are solid.

Theoretical Claims: The theoretical claims are sound. I have verified the claims in the main text and also verified the dynamical equations for GF, GD, and SGD.

Experimental Designs Or Analyses: The experimental design and analyses are sound.

Supplementary Material: I have gone through most of the Appendix except Appendix A.2 and A.3, which I have only skimmed through to check for soundness.

Relation To Broader Scientific Literature: The paper improves our understanding of progressive sharpening by examining the effect of depth, dataset complexity, and batch size. The effect of smaller batch sizes reducing the rate of progressive sharpening has been known in prior work, such as Ref. [1].

[1] Agarwala, A. and Pennington, J. High dimensional analysis reveals conservative sharpening and a stochastic edge of stability.
arXiv preprint arXiv:2404.19261, 2024.

Essential References Not Discussed: I think the work adequately discusses existing works.

Other Strengths And Weaknesses:
Strengths:
* The paper is clearly written and the claims are justified well.

Weaknesses:
* For D > 2, only balanced initializations are examined.
* The GD/SGD analysis is restricted to D = 2.

Other Comments Or Suggestions: Suggestions:
* For Equation 3, it might be better to use some variable other than d, as it is already used for the input dimension.

Questions For Authors: Questions:
* Effect of precision on EoS: if high precision can cause the loss to diverge, then the theoretical analysis should suggest divergences. But typical theoretical analyses do not suggest that. Do the authors have some understanding of this?
* I am not familiar with step-size selection for Runge-Kutta methods. Why was the step size chosen to be proportional to the inverse sharpness and recomputed periodically?
* In Figure 2, do the authors understand why the sharpness decreases, then increases, then decreases, and finally increases? Typically, GD trajectories are known to show three types of dynamics [1]: (1) increase throughout, (2) decrease throughout, and (3) decrease then increase. Is it because of the Runge-Kutta step-size selection?

[1] Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability https://arxiv.org/abs/2103.00065
[2] Universal Sharpness Dynamics in Neural Network Training: Fixed Point Analysis, Edge of Stability, and Route to Chaos

Code Of Conduct: Affirmed.

Overall Recommendation: 4
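The reviewer's question about RK4 step-size selection connects to the method's linear stability region, a standard numerics fact: one RK4 step on $\dot y=\lambda y$ multiplies $y$ by $R(z)=1+z+z^2/2+z^3/6+z^4/24$ with $z=\lambda\eta$, and the real-axis stability boundary $|R(z)|=1$ sits near $z\approx-2.785$. A self-contained sketch verifying both facts:

```python
import numpy as np

def rk4_step(f, y, eta):
    # One classical Runge-Kutta step for dy/dt = f(y).
    k1 = f(y)
    k2 = f(y + eta * k1 / 2)
    k3 = f(y + eta * k2 / 2)
    k4 = f(y + eta * k3)
    return y + eta * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def rk4_factor(z):
    # Amplification of one RK4 step on dy/dt = lam * y, with z = lam * eta.
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

# For linear ODEs the step is exactly multiplication by R(z).
assert np.isclose(rk4_step(lambda y: -1.3 * y, 1.0, 0.5), rk4_factor(-0.65))

# Bisect for the real-axis stability boundary |R(z)| = 1.
lo, hi = -3.0, -2.0                  # |R(-3)| > 1, |R(-2)| < 1
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if abs(rk4_factor(mid)) > 1 else (lo, mid)
boundary = hi                        # ~ -2.785
```

For gradient flow linearized around a minimum, $\lambda$ is minus the sharpness, so a step size proportional to the inverse sharpness keeps $\lambda\eta$ inside this interval.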
Rebuttal 1:

Rebuttal: We appreciate your valuable feedback. Below, we address your comments.

## Balanced initializations for D>2 & GD/SGD analysis only on D=2

While it is true that the assumption of balanced initialization for $D > 2$ may appear restrictive, we believe it is justified for the following reasons:
- In line 728 of our paper, we implicitly use that condition to derive the GF limit point $v_1^\star$. Without it, the resulting equation becomes a polynomial of degree $D$, whose closed-form solution cannot be derived when $D\geq 5$. For the $2<D<5$ cases, the solutions are also hard to interpret. Although the roots might be solvable numerically, we prioritized presenting interpretable analytical bounds that highlight the effect of depth.
- Balanced initialization is also a widely adopted assumption in the literature on deep linear networks [2, 3].
- As shown in Figure 7, experimental results that do not satisfy balanced initialization align with our theoretical predictions.

In the case of the GD/SGD analysis, the balancedness constant $C$ does not remain fixed throughout training. As a result, even if training starts from a balanced initialization for $D > 2$, the balancedness may not be preserved during the optimization process. For this reason, we limited our analysis to the two-layer case for intuitive understanding.

## Effect of precision on EoS in typical theory

Few works mention precision as a meaningful parameter for the EoS phase. We found one work [1, Appendix D.3] that also mentions that EoS dynamics are sensitive to precision. To elaborate on the intuition, we briefly summarize Section 4 of [1], which gives a concise 2-dimensional description of gradient descent dynamics near the edge of stability by Taylor-expanding the loss gradient $\nabla L$ around a fixed reference point $\theta^\star$ (which can be understood as the first $\theta$ iterate that reaches sharpness $2/\eta$).
Two key quantities are defined:
- $x_t = u \cdot (\theta_t - \theta^\star)$ measures the displacement from $\theta^\star$ along the unstable direction $u$ (the top eigenvector of $\nabla^2 L(\theta^\star)$).
- $y_t = \nabla S(\theta^\star) \cdot (\theta_t - \theta^\star)$ quantifies the change in sharpness from its threshold value $2/\eta$.

We assume that progressive sharpening happens at a scale of $\alpha$: $-\nabla L(\theta^\star) \cdot \nabla S(\theta^\star) = \alpha > 0$. The dynamics cycle through three stages:
- **Progressive Sharpening:** For small $x_t$ and $y_t$, the sharpness increases linearly: $y_{t+1} - y_t \approx \eta \alpha \ (\alpha > 0)$. The update for $x_t$ is $x_{t+1} \approx -(1+\eta y_t) x_t$. Thus, once $y_t$ becomes positive, the factor $(1+ \eta y_t) > 1$ causes $|x_t|$ to grow. These updates rely on the first-order approximation of the gradient.
- **Blowup:** With $y_t > 0$, the multiplicative effect in the $x$-update leads to exponential growth in $|x_t|$, marking the blowup phase where the first-order approximation no longer suffices.
- **Self-Stabilization:** When $|x_t|$ is large, the second-order approximation of the gradient (which involves $\nabla^3 L$) yields $y_{t+1} - y_t \approx \eta \left( \alpha - \frac{\beta}{2} x_t^2\right)$, which provides negative feedback that reduces $y_t$ and stops further growth of $|x_t|$.

Note from the above that when $y_t < 0$ (i.e., before blowup), $|x_t|$ shrinks to zero **exponentially**. We hypothesize that higher numerical precision allows $|x_t|$ to remain small for longer, postponing blowup and self-stabilization and resulting in abnormally high sharpness. This is further supported by the experimental results discussed in our response to Reviewer 2e3Z.

## Why is the RK4 step size proportional to the inverse sharpness?

The RK4 step size is chosen based on its linear stability range.
For the ODE $\frac{dy}{dt} = \lambda y$ with $\lambda \in \mathbb{R}$, RK4 is stable if $-2.785 \leq \lambda \eta \leq 0$, where $\eta$ is the step size. Since gradient flow already carries the negative sign (so that $\lambda$ corresponds to minus the curvature), choosing a step size proportional to the inverse sharpness keeps $\lambda \eta$ within this range and ensures stable integration. Further rationale for our choice of RK4 can be found in our response to Reviewer 2e3Z.

## Sharpness dynamics in Figure 2

To clarify, Figure 2 illustrates GD and SGD dynamics (Appendix B.1), not RK4. The observed sharpness patterns are thus likely due to problem-specific characteristics of the dynamics. However, our work focuses on the end-to-end behavior, not intermediate dynamics, leaving this as an open question.

---
[1] Damian et al., 2023, Self-Stabilization: The Implicit Bias of Gradient Descent at the Edge of Stability, ICLR.
[2] Arora et al., 2018, On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization, ICML.
[3] Arora et al., 2019, A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks, ICLR.

---
Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. Most of my concerns have been resolved. As my initial score was already high, I would like to keep my score.

---
Reply to Comment 1.1.1: Comment: Thank you for your positive feedback and for taking the time to review our work. We are glad to hear that our rebuttal addressed most of your concerns. We are grateful for your thoughtful assessment throughout the review process.
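The three-stage cycle summarized in this rebuttal (progressive sharpening, blowup, self-stabilization) can be reproduced with a minimal simulation of the 2D map. The constants $\eta$, $\alpha$, $\beta$ and the initial point below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Minimal simulation of the 2D edge-of-stability map described above:
# x_t is the displacement along the top eigenvector, y_t the sharpness
# minus 2/eta. Constants are illustrative choices.
eta, alpha, beta = 0.01, 1.0, 1.0
x, y = 0.01, -0.5
xs, ys = [x], [y]
for _ in range(800):
    # x-update: -(1 + eta*y) x; y-update: eta*(alpha - beta*x^2/2)
    x, y = -(1 + eta * y) * x, y + eta * (alpha - beta * x**2 / 2)
    xs.append(x)
    ys.append(y)
xs, ys = np.array(xs), np.array(ys)
```

The trajectory first sharpens (y climbs past 0 while |x| shrinks), then |x| blows up exponentially, after which the negative feedback pulls y back down, as the rebuttal describes.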
Summary: This paper first shows that a "minimalist model"—one that has a single unit per layer with linear activations—can effectively capture a recently observed phenomenon called "progressive sharpening", where the sharpness of the loss increases as training progresses. This sharpness then stabilizes around $2/\eta$ ($\eta$ being the learning rate), a regime called the "edge of stability". The paper then defines several metrics, dataset difficulty and layer imbalance, which are used in theoretical bounds on the sharpness at the global minimum.

## Update after rebuttal

I am now more confident in recommending acceptance. The rebuttal text should be added to the paper or the supplementary material, and would improve its presentation and overall quality.

Claims And Evidence: The authors test the effect of dataset size, network depth, batch size, and learning rate on progressive sharpening, and the theory derived lines up well with their experiments. While the datasets used in the paper are common and with good reason (for example, SVHN is "harder" than CIFAR-10, and this paper's results align with that), I'm concerned that *only* using these two limits the generalizability of the results. Most of this paper's results rely solely on CIFAR-10 (Figures 1–4), so it's unclear how well they translate to other datasets, such as those with different modalities. Another issue along these lines is in Appendix C, where the authors seem to have run the experiment for a single seed, on a single, randomly generated dataset, with one set of parameters passed to the `minimal_data` function, and only using the minimal model with $D=2$. This limits how representative the results are.

Methods And Evaluation Criteria: This is an interesting paper. For how surprising some results are, the authors do a good job designing experiments to show the claimed effects. However, I believe the paper misses one aspect surrounding the variables it tests.
While the paper tests the effect of learning rate and batch size separately on the sharpness, recent work has shown that these two determine the dynamics of SGD *jointly* through their ratio instead; see [1–2] below. Therefore, this paper would also benefit from performing its experiments with the *ratio* as a parameter.

[1] Jastrzębski, S., Kenton, Z., Arpit, D., Ballas, N., Fischer, A., Bengio, Y., & Storkey, A. (2017). Three factors influencing minima in SGD. arXiv preprint arXiv:1711.04623.
[2] Smith, S. L., Kindermans, P.-J., Ying, C., & Le, Q. V. (2018). Don't Decay the Learning Rate, Increase the Batch Size. International Conference on Learning Representations.

Theoretical Claims: I checked the proofs in the main paper and did not find any issues. I skimmed the proofs in the Appendix, where the results seem to follow naturally from prior work.

Experimental Designs Or Analyses: I have detailed these concerns under Claims and Evidence.

Supplementary Material: I reviewed the entirety of the appendix.

Relation To Broader Scientific Literature: The contributions of this paper are significant. Although the paper's theory and experiments are currently limited to a small architecture, the authors effectively argue why it is still relevant. Moreover, this paper is an advancement in the field, and other papers in the future will build on it toward larger architectures.

Essential References Not Discussed: I did not find any crucially missing references, though the authors should cite the LOBPCG paper since they use and mention it explicitly:

[1] Knyazev, A. V. (2001). Toward the optimal preconditioned eigensolver: Locally optimal block preconditioned conjugate gradient method. SIAM Journal on Scientific Computing, 23(2), 517-541.
Other Strengths And Weaknesses: This paper could benefit from better presentation in a couple of instances:
* On page 4, under "Progressive Sharpening", the authors state: "In Figure 3 and Figure 4, we can observe the same trend described in Phenomenon 1 [...]". This would be a good chance for Figure 3 to also show the edge of stability in the same plots. Figure 4 already does this fairly well.
* In Section 4, the $o_i$ terms are used without definition.

Other Comments Or Suggestions: None.

Questions For Authors:
1. For Figure 4 (and Figure 1, as stated in Appendix B.1), why do you use the Runge-Kutta method? It's possible I misunderstood, but my reading of these figures was that they plot sharpness over optimizer iterations; it's curious not to use something like SGD.
2. For sharpness, why do you specifically choose the maximum Hessian eigenvalue instead of other established measures such as robustness under adversarial perturbations [1] or the volume of a polytope with a thresholded height [2]? These, especially the former, have been endorsed in the literature [3–4].

[1] Keskar, N. S., Mudigere, D., Nocedal, J., Smelyanskiy, M., & Tang, P. T. P. (2017). On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. International Conference on Learning Representations.
[2] Hochreiter, S., & Schmidhuber, J. (1997). Flat minima. Neural Computation, 9(1), 1–42.
[3] Dinh, L., Pascanu, R., Bengio, S., & Bengio, Y. (2017, July). Sharp minima can generalize for deep nets. In International Conference on Machine Learning (pp. 1019-1028). PMLR.
[4] Andriushchenko, Maksym, Francesco Croce, Maximilian Müller, Matthias Hein, and Nicolas Flammarion. "A Modern Look at the Relationship between Sharpness and Generalization." In International Conference on Machine Learning, 840–902. PMLR, 2023.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
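Since the review asks about the choice of sharpness measure, here is a hedged sketch of how the maximum Hessian eigenvalue is typically estimated in this literature: power iteration on Hessian-vector products (the paper itself uses LOBPCG, a more robust relative). The model and data below are invented for illustration.

```python
import numpy as np

# Invented two-layer unit-width model; sharpness = top Hessian eigenvalue,
# estimated via power iteration on finite-difference Hessian-vector products.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 5))
y = rng.standard_normal(20)
N = len(y)

def loss_grad(theta):
    # Analytic gradient of L = ||v * (X @ u) - y||^2 / (2N), theta = (u, v).
    u, v = theta[:-1], theta[-1]
    resid = v * (X @ u) - y
    gu = v * (X.T @ resid) / N
    gv = (X @ u) @ resid / N
    return np.concatenate([gu, [gv]])

def top_hessian_eig(theta, iters=1000, eps=1e-5):
    # Power iteration converges to the largest-magnitude eigenvalue; near
    # minima the Hessian is PSD-dominated, so this is the sharpness.
    w = rng.standard_normal(len(theta))
    for _ in range(iters):
        hv = (loss_grad(theta + eps * w) - loss_grad(theta - eps * w)) / (2 * eps)
        w = hv / np.linalg.norm(hv)
    hv = (loss_grad(theta + eps * w) - loss_grad(theta - eps * w)) / (2 * eps)
    return w @ hv   # Rayleigh quotient at the converged direction

theta = np.concatenate([0.3 * np.ones(5), [0.5]])
s = top_hessian_eig(theta)
```

The finite-difference HVP avoids materializing the full Hessian, which is what makes this measure tractable for real networks.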
Rebuttal 1:

Rebuttal: We appreciate your insightful feedback. We have provided figures for additional experiments in the supplementary PDF file [[Link]](https://anonymous.4open.science/r/understand_progressive_sharpening-E2F5/Experimental_Results_2e3Z.pdf). Below, we address your comments.

## Applicability of results across datasets

We have included additional experimental results on SVHN, shown in Figures 1–4 of the supplementary file, confirming that the observed trends hold across both CIFAR-10 and SVHN. To further test our results on a different modality, we conducted experiments on the Google Speech Commands dataset [3]. The results in Figures 7–10 and Tables 1–5 of the supplementary file again exhibit similar trends. These findings suggest that our observations capture general training dynamics rather than being specific to a particular dataset or modality.

## Additional Experiments on Precision (Appendix C)

We appreciate the reviewer's concern regarding the generality of our precision experiments in Appendix C. We have conducted additional runs on deep neural networks and a real-world dataset. The updated results are provided in Figures 5 and 6 of the supplementary PDF file.
- **Figure 5:** We trained our minimalist model (width=1, depth=2) on a CIFAR-2 subset ($N=300, \eta=0.01$). The results confirm that higher precision leads to a blowup. Additionally, Figure 5a shows that increased precision postpones the sharpness drop (catapult phase).
- **Figure 6:** We trained 3-layer SiLU-activated NNs (width=32) on a CIFAR-2 subset ($N=300, \eta=0.01$). While none of the settings exhibits a blowup, higher precision delays the catapult phase until extremely high precision ($\geq 256$). Beyond this threshold, the training dynamics become nearly identical, with completely overlapping curves. We suspect that a larger width prevents excessive progressive sharpening and blowup.

For further intuition, please refer to our response to Reviewer smTr.
## Ratio of batch size and learning rate as a key parameter

We appreciate the suggestion to analyze the ratio $\eta/B$ as a key parameter. While we did not explicitly frame our experiments in terms of this ratio, our results in Figure 2 already allow for an implicit comparison. For example, the cases with $B=125, \eta=2/400$ and $B=250, \eta=2/200$ share the same ratio but exhibit slightly different sharpness trends, suggesting that $\eta/B$ alone may not fully capture the observed phenomena. In large-batch settings ($B \approx N$), our results align with the GD Edge of Stability (EoS) literature [1], showing that sharpness evolution remains largely unchanged before EoS, implying that the ratio has little effect on sharpness dynamics in this regime. Given these observations, we are uncertain about the additional insights gained by treating the ratio as the primary parameter, but we welcome further discussion.

## Use of Runge-Kutta in Figures 1 & 4

Our choice of the Runge-Kutta (RK4) method follows the setup in [1] (Appendix I.5). As shown in their Figures 29, 31, 33, 35, 37, and 39, RK4 gradient flow closely tracks GD dynamics before EoS. This allows us to analyze training dynamics independently of the learning-rate schedule, providing a cleaner theoretical comparison. We will add the relevant citations in the revised manuscript.

## Choice of sharpness measure

Our sharpness definition follows [1] and [2], as our primary goal is to study the progressive sharpening observed in [1] and the transition into the EoS regime. While alternative measures offer valuable insights, they primarily focus on generalization rather than training dynamics. Since our study is centered on the optimization perspective, the maximum Hessian eigenvalue remains the most relevant metric.

## Minor concerns

> Cite the LOBPCG paper

Thank you for pointing this out. We will add the citation.

> Modify Figure 3 to also show the edge of stability. Figure 4 already does this fairly well.
We would like to clarify that Figure 4 does not depict the edge of stability. The observed sharpness saturation in Figure 4 is not due to the edge of stability but rather reflects sharpness dynamics under gradient flow, simulated using the Runge-Kutta (RK4) method instead of gradient descent. The edge of stability is explicitly presented in Figure 2 for the case where $B = N$, where sharpness saturates at 200 for a learning rate of $\eta = 2/200$.

> $o_i$ is used without definition

In Section 4, we specify that the first-layer weight $u$ can be decomposed as $u=\sum_{i=1}^r o_i w_i + \Pi_W^{\perp}u$, where $w_i$ are the right singular vectors of $X$. Thus, $o_i$ represents the component of $u$ along $w_i$.

---
[1] Cohen et al., Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability, ICLR 2021.
[2] Damian et al., Self-Stabilization: The Implicit Bias of Gradient Descent at the Edge of Stability, ICLR 2023.
[3] Warden, P., Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition, 2018.

---
Rebuttal Comment 1.1: Comment: Thank you for your detailed responses. The authors' rebuttal clearly addresses all of my concerns. Much of their rebuttal text to other reviewers would also be useful additions to the supplementary material. I am updating my score to recommend acceptance.

---
Reply to Comment 1.1.1: Comment: We appreciate your careful reading of our response. Your feedback has greatly contributed to improving the quality of our work. Also, we would like to thank you for raising the score. We will incorporate the discussion here into the revised manuscript.
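The definition of $o_i$ clarified in this rebuttal can be made concrete with a small sketch on invented data: the $o_i$ are the coefficients of $u$ on the right singular vectors of $X$, and the leftover component $\Pi_W^{\perp}u$ lies in the null space of $X$, so it never affects the predictions.

```python
import numpy as np

# Decompose u = sum_i o_i w_i + orthogonal part, where w_i are the right
# singular vectors of X (invented data, N=6 samples in d=10 dimensions).
rng = np.random.default_rng(4)
X = rng.standard_normal((6, 10))
u = rng.standard_normal(10)

U, S, Wt = np.linalg.svd(X, full_matrices=False)  # rows of Wt are the w_i
o = Wt @ u                          # o_i = w_i^T u, for i = 1..r
proj = Wt.T @ o                     # sum_i o_i w_i
residual = u - proj                 # Pi_W^perp u, invisible to the data
```

Since the residual is orthogonal to the row space of $X$, `X @ residual` vanishes (up to floating-point error).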
Summary: This paper employs a minimalist model to investigate the progressive sharpening phenomenon in the training of deep neural networks. Progressive sharpening is a widely observed phenomenon characterized by an increase in sharpness during training with gradient descent or stochastic gradient descent, before reaching a saturation point at the boundary of stability. The primary objective of this study is to explore the relationship between progressive sharpening and various problem parameters, such as dataset size, network depth, batch size, and learning rate. The minimalist model is a regression problem with a single neuron per layer and identity activation.

Claims And Evidence: The paper introduces two crucial quantities: the dataset difficulty, denoted $Q$, and the layer imbalance, denoted $C(\theta)$. It establishes a bound on the sharpness of the minimizer in terms of these two quantities under specific assumptions. The paper substantiates these claims through both proofs and empirical evidence.

Methods And Evaluation Criteria: The empirical methods appear to be sound and provide a valid basis for the verification of the theoretical results.

Theoretical Claims: The proofs are grounded in conventional eigendecompositions of the data matrix, which appear sound to me.

Experimental Designs Or Analyses: Empirical observations are systematically integrated throughout the paper to substantiate the theoretical assertions.

Supplementary Material: I have reviewed the appendices, which comprise supporting evidence and supplementary experiments.

Relation To Broader Scientific Literature: The studied phenomenon is closely related to ongoing research on the edge-of-stability phenomenon and the potential implicit regularization effect of SGD. The minimalist examples also appear to have drawn inspiration from several previous works [1, 2], which should be included in the literature review (see also the next question).
[1] Gunasekar, Suriya, et al. "Implicit bias of gradient descent on linear convolutional networks." Advances in Neural Information Processing Systems 31 (2018).
[2] Woodworth, Blake, et al. "Kernel and rich regimes in overparametrized models." Conference on Learning Theory. PMLR, 2020.

Essential References Not Discussed: I recommend that the paper conduct a more comprehensive literature review on the training dynamics of neural networks and the previous studies of gradient descent (GD) and stochastic gradient descent (SGD). While the authors have made it evident that their study focuses on the influence of problem settings on progressive sharpening, I believe it would be essential to discuss the significance of this sharpening (implicit regularization) and how it could be utilized to enhance training (sharpness-aware minimization, SGD). Additionally, as previously mentioned, I believe the connection between the proposed minimalist model and previous works should be further elaborated. To list a few:
(1) Implicit regularization of SGD: [4, 5, 6]
(2) SAM: [3, 7, 8]
(3) Connections to linear diagonal networks (and their variants): [1, 2, 5, 6, 9, 10]

[3] Long, Philip M., and Peter L. Bartlett. "Sharpness-aware minimization and the edge of stability." Journal of Machine Learning Research 25.179 (2024): 1-20.
[4] Wu, Jingfeng, et al. "Direction matters: On the implicit bias of stochastic gradient descent with moderate learning rate." arXiv preprint arXiv:2011.02538 (2020).
[5] HaoChen, Jeff Z., et al. "Shape matters: Understanding the implicit bias of the noise covariance." Conference on Learning Theory. PMLR, 2021.
[6] Ren, Yinuo, Chao Ma, and Lexing Ying. "Understanding the generalization benefits of late learning rate decay." International Conference on Artificial Intelligence and Statistics. PMLR, 2024.
[7] Andriushchenko, Maksym, and Nicolas Flammarion. "Towards understanding sharpness-aware minimization." International Conference on Machine Learning. PMLR, 2022.
[8] Foret, Pierre, et al. "Sharpness-aware Minimization for Efficiently Improving Generalization." International Conference on Learning Representations.
[9] Pesme, Scott, Loucas Pillaud-Vivien, and Nicolas Flammarion. "Implicit bias of SGD for diagonal linear networks: a provable benefit of stochasticity." Advances in Neural Information Processing Systems 34 (2021): 29218-29230.
[10] Cai, Yuhang, et al. "Large stepsize gradient descent for non-homogeneous two-layer networks: Margin improvement and fast optimization." Advances in Neural Information Processing Systems 37 (2024): 71306-71351.

Other Strengths And Weaknesses: The paper is generally well-written and easy to follow. Heuristic claims are frequently validated with empirical observations, and the theorems are rigorously mathematically constructed.

Other Comments Or Suggestions: See questions.

Questions For Authors:
- I am somewhat perplexed by the reasoning presented in Section 5.2. It is asserted that whenever $C(\theta) \leq 0$, the corresponding layer imbalance of both GD and SGD is inevitably bound to increase. However, as previously mentioned by the authors, $C(\theta)$ continues to rise even when $C(\theta)$ is positive, which is clearly not elucidated by Theorem 5.6. Could you provide a possible explanation for this discrepancy? In this context, how do the batch size $B$ and the step size $\eta$ influence the progressive sharpening process?
- The balanced condition (Assumption 4.5) appears to be quite stringent. Could you elucidate the underlying rationale for this assumption?
- As previously mentioned, how does progressive sharpening impact the generalization and the model's quality after training? Are there any practical implications of the findings for the actual training of neural networks?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
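The balanced condition the reviewer asks about (Assumption 4.5) relates to a standard conservation law of gradient flow on linear networks: the per-layer norm gaps are invariant over time, so whatever imbalance the initialization has is frozen in. A hedged sketch on an invented unit-width depth-3 instance (the paper's exact $C(\theta)$ may be defined differently):

```python
import numpy as np

# Gradient flow on f(x) = v2 * v1 * (u @ x) conserves v1^2 - ||u||^2 and
# v2^2 - v1^2; a balanced initialization sets these gaps to zero.
rng = np.random.default_rng(2)
X = rng.standard_normal((30, 4))
y = rng.standard_normal(30)
N = len(y)
u = 0.5 * rng.standard_normal(4)
v1, v2 = 0.8, -0.6                  # deliberately non-balanced start

def gaps(u, v1, v2):
    return v1**2 - u @ u, v2**2 - v1**2

c0 = gaps(u, v1, v2)
eta = 1e-4
for _ in range(20000):              # small-step Euler approximation of GF
    r = v2 * v1 * (X @ u) - y
    Xtr = X.T @ r / N               # shared vector in the u-gradient
    s = u @ Xtr                     # shared scalar in the v-gradients
    u, v1, v2 = u - eta * v2 * v1 * Xtr, v1 - eta * v2 * s, v2 - eta * v1 * s
c1 = gaps(u, v1, v2)
```

Because all three layer gradients share the same scalar factor, each gap's time derivative cancels exactly under gradient flow; the Euler discretization only breaks this at second order in the step size.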
Rebuttal 1:

Rebuttal: We appreciate your thoughtful comments. Below, we address your recommendations and questions.

## Comprehensive Literature Review & Implications

Thank you for highlighting relevant literature that we initially omitted. We agree that incorporating a more comprehensive review will strengthen our work.
- Linear diagonal networks: Our minimalist model shares similarities with diagonal linear networks in a sparse regression setting. [1] showed that SGD leads to solutions with better generalization than GD. Similarly, our results show that SGD induces less progressive sharpening than GD, leading to lower sharpness at convergence. Considering that lower sharpness correlates with improved generalization in diagonal linear networks [2], both lines of work unveil how stochasticity can help generalization.
- Potential practical implications for learning-rate scheduling: The study by [6] highlights that the catapult mechanism contributes positively to model generalization, and catapults can be induced by designing a proper learning-rate schedule. In light of this, predicting sharpness evolution can offer practical value when designing such schedulers.
- Connection between sharpness and generalization: SAM [3] was introduced under the hypothesis that minimizing sharpness improves generalization. Moreover, GD with a large learning rate has been shown to implicitly find flatter solutions [4], which often generalize better than those obtained with small learning rates. While these works suggest a correlation between sharpness and generalization, [5] showed that this relationship is data-dependent. The link between sharpness and generalization remains an active research area, though our study focuses on optimization dynamics rather than generalization.
Nevertheless, our findings on progressive sharpening may relate to generalization, as larger learning rates and smaller batch sizes yield less progressive sharpening, effectively regularizing toward lower-sharpness solutions. This also connects our analysis to the broader literature on the implicit bias of GD and SGD, including the works mentioned by the reviewer. ## Increase of $C(\theta)$ when it is positive We acknowledge that Theorem 5.6 may have caused confusion, especially when $C(\theta) > 0$. Our simplified version emphasized the effect of batch size and learning rate while avoiding complexity. Below is the complete result. In Appendix A.5, instead of applying the Cauchy-Schwarz inequality in line 847 (Eq. 23), we derive the following exact characterization: For SGD, $$ \begin{align*} &\mathbb{E}_P[C(\theta_\text{SGD}^+)] - C(\theta) \\ &= \underbrace{\frac{\eta^2}{N^2} [ - \Psi_1(\theta) C(\theta) + \Omega_1(\theta) ]}_{C(\theta_\text{GD}^+) - C(\theta)} + \frac{\eta^2(N-B)}{BN^2(N-1)}[- (\Psi_2(\theta) - \Psi_1(\theta)) C(\theta) + (\Omega_2(\theta) - \Omega_1(\theta))] \end{align*} $$ We denote $\Omega_1(\theta) \triangleq \sum_{i} \sum_{j>i} \left[ \sigma_i (z(\theta)^\top e_i) o_j - \sigma_j (z(\theta)^\top e_j) o_i \right]^2$, $\Omega_2(\theta) \triangleq N\sum_{i} \sum_{j>i} \lVert \sigma_i(z(\theta) \odot e_i) o_j - \sigma_j(z(\theta) \odot e_j) o_i \rVert^2$, and use $\Psi_1$, $\Psi_2$ from our theorem statement. Then, $C(\theta)$ increases for GD ($C(\theta_\text{GD}^+) - C(\theta) \geq 0$) if and only if $$ C(\theta) \leq \frac{\Omega_1(\theta)}{\Psi_1(\theta)} =: T_1(\theta). $$ Moreover, $C(\theta)$ increases for SGD faster than GD ($\mathbb{E}_P [C(\theta_\text{SGD}^+)] - C(\theta_\text{GD}^+) \geq 0$) if and only if $$ C(\theta) \leq \frac{\Omega_2(\theta) - \Omega_1(\theta)}{\Psi_2(\theta) - \Psi_1(\theta)} =: T_2(\theta).
$$ We numerically verified that $C(\theta) \leq T_1(\theta)$ and $C(\theta) \leq T_2(\theta)$ hold throughout GD/SGD training. The experimental results in the PDF [[link]](https://anonymous.4open.science/r/understand_progressive_sharpening-E2F5/Experimental_Results_47Zn.pdf) illustrate how $C(\theta)$ increases even when positive. Moreover, we observe the same dependence on learning rate $\eta$ and batch size $B$—specifically, $C(\theta)$ increases more with a larger learning rate or a smaller batch size, as long as $C(\theta) \leq T_1(\theta)$ and $C(\theta) \leq T_2(\theta)$ hold. We will incorporate this discussion into the revised manuscript. ## Underlying rationale for balanced condition Please refer to our response to Reviewer smTr. --- [1] Implicit Bias of SGD for Diagonal Linear Networks: a Provable Benefit of Stochasticity, NeurIPS 2021 [2] Implicit Bias of the Step Size in Linear Diagonal Neural Networks, ICML 2022 [3] Sharpness-Aware Minimization for Efficiently Improving Generalization, ICLR 2021 [4] Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability, ICLR 2021 [5] A Modern Look at the Relationship between Sharpness and Generalization, ICML 2023 [6] Catapults in SGD: spikes in the training loss and their impact on generalization through feature learning, ICML 2024 --- Rebuttal Comment 1.1: Comment: I would like to express my gratitude to the authors for their thoughtful responses, which have partially clarified my concerns. In light of their feedback, I have revised my recommendation from 2 (Weak Reject) to 3 (Weak Accept). Nevertheless, I believe that certain modifications are necessary to enhance the readability and presentation of this paper. --- Reply to Comment 1.1.1: Comment: We appreciate your careful consideration of our work and are glad to hear that our response has helped address some of your concerns. We will also make further efforts to improve readability and presentation in the revised version. 
In addition to incorporating the author-reviewer discussion in our revision, we plan to add several results that will strengthen the paper. These include: - For $D=2$, under a standard random initialization scheme, we have analyzed the (expected) initial sharpness and sharpness at convergence. We will include the resulting quantitative characterization of the “sharpness increase” over the trajectory of training. We also provide a [[link]](https://anonymous.4open.science/r/ups-BCF4/Comment_to_47Zn.pdf) to the corresponding empirical results, which show that the expected sharpness increment closely matches the actual sharpness increase. - As part of our literature review, we will also include a discussion of [1], contextualizing it relative to our Theorem 5.6. They consider a scalar linear network with loss $\mathcal{L}(\boldsymbol w) \triangleq \frac{1}{2} \left( \prod_{i=1}^D \boldsymbol w_i - 1 \right)^2$, for depth $D \in \mathbb{N}$ and weights $\boldsymbol w \in \mathbb{R}^D$. Their Theorem 3.2 shows that gradient descent does not increase the sharpness of the gradient flow solution initialized at GD iterates (referred to as GFS sharpness). Similarly, our Theorem 5.6 and the rebuttal show that $C(\theta)$ increases over time when mild conditions on $C(\theta)$ are satisfied. Together with our Remark 4.4, these results imply that GFS sharpness decreases as training progresses under GD/SGD in our minimalist model. We sincerely thank you again for your valuable feedback and helpful suggestions. --- [1] Kreisler et al., Gradient Descent Monotonically Decreases the Sharpness of Gradient Flow Solutions in Scalar Networks and Beyond, ICML 2023.
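As a toy numerical companion to this discussion, the sketch below runs plain gradient descent on the depth-2 scalar-network loss $\frac{1}{2}(w_1 w_2 - 1)^2$ and tracks the sharpness (largest Hessian eigenvalue) along the trajectory. The initialization and step size are our own illustrative choices, not values from the paper.

```python
import numpy as np

def sharpness(w1, w2):
    # largest eigenvalue of the Hessian of 0.5 * (w1 * w2 - 1)**2
    p = w1 * w2
    H = np.array([[w2**2, 2 * p - 1], [2 * p - 1, w1**2]])
    return np.linalg.eigvalsh(H)[-1]   # eigvalsh returns ascending order

w1, w2, eta = 0.2, 0.1, 0.1            # small (hence low-sharpness) init
sharp = [sharpness(w1, w2)]
for _ in range(200):                   # plain gradient descent
    p = w1 * w2
    g1, g2 = (p - 1) * w2, (p - 1) * w1
    w1, w2 = w1 - eta * g1, w2 - eta * g2
    sharp.append(sharpness(w1, w2))
# sharpness rises as w1 * w2 -> 1: progressive sharpening in miniature
```

Here GD converges to the zero-loss manifold $w_1 w_2 = 1$, and the sharpness at convergence is noticeably larger than at initialization.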
Speeding up Policy Simulation in Supply Chain RL
Accept (poster)
Summary: This paper provides an iterative algorithm that is easy to parallelize with GPUs to speed up policy evaluation in RL. The theoretical analysis is specific to supply chain optimization, where the authors leverage the assumption that demand is significantly higher than supply so that actions at different time steps are largely independent, proving that the algorithm converges in a small number of iterations (independent of the time horizon length). The computational studies provide empirical evidence that largely aligns with the theoretical analysis and extends the applications to RL settings beyond supply chain analysis. Claims And Evidence: The claims are generally well-supported. I only have one concern related to the introduction. While the introduction outlines a general challenge in supply chain optimization (SCO), the theoretical analysis focuses specifically on fulfillment optimization (FO) problems that are not supply-constrained and have no replenishment. The assumptions made in the theoretical analysis may not generalize well beyond FO problems. I suggest revising the introduction to more clearly state these key assumptions and the specific focus of the paper to better align with its scope and contributions. Methods And Evaluation Criteria: The proposed method focuses on letting the algorithm converge in a small number of iteration. The numerical studies report the number of iterations taken to converge. They are well aligned. Theoretical Claims: I reviewed the proofs of Theorems 3.1 and 3.2. I didn't spot any major issues. Experimental Designs Or Analyses: Yes, I checked. I didn't identify any major issues. Some suggestions are mentioned in the "strengths and weaknesses" section below. 
Supplementary Material: I review the proofs in the appendix but did not run the code Relation To Broader Scientific Literature: The idea aligns with the recent trend of taking advantage of GPUs to accelerate optimization algorithms that are predominantly implemented with CPUs to date, thus could be interesting to a broad community. Essential References Not Discussed: I didn't identify any missing references that are essential. Other Strengths And Weaknesses: **Strengths**: - Overall, the paper is very well-written and thus easy to follow. - The idea aligns with the recent trend of taking advantage of GPUs to accelerate optimization algorithms that are predominantly implemented with CPUs to date, thus could be interesting to a broad community. - The numerical studies present significant speedups compared to existing methods. **Weaknesses**: - **Exposition**. My primary concern relates to the exposition, specifically the following two points. - **Intro**. While the introduction outlines a general challenge in supply chain optimization (SCO), the theoretical analysis focuses specifically on fulfillment optimization (FO) problems that are not supply-constrained and have no replenishment. The assumptions made in the theoretical analysis may not generalize well beyond FO problems. I suggest revising the introduction to more clearly state these key assumptions and the specific focus of the paper to better align with its scope and contributions. - **Section 3**. In section 3, the authors choose to present a detailed proof of a specific result (Theorem 3.1) and defer the proof of a general result (Theorem 3.2) to the appendix. Given that presenting Theorem 3.1 does not seem to yield much additional insight, the authors might consider restructuring this section to highlight the most significant contribution, i.e., Theorem 3.2, and present Theorem 3.1 as a corollary. 
Alternatively, the authors might restructure the discussions around Theorem 3.1 to highlight the insights obtained by analyzing this simple case---which might be hard to obtain given the complexity of the more general case. - **Theorem 3.2**. $\mathcal{Q}_T$ was defined as a set in lines 262--263, but is used as a scalar in this statement. - **No replenishment**. While the assumption of no replenishment enables elegant theoretical analysis, it warrants more careful justification since it may deviate from common supply chain practices. - **Data generation**. To ensure that the paper is self-contained, it would be helpful if the authors provided a brief description of their data generation process. Other Comments Or Suggestions: - PO is not defined in the main body. Questions For Authors: - Section 4.1. Why do you use an MLP to approximate the greedy algorithm instead of implementing it exactly? - Section 4.1. In line 320, what does "increased conflicts" mean? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed review! Please see our responses below. ## Comments on Exposition Thank you for these helpful comments! We will be sure to clarify these points in the updated manuscript. Please see specific clarifications below. ### Re: Intro and scope You are correct that, while Picard Iteration applies immediately to any MDP, the theoretical analysis of convergence is problem specific. However, there is actually a variety of problems in SCO that can be studied using similar methods. Our analysis for the FO problem can admit replenishment as long as the replenishment frequency is not too high (Appendix C). Beyond the FO problem, we also address the inventory control problem — another representative example of SCO — in Appendix B (due to space constraints). We develop a theoretical analysis for the inventory control problem, and demonstrate significant speedups in experiments. That said, your suggestion regarding exposition is well taken—we will revise the introduction in the final version to better clarify the scope and focus of the paper, as well as to clearly state our theoretical assumptions. ### Re: Exposition of Theorem 3.1 / 3.2 Thank you for this suggestion! We will make the exposition clearer in this section. To clarify the goals of the current manuscript: Theorem 3.1 is an informal statement of Theorem 2.2, with the goal of enabling the reader to quickly grasp the main idea of our result: that Picard Iteration provides a factor $\sim M/J$ speedup, which we obtain by bounding the number of iterations as $|\mathcal{Q}_{T}|$. In Section 3.1 we then prove a special case of the main theorem (Theorem 3.1/3.2) for a specific greedy policy, which is easier to analyze. 
The goal of this section is to demonstrate the more general proof architecture, and to provide intuition for why one might hope that the number of iterations required should be small: The action taken at every time step is either going to be correct or will go to a node that would run out of capacity in the sequential scenario (Lemma 3.2). Theorem 3.1/3.2 follows directly from this Lemma, for a greedy policy. Subsequently, Section 3.2 states that the same result holds for a broader class of policies. As the analysis is more involved (but the result is the same), we defer this analysis to the Appendix. ### Re: No replenishment Thanks for raising this point. First, we note that the setting without replenishment is commonly studied in the literature (e.g., https://pubsonline.informs.org/doi/10.1287/educ.2019.0199). That said, adding replenishment indeed makes the setting more comprehensive and realistic. In Appendix B.3 we show the method's applicability in a one-warehouse-multi-retailer problem with replenishment. We find that the number of iterations is bounded by twice the number of times the central warehouse is replenished during the horizon, which is typically small compared to the total number of timesteps. We also confirm this result empirically, achieving up to 380x speed-ups for settings with 1,000 retailers. In addition, in Appendix C, we briefly present our generalization of the FO problem with replenishment, and show that the Picard iteration still converges quickly if the frequency of replenishment is low. We can add more details for the discussion in the revised version. ### Re: usage of $\mathcal{Q}_T$ Thank you, we will correct this -- it should be the cardinality $|\mathcal{Q}_{T}|$. ## Specific Questions 1. *Re: MLP approximation to greedy:* The goal here is to simulate a computational workload representative of the kinds of heavy-weight neural network policies often used in reinforcement learning for supply chain. 
To this end, we need to experiment with some sort of neural network policy which behaves predictably, which we obtain by cloning the greedy policy. We will clarify this in the experimental setup. 2. *Re: "Increased conflicts"*: Thank you for noting this -- we mean that the number of required Picard iterations increases. We will correct the language in this section to make it consistent with the rest of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for your response. All my concerns have been properly addressed and I remain positive about this paper.
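For context, here is a minimal generic sketch of the Picard Iteration scheme discussed in this thread. This is our reading of the method as described in the paper and rebuttals; the actual implementation additionally batches the policy evaluations on a GPU and splits the horizon into chunks.

```python
def picard_simulate(policy, step, s0, T, max_iters=None):
    """Simulate T steps of a deterministic MDP via Picard Iteration:
    states are rolled forward from a cached action trajectory, the policy
    is re-evaluated at every timestep, and the process repeats until the
    cache is a fixed point (i.e., matches the sequential simulation)."""
    actions = [None] * T
    for _ in range(max_iters or T + 1):
        # roll states forward using the cached actions (cheap dynamics)
        states, s = [], s0
        for t in range(T):
            states.append(s)
            s = step(s, actions[t]) if actions[t] is not None else s
        # re-evaluate the policy everywhere; in the real method this is
        # the expensive part and is parallelized across timesteps
        new_actions = [policy(st) for st in states]
        if new_actions == actions:            # fixed point reached
            return states, actions
        actions = new_actions
    return states, actions

# toy deterministic environment for illustration
states, actions = picard_simulate(lambda s: s % 3, lambda s, a: s + a + 1,
                                  s0=0, T=5)
```

By induction, after iteration $k$ the first $k$ actions agree with the sequential simulation, so the fixed point is reached in at most $T$ iterations; the paper's contribution is showing it is reached much faster for SCO problems.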
Summary: The paper studies the acceleration of policy simulation for supply chain problems often solved via RL. The objective is to accelerate sequential evaluations using caching mechanisms to be able to parallelize/batch simulations. Numerical experiments are performed for supply chain problems and beyond. Claims And Evidence: The claims of the paper appear valid. Methods And Evaluation Criteria: The approach used is interesting. The presentation of the concepts is reasonable. Theoretical Claims: The theoretical claims appear solid. However, I did not check all proofs provided in the Appendix in full detail. Experimental Designs Or Analyses: The experimental design is reasonable. Examples for supply chain problems as well as from MuJoCo are performed. Results are suitably discussed. Supplementary Material: The Appendix provides proofs and additional experiments. Relation To Broader Scientific Literature: The related literature is suitably discussed. Essential References Not Discussed: n.a. Other Strengths And Weaknesses: Strengths: - Analytical results (convergence) - The provided experiments are overall convincing Weaknesses: - Limitations of the approach could be discussed in more detail Other Comments Or Suggestions: page 1 contains the claim: “For a policy parameterized by a non-trivial deep neural network (DNN), the task of simply simulating a single sample path under the policy can thus take several hours (or more) in the context of an SCO problem” Do you have a supporting reference for the claim? Questions For Authors: (1) Do hyperparameters have to be tuned? (2) What are the limitations/downsides of the approach? (3) Is there a worst case setup? (4) Can the approach be used outside Supply Chain problems? How do the assumptions have to be adapted/generalized? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed review! Please see our responses below. *Re: Do you have a supporting reference for the claim?* Our claim is based on direct experience from industry deployments and discussions with practitioners at major e-commerce companies. These conversations consistently confirmed the need for hours (if not days) of simulation time. It became very apparent in these calls with practitioners that the daily simulation need vastly exceeds the time available, such that often approximations (such as only simulating a portion of products and extrapolating their resource usage and cost to other products) are used to make daily decisions. Our work, however, may allow many of these simulations to be run in the available time window. Unfortunately, we are not allowed to provide a specific citation for these useful discussions. Note that the fulfillment optimization problem only comprises a small piece of the daily supply chain puzzle to be solved. For instance, the outcome of our fulfillment simulation also informs resource planning at fulfillment centers, transportation planning, etc. It is thus essential that the simulation happens fast, so that other algorithms can use the output. While confidentiality prevents us from explicitly citing these discussions, we've ensured our numerical settings are realistic and conservative, drawing on publicly available industry data from Amazon and Walmart, as cited at the beginning of Section 4.1 and in Appendix B.3. We believe our chosen settings are representative for a large number of companies (and conservative for truly large-scale settings). ## Specific questions ### 1. Hyperparameter tuning Yes, the main hyperparameter to tune is what we have called "chunk size," i.e., the number of steps to execute at each iteration. 
While choosing a good value for this parameter can substantially improve performance, Picard Iteration still achieves massive speedups relative to sequential simulation for a wide range of chunk sizes -- our experiments show robust speedups of 200-440X across a range of chunk sizes from 30 to 300. Hyperparameters can be tuned at low cost by selecting those that maximize simulation speed for a small sub-trajectory. ### 2. Limitations To understand potential limitations, consider a back-of-the-envelope analysis of the key factors contributing to the simulation runtime: - $t_{\pi}$, the time needed to execute the policy at a single time step - $t_f$, the time needed to execute one step of the dynamics - $T$, the problem horizon (number of steps) - $K$, the number of iterations required for Picard iteration to converge - $M$, the number of parallel processors available. Sequential execution requires time $T(t_\pi + t_f)$, whereas Picard iteration requires $K(Tt_\pi/M + T t_f)$ (for $M \leq T$). Potential limitations are easy to see from this equation: - If $K$ is sufficiently large, this can offset the benefits of parallelization. As we show in the paper, $K$ is provably small in many useful cases, but this does not hold universally; see the worst case setup in response to the next question. - If $t_f$ is sufficiently large (i.e., dynamics are very expensive) then the term $KTt_f$ can dominate. In many real problems (such as the FO and inventory management settings we analyze) the dynamics are trivial to compute, so $t_f$ is negligible. For settings where this is not true, this dependence can be improved by computing expensive parts of the state transitions in parallel and caching them, as we do for actions. ### 3. Worst-Case Setup Indeed, one can construct a problem and Picard iteration scheme (i.e., allocation of tasks to processors), for which convergence requires $T$ iterations, although this is quite pathological. 
Consider a setting with a single product, two parallel processors and two fulfillment centers (FC), FC1 and FC2, starting with equal capacity. Suppose that event assignments alternate between the two processors (i.e., processor 1 is responsible for odd-numbered events, and processor 2 for even-numbered events). Suppose that the policy is simply to fulfill orders from the FC having the largest remaining capacity, with a preference for FC1 over FC2 in case of ties. Finally, suppose that the initial trajectory provided to Picard Iteration is to unfulfill all orders. In this scenario, at the first iteration, both processors will fulfill all orders from FC1; in the second iteration, starting from time $t=2$, both processors fulfill all subsequent orders from FC2; and in the third, starting from $t=3$, both processors fulfill all subsequent orders from FC1. They continue to alternate like this for $T$ iterations until convergence. Note that, in the implementation of Picard iteration that we analyze, since there is only one product, all events would be assigned to the same processor, which avoids these conflicts.
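The runtime accounting in the limitations answer above can be made concrete with a small calculator. The parameter values below are hypothetical and purely illustrative, not figures from the paper.

```python
def sequential_time(T, t_pi, t_f):
    # T steps, each needing one policy evaluation plus one dynamics step
    return T * (t_pi + t_f)

def picard_time(T, t_pi, t_f, K, M):
    # K Picard iterations; policy evaluations are parallelized over
    # M <= T processors, dynamics are replayed within each iteration
    return K * (T * t_pi / M + T * t_f)

# expensive policy, cheap dynamics, few iterations: large speedup
T, t_pi, t_f, K, M = 100_000, 1e-2, 1e-5, 5, 1_000
speedup = sequential_time(T, t_pi, t_f) / picard_time(T, t_pi, t_f, K, M)

# expensive dynamics: the K * T * t_f term erodes (here, eliminates) the benefit
speedup_slow_dyn = (sequential_time(T, t_pi, 1e-2)
                    / picard_time(T, t_pi, 1e-2, K, M))
```

With these numbers the first scenario yields a roughly 100x speedup, while the second (dynamics as costly as the policy) falls below 1x, matching the limitation described above.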
Summary: This paper introduces the Picard Iteration algorithm to accelerate policy simulation in reinforcement learning for supply chain optimization (SCO) problems, where simulating a single trajectory can take hours due to the serial nature of policy evaluations. The key innovation allows for the batched evaluation of a policy across a single trajectory by assigning policy evaluation tasks to independent processes and using a cached evaluation system that iteratively converges to the correct sequential simulation. The authors prove that for supply chain problems, this approach converges in a number of iterations independent of the time horizon, resulting in significant speedups. Empirical results demonstrate a 400x speedup on large-scale fulfillment optimization problems using a single GPU compared to sequential evaluation, and a 100x improvement over the Time Warp algorithm (a traditional parallel discrete event simulation approach). Additional experiments show that the method generalizes well to inventory control problems and has promising applications to standard reinforcement learning environments outside of supply chain, potentially providing speedups of up to 40x in MuJoCo environments. Claims And Evidence: Yes, the claims made in the paper are well-supported by both theoretical analysis and empirical evidence. The authors provide formal proofs for the convergence of their Picard Iteration algorithm in supply chain optimization contexts, along with extensive experimental results that demonstrate the claimed speedups across different problem settings, batch sizes, and demand distributions. The comparison against baseline methods like Time Warp is thorough, and they extend their evaluation to other domains like MuJoCo environments to show broader applicability. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are well-suited for the problem. 
The authors evaluate their Picard Iteration algorithm on relevant supply chain optimization problems (fulfillment optimization and inventory control), using realistic parameters based on industry settings. Their benchmarking against sequential simulation and Time Warp provides appropriate baselines. The additional exploration in MuJoCo environments demonstrates broader applicability beyond the supply chain domain. Theoretical Claims: I only checked the claims in the main text and I think they are reasonable. The proof approach for Theorem 3.1 in the special case (greedy policy without inventory constraints) is logically sound and builds intuitively on the properties of wrong actions and capacity exhaustion. The setup of Lemma 3.2 and its application to prove convergence in J+1 iterations follows naturally. Experimental Designs Or Analyses: I checked the experimental designs and analyses, and I feel the current setup is good. The authors present comprehensive experiments across different problem settings, with appropriate controls and ablation studies that demonstrate the robustness of their approach. That said, I think the paper could be strengthened by discussing how the initial actions affect convergence, for example by explicitly showing that initializing the actions from a better policy can speed up convergence. In the MuJoCo experiments, they use the previous policy iterate to initialize the cache, which is intuitive, but a more systematic study of initialization strategies could provide additional insights. Also, for the normalized error metrics, I think it would be interesting to show the RMSE of the actions, not only of the states. This would provide a more complete picture of convergence behavior and could potentially reveal different convergence patterns between state trajectories and action trajectories. 
Supplementary Material: I briefly checked section B.3 of the supplementary material, which contains the experimental results for the inventory control with replenishment problem (the One Warehouse Multi-Retailer problem). Relation To Broader Scientific Literature: The paper's key contribution of using Picard iteration for parallel policy simulation relates to several research areas. It addresses limitations in parallel discrete event simulation methods like Time Warp, which don't work well for supply chain problems lacking local causality properties. The approach complements existing parallel reinforcement learning literature (e.g., Asynchronous RL, RLlib) by focusing on accelerating a single trajectory rather than parallelizing across multiple trajectories or agents. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths Practical Impact: The paper addresses a significant bottleneck in applying RL to large-scale supply chain problems. A 400x speedup could transform what were previously impractical approaches into viable solutions. Theoretical Guarantees: The authors provide formal proofs that Picard converges to the correct trajectory and, more importantly, that it does so in a number of iterations independent of the simulation horizon length for SCO problems. Comprehensive Evaluation: The experimental validation is extensive, testing across various problem configurations, including different batch sizes and demand distributions. Weakness: While the paper demonstrates impressive performance gains, it provides limited detail on implementation challenges. Although the authors mention using JAX and GPU acceleration, it would benefit the practitioners if they could talk more about optimization of synchronization between iterations, or parameter tuning strategies like selecting optimal chunk sizes. 
Other Comments Or Suggestions: While this paper focuses on scenarios where policy evaluation is expensive compared to state transitions, there's also an opportunity to consider the inverse case. In many complex environments like physics simulations or large-scale agent models, state calculations can be more expensive than policy evaluations. Perhaps a "cached state" approach could be developed, mirroring the Picard Iteration but in reverse. Instead of caching actions, we could cache intermediate states and only recalculate states when necessary. Questions For Authors: 1 How does the choice of initial action sequence affect convergence speed in practice across different environments? Have you systematically studied how different initialization strategies impact convergence rates in various problem settings, and are there heuristics for selecting initial action sequences that could accelerate convergence? 2 In your convergence analysis for MuJoCo environments (Figure 3), you measure relative RMSE of state trajectories. Have you also analyzed the convergence behavior of action trajectories? 3 Figure 3 shows significantly different convergence patterns across MuJoCo environments, with some converging in just 2-3 iterations while others require 10+ iterations. Do you have insights into why these differences occur? Are there specific properties of these environments (state transitions, policy complexity, dynamics) that predict faster or slower convergence with Picard iteration? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed review! Please see our responses below. ## Implementation Details *Re: "...it provides limited detail on implementation challenges..."*. Thank you for raising this point. Due to space constraints, we provide detailed implementation specifics in Appendix A.3. As you note, chunk size significantly affects performance; however, our experiments show robust speedups of approximately 200-440X across a range of chunk sizes from 30 to 300; we can provide further details in the Appendix. Selecting good hyperparameters is straightforward: a reasonable heuristic for tuning this hyperparameter is to simulate a sub-trajectory of length $\ll T$, and select the values which minimize the time required to simulate this sub-trajectory. We will provide these details in the manuscript, and we also plan to release a package implementing Picard iteration. The supplementary materials provided with our submission also contain all the code used to generate the experiments in the paper. ## Settings with expensive dynamics *Re: "While this paper focuses on scenarios where policy evaluation is expensive compared to state transitions"* This is a great point. Actually, to a large extent, what one considers to be "state" vs. "action" in our framework is a design choice. In other words, the user has flexibility to determine which parts of the combined vector $(s_{t+1}, a_t)$ are generated in parallel and cached (i.e., treated as the "action"), and which are recomputed on the fly by each worker (i.e., treated as the "state"). The most efficient partitioning of state and action is problem dependent, and as you note, could involve a reversal of the nominal roles if dynamics are expensive. ## Specific Questions ### 1. Re: effect of initial action sequence Loosely speaking, the closer the initialization is to the correct trajectory, the fewer iterations will be required. 
In a policy optimization setting, the most obvious and effective approach is to initialize Picard Iteration with trajectories collected under the previous policy iterate when collecting trajectories under the current policy iterate. This is what we do in our policy optimization and MuJoCo experiments. In the policy evaluation setting for FO, we study the sensitivity of Picard Iteration to the initialization by experimenting with three natural initializations: - Unfulfill (441x Speedup): initially, all orders are unfulfilled. This setting corresponds to the theoretical results and experiments currently in the manuscript. - Naive (358x Speedup): Initially, fulfill all orders from their nearest fulfillment center, ignoring inventory and capacity constraints. - Random (390x Speedup): Initially, fulfill all orders from a random FC. Clearly, different initialization methods have an impact on the simulation speed, although massive speedups are possible regardless. Here, it appears that the "Unfulfill" strategy works the best. The intuitive explanation for this is consistent with our analysis: using this initialization strategy, after the first iteration, each processor will have correctly accounted for inventory and capacity consumed by orders for that processor's own products -- in some sense, a "greedy" initialization which is less naive than the "Naive" approach above. Further, unlike the alternatives, when initializing each chunk, this procedure is able to account for inventory and capacity consumed in previous chunks. We will add this discussion to the appendix. ### 2. Convergence of action sequences We observe similar convergence behavior for the RMSE of actions as we do for states. This should intuitively be the case as we would expect reasonable controllers to exhibit some sort of Lipschitz continuity, especially over regions of the state space for which we have training data. ### 3. 
Understanding convergence on MuJoCo Thank you for highlighting this interesting observation. We agree that understanding the factors affecting convergence in continuous control settings is an exciting direction. As a first step towards this kind of analysis, we can consider linear control of a linear dynamical system. Writing out the Picard iteration explicitly, with each timestep assigned to a different processor, this turns out to be equivalent to solving a particular linear system via fixed point iteration. More precisely, letting $X^{(k)} \in \mathbb{R}^{T \times d}$ be the matrix of state variables at iteration $k$, across all times $t \in [T]$, the Picard iteration computes $X^{(k+1)} = A X^{(k)} + b$ for some appropriately shaped $A, b$ which depend on the matrices describing both the dynamics and the controller. It is then straightforward to show that $X^{(k)}$ converges geometrically at a rate depending on the spectral radius of the matrix $A$. --- Rebuttal Comment 1.1: Comment: Thank you for addressing all my concerns. I've updated my review and increased my score accordingly.
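The geometric-convergence claim in point 3 above can be checked numerically. Below is a toy sketch of the fixed-point view $X^{(k+1)} = A X^{(k)} + b$; the random symmetric $A$ and random $b$ are assumptions for illustration, not the paper's simulator:

```python
import numpy as np

# Toy linear fixed-point iteration: X <- A X + b converges geometrically
# when the spectral radius of A is below 1, at a rate governed by that radius.
rng = np.random.default_rng(0)
n = 64  # stacked dimension, playing the role of T * d
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                                   # symmetric for simplicity
A *= 0.9 / np.max(np.abs(np.linalg.eigvalsh(A)))    # rescale spectral radius to 0.9
b = rng.standard_normal(n)

x_star = np.linalg.solve(np.eye(n) - A, b)  # exact fixed point
x = np.zeros(n)                             # analogue of an "all-zeros" initialization
errors = []
for _ in range(50):
    x = A @ x + b
    errors.append(np.linalg.norm(x - x_star))

# Successive error ratios approach the spectral radius (here 0.9).
ratios = [errors[k + 1] / errors[k] for k in range(40, 49)]
```

Running this shows the error shrinking by roughly the spectral radius per iteration, which is the geometric rate referred to above.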
Non-Stationary Predictions May Be More Informative: Exploring Pseudo-Labels with a Two-Phase Pattern of Training Dynamics
Accept (poster)
Summary: This paper investigates the potential of a novel type of pseudo-labels—two-phase labels—in semi-supervised learning. Unlike conventional methods that rely on high-confidence and stable predictions, two-phase labels exhibit relatively low correctness and demonstrate a unique two-phase training dynamic: they initially predict one category in the early training stages and switch to another in later epochs. The authors demonstrate that these labels are highly informative for decision boundaries. To effectively identify such labels, they propose a 2-phasic metric for their quantitative characterization. Furthermore, a loss function is designed to enable models to learn correct correlations while simultaneously eliminating false ones. Extensive experiments are conducted to validate the effectiveness of incorporating two-phase labels. Claims And Evidence: The claims made in the submission are indeed supported by clear and convincing evidence. First, the authors analyze the rationale of why two-phase labels are effective from both perspectives of pattern learning and decision boundaries. Furthermore, extensive experiments demonstrate that two-phase labels serve as an effective booster for existing pseudo-labeling methods. These experiments cover a wide range of datasets and various state-of-the-art (SOTA) baseline pseudo-labeling methods. Besides, the authors demonstrate that two-phase labels are high-quality pseudo-labels, which are often overlooked by existing pseudo-labeling methods. Methods And Evaluation Criteria: Methods and evaluation criteria are well-suited for the problem and application at hand, including eight image and graph datasets (e.g., Cora and CIFAR-100), the baseline pseudo-labeling methods for comparison (e.g., Confidence and SOTA SoftMatch), and commonly used evaluation metrics (such as pseudo-labeling accuracy and IOU). Theoretical Claims: This paper does not propose explicit theoretical claims. 
The in-depth analysis of the rationale provides valuable insights, helping readers better understand why two-phase labels are well-suited for pseudo-labeling tasks. Experimental Designs Or Analyses: The experimental setup and analyses appear to be well-structured and appropriate for assessing the claims made, such as the booster test (Table 1), complementary analysis (Table 2), ablation study (Table 3), and parameter sensitivity (Figure 6). Supplementary Material: I reviewed all of the supplementary material. Relation To Broader Scientific Literature: This paper focuses on the research problem of pseudo-label selection in semi-supervised learning. Unlike previous works that primarily emphasize the role of high-confidence, high-training-stationarity labels (Type I labels in Figure 1) in pseudo-labeling, this study explores the potential of a novel type of predicted labels (i.e., Type II labels in Figure 1), which exhibit relatively high accuracy but low training stationarity. This exploration provides new and interesting insights. Furthermore, this paper identifies a subset within Type II labels, termed two-phase labels, which not only provide significant information gain but also mitigate the risk of misclassification. Essential References Not Discussed: The key contribution is a novel pseudo-label selection method. The literature discussed by the authors is comprehensive and highly relevant to the topic. Other Strengths And Weaknesses: Strengths: 1. This paper is well-written and highly engaging. 2. The paper conducts an insightful exploration, investigating the potential of a type of predicted labels in pseudo-labeling, specifically labels that exhibit relatively high accuracy but low training stability, which have often been overlooked in previous works. This offers valuable inspiration for future research. 3. This paper uncovers two-phase labels.
Extensive experiments demonstrate that incorporating two-phase labels can significantly improve the accuracy of existing pseudo-labeling algorithms (1.73% on image datasets and 1.92% on graph datasets). This indicates that two-phase labels can serve as a valuable complement to pseudo-labels provided by existing methods. 4. The authors analyze the rationale behind the effectiveness of two-phase labels from the perspectives of both pattern learning and decision boundaries. Weaknesses: 1. In the Booster test experiments, the authors adopted a three-stage protocol: in the early stage, pseudo-labels were generated using the baseline method, while in the later stage, pseudo-labels were generated using both the baseline method and the proposed method for comparison. This design is not commonly seen in general pseudo-labeling approaches. However, the authors did not clearly specify how the training epochs were divided into early and later stages, nor did they analyze the impact of this division on the experimental results. Therefore, the authors should provide a detailed analysis of this hyper-parameter setting to validate its robustness. 2. As shown in Figure 4, on the Cora dataset, two-phase labels are significantly closer to the decision boundaries compared to high-confidence labels. However, this phenomenon is not as evident on the CIFAR-100 image dataset. The authors should further analyze and clarify the reasons behind this discrepancy, such as whether factors like data distribution and model complexity play a critical role. Other Comments Or Suggestions: The SoftMatch method achieves a balance between the quality and quantity of pseudo-labeled samples by weighting them according to their confidence, leading to significant performance improvements in pseudo-labeling tasks. Could this idea also be applied to the selection of two-phase labels to further enhance their effectiveness?
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your thorough and constructive feedback! Below, we present our responses to each of your concerns and questions. **Weakness 1**: *In the Booster test experiments, the authors adopted a three-stage protocol. This design is not commonly seen in general pseudo-labeling approaches. The authors should provide a detailed analysis of this hyper-parameter setting to validate its robustness.* **Response:** The three-stage training protocol was designed to validate whether the proposed two-phase labels can effectively enhance existing pseudo-labeling methods. It is worth emphasizing that the primary contribution of this paper lies in the identification of the two-phase labels, which serve as a complementary enhancement to baseline pseudo-labels. During Stage 2, we use the baseline pseudo-labeling method while recording training dynamics. These recorded dynamics were subsequently utilized to calculate the 2-phasic metric used to identify pseudo labels in Stage 3. We determine the length of Stage 2 based on two robust principles: 1. In graph datasets, the transition from Stage 2 to Stage 3 is determined by model convergence criteria. Specifically, Stage 2 concludes when additional training iterations using the baseline pseudo-labeling method no longer yield performance improvements, at which point Stage 3 is initiated. 2. In image datasets, a fixed schedule is adopted, where Stage 3 begins after recording four epochs of training dynamics in Stage 2, followed by six training epochs in Stage 3. **Weakness 2**: *As shown in Figure 4, on the Cora dataset, two-phase labels are significantly closer to the decision boundaries compared to high-confidence labels. However, this phenomenon is not as evident in the CIFAR-100 image dataset. The authors should further analyze and clarify the reasons behind this discrepancy.* **Response:** Thank you for highlighting this critical observation. 
We attribute this phenomenon primarily to the use of pre-trained models in image datasets. Specifically, pre-trained models learn diverse feature representations from large-scale pre-training datasets, which may cause samples from the same class to be scattered across multiple clusters in the latent space. This dispersion effect reduces the observable distinction between high-confidence pseudo-labels and the decision boundaries. To validate this point, we conducted an additional controlled experiment using a ResNet-18 model trained on CIFAR-100, replacing the original pre-trained Vision Transformer (ViT) backbone. This modification eliminates the representational biases introduced by pre-training. Our empirical results demonstrate that, under this setting, high-confidence pseudo-labels exhibit clearer separation from the decision boundaries. We will include these findings and their analysis in the camera-ready version. **Other Comments Or Suggestions**: *The SoftMatch method achieves a balance between the quality and quantity of pseudo-labeled samples by weighting them according to their confidence, leading to significant performance improvements in pseudo-labeling tasks. Could this idea also be applied to the selection of two-phase labels to further enhance their effectiveness?* **Response:** Thank you for this insightful suggestion. The SoftMatch method effectively balances pseudo-label quality and quantity by weighting samples based on their confidence scores. Inspired by this motivation, we adopt a similar strategy in our approach. Specifically, we first normalize the negated 2-phasic metric to the $[0,1]$ range, $\phi=\mathrm{Normalize}(-1\times\text{2-phasic})$.
We then calculated the mean $\hat{\mu}$ and variance $\hat{\sigma}^2$ of $\phi$ and assigned loss weights $\lambda$ to samples using the formula in SoftMatch

$$\lambda(\mathbf{p})=
\begin{cases}
\phi(\mathbf{p}) \cdot \exp\left(-\dfrac{(\phi(\mathbf{p}) - \hat{\mu})^2}{2\hat{\sigma}^2}\right), & \text{if } \phi(\mathbf{p}) < \hat{\mu}, \\
\phi(\mathbf{p}), & \text{otherwise}.
\end{cases}$$

As shown in Table A, integrating the SoftMatch weight into our 2-phasic metric led to a slight decrease in classification accuracy. We are conducting further analysis to investigate the underlying causes of this phenomenon.

**Table A: Applying SoftMatch-inspired weighting to our 2-phasic metric**

| L/C | 3 | 5 | 10 |
| --- | ----- | ----- | ----- |
| Confidence | 66.21 | 71.38 | 73.73 |
| +2-phasic | 70.41 | 74.07 | 77.17 |
| +2-phasic & SoftMatch | 67.69 | 73.78 | 76.80 |
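For concreteness, the SoftMatch-style weighting rule above can be written as a short function. This is a minimal sketch assuming the normalized scores $\phi$ are already computed; `softmatch_weight` and the example values are illustrative, not the authors' released code:

```python
import numpy as np

def softmatch_weight(phi, mu, sigma):
    """Per-sample loss weight from the SoftMatch-style rule above:
    scores below the mean mu are down-weighted by a Gaussian falloff
    with standard deviation sigma; scores at or above mu keep weight phi."""
    phi = np.asarray(phi, dtype=float)
    gauss = np.exp(-((phi - mu) ** 2) / (2 * sigma ** 2))
    return np.where(phi < mu, phi * gauss, phi)

# Example: normalized 2-phasic scores in [0, 1], with assumed mu=0.5, sigma=0.2.
phi = np.array([0.2, 0.5, 0.9])
weights = softmatch_weight(phi, mu=0.5, sigma=0.2)
# The below-mean score (0.2) shrinks; 0.5 and 0.9 keep their values as weights.
```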
Summary: The paper proposes a novel and interesting 2-phasic metric for two-phase pseudo-label learning. The 2-phasic metric characterizes the two-phase pattern through both spatial and temporal measures. Extensive experimental results show the effectiveness of the proposed 2-phasic metric, especially when the number of labeled samples is very small. ## update after rebuttal Thanks for the rebuttal! Most of my concerns have been addressed. I keep my score unchanged. Claims And Evidence: The proposed 2-phasic metric is novel for pseudo-label learning. The motivation should be further explained in detail. The authors claim that the two-phase samples are important and have more information gain, but they do not explain why the two-phase samples are important. The authors give many observations to demonstrate the importance of the two-phase samples and the training dynamics that are used to design the spatial and temporal measures. I am curious whether the observations are general for all kinds of datasets. Moreover, more theoretical analysis behind the observations should be presented, which could make the paper more solid. Methods And Evaluation Criteria: The authors conduct extensive experiments to validate the effectiveness of the proposed method. The experimental design is comprehensive. Theoretical Claims: The theoretical claims are reasonable. However, more theoretical analysis behind the proposed observations in this paper should be presented. Experimental Designs Or Analyses: The authors conduct comprehensive and solid experiments from multiple perspectives, including the booster test, complementary analysis, ablation study, and hyperparameter analysis. The experimental results reveal the effectiveness of the proposed method and each component. Supplementary Material: I have reviewed all parts of the supplementary material. Relation To Broader Scientific Literature: The paper addresses the pseudo-label learning problem from a new perspective.
In contrast to existing works that focus on samples exhibiting high confidence scores and high training stationarity, this work exploits the potential of two-phase samples. Moreover, the paper designs a 2-phasic metric to identify the two-phase samples with both spatial and temporal measures. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The paper is very well written and easy to understand. 2. The proposed 2-phasic metric is novel. 3. The experiment design is comprehensive and experimental results show the effectiveness of the proposed method. Weaknesses: 1. More theoretical analysis behind the observations should be presented, which could make the paper more convincing. 2. The additional overhead during the computation of spatial and temporal measures makes the proposed method less efficient, especially when the dataset is large. The authors could analyze the computational efficiency of the method from an experimental view. 3. The authors could perform an ablation study regarding the design of the 2-phasic metric and analyze the effectiveness of the spatial measure and the temporal measure. Other Comments Or Suggestions: N/A Questions For Authors: In the equation of Line 241, there is a typo in the average change of other categories. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your constructive comments! We have carefully studied them and revised the paper accordingly. **Weakness 1**: *More theoretical analysis behind the observations should be presented, which could make the paper more convincing.* **Response:** We appreciate your insightful feedback. Due to limited time during the rebuttal, we would like to propose a preliminary theoretical idea as a potential direction for addressing this concern. Our idea is to use the local elasticity hypothesis to prove that adding 2-phase labels can promote feature separability. Local elasticity is the core concept introduced in [1], and it is defined as follows: when the model is updated at sample $x$ via SGD, the predicted change $|f(x', w^+) - f(x', w)|$ of another feature vector $x'$ is positively correlated with the similarity between $x$ and $x'$. Specifically, local elasticity describes the phenomenon that if $x$ and $x'$ belong to the same class (similar), the predicted change is significant; if they belong to different classes (dissimilar), the change is small. Reference [2] utilizes the local elasticity assumption to derive conditions for feature separability. This study describes the temporal evolution of features for two classes of samples, denoted as $X_{i}^1(t)$ and $X_{j}^2(t)$, in a binary classification context. Furthermore, Theorem 2.1 in [2] indicates that: > Given the feature vectors $X_{i}^1(t)$ and $X_{j}^2(t)$ for $i, j \in [n]$, as $t \to \infty$ and large $n$, > 1. if $\alpha>\beta$, they are asymptotically separable with probability tending to one, > 2. if $\alpha\leq\beta$, they are asymptotically separable with probability tending to zero. Here, $\alpha$ and $\beta$ denote the intra-class and inter-class effects, respectively. Our two-phasic metric is designed to identify samples that exhibit a two-stage characteristic in training dynamics. As illustrated in Figure 4, the 2-phasic samples tend to lie closer to the decision boundary.
Intuitively, incorporating these samples may help reduce the inter-class effect $\beta$, thereby promoting stricter feature separability. In the future, we plan to study the local elasticity of 2-phase samples and estimate the intra-class effect $\alpha$ and inter-class effect $\beta$ before and after adding 2-phase samples. This analysis will allow us to verify our hypothesis and provide a theoretical foundation for the effectiveness of our method. [1] He H, Su W J. The local elasticity of neural networks. In International Conference on Learning Representations, 2020. [2] Zhang J, Wang H, Su W. Imitating deep learning dynamics via locally elastic stochastic differential equations. Advances in Neural Information Processing Systems, 2021. **Weakness 2**: *The additional overhead during the computation of spatial and temporal measures makes the proposed method less efficient, especially when the dataset is large. The authors could analyze the computational efficiency of the method from an experimental view.* **Response:** Thank you for pointing out this concern. We have conducted an empirical analysis of the computational efficiency of our method. Specifically, on the Cora dataset using GCN as the backbone, the measured time costs are as follows: - Training dynamics recording time: 0.11625 seconds - Two-phasic metric computation time: 0.00253 seconds - Total training time: 2.24 seconds These results indicate that the combined overhead of recording training dynamics and computing the two-phasic metric accounts for approximately 5% of the total training time. Therefore, we consider this additional computational cost to be acceptable in practice. **Weakness 3**: *The authors could perform an ablation study regarding the design of the 2-phasic metric and analyze the effectiveness of the spatial measure and the temporal measure.* **Response:** Thank you for the valuable suggestion.
In response, we conducted an ablation study using GCN as the backbone on the Cora dataset to evaluate the contributions of the temporal and spatial characteristics in the 2-phasic metric. The experimental results are summarized in Table A. The results demonstrate that both the temporal and spatial components contribute positively to the overall performance, with the temporal features having a more significant impact. We will include the results and corresponding discussion in the final revised version of the paper.

**Table A: Ablation study on the temporal and spatial characteristics of the two-phasic metric**

| L/C | 3 | 5 | 10 |
| --- | ---- | ---- | ---- |
| Temporal only | 69.76 | 73.36 | 75.99 |
| Spatial only | 68.24 | 72.70 | 73.87 |
| 2-phasic metric | 70.27 | 73.40 | 76.24 |

**Other Comments Or Suggestions**: *In the equation of Line 241, there is a typo in the average change of other categories.* **Response:** Thank you for pointing out the error. We have corrected $\neg \Delta_{b i}^{(i)}$ to $\Delta_{\neg b i}^{(i)}$ in the formula on Line 241.
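The local-elasticity phenomenon invoked in the response to Weakness 1 can be illustrated with a toy two-layer network. The architecture, sizes, and the notion of similarity below are assumptions for illustration, not the paper's setup: after one SGD step at a sample $x$, the prediction changes more at a point near $x$ than at a point dissimilar (here: orthogonal) to $x$.

```python
import numpy as np

# Toy two-layer net f(x) = w2 . tanh(W1 x), trained one SGD step at x.
rng = np.random.default_rng(0)
d, h, lr = 8, 128, 0.01
W1 = rng.standard_normal((h, d)) * 0.5
w2 = rng.standard_normal(h) * 0.5

def f(x, W1, w2):
    return w2 @ np.tanh(W1 @ x)

x = rng.standard_normal(d)
x /= np.linalg.norm(x)
x_sim = x + 0.01 * rng.standard_normal(d)   # "same class": a point near x
x_dis = rng.standard_normal(d)
x_dis -= (x_dis @ x) * x                    # "different class": orthogonal to x
x_dis /= np.linalg.norm(x_dis)

# One SGD step on the squared loss (f(x) - y)^2; pick y so the error is exactly 1.
y = f(x, W1, w2) - 1.0
pre = np.tanh(W1 @ x)
err = f(x, W1, w2) - y
w2_new = w2 - lr * 2 * err * pre
W1_new = W1 - lr * 2 * err * np.outer(w2 * (1 - pre**2), x)

change_sim = abs(f(x_sim, W1_new, w2_new) - f(x_sim, W1, w2))
change_dis = abs(f(x_dis, W1_new, w2_new) - f(x_dis, W1, w2))
# Local elasticity: the prediction change at the similar point dominates.
```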
Summary: This paper discovers a new type of predicted labels suitable for pseudo-labeling, termed two-phase labels, which exhibit a two-phase pattern during training and are informative for decision boundaries. This finding is different from existing methods, which typically select predicted labels with high confidence scores and high training stationarity as pseudo-labels to augment training sets, and thus offers new insights for pseudo-labeling. Besides, this paper proposes a 2-phasic metric to mine the two-phase labels, and a loss function tailored for two-phase pseudo-labeling learning, allowing models not only to learn correct correlations but also to eliminate false ones. Claims And Evidence: The authors claim that they discover a new type of predicted labels suitable for pseudo-labeling. First, they analyze the rationale of two-phase labels for pseudo-labeling from different perspectives. Second, extensive experiments were conducted on eight benchmark datasets, including image and graph datasets, and the results show that the use of two-phase labeling can significantly improve the performance of existing pseudo-labeling methods. Specifically, the average classification accuracy on image datasets and graph datasets increased by 1.73% and 1.92%, respectively. Methods And Evaluation Criteria: The proposed methods make sense for this task. Experiments are performed on CIFAR-100, EuroSAT, STL-10, Semi-Aves, Cora, Citeseer, PubMed, and AmazonComputers. The evaluation criteria contain classification accuracy, correctness, information gain, and overlap of the two pseudo-labels, which are commonly used metrics. Theoretical Claims: The authors provide a theoretical rationale for the two-phase labels, and I find no apparent issues with their analysis. Experimental Designs Or Analyses: Experiments are performed on the datasets CIFAR-100, EuroSAT, STL-10, Semi-Aves, Cora, Citeseer, PubMed, and AmazonComputers, and clearly verify the effectiveness of the proposed methods.
Supplementary Material: I reviewed the supplementary material which helped me understand the methodology. Relation To Broader Scientific Literature: The newly identified type of predicted labels, characterized by a two-phase pattern during training, is not only well-suited for pseudo-labeling and highly informative for decision boundaries, but also provides novel insights for other related learning tasks. Essential References Not Discussed: Key related works are all discussed. Other Strengths And Weaknesses: Strengths: This paper is a well-written and interesting paper. It introduces novel insights into two-phase labels for pseudo-labeling, proposes a two-phase metric to mine these labels, and presents a loss function specifically designed for two-phase pseudo-labeling learning. These new insights have the potential to make a significant impact on the community. Weaknesses: The method introduces hyperparameters that must be carefully selected to ensure the effectiveness of the proposed approach. Other Comments Or Suggestions: A more detailed discussion on the theoretical guarantees of the proposed metric would enhance its reliability for practical use. Questions For Authors: 1) In pseudo-labeling tasks, what metrics have been proposed by existing works to identify effective samples? 2) How do you construct an effective memory bank to support the calculation of the proposed metric? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your thorough and constructive feedback! Below, we present our responses to each of your concerns and questions. **Weaknesses**: *The method introduces hyperparameters that must be carefully selected to ensure the effectiveness of the proposed approach.* **Response:** We agree with you that the sensitivity of the hyperparameters in the 2-phasic metric is crucial for its practical applicability. As shown in Figure 6, although the model’s performance exhibits fluctuations with certain hyperparameter variations, the overall performance remains relatively stable. This indicates that some hyperparameters have low sensitivity and that there may be correlations among them. Therefore, in future work, we plan to reduce the number of hyperparameters by employing hyperparameter fusion strategies, aiming to simplify the deployment of the 2-phasic metric while retaining its performance advantages. **Other Comments Or Suggestions**: *A more detailed discussion on the theoretical guarantees of the proposed metric would enhance its reliability for practical use.* **Response:** We appreciate your insightful feedback. Due to limited time during the rebuttal, we propose a preliminary theoretical idea as a potential direction for addressing this concern. Our idea is to use the local elasticity hypothesis to prove that adding 2-phase labels can promote feature separability. Local elasticity states that when the model is updated at sample $x$ via SGD, the predicted change $|f(x', w^+) - f(x', w)|$ of another feature vector $x'$ is positively correlated with the similarity between $x$ and $x'$ [1]. Reference [2] utilizes the local elasticity assumption to derive conditions for feature separability. This study describes the temporal evolution of features for two classes of samples, denoted as $X_{i}^1(t)$ and $X_{j}^2(t)$.
Furthermore, Theorem 2.1 in [2] indicates that: > Given the feature vectors $X_{i}^1(t)$ and $X_{j}^2(t)$ for $i, j \in [n]$, as $t \to \infty$ and large $n$, > 1. if $\alpha>\beta$, they are asymptotically separable with probability tending to one, > 2. if $\alpha\leq\beta$, they are asymptotically separable with probability tending to zero. As illustrated in Figure 4, our 2-phasic samples tend to lie closer to the decision boundary. Intuitively, incorporating these samples may help reduce the inter-class effect $\beta$, thereby promoting stricter feature separability. In the future, we plan to study the local elasticity of 2-phase samples and estimate the intra-class effect $\alpha$ and inter-class effect $\beta$ before and after adding 2-phase samples. This analysis will allow us to verify our hypothesis and provide a theoretical foundation for the effectiveness of our method. [1] The local elasticity of neural networks. ICLR, 2020. [2] Imitating deep learning dynamics via locally elastic stochastic differential equations. NeurIPS, 2021. **Q1**: *In pseudo-labeling tasks, what metrics have been proposed by existing works to identify effective samples?* **Response:** Pseudo-label selection metrics mainly fall into two categories: confidence-based and uncertainty-based. Confidence-based methods typically use the maximum softmax score as the selection criterion [1]. Enhancements include FlexMatch [2], which applies dynamic class-wise thresholds for more flexible filtering, and SoftMatch [3], which weights samples by confidence to balance pseudo-label quality and quantity. Uncertainty-based metrics prioritize samples with low prediction uncertainty. A representative approach is Monte Carlo dropout [4], which estimates uncertainty via the variance of predictions under multiple dropout runs. Another method leverages training stationarity (or time consistency), selecting labels with stable predictions over time [5].
Additionally, recent work proposes learning a metric via a neural network that models the relationship between label embeddings and feature embeddings to assess pseudo-label quality [6]. [1] Curriculum labeling: Revisiting pseudo-labeling for semi-supervised learning. AAAI, 2021. [2] Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. NeurIPS, 2021. [3] Softmatch: Addressing the quantity-quality tradeoff in semi-supervised learning. ICLR, 2023. [4] Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. ICML, 2016. [5] Temporal self-ensembling teacher for semi-supervised object detection. IEEE Transactions on Multimedia, 2021. [6] Semireward: A general reward model for semi-supervised learning. ICLR, 2024. **Q2**: *How do you construct an effective memory bank to support the calculation of the proposed metric?* **Response:** In our experiments, we adopted a uniform sampling strategy to construct the memory bank. Specifically, model predictions were sampled and stored at fixed training steps. With only 50 recorded training dynamics, we achieved satisfactory accuracy in experiments on image datasets.
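The uniform-sampling memory bank described in the response to Q2 can be sketched as follows. The class and method names are hypothetical, and the Dirichlet draws stand in for real model outputs:

```python
import numpy as np

class MemoryBank:
    """Minimal sketch of the uniform-sampling memory bank: store the model's
    class-probability predictions for all N unlabeled samples every
    `record_every` training steps, yielding the |T| snapshots of training
    dynamics used to compute the 2-phasic metric."""

    def __init__(self, record_every=10):
        self.record_every = record_every
        self.snapshots = []  # each entry: an (N, C) array of probabilities

    def maybe_record(self, step, probs):
        if step % self.record_every == 0:
            self.snapshots.append(np.asarray(probs).copy())

    def dynamics(self):
        # (|T|, N, C) tensor of recorded training dynamics
        return np.stack(self.snapshots)

# Usage: 500 steps with record_every=10 gives |T| = 50 snapshots,
# matching the 50 recorded dynamics mentioned in the response.
bank = MemoryBank(record_every=10)
N, C = 8, 3
rng = np.random.default_rng(0)
for step in range(500):
    probs = rng.dirichlet(np.ones(C), size=N)  # stand-in for model outputs
    bank.maybe_record(step, probs)
```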
Summary: This paper introduces a novel type of pseudo-labels that hold significant potential for enhancing pseudo-labeling strategies and complementing existing methods. The authors further propose a metric to efficiently identify these two-phase labels. Extensive experiments on eight datasets demonstrate that the 2-phasic metric significantly boosts the performance of existing pseudo-labeling methods. Claims And Evidence: NA Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem of enhancing pseudo-labeling in semi-supervised learning. The 2-phasic metric and loss function effectively capture the unique characteristics of two-phase labels. The criteria used in the paper are standard and relevant, ensuring applicable results. Theoretical Claims: The paper supports its claims through experimental results and qualitative reasoning, rather than formal proofs. It demonstrates that incorporating two-phase labels improves model performance across multiple datasets and explains their effectiveness by their proximity to decision boundaries and ability to capture complex patterns. Experimental Designs Or Analyses: The experimental designs and analyses in the paper are sound and valid. It compares the 2-phasic metric against various baselines across diverse datasets, demonstrating notable performance gains. It also includes ablation studies to validate the contributions of different components. Overall, the experiments effectively substantiate the proposed method’s efficacy. Supplementary Material: Yes. The paper provides appendices that describe many of the details of two-phase labels. We mainly focus on parts B (Validation of LMO Entropy), D (2-phasic based Pseudo-labeling Algorithm), and E (Details of Experiments). Relation To Broader Scientific Literature: The paper’s core contributions are deeply connected to the broader fields of semi-supervised learning and training dynamics. 
Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths 1. The paper introduces novel two-phase labels and a tailored metric and loss function to enhance semi-supervised learning. 2. Extensive experiments show significant performance gains, especially with limited labeled data. 3. The approach is practical, easily integrated into existing frameworks and effective in scenarios with limited labeled data. Weaknesses 1. Requires careful parameter adjustment, which can be time-consuming. 2. Generalizability to other domains (e.g., text, time-series) is untested. Other Comments Or Suggestions: 1. Add equation numbers for all equations to enhance clarity and referencing. 2. Ensure figure references are accurate and well-positioned. 3. Enhance captions and explanations for figures and tables to better guide readers. Questions For Authors: 1. The paper identifies limitations, including parameter sensitivity and computational overhead. Could you discuss any ongoing or planned future work to address these issues? 2. Have you ever considered evaluating two-phasic metrics across various types of neural networks, such as RNNs and Transformers? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you so much for your detailed and constructive comments! We have carefully studied them and revised the paper accordingly. **Q1**: *The paper identifies limitations, including parameter sensitivity and computational overhead. Could you discuss any ongoing or planned future work to address these issues?* Response: Thank you for raising these important points. Below is our response to these limitations. 1. Hyperparameter determination As shown in Figure 6, although the model’s performance exhibits fluctuations with certain hyperparameter variations, the overall performance remains relatively stable. This indicates that some hyperparameters have low sensitivity and that there may be correlations among them. Therefore, in future work, we plan to reduce the number of hyperparameters by employing hyperparameter fusion strategies, aiming to simplify the deployment of the 2-phasic metric while retaining its performance advantages. 2. Computational Overhead Calculating the 2-phasic metric requires recording training dynamics but does not introduce significant computational overhead. Specifically, it introduces an additional space complexity of O(N|T|) and time complexity of O(NC|T|), where N is the number of unlabeled samples, C is the class count, and |T| is the number of recorded training dynamics. Empirical validation (e.g., C=100 and |T|=50 for CIFAR-100) shows that this complexity is not too high in real-world datasets. Moreover, the complexity limitations can be effectively mitigated through two strategies: 1. **Parallelization**: The recording of training dynamics and the computation of the 2-phasic metric are decoupled from the backpropagation process of the neural network. This allows for parallel execution, effectively hiding the additional computational latency within the training loop. 2. **Efficient Sampling for Memory Bank**: To further optimize memory usage, we propose an adaptive sampling strategy for memory bank construction. 
Rather than storing the training dynamics at all epochs, we selectively retain a representative subset via an adaptive sampling strategy based on the recorded training dynamics. **Q2**: *Have you ever considered evaluating the 2-phasic metric across various types of neural networks, such as RNNs and Transformers?* Response: The principle of the 2-phasic metric is to capture the transition of neural network learning patterns from simple to complex. The universal law of neural networks learning from easy to hard has been well established [1][2], endowing the 2-phasic metric with broad applicability across various neural network architectures. In our experiments, we used GCN and ViT as backbones for node classification and image classification tasks, respectively. The ViT represents a Transformer-based architecture. In the appendix, we additionally validated the effectiveness of the 2-phasic metric using GAT as the backbone for node classification. Moving forward, we plan to apply the 2-phasic metric to diverse data types and additional backbone architectures to further verify its generalizability. [1] Arpit, D., Jastrzebski, S., Ballas, N., Krueger, D., Bengio, E., Kanwal, M. S., Maharaj, T., Fischer, A., Courville, A., Bengio, Y., et al. A closer look at memorization in deep networks. In International Conference on Machine Learning, pp. 233–242, 2017. [2] Siddiqui, S. A., Rajkumar, N., Maharaj, T., Krueger, D., and Hooker, S. Metadata archaeology: Unearthing data subsets by leveraging training dynamics. In The Eleventh International Conference on Learning Representations, 2023. **Other Comments Or Suggestions**: 1. Ensure figure references are accurate and well-positioned. 2. Enhance captions and explanations for figures and tables to better guide readers. Thank you for your valuable suggestions. In response, we have made the following revisions: - We carefully reviewed all figure and table references, including the main paper and the Appendix.
- We examined the captions of all figures and tables and revised some of them, such as the caption of Table 2. --- Rebuttal Comment 1.1: Comment: Thanks for your responses. The authors have addressed my previous concerns.
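The memory-bank strategy described in the rebuttal above (retaining a representative subset of the |T| recorded training-dynamics snapshots rather than all of them) can be sketched as follows. The adaptive sampling criterion is not specified in the rebuttal, so this illustrative sketch substitutes a simple uniform stride; all names and shapes are hypothetical, not the authors' implementation.

```python
def subsample_dynamics(dynamics, k):
    """Reduce a memory bank of per-epoch training dynamics from O(N*|T|)
    to O(N*k) by keeping k representative epochs.

    `dynamics` maps epoch -> per-sample records (e.g., confidences for the
    N unlabeled samples). A uniform stride stands in for the adaptive
    sampling strategy proposed in the rebuttal.
    """
    epochs = sorted(dynamics)
    if len(epochs) <= k:
        return dict(dynamics)
    stride = len(epochs) / k  # > 1, so the kept indices are distinct
    kept = [epochs[int(i * stride)] for i in range(k)]
    return {e: dynamics[e] for e in kept}

# Toy memory bank: |T| = 50 recorded epochs, N = 4 unlabeled samples.
bank = {epoch: [0.5 + epoch / 100.0] * 4 for epoch in range(50)}
reduced = subsample_dynamics(bank, k=10)
print(len(reduced))  # 10
```

A real adaptive variant would score epochs by how much the recorded dynamics change and keep the most informative ones; the storage bound is the same.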
No Metric to Rule Them All: Toward Principled Evaluations of Graph-Learning Datasets
Accept (poster)
Summary: This paper is about evaluating graph datasets with the goal of finding good datasets. They argue that for good datasets, graph structure and node features should have two properties: (P1) they should be task-relevant; (P2) they should contain complementary information. To test this, they propose creating dataset perturbations and measuring their impact: on model performance (P1); or complementarity of structure and features (P2). By testing a variety of different datasets, the authors make suggestions for how certain datasets should be used in the future. ## update after rebuttal See my rebuttal comment: This is a good paper that should be accepted -- no changes to my review. Claims And Evidence: The claims are well supported by evidence. The theoretical claims / assumptions (P1, P2) make intuitive sense. The measurement of these properties is done by an expertly designed experiment. Methods And Evaluation Criteria: The proposed methods (of measuring the impact of P1 / P2) make sense. In particular, the authors base their claims on statistical significance testing, a practice which more machine learning papers should adopt. The methods and evaluation make sense for the goal. Datasets were chosen carefully to cover different domains. Finally, the way GNN hyperparameters were tuned seems remarkably fair (and compute-intensive). I have nothing to criticize in the overall experimental design. However, I feel the authors could have also investigated the `ZINC` (12k) dataset, as it is used in almost every GNN expressivity paper. Thus, the authors could strengthen and extend the impact of their work by also including this dataset. Theoretical Claims: I only skimmed the small proofs in the appendix as they do not have a large impact on this work. Experimental Designs Or Analyses: Yes, the design of measuring P1 and P2 makes sense. Supplementary Material: NA Relation To Broader Scientific Literature: The relation to existing GNN literature is fairly discussed.
Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strengths:** - (S1) Our understanding of graph datasets is severely lacking. This work represents an important first step toward addressing this. - (S2) The work is built on simple (and in my opinion correct) assumptions (P1, P2). The proposed methods are simple, elegant and easy to understand. - (S3) The evaluation procedure is extensive (many datasets), fair (extensive hyperparameter tuning) and is based on statistical significance testing. - (S4) Paper is well written, clear and has a good layout. **Weaknesses:** - (W1): Edge features. This work does not take edge features into account. - (W2): Code for GNNs. As far as I can tell, the released code does not include the training code for the GNNs? This makes it difficult to evaluate this part of the work. In particular, I am interested in whether GNNs were trained without taking edge features into account (as this would best align with the framework). - (W3): The Recommendations (Dataset Taxonomy). One of the key contributions of this work is its recommendations on how to move forward with how we use datasets (Section 4.3). This is in my opinion the most important, impactful and interesting part of this work. However, Section 4.3 is remarkably small and does not properly explain its recommendations. This is particularly problematic for the case of `realigning` datasets, as it is not clear to me what the authors mean by this (e.g. `better data modeling` is very abstract). - (W4): ZINC (see Methods And Evaluation Criteria). **To sum up,** this is a good paper that should be accepted. Other Comments Or Suggestions: - Definition 2.4 should define that $\varphi$ is a perturbation. It might be easier for the reader to notate a dataset $\mathcal{D}$ perturbed by $\varphi$ as $\varphi (\mathcal{D})$ instead of $\Phi (\mathcal{D})$.
- Definition 2.9: `lifts that take in either structure-based distances arising from G or feature-based distances arising from X`. To me it seems like you intend to create the metric space only from structure or only from the features. However, this definition also allows for a combination of both, contradicting the explanation. - You use Euclidean distance on node features as a metric. For categorical features, in my opinion this only makes sense if they are one-hot encoded; is that the case? - The justifications in the proof of Theorem 2.15 are excellent as they make it easier for the reader to understand the proof. Unfortunately, they reference line numbers that are not part of the proof. As an alternative you could put the reference number on top of the $=$ signs (e.g. $\stackrel{\text{7}}{=} 1- ||D_{d_F}||_{1,1} $). Questions For Authors: See previous points. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your perceptive comments and your support of our work. In the following, we address the points raised under “Weaknesses” (W1–W4) and “Other Comments Or Suggestions” (Q1–Q4). The point raised under “Methods and Evaluation Criteria” is addressed with W4. - **W1 (Accounting for edge features).** We agree that extending our framework to account for edge features would be valuable and will add this to “Future Work” (after ll. 431–434r) in the updated version of the manuscript. - **W2 (GNN training code).** - We did not originally include the GNN training code in the reproducibility materials (mostly because the framework itself is already quite hefty) but have now added it to a separate folder (`gnn-training`) in the anonymously shared repository. We will include this code in our public reproducibility package (instead of the stripped-down `src`). - Consistent with our measurement setup, edge features were not taken into account in our experiments, and we will clarify this in the updated manuscript. Note also that not all datasets have edge features to begin with, not all architectures can take them into account, and the PyG implementations of GAT, GCN, and GIN ignore them by default. - **W3 (Recommendations/taxonomy).** In a nutshell, our recommendation is “realign” for datasets that do not exhibit performance separability, yet exhibit favorable mode diversity (derived from mode complementarity). In this scenario, there exists interesting variation in the data that graph-learning models can be challenged to leverage in their predictions. However, the existing relationship between this variation and the prediction target, to the extent that it can be picked up by the architectures examined, does not provide a significant performance advantage over settings in which this relationship is deliberately destroyed. 
“Realignment”, then, collectively denotes several potential operations, including (1) changing the benchmark setting (e.g., datasets with good feature diversity, such as MUTAG or AIDS, could serve as benchmarks for graph-free ML methods), (2) changing the prediction targets (e.g., using different categories to classify discussion threads in Reddit-B and Reddit-M), (3) amending the graph structure (if the dataset lacks structural diversity; this connects to the discussion on graph rewiring), and (4) amending the features (if the dataset lacks feature diversity, as is the case for DD). We will expand the “Dataset Taxonomy” section to clarify our recommendations and their derivation in the updated manuscript. - **W4 (ZINC-12k dataset).** Thank you for this suggestion. Since the task associated with ZINC-12k is graph regression, which is commonly evaluated with MAE (rather than accuracy or AUROC), we will take the opportunity to showcase the ability of RINGS to handle graph-regression tasks and include not only ZINC-12k but also two other graph-regression datasets, QM9 and ZINC-250k, adding a separate one-column plot with the graph-regression results to complement Figure 3 (_experiments in progress_). Note that the ZINC-12k dataset (and also QM9 and ZINC-250k) are largely “saturated” (i.e., progress has stalled at low levels of MAE), and it is known that they do not always necessitate a graph-based model, such that we expect high overall performance levels and low performance separability. Thus, we additionally aim to provide results for more challenging molecular benchmark datasets like MOSES and GuacaMol. - **Q1 (Definition 2.4).** We originally adopted the uppercase notation to distinguish individual mode perturbations from perturbations applied to a whole dataset but will change this to lowercase following your suggestion. We will also add “and a mode perturbation $\varphi$" after “attributed graphs” in l. 114r of the revised manuscript. 
- **Q2 (Definition 2.9).** While our experiments focus on perturbations that affect either the graph structure or the node features, we deliberately designed our framework to be more general, accommodating also perturbations that would require joint lifts of the structure and the features (as these might help isolate model contributions or dataset characteristics in future work). We will clarify this in the updated manuscript. - **Q3 (Handling of categorical node features).** Correct. To match the setup encountered by most graph-learning models, we use the node-feature encodings provided by PyG, which one-hot-encodes categorical node features by default. However, we also appreciate that the one-hot handling of non-metric attributes in most standard graph-learning frameworks itself may not be ideal, and our framework can easily accommodate other distance metrics. - **Q4 (Reference numbers).** Thank you for this suggestion, which we will implement in the updated manuscript. We hope that our responses addressed your concerns and are happy to answer further questions. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my comments and especially for uploading the GNN code. I could indeed verify that edge features were not used. I remain of the opinion that this is a good paper that should be accepted. Congratulations to the authors for this cool work.
Summary: This paper focuses on the problem of evaluating benchmark datasets for the task of graph classification. In particular, they consider studying the importance that both structure and node features have on performance. They consider measuring this by perturbing the original graph, both in terms of the structure and the features, and measuring both the change in performance and the change in graph properties. They do so across a variety of common datasets. They find that for many datasets, perturbing either the features or structure has little effect on both performance and mode complementarity. They argue that this indicates a need for newer datasets where both the features and structure are necessary for optimal performance. Claims And Evidence: I think the claims made in this paper are well supported by the provided evidence. Methods And Evaluation Criteria: I think the proposed methods and evaluation are mostly well designed. Specifically, 1. I appreciate that the authors considered multiple types of perturbations. The different levels of perturbation are really helpful for accounting for the various ways that modifying both the features and structure can affect performance. 2. I like that the authors considered a wide variety of datasets, covering the common ones used for graph classification. This allows for a more comprehensive study. Furthermore, I like the comprehensiveness of the final results. However, I disagree with how the authors measured performance separability. In Figure 3, the authors compare the final performance by perturbation type. However, I believe that looking at the overall performance is actually not the correct way to check the effect on performance. In reality, two perturbations could differ greatly in "how" they achieve similar performance. In an extreme example, let's say we get a 50% accuracy on both the original dataset and one perturbation. Naively, it seems that the perturbation has no effect.
However, it could be that they are completely disjoint in which samples they correctly classify. As such, I think Figure 3 should be supplemented with an additional metric that considers the overlap in correctly classified samples. A simple way would be to calculate what \% of correct samples for the perturbed dataset were also correctly answered on the original dataset. That is, let $S$ be the set of correctly classified samples w/o perturbation. Furthermore, let $S_{\phi}$ be the same under some perturbation $\phi$. We can then calculate $|S_{\phi} \cap S| / |S_{\phi}|$, which will be $1$ when all the samples answered correctly under $\phi$ were also correct when using the original dataset. This is of course just one example; the metric itself can vary. I should be clear: I don't think this will have much of an effect at all on the final results and conclusions made in the paper. However, I believe that the current strategy of just comparing the overall performance is too coarse, and that it's important that we're sure that perturbing the graph either does/doesn't affect the sample-level performance. Theoretical Claims: Yes, all of them. Experimental Designs Or Analyses: I think the experimental design and analyses are generally good. However, there are two instances where I think they can be improved: 1. One area I think can be improved is which models were used. Currently, only basic GNNs are used in the study. However, many papers have shown that more expressive models can achieve much better performance than basic GNNs on graph-level tasks. More recently, graph transformers have shown very promising performance. I think incorporating 1 or 2 of these methods, such as [1], would enhance the conclusions made by this study. This is especially true when perturbing the graph structure, as these more recent methods have shown an enhanced ability to distinguish more complicated graph structural patterns. 2.
It's unclear to me what it means when performance separability and mode diversity aren't aligned. For example, for both Reddit datasets, the performance separability is very low, indicating that they are poor datasets. However, you then show that the mode diversity is quite high for both structure and features. The authors argue that such datasets may be "misaligned", but it's unclear what this means. I think the authors need to answer: (a) What is meant by misaligned? (b) How can we better align such datasets? (c) If the mode diversity is high, then why is the performance separability so low? I'd argue that this last question is most important, as to me it suggests that either: (a) the current methods for measuring $D_F$ and $D_S$ may be lacking (b) or the disparity may be due to the methods used. However, if the authors have another idea for the cause I'd be interested. [1] Rampášek, Ladislav, et al. "Recipe for a general, powerful, scalable graph transformer." Advances in Neural Information Processing Systems 35 (2022): 14501-14515. Supplementary Material: I looked at some of the additional results. Relation To Broader Scientific Literature: I think this paper relates to a lot of work in the area of Graph ML. In particular, I really like the motivation and the idea behind this study. I think that far too often we neglect the quality of our benchmark datasets, even though they are essential for our understanding of the strengths and weaknesses of proposed methods. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: 1. I think you can be clearer in the introduction and abstract that you're only focusing on graph classification.
This isn't a weakness or anything; just that, reading both, a reader may come away with the impression that you study a variety of different graph tasks (as far as I can tell, it is only mentioned at the end of the intro that the focus is graph classification; my apologies if I missed something). Questions For Authors: 1. In Figure 3, the accuracy of MolHIV seems to stay unchanged at 100% (or close) across the original dataset and all perturbations. However, the AUC is much lower and varies a lot. I'm curious if the authors have an idea as to what may be causing this discrepancy, because for all the other datasets, AUC and accuracy tend to be in lockstep with one another. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your encouraging feedback. We address your points by the section in which they were raised. - **Methods And Evaluation Criteria: Measuring performance separability.** We designed performance separability mirroring model-centric evaluation practices to facilitate adoption but agree that a fine-grained perspective will provide additional insights. Following your suggestion, we will include an experiment that examines performance changes at the level of individual graphs in the updated manuscript. We did not log test-set performance at the level of individual graphs in our original experiments but are recomputing the necessary evaluations (*experiments in progress*). - **Experimental Designs or Analyses.** - **Coverage of GNN models.** Following your suggestion, we will include results for the GPS and Graphformer models in the updated manuscript (*experiments in progress*). - **Interpreting the discrepancy between performance separability and mode diversity.** **(a)** We say that a dataset is “misaligned” if it does not exhibit performance separability, yet exhibits favorable mode diversity (derived from mode complementarity). **(b)** “Realignment” collectively denotes several potential operations, including (1) changing the benchmark setting (e.g., datasets with good feature diversity, such as MUTAG or AIDS, could serve as benchmarks for graph-free ML methods), (2) changing the prediction targets (e.g., using different categories to classify discussion threads in Reddit-B or Reddit-M), (3) amending the graph structure (if the dataset lacks structural diversity; this connects to the graph-rewiring discussion), and (4) amending the features (if the dataset lacks feature diversity, e.g., for DD). **(c)** Our experiments show that *higher mode complementarity* is associated with *higher performance*; we do not make any claims about the relationship between *mode diversity* and *performance separability*. 
Since performance separability assesses the task-specific performance gap between a dataset and its perturbations, and mode diversity measures the task-agnostic variation contained in the modes of an individual dataset, we would not expect high mode diversity to be consistently associated with performance separability (e.g., a dataset could have high structural and feature diversity but low mode complementarity, which could lead to “complete graph” and “complete features” performing on par with the original dataset, eliminating performance separability). *Misalignment* occurs when there is interesting variation in the data that models could leverage in their predictions (high mode diversity), but the existing relationship between this variation and the prediction target does not provide a significant performance advantage over settings in which this relationship is deliberately destroyed (lack of performance separability). While the lack of performance separability could also be due to limitations of the models employed (i.e., they may not be expressive enough), given our comprehensive measurement setup, we deem it more likely that the relationship between the variation in the data and the prediction target is strained, prompting the need for *realignment*. However, disentangling in detail the model-related and data-related factors contributing to a lack of performance separability would be a valuable next step to further enhance the RINGS framework. We will include these clarifications in the “Dataset Taxonomy” section of our updated manuscript. - **Other Comments or Suggestions: Clarifying the focus on graph classification.** We currently mention the focus on graph classification in ll. 78–80r, ll. 245–247r, ll. 430–433l, and ll. 431–435r. With the inclusion of graph-regression datasets suggested by Reviewer 5Mv7, we will now have two graph-level tasks in our experiments. Hence, we will make the following amendments to clarify our scope: - Abstract ll. 
42/43l: Insert “on graph-level tasks” after “extensive set of experiments”. - Introduction l. 80r: Rewrite “extensive experiments on real-world graph-classification datasets” to “extensive experiments, focusing on real-world datasets with graph-level tasks.” - **Questions for Authors: Accuracy and AUROC on MolHIV.** MolHIV is highly imbalanced, with only 1443 out of 41127 graphs labeled 1 (and the rest labeled 0), leading to consistently high accuracy. AUROC is the officially recommended evaluation metric for this dataset; we include accuracy for completeness only and will note this in the updated manuscript. MolHIV also highlights the need for careful evaluation-metric choices (e.g., given the very skewed class distribution, AUPRC may be preferable over AUROC). We believe that the improvements to our manuscript prompted by your comments will further strengthen our submission and are happy to address any further questions you may have. --- Rebuttal Comment 1.1: Comment: Thank you. I appreciate the clarifications and the promise of including additional experiments (transformers + individual graph performance). Given that both experiments are still in progress (and they were my main concerns), I will keep my positive score for now. Please let me know if the experiments finish before the end of the rebuttal stage and I will adjust my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for confirming your support of our work. We appreciate your sustained interest in our additional experiments, which we believe will strengthen our evidence base but leave the main message of our paper unaltered. While we will happily include our extended results in the updated version of our manuscript, we also value diligence over speed in the design and execution of our experiments. 
At this point, we have graph-level performance logs for a subset of our original configurations, and the additional graph-learning models we decided to include following your suggestion are still tuning. We share our preliminary results for the graph-level performance comparisons in the newly added `rebuttal` folder in the anonymously shared repository. In these experiments, for a given dataset, we measure the average similarity between the sets of samples correctly classified by two different (mode perturbation, architecture) configurations A and B, using as our similarity measure either the asymmetric score you proposed (suffix `asymmetric`, dividing by the total number of correct classifications by A) or the Jaccard similarity (suffix `jaccard`, dividing by the cardinality of the union of correct classifications by A and B). Our preliminary results include (1) all (mode perturbation, architecture) configurations for MUTAG averaged over 17 seeds, (2) all (mode perturbation, architecture) configurations for Proteins averaged over 15 seeds, and (3) a subset of the (mode perturbation, architecture) configurations for NCI1 averaged over 10 seeds. For the updated manuscript, we will report statistics for all (mode perturbation, architecture) tuples associated with each dataset, average over 100 random seeds, and include the precise level and variation of our estimates in separate tables (the necessary computations are ongoing). Using the complete graph-level performance data, we will also be able to identify the individual graphs that are responsible for the performance fluctuations we observe (reducing the "accuracy similarity" defined above), and to examine how the mode complementarity of these graphs impacts their "classifiability" depending on the mode-complementarity distribution of the training (and validation) data. Our work aims to, inter alia, raise the standards of experimental hygiene in the graph-learning community. 
As we are prudently implementing your suggestions, we hope that our preliminary results convinced you that our original results will hold also in light of our additional experiments, and we are looking forward to including our extended analyses in the updated version of our manuscript.
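For concreteness, the two similarity measures discussed in this thread (the asymmetric overlap proposed by the reviewer and the Jaccard similarity the authors also report) can be sketched as follows; the toy sets of correctly classified graph IDs are illustrative, not results from the paper.

```python
def asymmetric_overlap(s_orig, s_pert):
    """|S_pert ∩ S| / |S_pert|: fraction of samples correctly classified
    under the perturbation that were also correct on the original dataset."""
    s_orig, s_pert = set(s_orig), set(s_pert)
    if not s_pert:
        return 0.0
    return len(s_pert & s_orig) / len(s_pert)

def jaccard(s_orig, s_pert):
    """|S_pert ∩ S| / |S_pert ∪ S|: symmetric variant of the same idea."""
    s_orig, s_pert = set(s_orig), set(s_pert)
    union = s_orig | s_pert
    if not union:
        return 1.0
    return len(s_orig & s_pert) / len(union)

# Toy example: graph IDs classified correctly with / without a perturbation.
correct_original = [0, 1, 2, 3, 5]
correct_perturbed = [1, 2, 3, 7]

print(asymmetric_overlap(correct_original, correct_perturbed))  # 0.75
print(jaccard(correct_original, correct_perturbed))             # 0.5
```

Both measures equal 1 only when the two configurations agree on which samples they classify correctly, which is exactly the fine-grained information that aggregate accuracy hides.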
Summary: This paper introduces a novel framework, Rings, for evaluating the quality of graph-learning datasets by quantifying differences between the original dataset and its perturbed representations. The authors propose two key metrics: performance separability and mode complementarity, which assess the relevance and complementary nature of the graph structure and node features for a given task. The framework is applied to 13 popular graph-classification datasets, revealing significant insights into the quality and suitability of these datasets for graph-learning tasks. The paper is well-structured, methodologically sound, and provides actionable recommendations for improving benchmarking practices in graph learning. Claims And Evidence: The claims are well supported by the empirical studies. Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate. Theoretical Claims: N.A. Experimental Designs Or Analyses: Experiments follow standard procedures in graph learning. Supplementary Material: NA. Relation To Broader Scientific Literature: This submission can inspire the graph-learning area, especially its understanding of datasets, which may promote the design of graph models. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: Strengths: S1. The Rings framework offers a principled approach to dataset evaluation that goes beyond traditional model-centric evaluations. S2. The extensive experiments on 13 datasets demonstrate the practical utility of the framework and provide valuable insights into the quality of these graph datasets. S3. The paper is well-written, with clear explanations of the methodology and rigorous experimental design. Concerns: C1. The current framework is primarily focused on graph-level tasks. Extending it to node-level and edge-level tasks could further enhance its applicability. C2.
While the paper provides some theoretical insights, a more comprehensive theoretical analysis of the properties of several key concepts could strengthen the work. Other Comments Or Suggestions: NA. Questions For Authors: In addition to C1 and C2, how does the RINGS framework affect (promote) graph-model design (under node-/edge-/graph-level tasks)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and your support of our work. In the following, we address C1 and C2 as well as your additional question (“Q1”). - **C1 (restriction to graph-level tasks).** As noted in “Discussion→Future Work” (ll. 431–435r), we see extending our framework to node-level and edge-level tasks as an interesting direction for future work. While assessing performance separability for such tasks will essentially work out-of-the-box, extending the notion of mode complementarity to these settings will require considerable additional research. - **C2 (additional theoretical insights).** As stated in “Discussion→Future Work” (ll. 427–431r), we agree that further theoretical insights are desirable. However, we estimate that the corresponding analyses will merit a separate paper. We will provide some more guidance on how to arrive at additional theoretical insights (e.g., by taking an information-theoretic perspective) in the “Future Work” paragraph of the updated manuscript. - **Q1 (model-design guidance).** We believe that RINGS could promote model design in several ways. 1. As highlighted by our dataset taxonomy, performance separability can eliminate benchmark datasets that do not require the specific capabilities of graph-learning models to integrate information from both modes, helping the community direct model development to where it is most needed. 2. Mode-complementarity and mode-diversity measurements could guide architectural choices for a specific task on a specific dataset (e.g., whether a model should focus on leveraging the graph structure or the node features). 3. Mode-complementarity and mode-diversity measurements could inform model designs that incorporate data-centric components to adaptively enhance performance (both at the level of individual observations and at the level of entire datasets)–e.g., by preprocessing the data to increase mode complementarity (which is correlated with performance). 
We will add these suggestions to the updated manuscript but note that 2\. and 3\. merit further experimental confirmation. We hope that these responses addressed your concerns and are happy to answer any further questions you may have.
RelGNN: Composite Message Passing for Relational Deep Learning
Accept (poster)
Summary: The paper proposes a graph neural network with an attention mechanism, called RelGNN, for predictive tasks on relational tables. The paper introduces atomic routes based on primary–foreign-key connections and designs a composite message-passing scheme using atomic routes. RelGNN achieves good results on RelBench, a widely accepted benchmark for deep learning on relational data. ## update after rebuttal My questions are answered. I maintain my original score. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes, the proposed method is evaluated on RelBench, a widely accepted benchmark for deep learning on relational data. Theoretical Claims: Yes. I did read and understand the equations in the paper. Experimental Designs Or Analyses: Yes, I did. Experimental evaluations in the paper are conducted on a state-of-the-art benchmark. Supplementary Material: I did not find supplementary material, e.g., source code/implementations, for this paper. Relation To Broader Scientific Literature: The key contributions of the paper are atomic routes and composite message passing with atomic routes, which are simple but efficient. In general, simple but efficient approaches are easier to deploy into systems. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths S1: The authors propose a simple but efficient solution for learning over relational tables. S2: RelGNN has good performance on all tasks. S3: The paper reads well. Weakness W1: Limitations of RelGNN are not discussed. W2: Implementations are not open-sourced. Other Comments Or Suggestions: Personal comment: I could not check some implementation details, as I did not find the source code. Questions For Authors: Q1: Will RelGNN degenerate on relational tables with many-to-many relationships? For example, for movie and actor tables, one actor can appear in multiple movies and one movie can feature many actors.
Q2: The claim made in line 233 - 234 "This ensures both broad applicability and scalability across diverse datasets". Can authors explain what "broad applicability and scalability" means? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging RelGNN’s efficiency, strong performance, and the clarity of our writing. We also appreciate the constructive feedback and address each point below: > **Regarding Discussion on the Limitations (W1)** The reviewer mentions that the limitations of RelGNN are not discussed. In fact, we did discuss specific limitations within the experiment sections (e.g., discussing limitations on recommendation tasks due to the constraints of the inherited framework and prediction head from RDL; analyzing potential reasons for performance variations across tasks). We recognize these discussions and insights could be more accessible. In the revised version, we will add a consolidated limitations section. This reorganization will improve clarity without requiring substantial new content, as the critical analysis is already present in our original manuscript. > **Regarding Open-Sourcing Implementations (W2)** This is a good point and we acknowledge the importance of reproducibility and transparency. We've now made our source code available at https://anonymous.4open.science/r/RelGNN. > **Regarding Handling Many-to-Many Relationships (Q1)** The reviewer inquires about RelGNN’s ability to handle many-to-many relationships. The answer is an unambiguous yes. Many-to-many relationships are common in relational databases and are routinely modeled using bridge tables. For instance, in the rel-hm dataset, a customer may purchase multiple articles, and an article may be purchased by many customers; similarly, in the rel-f1 dataset, a driver participates in multiple races while each race includes multiple drivers. Such relationships are handled through bridge tables (e.g., transaction tables in rel-hm dataset) which is standard practice in relational databases. 
Incorporating these bridge tables is necessary by the very definition of a relational database: since primary keys must uniquely identify rows, many-to-many relationships require intermediate tables with foreign keys pointing to both related entities. In the reviewer's movie-actor example, a valid relational database design requires more than just "actors" and "movies" tables. To represent this many-to-many relationship correctly, a bridge table (commonly called "appearances" or "cast") must be introduced. This bridge table would contain foreign keys referencing the primary keys of both the "actors" and "movies" tables, thereby preserving referential integrity while enabling the many-to-many relationship. In fact, this prevalence of bridge-node-modeled many-to-many relationships distinguishes relational data graphs from general heterogeneous graphs, which motivated RelGNN's design to specifically leverage this unique structure, leading to improved performance compared to general heterogeneous GNNs. We thank the reviewer for this great question and will clarify this point in the revised manuscript. > **Regarding Clarification on “Broad Applicability and Scalability”** "Broad applicability and scalability" refers to two key advantages: - Broad Applicability: RelGNN's atomic routes can be automatically extracted without domain-specific knowledge (unlike traditional meta-path approaches), making it applicable to diverse relational datasets beyond Relbench. - Scalability: Atomic routes are computed at the schema graph level—a graph where nodes represent table types (which typically number in the tens even for large databases) rather than the data graph level, where nodes represent individual entities (which can number in the millions). This design makes our method agnostic to the number of entities, allowing RelGNN to scale efficiently to very large databases where the number of entities grows rapidly while table types remain relatively stable. 
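As a toy illustration of the movie-actor example above, the sketch below builds a hypothetical three-table schema (the table and key names are invented for illustration) in which a "cast" bridge table holds foreign keys into both entity tables, then enumerates schema-level (source, bridge, destination) routes. This is a simplified reading of the atomic-route idea, not the paper's exact extraction algorithm.

```python
# Toy schema for the movie-actor example: the bridge table "cast" holds
# foreign keys into both entity tables. All names are illustrative only.
schema = {
    "actors": {"pk": "actor_id", "fks": {}},
    "movies": {"pk": "movie_id", "fks": {}},
    "cast":   {"pk": "cast_id",  "fks": {"actor_id": "actors",
                                         "movie_id": "movies"}},
}

def atomic_routes(schema):
    """For every table with >= 2 foreign keys (a bridge table), emit one
    (src, bridge, dst) route per ordered pair of referenced tables --
    a simplified, schema-level reading of the atomic-route idea."""
    routes = []
    for table, spec in schema.items():
        targets = sorted(spec["fks"].values())
        for src in targets:
            for dst in targets:
                if src != dst:
                    routes.append((src, table, dst))
    return routes

print(atomic_routes(schema))
# → [('actors', 'cast', 'movies'), ('movies', 'cast', 'actors')]
```

Note that the extraction runs over table *types* only, so its cost depends on the schema size, not on how many rows the tables contain.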
We thank the reviewer for bringing this up and will clarify this explanation in the revised manuscript. In summary, we - Note that limitations were discussed in our experiment section and commit to a more explicit discussion in the revised version. - Make our source code available. - Clarify that many-to-many relationships are handled via bridge nodes, a necessity in relational database design. - Explain "broad applicability and scalability" by highlighting RelGNN’s independence from human knowledge and its efficiency as the database scales. We hope these clarifications address the reviewer’s concerns and will update the manuscript when permitted. --- Rebuttal Comment 1.1: Comment: Thanks. I have no further questions. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the prompt response and are glad to hear that all questions have been addressed. We sincerely appreciate the reviewer’s thoughtful feedback in helping us improve the quality of our paper. If the reviewer finds it appropriate, we would be grateful for a re-evaluation and a potential update to the rating in light of our response.
Summary: The paper introduces RelGNN, a graph neural network (GNN) framework specifically designed for Relational Deep Learning (RDL), the task of doing end-to-end predictive modeling directly on relational databases (multiple tables linked by primary/foreign keys). RelGNN leverages “atomic routes,” which reflect short tripartite or star-shaped connectivity among tables that have multiple foreign keys. These are sequences (or hyperedges) of node-types that facilitate more direct single-hop message passing between relational tables that are semantically connected. ## update after rebuttal My concerns are addressed. Considering that I gave a relatively high score in the review, I will just maintain my score. Claims And Evidence: Claim 1: RelGNN better models relational DB structure through “atomic routes” than standard heterogeneous GNNs. Evidence of Claim 1: The authors highlight the difference between typical “meta-path” approaches in heterogeneous graphs and the new approach of extracting routes directly from primary–foreign key constraints. The authors then show strong empirical improvements (≥15/30 tasks with >4% relative gain) on the RELBENCH benchmark. Claim 2: Gains are larger on DBs with more complicated foreign-key structures (“bridge” or “hub” shapes). Evidence of Claim 2: They measure the improvements by listing results across all tasks and highlight bigger improvements in “complex” schemas (e.g. rel-f1). Methods And Evaluation Criteria: Overall, the proposed methods and metrics make sense for the targeted RDL tasks. Theoretical Claims: The paper does not heavily focus on new theoretical proofs. The main conceptual claim is that “atomic routes” in relational data can be grouped for single-step message passing. The authors do not provide formal theorems, but they do discuss the correctness of capturing needed information in one pass. Experimental Designs Or Analyses: They follow the standard approach from RELBENCH. 
Each of the 30 tasks is tested with a consistent data split (temporal). Baselines are run with the same node embedding initialization and the same sampling method. Besides, the authors highlight that improvements are especially large when the relational schema is complicated. That matches the paper’s central premise. The experiments appear valid for the stated goal. There do not appear to be unusual or confounding design choices in their methodology. Supplementary Material: Yes, I read all of the supplementary materials as the main paper text references Appendix sections describing (i) data/tables from the 7 real-world DBs, (ii) tasks, and (iii) further architecture or training details. Relation To Broader Scientific Literature: The authors situate their approach relative to Relational Deep Learning (Fey et al., 2024) and the concept of knowledge graphs in heterogeneous GNNs. The difference is that knowledge graphs revolve around semantic relation types (like “author-of,” “works-in,” etc.), whereas relational DB edges come strictly from primary–foreign key constraints. They also place it in context with meta-path approaches for heterogeneous GNNs (like HAN, R-GCN). However, they argue that meta-paths require domain knowledge, whereas they systematically build “atomic routes” from DB schema keys. To conclude, they do address the main relevant lines of prior work on heterogeneous graph modeling and knowledge graph completions. Essential References Not Discussed: Given the scope, they mostly reference the standard GNN and relational DB learning papers. Possibly referencing more about “multi-hop GNN minimization / skipping aggregator nodes” from other design patterns might help, but not necessarily “essential.” No glaring missing references stand out. Other Strengths And Weaknesses: Strengths: - Novel approach to dealing with multi-foreign-key bridging nodes, which is common in real relational DBs. - Relatively strong results on a large set of real tasks from RELBENCH. 
- Minimal overhead or domain knowledge needed: the “atomic route” concept is systematically derived from foreign key constraints. Weaknesses: - May not handle self-joins or self-loop tables elegantly (they mention rel-stack’s performance is not improved as much). - The proposed method focuses on composite message passing but does not discuss integration with more advanced GNN aggregator designs or advanced time encodings. Other Comments Or Suggestions: 1. A small ablation might be helpful: e.g. how does the removal of the composite step or the removal of atomic routes degrade performance? 2. The authors could incorporate or discuss how “atomic routes” would handle self-referencing foreign keys or cyclical references. 3. Please check the formatting of the tables and the margins on the last page of the paper, as they are poorly presented. Questions For Authors: 1. For tasks like rel-stack with self-joins (post to post), do you see a better representation approach than treating them like normal foreign-key edges? 2. For extremely large DBs, do atomic routes lead to any complexities in memory usage or graph construction time? 3. For tables with many foreign keys, do you risk a large blow-up in “composite messages” to handle all pairwise or triple connections? Ethical Review Concerns: N.A. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. We appreciate the recognition of our method’s novelty, strong results, and the clarity of our experimental setup. We address each point below: > **Regarding Handling Self-Loop Tables (W1, C2, Q1)** The reviewer suggests discussing solutions for self-loop tables, which we previously identified as a potential limitation. Although self-loop tables are rare in relational databases—only 1 out of 7 datasets in Relbench—we agree that discussing potential solutions is important. We experimented with adding positional encoding to help the model distinguish messages from different types of tables, which resulted in a 2% improvement on both entity classification tasks in the rel‑stack dataset. We will include a discussion in the revised version. > **Regarding Discussion of Integrating Other Components (W2)** The reviewer suggests discussing advanced GNN aggregators and time encodings. We find this an interesting point and will include a discussion on how our framework can be flexibly integrated with these components. We note that incorporating such elements is complementary and orthogonal to our primary focus on enhancing the message passing design for RDL. To ensure a fair comparison, we kept other components the same as prior work. > **Regarding Ablation Study (C1)** We appreciate the suggestion and incorporate an ablation study to further examine the impact of atomic routes. We instantiated Eq5 of RelGNN with GraphSAGE, which is equivalent to GraphSAGE + atomic routes. Additionally, we removed atomic routes from RelGNN, reducing it to a heterogeneous GAT. As shown in Table 1-3, the performance gap in both cases (GraphSAGE vs RelGNN w/ GraphSAGE; GAT vs RelGNN) highlights the effectiveness of atomic routes. We will include this ablation study in the revised version. **Table 1. 
Classification (ROC-AUC, ↑)** |Task|GraphSAGE|RelGNN w/ GraphSAGE|GAT|RelGNN| |-|-|-|-|-| |user-churn|70.42|70.90|63.21|70.99| |item-churn|82.81|82.94|69.99|82.64| |user-visits|66.20|66.80|64.82|66.18| |user-clicks|65.90|66.72|65.85|68.23| |user-repeat|76.89|78.82|68.24|79.61| |user-ignore|81.62|85.58|82.04|86.18| |driver-dnf|72.62|74.77|70.26|75.29| |driver-top3|75.54|84.86|60.03|85.69| |user-churn|69.88|70.29|64.72|70.93| |user-engagement|90.59|90.70|89.59|90.75| |user-badge|88.86|88.99|84.51|88.98| |study-outcome|68.60|69.34|66.19|71.24| **Table 2. Regression (MAE, ↓)** |Task|GraphSAGE|RelGNN w/ GraphSAGE|GAT|RelGNN| |-|-|-|-|-| |User-ltv|14.313|14.240|16.626|14.230| |Item-ltv|50.053|48.282|58.902|48.767| |Ad-ctr|0.041|0.037|0.043|0.037| |User-attendance|0.258|0.238|0.263|0.238| |Driver-position|4.022|3.792|4.268|3.798| |Item-sales|0.056|0.053|0.079|0.054| |Post-votes|0.065|0.065|0.068|0.065| |Study-adverse|44.473|45.531|46.026|44.461| |Site-success|0.400|0.354|0.393|0.301| **Table 3. Recommendation (MAP, ↑)** |Task|GraphSAGE|RelGNN w/ GraphSAGE|GAT|RelGNN| |-|-|-|-|-| |user-item-purchase|0.74|0.79|0.44|0.77| |user-item-rate|0.87|0.91|0.78|0.92| |user-item-review|0.47|0.57|0.29|0.52| |user-ad-visit|3.66|3.95|1.99|3.94| |user-item-purchase|2.81|2.82|1.88|2.81| |user-post-comment|12.72|13.9|11.97|14| |post-post-related|10.83|11.79|10.71|11.66| |condition-sponsor-run|11.36|11.62|10.43|11.55| |site-sponsor-run|19.00|19.17|17.90|19.14| > **Regarding Format (C3)** We thank the reviewer for pointing this out and will revise the paper. > **Regarding Complexity in Large DBs (Q2)** The reviewer asks if the complexity of computing atomic routes increases with larger DBs. The answer is no. Atomic routes are computed at the schema graph level—where nodes represent table types, which typically number in the tens—even for very large databases. 
This design choice makes our method agnostic to the number of individual entities (which can be in the millions), allowing RelGNN to scale efficiently as databases grow while the number of table types remains relatively stable. We will clarify this in the revised version. > **Regarding Handling Tables with Many Foreign Keys (Q3)** The reviewer is concerned about complexity when handling tables with many foreign keys. We employ a sampling method (following the standard RDL implementation), so only a subset of nodes around the seed node is sampled. Consequently, we only process foreign keys within the sampled subset. The sample size can be controlled to ensure that the number of connections remains manageable. We will explain this in the revised version. In summary, we provide a potential solution for self-loop tables, discuss integration with advanced techniques, provide an ablation study that confirms the effectiveness of atomic routes, and clarify how RelGNN is capable of handling large and complex DBs efficiently. We hope these clarifications address the reviewer’s concerns and will update the manuscript when permitted. --- Rebuttal Comment 1.1: Comment: Many thanks for the rebuttal and extra experiments. Yet I am still concerned about the changes that will be made to the paper. For example, Regarding Handling Self-Loop Tables (W1, C2, Q1), and Regarding Discussion of Integrating Other Components (W2). Although the authors mentioned they will include a discussion in the paper, the discussion details are unclear in this rebuttal. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the prompt response and for acknowledging the additional experiments. Since we cannot revise the paper at this stage, we provide the exact content we plan to include in the revision to address the concerns. 
> **Handling Self-Loop Tables** (W1, C2, Q1) We'll include: Although self-loop tables are rare in relational databases (only 1 out of 7 datasets in RelBench), improving model handling of such cases is worthwhile. The key to addressing this challenge is enhancing the model's ability to distinguish messages coming from the same table type (for self-loop tables) versus those from different types (for non-self-loop tables). To address this, we propose incorporating relative positional encoding (RPE) over the schema graph (where nodes are table types and edges are primary-foreign key relations). RPE helps the model identify whether two tables are of the same type and also offers additional benefits. Since the schema graph defines the database structure at a macro level, it can provide global relational context that complements the local structural information captured by GNNs on the data graph. Moreover, schema graphs are typically small (with only tens of nodes even in large databases) and static, so the RPE can be precomputed once with negligible overhead and reused throughout training and inference. To evaluate, we implemented an RPE method based on [1]: eigen-decompose the Laplacian of the schema graph $L=V\text{diag}(\lambda)V^{\top}$, apply $m$ element-wise MLPs $\phi_k(\cdot)$ to obtain $Q[:,:,k]=V\text{diag}(\phi_k(\lambda))V^{\top}$, for $k\in[m]$, resulting in a tensor $Q\in\mathbb{R}^{N\times N\times m}$, and project via an MLP $\rho$ to obtain $RPE=\rho(Q)\in\mathbb{R}^{N\times N\times d}$, where $N$ is the number of node types and $d$ is the dimension of node embeddings. For a message from node type $i$ to $j$, we update the original message $m$ as $m' = m + \alpha \cdot RPE[i, j]$ with learnable $\alpha$. This resulted in a 2% improvement on both classification tasks in rel-stack, which contains self-loops. 
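A minimal NumPy sketch of the Laplacian-based RPE construction described above. Simple random affine maps stand in for the learnable element-wise MLPs $\phi_k$, and the final projection $\rho$ is omitted; all names and parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def schema_rpe(adj, m=4, seed=None):
    """Sketch of the relative positional encoding over a schema graph.

    adj  : (N, N) 0/1 adjacency of the schema graph (N = number of table types).
    m    : number of element-wise spectral filters phi_k; here random affine
           maps stand in for the learnable MLPs of the description above.
    Returns Q of shape (N, N, m); a learnable projection rho would map it
    to (N, N, d) in a real model.
    """
    rng = np.random.default_rng(seed)
    deg = np.diag(adj.sum(axis=1))
    L = deg - adj                             # graph Laplacian
    lam, V = np.linalg.eigh(L)                # L = V diag(lam) V^T
    N = adj.shape[0]
    Q = np.empty((N, N, m))
    for k in range(m):
        w, b = rng.normal(), rng.normal()     # placeholder for phi_k
        Q[:, :, k] = V @ np.diag(w * lam + b) @ V.T
    return Q

# Toy 3-table schema: two entity tables linked through one bridge table.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
Q = schema_rpe(adj, m=4, seed=0)
print(Q.shape)  # → (3, 3, 4)
```

Because the schema graph is tiny and static, `Q` (and the projected RPE) can be computed once up front and reused for every message-passing step.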
> **Integrating Other Components** (W2) We'll include: We focus on improving the core design of message passing, but the RelGNN framework is flexible and accommodates more advanced components. **Advanced GNN Aggregators**. RelGNN allows easy integration of different GNN aggregators by re-instantiating the AGGR operation in Eq. 5. For example, PNA [2] improves expressiveness by combining multiple aggregators and degree-scalers. It can be integrated into RelGNN by instantiating Eq. 5 with $AGGR(h_{dst}^{(l)},\{h_{fuse}^{(l)}\})=W_{\text{proj}}h_{dst}^{(l)}+\sum_{fuse\in \mathcal{N}(dst)} M(h_{fuse}^{(l)})$ where $M(\cdot)$ is the PNA operator. Model-agnostic aggregators can also be integrated on top of RelGNN. For instance, Jumping Knowledge Network [3] improves performance by adaptively combining representations from multiple layers. This can be implemented by collecting outputs from each RelGNN layer and passing them to PyG’s `JumpingKnowledge` module, with the result used as input to the final prediction layer. **Effective Time Encodings**. Many RDL tasks involve temporal prediction, and various temporal encodings can be incorporated as additional node features for RelGNN. For example, Time2Vec [4] maps timestamps $t$ to $\mathbb{R}^d$ via $\text{Time2Vec}(t)[i]=\sin(\omega_i t+\phi_i)$ with learnable parameters $\omega_i$ and $\phi_i$. GraphMixer [5] introduces a fixed time encoding $t \rightarrow \cos(t \omega)$, where $\omega=\{\alpha^{-\frac{i-1}{\beta}}\}_{i=1}^d$ and the choice of $\alpha,\beta$ depends on the dataset's time range. These techniques are compatible with RelGNN and introduce minimal overhead. > **Other Changes** (C1, C3, Q2, Q3) We will - include ablation results in Section 4 (C1) - fix formatting issues (C3) - clarify how RelGNN scales to large DBs (Q2) and handles many foreign keys (Q3) in Section 3.2, elaborating on atomic routes and sampling strategy as described in the original rebuttal. 
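The two time encodings mentioned above can be sketched as follows. Parameters are fixed for illustration; in Time2Vec the $\omega_i, \phi_i$ would be learnable, and the specific `alpha`, `beta` values here are assumptions, not values from the papers.

```python
import numpy as np

def time2vec(t, omega, phi):
    """Time2Vec-style encoding: maps a scalar timestamp t to R^d via
    sin(omega_i * t + phi_i); omega and phi are learnable in practice."""
    return np.sin(omega * t + phi)

def graphmixer_encoding(t, d=8, alpha=10.0, beta=4.0):
    """Fixed GraphMixer-style encoding: cos(t * omega) with
    omega_i = alpha^(-(i-1)/beta), i = 1..d."""
    i = np.arange(1, d + 1)
    omega = alpha ** (-(i - 1) / beta)
    return np.cos(t * omega)

enc = graphmixer_encoding(3.0, d=8)
print(enc.shape)  # → (8,)
```

Either encoding yields a fixed-size vector per timestamp that can simply be concatenated to (or added to) a node's input features before message passing.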
We hope these specific additions address the reviewer’s concerns and demonstrate our commitment to improving the final version. References: [1] Huang et al. "On the stability of expressive positional encodings for graphs." ICLR, 2024. [2] Corso et al. "Principal neighbourhood aggregation for graph nets." NeurIPS, 2020. [3] Xu et al. "Representation learning on graphs with jumping knowledge networks." ICML, 2018. [4] Kazemi et al. "Time2vec: Learning a vector representation of time." arXiv:1907.05321. [5] Cong et al. "Do we really need complicated model architectures for temporal networks?." ICLR, 2023.
Summary: The manuscript proposes RelGNN, a graph neural network framework tailored for relational deep learning (RDL), enabling predictive modeling on relational databases. RelGNN introduces atomic routes, which capture high-order tripartite structures to facilitate direct single-hop interactions between heterogeneous nodes. By designing a composite message passing mechanism based on atomic routes, RelGNN completes the two-step information exchange in a single step with an attention mechanism. RelGNN is empirically evaluated on 30 tasks from RelBench [1] which consists of entity classification, entity regression, and recommendation. [1] Fey et al., “Relational Deep Learning - Graph Representation Learning on Relational Databases”, ICML Position Paper, 2024. ## Update after rebuttal The paper could benefit from including additional evidence to support the main claim and existing heterogeneous GNNs with relevant adjustments in the experiments. Claims And Evidence: In Section 3.1, the authors claim that standard heterogeneous GNNs entangle irrelevant information when propagating messages through intermediate nodes with two or more foreign keys. However, there is insufficient evidence to support the claim that messages propagated from one-hop or two-hop neighboring nodes of an intermediate node contribute as irrelevant noise. For instance, the authors argue that the “constructors” node introduces noise during the message-passing process to the “standings” node. However, even in a two-hop scheme, the “constructors” node does not contribute to the message-passing process to the “standings” node. Furthermore, in relational databases, assuming that information from tables connected via primary-foreign key relationships to an intermediate node is irrelevant contradicts the rationale for using GNN-based models. Therefore, a clear theoretical justification for this claim is necessary. 
Methods And Evaluation Criteria: RelGNN is evaluated on the recently proposed RelBench [1]. There is no new benchmark dataset or evaluation criteria. Theoretical Claims: There is no formal proof or theoretical justification for the claims made in this manuscript. Experimental Designs Or Analyses: The baseline models for evaluating the proposed RelGNN are highly limited. The authors primarily use only the heterogeneous GraphSAGE from the original RelBench [1] as the baseline. However, it is necessary to compare RelGNN with other heterogeneous GNNs that employ different backbone GNN architectures, such as GAT [2] and GIN [3], rather than relying solely on GraphSAGE. Additionally, other heterogeneous GNNs including models that use meta-paths should be included as baselines [4,5,6]. Furthermore, despite the use of only a limited baseline, the performance improvement of RelGNN is not significant. [2] Veličković et al., “Graph Attention Networks”, ICLR, 2018.\ [3] Xu et al., “How Powerful are Graph Neural Networks?”, ICLR, 2019.\ [4] Fu et al., “MAGNN: Metapath Aggregated Graph Neural Network for Heterogeneous Graph Embedding”, WWW, 2020.\ [5] Hu et al., “Leveraging Meta-path based Context for Top-N Recommendation with A Neural Co-Attention Model”, KDD, 2018.\ [6] Tang et al., “HiGPT: Heterogeneous Graph Language Model”, KDD, 2024. Supplementary Material: There is no code/data. The appendix provides statistics and descriptions of the benchmark datasets [1]. Relation To Broader Scientific Literature: In this manuscript, RelGNN is proposed for predictive tasks on relational databases. However, the rationale behind introducing its main idea, the atomic route, is not clearly justified. Additionally, the limited number and variety of baseline models make it difficult to properly assess the significance of RelGNN. 
Essential References Not Discussed: References and discussions of the various baseline models mentioned in the “Experimental Designs or Analyses*” section are necessary [2,3,4,5,6]. Other Strengths And Weaknesses: - The authors provide reasonable explanations for the experimental results of the proposed model, particularly when its performance is comparable to or worse than the baseline models. However, most limitations have been left as future work without being addressed. Additionally, further discussion is needed after incorporating other baseline models. - There is a lack of justification for key model components, such as the attention mechanism and FUSE operation. In particular, without an ablation study on the attention mechanism, it is unclear whether the model’s performance improvements are due to the introduction of atomic routes or the attention mechanism itself. - The manuscript lacks a clear explanation of the model’s training methodology. While it can be inferred that the training table was used following previous work [1], this is not explicitly stated, leaving the details of the training and inference process ambiguous. Additionally, the meaning of the gray nodes in Figure 2(c) is not clearly explained, making it difficult to interpret their role. - The code/data for the experiments is not provided, and the detailed descriptions of the experimental setup (e.g., number of layers) are insufficient, making it difficult to ensure reproducibility. Other Comments Or Suggestions: - In line 373 of the main text, the phrase starting with “tianlangNote to self” in the right column may compromise anonymity. - The terms “Relational Entity Graph” and “Relational Data Graph” appear to refer to the same concept, yet they are used inconsistently throughout the manuscript. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and address each concern as follows: > Regarding Justification of Claims The message passing process through an intermediate node introduces an imbalance: the source node’s signal is aggregated twice, while signals from intermediate nodes are only aggregated once. In RDL, the source nodes typically contain the most critical information for prediction, so preserving a clean, undiluted source signal is essential. Standard heterogeneous architectures attempt to address this issue by employing skip connections and additional components such as MLPs, which must implicitly learn to disentangle the clean source signal from the noise introduced during multiple aggregations. In contrast, our approach explicitly leverages atomic routes and composite message passing to maintain a direct, efficient exchange of information between source, intermediate, and destination nodes. This targeted mechanism minimizes the dilution of the source signal without introducing additional parameters, ensuring that the most critical information propagates effectively through the network. > Regarding Additional Baselines The reviewer asks for additional baselines. Our original choice of baselines was intended to maintain consistency with prior work [1]. In response to the reviewer’s suggestion, we conducted experiments using GAT [2] and GIN [3] as backbone architectures. As presented in Table 1, the performance remains largely unchanged across different backbone GNN architectures. Regarding meta-path-based GNNs, these methods require substantial manual effort and domain expertise to select appropriate meta-paths, making them less directly applicable to RelBench. In contrast, our method is fully automated and eliminates the need for such human intervention. We have already discussed meta-path approaches and cited [4,5] in the original manuscript, and we will add the citation [6] in the revised version. 
> Regarding Limitations The reviewer notes that some limitations are left as future work. We clarify that addressing most of these limitations is orthogonal to our main contribution. RDL is an emerging field, and the pipeline described in [1] opens up research opportunities across multiple dimensions. Our work focuses on the foundational aspect of improving message passing design, while complementary aspects—such as refined prediction heads and improved recommendation frameworks, noted as limitations of the current pipeline—are recognized as independent. By keeping these components constant, we isolate the effect of our proposed message passing method, ensuring a fair comparison. We highlight these limitations to encourage further advancements in the field. > Regarding Ablation on Model Components We appreciate the reviewer’s suggestion for the ablation study. The FUSE operation of RelGNN is instantiated in the same way as GraphSAGE in our original manuscript. For the ablation study, we further instantiate the AGGR operation with GraphSAGE instead of attention so that the only difference from the baseline is the inclusion of atomic routes. As shown in Table 1-3, the performance gap clearly demonstrates the effectiveness of atomic routes. The attention mechanism itself does not contribute significantly to performance gains (RelGNN w/ attn vs RelGNN wo/ attn; GraphSAGE baseline vs GAT baseline). We will include further details of this ablation study in the revised version. Table 1. Classification (ROC-AUC, ↑) Table 2. Regression (MAE, ↓) Table 3. Recommendation (MAP, ↑) > Regarding Code and Other Details We've now made our source code and model checkpoints available at https://anonymous.4open.science/r/RelGNN for reproducibility. The training setup and experimental details are kept consistent with [1] to ensure a fair comparison. In the revised manuscript, we will add a section detailing this setup. 
The gray nodes in Figure 2(c) denote training tables, derived in the same manner as in [1]. “Relational Entity Graph” specifically refers to the graph built from relational tables that encompasses all entities, while “Relational Data Graph” is used more generally to refer to any graph constructed from a relational database (e.g., a subgraph sampled from the Relational Entity Graph). We will clarify these points in the revised manuscript. We hope these clarifications address the reviewer’s concerns and will update the manuscript when permitted.
Learning the RoPEs: Better 2D and 3D Position Encodings with STRING
Accept (spotlight poster)
Summary: This submission proposes STRING, a novel method that generalizes the popular Rotary Position Encodings (RoPE) used in Transformers. Unlike RoPE, which is naturally suited for 1D inputs and then extended to higher dimensions, STRING offers a theoretical framework that accommodates multi-dimensional (2D/3D) inputs while preserving essential properties like translational invariance and separability. Through extensive experiments—covering image classification, open-vocabulary detection, simulation-based robotics tasks, and real-world robot manipulation—STRING consistently outperforms or matches RoPE and standard baselines. The authors also provide in-depth theoretical justification showing that STRING is, in fact, the most general family of rotation-based position encodings (under certain smoothness constraints). ## After Rebuttal The authors' reply did not fully address my concerns on the 3D imitation learning part. I keep my score as weak accept. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: The experiments cover several aspects, including vision classification and retrieval, object detection, and robotic manipulation. The authors even conduct real-world 3D robot manipulation to show the real-world effectiveness of STRING. However, for the real-world 3D Robot Manipulation part, the authors only show that 3D STRING is better than 2D, which is actually widely shown [1, 2]. Could the authors try some simple 3D-based imitation learning methods [1,2] and showcase whether STRING could directly improve them? [1] Ze et al. "3d diffusion policy: Generalizable visuomotor policy learning via simple 3d representations." arXiv preprint arXiv:2403.03954 (2024). [2] Ke et al. "3d diffuser actor: Policy diffusion with 3d scene representations." arXiv preprint arXiv:2402.10885 (2024). Supplementary Material: There is no supplementary material. 
Relation To Broader Scientific Literature: The proposed method generalizes RoPE with a unified theoretical framework, which is relevant and useful to the community. Essential References Not Discussed: N/A Other Strengths And Weaknesses: 1. The introduction is well written, clearly introducing the context and the motivation. 2. The experiments, covering several domains, are very extensive. Other Comments Or Suggestions: N/A Questions For Authors: As mentioned above, for the 3D robot manipulation part, it would be more convincing to showcase how STRING improves 3D-based imitation learning methods instead of 2D baselines. Code Of Conduct: Affirmed. Overall Recommendation: 3
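The translational invariance discussed throughout these reviews can be checked numerically for standard 1D RoPE, the special case that STRING generalizes. A minimal sketch (helper names are illustrative, not from the paper) verifies that query/key dot products depend only on relative position:

```python
import numpy as np

def rope_rotation(pos, freqs):
    """Block-diagonal 2x2 rotation matrix used by standard 1D RoPE."""
    d = 2 * len(freqs)
    R = np.zeros((d, d))
    for i, f in enumerate(freqs):
        a = pos * f
        R[2 * i:2 * i + 2, 2 * i:2 * i + 2] = [[np.cos(a), -np.sin(a)],
                                               [np.sin(a),  np.cos(a)]]
    return R

rng = np.random.default_rng(0)
freqs = [1.0, 0.5, 0.25]
q, k = rng.normal(size=6), rng.normal(size=6)

# translational invariance: the attention logit depends only on n - m,
# so shifting both positions by the same amount leaves it unchanged
m, n = 3.0, 7.0
logit = (rope_rotation(m, freqs) @ q) @ (rope_rotation(n, freqs) @ k)
logit_shifted = (rope_rotation(m + 2.0, freqs) @ q) @ (rope_rotation(n + 2.0, freqs) @ k)
```

This works because the block rotations satisfy R(m)^T R(n) = R(n - m); STRING keeps exactly this property while allowing more general rotation generators.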
Rebuttal 1: Rebuttal: We would like to sincerely thank the Reviewer for the comments. We provide detailed answers below. $\textbf{Additional tests, e.g. 3D-based imitation learning methods from [1,2]}$: Thank you very much for the comment. The main goal of the 3D bi-arm KUKA experiments was to combine the best techniques to obtain new SOTA. We have already established STRING as providing gains on the top of the regular 3D algorithms, that in turn improved upon their 2D counterparts in Sec. 4.2.2., where we focused on 3D object localization (see: Table 4). We explicitly describe this goal, putting our plan for bi-arm KUKA experiments in the context of our previous findings, at the beginning of Sec. 4.4 (l.372-375: $\textit{“Establishing STRING as superior to other methods on previous tasks, ...”}$). Having said that, we still included in the bi-arm KUKA section 3D-based imitation learning methods that do not leverage STRING. We trained via imitation learning a policy directly using depth to produce normal maps (paragraph: $\textit{“Implicit depth via normal maps”}$ at the beginning of Sec. 4.4.2). This is based on well established methods for incorporating depth into robotics tasks from [1]. As we show in Fig. 5, STRING provides ~$\textbf{11}$% improvement accuracy-wise, as compared to that baseline. We would like to note one more thing. The 2D policies we compared against in the bi-arm KUKA section were strong enough to outperform various 3D counterparts given similar computational budget (e.g. point-cloud-based methods that were used in the first paper pointed by the Reviewer; we will explicitly clarify it in the camera-ready version). This was the case since point cloud (PC) approaches suffer from the very high computational footprint. The average number of points in the bi-arm KUKA PCs was of order 5K+. 
Training/deploying policies directly using all the points was infeasible and various tokenization techniques to reduce the number of 3D tokens had a detrimental effect on the performance. Furthermore, base 2D policies applied in that section used vision encoders pre-trained on massive amount of vision data. Trained-from-scratch 3D encoders were not a match for them. Thus it was important for us to build on the 2D policies in that section. The approach we used there with the positional encoding mechanism in the form of STRING applied on RGB-D images via the 3D patch-lifting (see paragraph: $\textit{“Lifting patches to 3D for STRING”}$, p.7) enabled us to successfully do it. First of all: it did not require working with PCs and retained the same number of patches as its 2D counterpart. That was critical for computational efficiency. Secondly, it could easily reuse 2D-pretrained checkpoints, leveraging vision input pre-processing from the 2D baseline as is, only to add depth signal to modulate attention computation. The STRING variant re-used all pre-trained parameters of its 2D counterpart, introducing a negligible amount of additional 3D-specific trainable parameters. Finally, STRING variant could be set up in such a way that at the very beginning of its training it was equivalent to the 2D variant. This was achieved by setting up a depth scaling parameter for the patch-depth (obtained by averaging over depth values of its constituting pixels) as zero at the beginning of training. We commented on some of these in the paper (e.g. a paragraph “From 2D to 3D with STRING”, p. 8), but will provide more detailed discussion in the camera-ready version. We also would like to note that we added several additional experiments for the rebuttal, confirming the effectiveness of STRING: 1. 
We re-ran the evaluations for the $\textit{BowlOnRack}$ task (we chose one task, given rebuttal period time constraints) from the Aloha-sim portfolio, by increasing the number of trials $\textbf{from 10 to 100}$. The ranking of methods in that setting stays exactly the same as reported before with STRING outperforming others. 2. We run experiments for 2D and 3D object detection using the SoViT-400m models which surpass ViT-H and are competitive in performance with ViT-G. As compared to baseline, STRING provides >$\textbf{4.7}$% relative improvement on COCO AP and >$\textbf{5.1}$% relative improvement on LVIS AP and improves also upon regular RoPE in the OWL-ViT-2D setting. For 3D object detection with SoViT-400m models, using STRING led to additional accuracy gains as compared to the baseline ($\textbf{1.2}$ of the baseline improvement coming from using a larger model). 3. We also conducted additional scaling experiments, where we increased the image resolution from 256 x 256 to 512 x 512. STRING maintained an advantage over the baseline for the object detection task from Sec. 4.2.2 (3D bounding boxes): $\textbf{6}$%+ for the 2D variant and almost $\textbf{10}$% for the 3D variant. $\underline{References}$: [1] Early or late fusion matters: Efficient rgb-d fusion in vision transformers for 3d object recognition; Tziafas et al.; IROS 2023.
Summary: This paper proposes STRING, which generalizes the 2D rotation matrix in the position encoding of RoPE to a more general form of rotation, parameterized by linear combinations of skew-symmetric generator matrices. This allows the framework to obtain exact translational or rotational invariance, depending on the needs. The proposed method is tested on 2D and 3D object detection tasks, as well as a robot policy learning task, in which the proposed method is shown to obtain better performance than ViT and RoPE. Claims And Evidence: The main contribution of this paper is to generalize the position encoding. Another contribution is that this position encoding can be extended to 3D. The authors provide a detailed explanation of their position encoding design and proof of its invariance. Methods And Evaluation Criteria: Evaluation metrics are standard. There are two ways to learn the skew-symmetric matrix: Cayley-STRING and Circulant-STRING. While they are each introduced clearly, a discussion about the trade-off and when to use which would be beneficial. Theoretical Claims: The paper provides theorems on the position encoding based on the generators, which prove that STRING is a generalization of RoPE and establish the translational invariance property. Experimental Designs Or Analyses: The authors provide extensive experiments on 2D object detection, 3D object detection, and policy learning for robot manipulation. All of them are compared with standard baselines ViT and RoPE. While the improvements are marginal, this is the nature of generalizing RoPE. The out-of-distribution evaluation shows the robustness when the framework is extended to 3D. Additional discussion on the differences between Cayley-STRING and Circulant-STRING could strengthen the paper. Supplementary Material: The supplementary material provides proofs of the theorems in the paper and more information about the experiment setup.
Relation To Broader Scientific Literature: Overall, the paper is a great addition to the literature. It generalizes RoPEs by a natural choice of position encodings. The paper is both theoretically sound and well supported by extensive experimental results. Essential References Not Discussed: The references are addressed adequately. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: How does the discretization in the sensory data (for example, limited image resolution/token) affect the performance of the system? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to sincerely thank the Reviewer for the comments. We provide detailed answers below. $\textbf{Discussion on trade-offs regarding using Cayley-STRING and Circulant-STRING}$: Thank you very much for an excellent comment. With token dimensionality per head $d_{\mathrm{head}}$, Cayley-STRING introduces $d_{\mathrm{head}}(d_{\mathrm{head}} - 1) / 2 + d_{\mathrm{head}} / 2$ parameters per head while Circulant-STRING adds $d_{\mathrm{head}}$ parameters per head. This is the case since Cayley-STRING trains a full skew-symmetric matrix and $d_{\mathrm{head}} / 2$ frequencies, while Circulant-STRING uses a learnable circulant matrix which is fully determined by its first row. Furthermore, multiplication with Cayley-STRING matrices takes quadratic time in $d_{\mathrm{head}}$ during training, whereas multiplication with Circulant-STRING matrices takes time $O(d_{\mathrm{head}} \log(d_{\mathrm{head}}))$ via the Fast Fourier Transform (FFT). During inference, the quadratic cost can in principle be absorbed into the existing q/k projections at no extra cost (for the vanilla attention mechanism). Both approaches improve upon regular RoPE, yet Cayley-STRING in general leads to larger improvements. Thus we have here a classic quality-speed trade-off. For applications with strict memory constraints, we recommend Circulant-STRING due to its very compact computational footprint, whereas for other applications Cayley-STRING is recommended. We will add the above discussion to the camera-ready version of the paper. =========================================================================================================================================================== $\textbf{The impact of discretization in the sensory data on the performance of the system}:$ Thank you for your question. We conducted our experiments with images of various resolutions, seeing consistent gains across the board, as compared to the baselines.
The statement that STRING is effective even if the sensory data is not accurate (due to discretization or noise) is also supported by our 3D experiments with the bi-arm KUKA robot. In all those experiments, depth sensors provide a relatively noisy signal, yet STRING is capable of leveraging depth sensors to provide $\textbf{8}$%+ accuracy improvements in regular in-distribution tasks and from $\textbf{15}$% to $\textbf{42}$% improvements for out-of-distribution tasks. We will add this discussion to the camera-ready version of the paper.
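The Cayley parameterization referenced in this rebuttal can be sketched generically: a full trainable skew-symmetric matrix (the $d_{\mathrm{head}}(d_{\mathrm{head}}-1)/2$ free parameters mentioned above) is mapped to a rotation via the Cayley transform. The dimensions and names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d_head = 8

# hypothetical learnable parameters: the d(d-1)/2 strictly-upper-triangular entries
params = rng.normal(size=d_head * (d_head - 1) // 2)
S = np.zeros((d_head, d_head))
S[np.triu_indices(d_head, k=1)] = params
S = S - S.T                      # skew-symmetric: S^T = -S

# Cayley transform: (I - S)^{-1}(I + S) is always a rotation matrix
# (well-defined since a real skew-symmetric S has purely imaginary eigenvalues)
I = np.eye(d_head)
Q = np.linalg.solve(I - S, I + S)
```

The resulting Q is orthogonal with determinant +1, which is why training S directly yields a valid rotation-based position encoding at every step.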
Summary: This paper proposes a class of positional encoding for high-dimensional tokens with both separability and translation invariance. The proposed positional encoding is a generalization to the rotary positional encodings (RoPE) based on Lie groups. The authors also provide a computationally efficient implementation of it. Experiments show its gains with transformer models in robotics and vision tasks. ## update after rebuttal Thank the authors for their responses! My questions are well-answered. Claims And Evidence: The three key contributions claimed in the paper are: - "We introduce STRING, a new family of position encodings for multidimensional token coordinates that respect both separability and translational invariance." - The proposed method is indeed a "family" of positional encodings with different variants. - Both separability and translation invariance are well discussed in the paper. - "We rigorously analyse STRING’s theoretical properties ( Sec. 3), proving that it is more general than RoPE. We provide computationally efficient implementations." - The theoretical properties are well provided. - The relations between STRING and RoPE are well discussed. - For computational efficiency, there are some theoretical discussions about computational complexity, but it would be good to also have some statistics or analysis on its time or memory consumption in the experiments. - "We show strong accuracy gains across varied models using Transformers with STRING, on a range of robotics and general vision tasks (see Fig. 1 and Sec. 4)." - The experimental results support this argument. Methods And Evaluation Criteria: The method and evaluations look reasonable to me. The evaluations are mostly done on public benchmarks with widely adopted metrics. Theoretical Claims: I briefly went through the proofs for Theorem 3.2-3.4. Theorem 3.2 is mostly based on the general conclusions from group theory and I think they are good. 
The intuitions for Theorems 3.3 and 3.4 look right; I briefly followed the proofs but didn't check all computations in detail. For Theorem 3.5, I am not very familiar with the computational complexity of FFT and DFT algorithms, and thus I cannot say too much about the proof. But a small question I have: it seems that this complexity analysis is mostly about the forward computation? What would the complexity be for backpropagation? Experimental Designs Or Analyses: The experimental designs look reasonable to me, especially the focus on comparing STRING (with its variants) to previous positional encodings. The training curve in Fig. 4 also looks good, as it shows that networks with STRING positional encodings are also easy to optimize. It may be good to also have some statistics or discussions on the number of parameters, memory consumption, or training/inference time, as efficient implementation is discussed in the key contributions. Supplementary Material: I checked the website provided by the authors, which mostly shows the qualitative results of the experiments. Relation To Broader Scientific Literature: My personal feeling is that the proposed method is a generalization of RoPE (and thus RoPE is its closest related work). But this generalization is non-trivial -- it covers the most general cases and shows different variants. Computational efficiency is also studied in this generalization. So overall I think the proposed method has good novelty and well-developed details. Essential References Not Discussed: The related works and preliminaries have provided a good background for the topic studied in this paper. Other Strengths And Weaknesses: To summarize, the strengths and weaknesses are: Strengths: - A general framework with theoretical proofs, well-designed details, and different variants. - Good experimental results.
Weaknesses: - Computational efficiency: This point could be better justified with some statistics or discussion of the number of parameters, memory consumption, or training/inference time. - More explanation of Theorems 3.3 and 3.4 and the notation could make the paper easier to read. Other Comments Or Suggestions: - It may be good to have clearer explanations of the differences between Theorems 3.3 and 3.4, instead of calling both of them "RoPE is a type of STRING". Based on my understanding, Theorem 3.3 is saying that RoPE is a special case of STRING (and thus STRING is a generalization of RoPE), and Theorem 3.4 is saying that all STRING positional encodings can be expressed with RoPE. - It may be good to have a clearer introduction to the notation at the beginning. For example, it took me some time to find what $d_c$ and $d$ are. Questions For Authors: I don't have specific questions in addition to the comments I made above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to sincerely thank the Reviewer for the comments. We provide detailed answers below. $\textbf{Theorem 3.5}$: The proof relies on the fact that the computation of $exp(\mathbf{M})$ for a given matrix $\mathbf{M}$ can be re-written as: $\exp(\mathbf{M}) = \Sigma \exp(\mathbf{D}) \Sigma^{-1}$ for a given factorization: $\mathbf{M}=\Sigma \mathbf{D} \Sigma^{-1}$ (from the Taylor-series definition of matrix exponentiation). The next observation is that, for the matrices of interest in the proof, the factorization can be done efficiently with $\Sigma$ and $\mathbf{D}$ being: (1) a DFT matrix and (2) a diagonal matrix with the main diagonal given as the action of $\Sigma$ on an easy-to-compute vector, respectively. The final observation is that the DFT matrix supports fast matrix-vector multiplication (via the Fast Fourier Transform) and the diagonal matrix exponentiation is obtained by element-wise exponentiation. We will clarify it in the final version of the paper by providing a sketch of the proof in the main body of the paper. Thank you very much for an excellent question regarding the backpropagation pass. Even though the analysis in the paper was made for the forward pass, it can be extended to the backward pass. This comes from the fact that the computation of the gradient in the backpropagation over networks leveraging Circulant-STRING matrices also involves multiplications with DFT matrices. The analysis of the backpropagation is more technical, but basically follows steps from Sec. 4.1 of [1]. We will provide a paragraph to discuss computational gains in backpropagation in the camera-ready version of the paper. $\underline{References}$: [1] An Exploration of Parameter Redundancy in Deep Networks with Circulant Projections; Cheng et al.; ICCV 2015. 
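The diagonalization argument above can be checked numerically for circulant matrices, which the DFT diagonalizes: $\exp(\mathbf{M})$ reduces to elementwise exponentiation of the DFT of the first column, followed by an inverse DFT. A sketch (helper names are illustrative, not from the paper):

```python
import numpy as np

def circulant(c):
    """Circulant matrix whose first column is c."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

def expm_taylor(M, terms=40):
    """Reference matrix exponential via a truncated Taylor series."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def expm_circulant_fft(c):
    """exp of a circulant matrix in O(n log n): the eigenvalues are the DFT
    of the first column, so exponentiate them elementwise and invert the DFT
    to recover the first column of the (again circulant) result."""
    return circulant(np.real(np.fft.ifft(np.exp(np.fft.fft(c)))))
```

Both routes agree for a small example, with the FFT route avoiding any dense eigendecomposition.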
=========================================================================================================================================================== $\textbf{Statistics or discussions on the number of parameters, memory consumption, or training/inference time}$: Thank you very much for the comment. STRING introduces a negligible amount of extra trainable parameters. With token dimensionality per head $d_{\mathrm{head}}$, Cayley-STRING introduces $d_{\mathrm{head}}(d_{\mathrm{head}} - 1) / 2 + d_{\mathrm{head}} / 2$ parameters per head while Circulant-STRING adds $d_{\mathrm{head}}$ parameters per head. Thus for instance for the: (1) Cayley-STRING variant with no parameter sharing across heads, (2) Cayley-STRING with parameters shared across heads and (3) Circulant-STRING with no parameter sharing across heads, the parameter count increase is marginal: (1) 0.34%, (2) 0.028% and (3) 0.016%, respectively. In terms of training time and memory footprint, the performance of Cayley-STRING and Circulant-STRING is similar to that of the RoPE-M model (~$\pm 7$%). We will include these numbers in the supplementary material of the camera-ready version of the paper. =========================================================================================================================================================== $\textbf{Theorem 3.3 and 3.4}$: We thank the Reviewer for the comment, and will ensure that our notation is consistent and clear. Theorem 3.3 shows that RoPE is a special case of STRING for a particular choice of an antisymmetric generator. Meanwhile, Theorem 3.4 shows that RoPE is a special case of STRING with basis transformation matrix P = I. Both theoretical results show that our new method is more general than the previous SOTA, but from complementary perspectives. As shown in the paper, this increased generality unlocks better performance in experiments.
We will make sure this is clear in the text and highlight the differences between these important, closely-related results. =========================================================================================================================================================== $\textbf{Clearer introduction to the notation at the beginning}$: Thank you for the question. $d$ refers to the dimensionality of the latent representations in the Transformer, i.e. the length of queries, keys and values. Meanwhile, $d_c$ refers to the dimensionality of the position coordinates encoded using STRING, i.e. 1 for sequences (text), 2 for 2D data (images), and 3 for 3D data (images with depth). We will make sure this is clear in the text.
Summary: This paper introduces STRING (Separable Translationally Invariant Position Encodings), a new family of position encodings. STRING extends RoPE using a theoretical framework based on Lie groups, maintaining key properties like separability and translational invariance, while supporting 2D/3D token representations. The authors propose effective implementations, such as Cayley-STRING and Circulant-STRING, and demonstrate through experiments that STRING outperforms RoPE and absolute position encodings (APEs) across tasks. The experiments include: Image classification and retrieval. Open-vocabulary 2D/3D object detection. Robotic manipulation tasks in both simulation and real-world settings. Claims And Evidence: The paper’s claims are generally well-supported by clear evidence: STRING generalizes RoPE with separability and translational invariance. Supported by rigorous theoretical proofs (Theorem 3.2 and 3.3). STRING outperforms RoPE and APEs in tasks like classification, detection, and robotics. Backed by experimental results showing consistent improvements, e.g., higher accuracy in ImageNet and better IOU in 3D object detection. STRING is computationally efficient. Supported by proposed implementations (Cayley-STRING, Circulant-STRING) and complexity analysis. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate: Methods: STRING is well-designed for tasks requiring translationally invariant position encodings, with practical implementations like Cayley-STRING and Circulant-STRING ensuring efficiency. Evaluation: Benchmarks like ImageNet, WebLI-3D, and robotics tasks (ALOHA and KUKA) are relevant and suitable for testing STRING's performance. However, the ALOHA simulation tasks (10 trials per task) and KUKA real-world experiments (50 trials) have limited evaluations, which may not fully capture variability or ensure robustness. Larger-scale testing is needed to strengthen the conclusions. 
Theoretical Claims: The theoretical claims, including Theorem 3.2 and 3.3, were reviewed and appear to be correct, with clear proofs provided in the appendix. Experimental Designs Or Analyses: Please see the Methods And Evaluation Criteria. The ALOHA simulation tasks (10 trials per task) and KUKA real-world experiments (50 trials) have limited evaluations, which may not fully capture variability or ensure robustness. Larger-scale testing is needed to strengthen the conclusions. Supplementary Material: I reviewed the supplementary material, focusing on the theoretical proofs and experimental details. Relation To Broader Scientific Literature: The paper builds on prior work in position encoding and equivariant representations, extending these ideas with the STRING framework. It specifically addresses limitations of fixed encodings in transformers and draws inspiration from group theory and circulant structures, contributing a novel approach aligned with recent advances in efficient, invariant neural architectures. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: The paper focuses on the important area of position encoding, which is highly relevant to the broader field of machine learning. It introduces novel theoretical derivations and technical contributions, which are innovative and valuable. Weaknesses: **Limited Scalability Experiments:** The paper lacks scaling experiments. It is unclear if Circulant-S and Cayley-S perform well on larger models, such as ViT-H or ViT-G. **Marginal Improvements Over Baselines:** I noticed that Circulant-S and Cayley-S do not show significant improvements over baselines like RoPE, especially in the ALOHA simulation task, where the results are nearly identical across many tasks. **Robotics Task Evaluation:** The evaluation on robotics tasks could be more robust. For example, testing on more trials and tasks would strengthen the claims. 
While the theoretical contributions are clear, I am not deeply familiar with this field, which limits my ability to assess the novelty of the contributions in the broader context of existing research. Other Comments Or Suggestions: N/A Questions For Authors: I give a Weak Accept because of the solid theoretical proofs and novel technical contributions. However, I am not very familiar with this field, so I cannot guarantee my judgment with full confidence. I did not give a higher score because the experimental results show limited improvements over the baselines, especially due to the lack of scaling-related experiments, which raises concerns about the actual effectiveness of the proposed methods. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to sincerely thank the Reviewer for the comments. We provide detailed answers below. $\textbf{Increasing the number of trials in testing (currently 10 trials / task for Aloha-sim and 50 trials for bi-arm KUKA)}:$ We sincerely thank the Reviewer for the comment. We would like to note that 10 trials / task is considered normal for the evaluation of the robotics controllers, both in sim and on hardware (e.g. [1]: 5 repetitions as stated in Sec 6.1, $\textit{"This is repeated \textbf{5} times"}$ and [2]: 10 repetitions in $\textit{“We ran each of the learned controller \textbf{10} times.”}$ at the beginning of Sec. B (“Results and Analysis”)). In fact it is also considered normal for testing several regular machine learning algorithms. For instance, most papers evaluating Random Fourier Features and related methods, such as [3] ($\textit{“All experiments are repeated \textbf{ten} times”}$, p.7), [4] (caption of Fig. 2 $\textit{“\textbf{10} independent trials are executed”}$) that usually can be quickly evaluated with very limited computational resources, apply 10 repetitions to calculate empirical MSEs. Having said that, following Reviewer’s comment for the rebuttal, we re-ran the evaluations for the $\textit{BowlOnRack}$ task (we chose one task, given rebuttal period time constraints) from the Aloha simulation portfolio, by increasing the number of trials $\textbf{from 10 to 100}$. The ranking of different methods in that setting $\textbf{stays exactly the same}$ as reported before: 1. STRING (83% accuracy) 2. regular ViT baseline (80% accuracy) and 3. RoPE (75%). We also would like to emphasize that all reported KUKA experiments are $\textbf{on-robot}$ (and thus much more time-consuming) and in the paper we use many more trials for the corresponding evals. 
$\underline{References}$: [1] Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation; Kalashnikov et al.; CoRL 2018; $\textbf{Best Systems Paper, ~1.8K citations}$. [2] Deep Spatial Autoencoders for Visuomotor Learning; Finn et al., ICRA 2016; $\textbf{764 citations}$. [3] Nystrom Method vs Random Fourier Features: A Theoretical and Empirical Comparison; Yang et al.; NeurIPS 2012; 450+ citations). [4] Quasi-Monte Carlo Feature Maps for Shift-Invariant Kernels; Avron et al.; ICML 2014 =========================================================================================================================================================== $\textbf{Scaling Experiments to show that STRING still performs well}:$ Thank you very much for an interesting question about scaling to larger models. Following the Reviewer’s suggestion, we have run such experiments using the SoViT-400m class of models which surpass ViT-H and are competitive in performance with ViT-G [1]. As compared to baseline, STRING provides $\textbf{>4.7}$% relative improvement on COCO AP and $\textbf{>5.1}$% relative improvement on LVIS AP and improves also upon regular RoPE in the OWL-ViT-2D setting. For the task of 3D object detection with SoViT-400m models, using STRING led to additional accuracy gains as compared to the baseline ($\textbf{1.2}$ of the baseline improvement coming from using a larger model). Thus, we tested that STRING does perform well also on larger models. We have also conducted additional scaling experiments, where we increased the resolution of images from 256 x 256 to 512 x 512. In the higher-resolution setting, STRING maintained an advantage over the baseline for the object detection task from Sec. 4.2.2 (3D bounding boxes): $\textbf{6}$%+ for the 2D variant and almost $\textbf{10}$% for the 3D variant. We will include all these results in the camera-ready version of the paper. 
$\underline{References}$: [1] Getting ViT in Shape: Scaling Laws for Compute-Optimal Model Design; Alabdulmohsin et al.; NeurIPS 2023 =========================================================================================================================================================== $\textbf{Improvements Over Baselines}$: Thank you very much for your question. The STRING variant provides highest average accuracy over all Aloha-sim tasks. However, what is as important and clearly seen in Fig. 4, it also provides significant convergence improvements in training, achieving accuracy level of regular RoPE in $\textbf{1.6x}$ shorter time. We explicitly commented on that in l.367-368. This clearly distinguished STRING from regular RoPE. Furthermore, our 3D object detection experiments clearly show improvements coming from STRING (from approximately $\textbf{1}$% to $\textbf{1.5}$%), in both: 2D and 3D settings, as compared to $\textbf{the best}$ RoPE variants (Table 4). These improvements are actually visible to the naked eye. We show it in Fig. 2, where predicted small 3D boxes (in blue) are clearly misaligned with ground truth bounding boxes (in green) for baseline and RoPE, while pretty well aligned for STRING. --- Rebuttal Comment 1.1: Comment: Thank the authors for answering my question. I will maintain my original positive score.
SUICA: Learning Super-high Dimensional Sparse Implicit Neural Representations for Spatial Transcriptomics
Accept (poster)
Summary: This paper proposes a method for jointly denoising and improving resolution for spatial transcriptomics data. ## update after rebuttal I raised my score. Claims And Evidence: Yes, their claims are supported. Methods And Evaluation Criteria: I have a question about the problem definition, which is discussed in the question section. Theoretical Claims: Yes, they look good. Experimental Designs Or Analyses: Yes, I have checked the soundness. Supplementary Material: Their supplementary materials look good. Relation To Broader Scientific Literature: Yes, their contribution is important for computational biology. Essential References Not Discussed: Yes, I think they need to include more baselines. Please see the questions. Other Strengths And Weaknesses: I have raised several questions in the Question section. Other Comments Or Suggestions: The presentation looks good. Questions For Authors: This paper is very interesting, but I have several questions about the problem definition, model implementation, and evaluations. I will consider raising my score if the authors fully address my concerns. 1. I am confused about the imputation function. From my understanding, imputation in ST data means predicting unmeasured genes, which usually requires external resources, such as reference scRNA-seq data, to predict those unseen ones. Could you please explain how to use your pipeline to impute gene expression? Otherwise, is it the same as denoising? 2. What is the difference between z_gt and z^hat in the implementation? Could the authors visualize their latent representations and discuss potential differences? I think we can directly use a GNN to encode spatial transcriptomics data with awareness of the spatial context; what is the drawback of this design? 3. The ablation studies seem different from their presentation, as the listed loss functions include L_dice, L_mae, and L_mse, but in section 4.7, the presented results are different. 4.
The authors perform experiments based on both ST data with different resolutions, such as Slide-seq V2 and Visium. It will be interesting to investigate the performance differences across data with different resolutions. Could the authors elaborate on this point? 5. The cited link of STAGATE seems wrong, as the correct one should be: https://www.nature.com/articles/s41467-022-29439-6. I think it is also interesting to include a comparison with recent baselines such as SEDE (https://genomemedicine.biomedcentral.com/articles/10.1186/s13073-024-01283-x) Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We want to thank Reviewer RB2n for the valuable comments and suggestions, which have greatly helped us improve the quality of our work. 1. **Definition of imputation**: Thank you for pointing out this issue. In the manuscript, we use "spatial imputation" to refer to predicting gene expressions at 2D locations not included in the training subset. This setting can also be seen as "super resolution" or "densification". As for the task of "gene imputation", we randomly mute a part of the gene expressions and apply SUICA to do the reconstruction, so that such muted values can be restored. This setting is more relevant to "reconstruction". Please refer to our response to Reviewer C5Qu for the detailed data flow. 2. **Latent code**: $z_\textrm{gt}$ refers to the latent code produced by the GAE's encoder; $\hat{z}$ represents the $z$ fitted by the INR. The goal of our model is to map the spatial coordinates of spots to their corresponding gene expressions. However, as shown in the ablation study (Table 3), a vanilla INR cannot perform well due to the dimensional challenge of ST. Therefore, we enforce INRs to learn the mapping between spatial coordinates and the lower-dimensional embeddings $z_\textrm{gt}$. Ideally, $\hat{z}$ should accurately align with $z_\textrm{gt}$, which allows the GAE's decoder to seamlessly restore the gene expressions. We have visualized the embeddings of $\hat{z}$ and $z_\textrm{gt}$ using UMAPs. The results show that the INR can effectively reproduce the embeddings learned by the GAE's encoder, allowing SUICA to infer the embeddings of arbitrary locations for spatial imputation. Please kindly refer to [this anonymous link](https://imgur.com/a/RlYePtD) for the visualization. 3. **Why not use a GNN only**: We agree with you that a GNN can encode ST data. 
However, we
- apply GAE's encoder to obtain low-dimensional embeddings that enhance the INR (Table 3),
- employ INRs to map spatial coordinates of arbitrary spots to their embeddings,
- and use GAE's decoder to restore the gene expression from the embeddings.

Without these key factors, a GNN alone can hardly map spatial coordinates to gene expression.

4. **Detailed ablation on loss**: For the reviewer's information, we break down the loss terms leveraged in SUICA and offer a more detailed ablation on the identical data in Table 3. To better show the loss combinations' effect on sparsity, we additionally report the IoU of non-zero areas for reference. We choose MSE+MAE+Dice to ensure a relatively stable trade-off between numerical fidelity and statistical correlation.

| Embryo E16.5 | MSE $\times 10^{-2}$$\downarrow$ | Cosine$\uparrow$ | Pearson$\uparrow$ | IoU$\uparrow$ |
| ------------ | --------- | ------ | ------- | ----- |
| MSE | 1.23 | 0.843 | 0.522 | 0.558 |
| MSE+Dice | 1.22 | 0.843 | 0.572 | 0.674 |
| MSE+MAE | 1.60 | 0.789 | 0.751 | 0.937 |
| MSE+MAE+Dice | 1.48 | 0.806 | 0.747 | 0.928 |

| Human Brain | MSE $\times 10^{-3}$$\downarrow$ | Cosine$\uparrow$ | Pearson$\uparrow$ | IoU$\uparrow$ |
| ----------- | --------- | ------ | ------- | ----- |
| MSE | 10.50 | 0.723 | 0.507 | 0.532 |
| MSE+Dice | 10.72 | 0.719 | 0.574 | 0.696 |
| MSE+MAE | 11.27 | 0.695 | 0.691 | 0.933 |
| MSE+MAE+Dice | 7.05 | 0.826 | 0.800 | 0.925 |

5. **Resolution**: We agree with Reviewer RB2n that the performance is relevant to the resolution of the data. However, it is difficult to ablate such influence because, in practice, resolution is usually coupled with other factors that have a potential effect on performance. For instance, a high spatial resolution usually leads to a high drop-out rate, which is also challenging for modeling. 
We have performed experiments with different held-out rates in the response to Reviewer Q3xk, and we hope the results of such artificial simulation offer some insight. 6. **Comparison with SEDR**: As suggested, we benchmark against SEDR. Since SEDR cannot perform spatial imputation, we report its gene imputation and denoising performance. However, when attempting to let SEDR reconstruct the raw gene expressions of *Stereo-seq MOSTA* (the default use case is to adopt PCA-reduced expressions), OOM is observed with an RTX 4090, so we instead report the scores on *Visium-Human Brain*. MAE/MSE: $\times 10^{-2}$.

| Gene Imputation | MAE$\downarrow$ | MSE$\downarrow$ | Cosine$\uparrow$ | Pearson$\uparrow$ | Spearman$\uparrow$ |
| - | - | - | - | - | - |
| STAGE | 7.13 | 1.69 | 0.692 | 0.667 | 0.110 |
| SEDR | 37.0 | 91.4 | 0.0 | 0.048 | 0.076 |
| SUICA | 5.64 | 0.699 | 0.835 | 0.829 | 0.391 |
| **Denoising** | | | | | |
| STAGE | 5.49 | 0.757 | 0.814 | 0.740 | 0.140 |
| SEDR | 47.5 | 90.8 | 0.0255 | 0.117 | 0.0793 |
| SUICA | 4.20 | 0.582 | 0.860 | 0.751 | 0.201 |

--- Rebuttal Comment 1.1: Comment: Thank you for your comments. I will keep my score as I am already on the side of acceptance. Good luck! --- Reply to Comment 1.1.1: Comment: Thank you very much for your prompt acknowledgement and response! Should you have further questions, we will be glad to participate in the discussion.
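The MSE+MAE+Dice objective ablated in point 4 above can be illustrated with a minimal sketch. The soft-Dice term below is one common formulation that treats prediction of the nonzero support as a quasi-classification problem; it is illustrative only and may differ from SUICA's actual implementation.

```python
import numpy as np

def combined_loss(pred, target, eps=1e-8):
    """Sketch of an MSE + MAE + soft-Dice objective (illustrative).

    The Dice term rewards overlap between the predicted and true
    nonzero supports, which encourages appropriately sparse outputs.
    """
    mse = np.mean((pred - target) ** 2)
    mae = np.mean(np.abs(pred - target))
    # Soft "probability" that each entry is nonzero.
    p = np.clip(np.abs(pred), 0.0, 1.0)
    g = (np.abs(target) > 0).astype(float)
    dice = 1.0 - (2.0 * np.sum(p * g) + eps) / (np.sum(p) + np.sum(g) + eps)
    return mse + mae + dice

pred = np.array([0.9, 0.0, 0.1, 0.8])
target = np.array([1.0, 0.0, 0.0, 1.0])
loss = combined_loss(pred, target)
```

A perfect prediction drives all three terms to zero, while a dense prediction of a sparse target is penalized by the Dice term even when its MSE is small.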
Summary: The paper proposes SUICA, a method for continuous modeling of spatial transcriptomics data by combining a graph-augmented autoencoder (GAE) with implicit neural representations (INRs). The key idea involves compressing high-dimensional, sparse gene expression data into a low-dimensional latent space using a GAE, then mapping spatial coordinates to this latent representation via an INR (using either FFN or SIREN), and finally decoding back to the full gene expression space. To address sparsity in spatial transcriptomics, the authors reformulate the reconstruction loss with a quasi-classification approach based on Dice Loss. Experimental evaluations on multiple datasets (including Stereo-seq MOSTA, Slide-seqV2 mouse hippocampus, and Visium-Mouse brain) demonstrate improvements in spatial and gene imputation as well as bio conservation. Claims And Evidence: - The authors claim that SUICA outperforms conventional INR variants and existing methods (e.g., STAGE) in terms of numerical fidelity, statistical correlation, and biological conservation. Experimental results are presented over several datasets with metrics such as gene MAE, MSE, cosine similarity, correlations, and ARI of cell type. SUICA achieves new SOTA in most cases. - However, while the authors claim their approach uniquely handles super-high dimensionality, the model input to GAE is actually dimensionality-reduced input via PCA (Appendix B, line 699). This critical detail is omitted from the main text, and Figure 2 is extremely misleading as it depicts identical input and output sizes for the GAE. Methods And Evaluation Criteria: The use of GAE is interesting, but it's unclear whether performance improvements stem from integrating expression from neighboring spots to reduce noise, which is an already well-established approach in the field. For example the original MOSTA paper uses bin 50 (summing 50 spots) for downstream analysis. 
Theoretical Claims: N/A Experimental Designs Or Analyses: Given that the GAE input is PCA-transformed, the paper would benefit from a detailed ablation study on PCA's contribution to the results. Supplementary Material: The appendix has been reviewed; no additional supplementary materials were attached to the paper. Relation To Broader Scientific Literature: - The paper adequately discusses relevant works in the field. - The work presents an interesting use of INR in spatial biology, which is novel to the best of the reviewer's knowledge. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: - Is the regression target of the FFN or SIREN baseline also PCA-transformed? - How is data preprocessing conducted, including normalization and QC? - Given the extremely complex model with three training stages, how were hyperparameters selected? - Can you please share your thoughts on the previous sections? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the efforts of Reviewer Q3xk in offering comments and suggestions, which will definitely help us improve the manuscript for its potential readers. We are also afraid that there is some critical misunderstanding of SUICA, so we hope this rebuttal leads to a better alignment, and we kindly ask Reviewer Q3xk to re-evaluate the technical contribution of SUICA. 1. **SUICA does not use PCA and operates in full gene space**: We apologize for the confusion. To clarify, the pipeline of SUICA does **NOT** use PCA for dimensionality reduction at any stage, and both the input and output of the GAE are in full-gene space. During test time, it predicts gene expression profiles of the same dimensionality as the raw input, using only spatial coordinates. The discussion about PCA in Appendix B is intended to address the potential concern of whether the encoder and decoder of the GAE could be replaced by PCA and iPCA. However, this is not part of SUICA’s pipeline. The dimension reduction and reconstruction in SUICA are conducted by the encoder and decoder of the GAE, respectively. Therefore, the notion that the input to the GAE is dimensionality-reduced via PCA is **incorrect**. We apologize again for being unclear and will revise Section 4.3 and Appendix B to prevent further misunderstanding. 2. **The use of GAE**: The effectiveness of the GAE has been discussed and verified in Section 3.2.2 and Section 4.7. We agree with the reviewer's intuition that the root of the performance is leveraging neighborhood information and exploiting the correlation among data. But we would like to emphasize that the INR in SUICA also plays an important role in the continuous modeling of ST, making it possible to query the gene expressions at arbitrary locations rather than merely performing denoising. To this end, we believe that the technical contributions of SUICA are non-trivial. 3. 
**Regression target for SIREN & FFN**: The regression target for SIREN and FFN is the raw gene expression without PCA processing, which is also the target of SUICA. 4. **Data preprocessing**: As mentioned in Appendix F, the preprocessing includes cell and gene filtering and count-depth scaling. We first filtered spots with fewer than 200 genes expressed and genes that were detected in fewer than 3 spots. Then we removed spots whose total number of raw counts was lower than a threshold (set to 200 empirically). We then performed count-depth scaling (``sc.pp.normalize_total()``) following the scanpy tutorial. 5. **Training-relevant hyperparameters**: SUICA applies staged training to obtain satisfactory results on super-high-dimensional data, which to our knowledge is acceptable and far from being extremely complex. The training-relevant hyperparameters are set according to empirical results, with which SUICA performs stably across varying ST platforms. It is worth noting that we use the identical set of hyperparameters for ST data from different platforms. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed rebuttal! I have increased my score. --- Reply to Comment 1.1.1: Comment: Thank you for looking into the rebuttal! Should you have further questions, we will be glad to participate in the discussion.
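The preprocessing recipe described in point 4 above (filter spots by expressed genes and total counts, filter rarely detected genes, then count-depth scaling) can be sketched in plain NumPy. This is illustrative only: the paper uses scanpy's built-in routines, and the small thresholds in the demo are arbitrary toy values.

```python
import numpy as np

def preprocess(counts, min_genes=200, min_cells=3, min_counts=200):
    """Toy NumPy sketch of the described filtering + count-depth scaling.

    counts: (n_spots, n_genes) raw count matrix.
    Defaults mirror the thresholds quoted in the rebuttal.
    """
    # Keep spots expressing at least `min_genes` genes and with at
    # least `min_counts` total raw counts.
    spot_mask = ((counts > 0).sum(axis=1) >= min_genes) & \
                (counts.sum(axis=1) >= min_counts)
    counts = counts[spot_mask]
    # Keep genes detected in at least `min_cells` spots.
    gene_mask = (counts > 0).sum(axis=0) >= min_cells
    counts = counts[:, gene_mask]
    # Count-depth scaling: rescale each spot so its total equals the
    # median total count (mirroring sc.pp.normalize_total defaults).
    totals = counts.sum(axis=1, keepdims=True)
    return counts / totals * np.median(totals)

rng = np.random.default_rng(0)
toy = rng.poisson(1.0, size=(50, 30)).astype(float)
out = preprocess(toy, min_genes=5, min_cells=2, min_counts=10)
```

After scaling, every surviving spot has the same total count, so downstream comparisons across spots are not dominated by sequencing depth.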
Summary: The authors introduce SUICA, an implicit neural representation-based ST prediction method, which demonstrates another way of performing gene/spatial imputation for ST. This provides a different take on ST inference frameworks, which have predominantly been image-based or neighborhood-cell-based, without relying on nearby coordinates. Claims And Evidence: I think the authors provide claims that are always backed up by evidence. Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical claims provided Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: I think this reflects a good, timely contribution to the field. Essential References Not Discussed: There are a few key references on "super-resolution" of ST that I felt were not discussed, mostly based on image-based algorithms. [1] Hu, Jian, et al. "Deciphering tumor ecosystems at super resolution from spatial transcriptomics with TESLA." Cell Systems 14.5 (2023): 404-417. [2] Zhang, Daiwei, et al. "Inferring super-resolution tissue architecture by integrating spatial transcriptomics with histology." Nature Biotechnology 42.9 (2024): 1372-1377. Other Strengths And Weaknesses: I think this is a timely contribution to the field, which has been dominated by image-based prediction or purely cell-based GCN approaches. However, I think a bit more effort is required to explain/convince readers who are predominantly familiar with current methods and not so much with INR-based ones. If the authors are able to address a few additional suggestions, I am willing to increase the score. - Figure 2 is great for explaining the training pipeline, but I think there needs to be an illustrative description (with a corresponding detailed explanation in the text) of how the test-time inference is carried out. What are the required inputs? Is it just the subset of coordinates? Is it just the subset of the cell-level expressions? 
Can these be randomly distributed (as in the experiments)? This will help readers understand the situations in which SUICA is applicable or not. - There are a few super-resolution ST approaches, although they are based on image-based inputs (TESLA, ISTAR). I would like to see a comparison with them if possible (TESLA should be easier to implement; ISTAR maybe not, given the rebuttal timeframe). - Ideally, I want to see a bit more ablation in the evaluation. The authors experiment with 80% randomly sampled spots for training and the remaining 20% for evaluation. What if it's on the lower end of the training range? Such ablation experiments will help readers understand the amount of data required to ensure "decent" imputation performance. - Can the authors comment on whether SUICA needs to be trained every time for each ST plane? Based on my understanding, it does seem that SUICA needs to be fitted for each plane. However, for the embryo dataset, which seems to consist of serial sections of ST, maybe only a few sections could be used for training and the others purely for evaluation? In other words, I would love to see a generalization performance/argument for SUICA. - It would be great to see a bit more comparison with image-based ST imputation. TRIPLEX is great, but it uses underpowered vision encoders that might not be as good for downstream imputation performance. Maybe the authors can reference some baselines from the recent HEST study [1]? References [1] Jaume, Guillaume, et al. "Hest-1k: A dataset for spatial transcriptomics and histology image analysis." Advances in Neural Information Processing Systems 37 (2024): 53798-53833. Other Comments Or Suggestions: See above Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are sincerely thankful for Reviewer C5Qu's comments and suggestions, and we hope the following responses help to resolve the concerns. 1. **Pipeline of SUICA**: Thank you for raising the issue about the confusion of the data flow. We will modify the manuscript to help readers better understand it. Here, we summarize the data flow of Section 4.2 of the manuscript in the following points: - Spatial imputation: Take an ST slice {($x$,$y$)} as an example, whose data spots are split into a training subset {($x_\textrm{train}$,$y_\textrm{train}$)} and a test subset {($x_\textrm{test}$,$y_\textrm{test}$)}. With the training subset, we train the self-regressing GAE, whose encoder encodes all $y_\textrm{train}$ as $z_\textrm{train}$. Then we fit the INR to approximate the mapping from $x_\textrm{train}$ to $y_\textrm{train}$ (with intermediate supervision of $z_\textrm{train}$). For test-time inference, SUICA **only needs $x_\textrm{test}$** to infer the corresponding $y_\textrm{test}$. - Gene imputation: We randomly mute a part of the gene expressions of the data matrix, fit SUICA with all of the data, and infer **all of the $x$**. We expect the prediction $\hat{y}$ to be imputed. - Denoising: The overall data flow is basically the same as gene imputation, but with injected noise as the degradation. As a result, **the required input at test time is only the coordinates**, and we do not make any assumption about the spatial distribution (e.g., a uniform distribution as regular grids), which means SUICA can be both trained and tested with unstructured coordinates. We will follow the reviewer's suggestion to elaborate on this more clearly in the revised manuscript. 2. **Image-based super-resolution**: Thank you for suggesting benchmarking against other image-based super-resolution methods. We additionally compare SUICA with TESLA and UNIv2 [1] on *Visium-Human Brain* and *Visium-Mouse Brain*. 
Note that SUICA is reference-free and does not incorporate any image information. We also report the IoU of non-zero areas for reference to emphasize the predicted sparsity. MAE/MSE: $\times 10^{-1}$.

| Human Brain | MAE$\downarrow$ | MSE$\downarrow$ | Cosine$\uparrow$ | Pearson$\uparrow$ | Spearman$\uparrow$ | IoU$\uparrow$ |
| - | - | - | - | - | - | - |
| TESLA | 0.280 | 0.00533 | 0.857 | 0.828 | 0.421 | 0.859 |
| UNIv2 | 0.730 | 0.141 | 0.723 | 0.633 | 0.129 | 0.388 |
| SUICA | 0.498 | 0.0567 | 0.860 | 0.846 | 0.445 | 0.931 |
| **Mouse Brain** | | | | | | |
| TESLA | 4.00 | 2.58 | 0.877 | 0.786 | 0.619 | 0.619 |
| UNIv2 | 6.94 | 7.88 | 0.790 | 0.631 | 0.425 | 0.0 |
| SUICA | 3.68 | 2.45 | 0.932 | 0.800 | 0.660 | 0.655 |

3. **Held-out rates**: Thank you for emphasizing the necessity of an ablation study with different training-test ratios. Under the setting of spatial imputation (or "super resolution"), we evaluate the performance of SUICA with different held-out rates (the proportion of spots kept for testing) on *Embryo E16.5*. According to the results, we find no significant performance drop when the held-out rate rises. MAE/MSE: $\times 10^{-2}$.

| Held-out Rates | MAE$\downarrow$ | MSE$\downarrow$ | Cosine$\uparrow$ | Pearson$\uparrow$ |
| - | - | - | - | - |
| 20% | 8.01 | 1.47 | 0.807 | 0.761 |
| 40% | 7.98 | 1.54 | 0.799 | 0.748 |
| 60% | 7.94 | 1.52 | 0.802 | 0.751 |
| 80% | 8.13 | 1.66 | 0.781 | 0.735 |

4. **Generalizability**: As discussed in the limitations of Appendix G (Line 770), INRs are optimized case by case. At the current stage, we consider this limitation not fatal, since the data heterogeneity across different ST platforms prevents the construction of a unified dataset, and many SOTA methods also adopt the case-by-case paradigm, e.g., STAGE and SEDR. But we do acknowledge that incorporating INRs for modeling ST in a generalizable way is a promising direction, and there have been some preliminary attempts for images and 3D assets [2]. 5. 
**Baselines from HEST**: Thank you for suggesting benchmarking against more image-based imputation methods. Here we have selected UNIv2 [1]; please refer to the previous table for the results. [1] Chen, Richard J., et al. "Towards a general-purpose foundation model for computational pathology." *Nature Medicine* 30.3 (2024): 850-862. [2] Ma, Qi, et al. "Implicit Zoo: A Large-Scale Dataset of Neural Implicit Functions for 2D Images and 3D Scenes." *The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track*. --- Rebuttal Comment 1.1: Comment: Thank you for putting the rebuttal together. I still have a few more questions based on the rebuttal. 1. **Super-resolution** I wouldn't say TESLA is image-based, since it is not predicting ST from the image. Could the authors comment on why TESLA, which does not require a complicated learning procedure, seems to be quite competitive with SUICA? 2. **Held-out** I actually meant to keep the same test set (not an increasing one, as you presented) and change the size of the training set. This will help us analyze SUICA's data-efficiency property, as I expect a larger training dataset to yield better performance. --- Reply to Comment 1.1.1: Comment: Thank you very much for the prompt response to our rebuttal. We hope the follow-up response resolves the concerns. 1. **Competitive performance of TESLA**: We mainly attribute the competitive performance of TESLA to the histology used as input. It is presumably easier for a network to exploit piece-wise smoothness with pixels than with the raw gene expressions alone. For instance, the super-high dimensionality makes it difficult to measure the expression-wise distances between spots. Besides, we conducted additional experiments under the task of gene imputation to better reveal the performance with degraded inputs, where we find that TESLA is quite sensitive in this case. 
| | | MSE$\downarrow$ | Cosine$\uparrow$ | Pearson$\uparrow$ |
| --------------- | ----- | --------------- | ---------------- | ----------------- |
| *Human Brain* | TESLA | 0.00863 | 0.662 | 0.534 |
| | SUICA | 0.00699 | 0.835 | 0.829 |
| *Mouse Brain* | TESLA | 0.788 | 0.668 | 0.499 |
| | SUICA | 0.607 | 0.825 | 0.690 |

2. **Data-efficiency of SUICA**: We have added new experiments according to the suggestion. The percentage refers to the proportion of training samples with regard to the whole dataset, while the test set remains the same. MAE/MSE: $\times 10^{-2}$.

| % training | MAE$\downarrow$ | MSE$\downarrow$ | Cosine$\uparrow$ | Pearson$\uparrow$ |
| ---------- | --------------- | --------------- | ---------------- | ----------------- |
| 80% | 8.01 | 1.47 | 0.807 | 0.761 |
| 60% | 7.96 | 1.52 | 0.801 | 0.752 |
| 40% | 8.00 | 1.59 | 0.790 | 0.739 |
| 20% | 8.14 | 1.62 | 0.786 | 0.738 |
EARL-BO: Reinforcement Learning for Multi-Step Lookahead, High-Dimensional Bayesian Optimization
Accept (poster)
Summary: This paper proposes a quite interesting and promising way to combine the strengths of RL with an effective BO optimizer. The authors first pose the limitations of current BO optimizers and hence locate the "myopic" issue within these BO methods. To address that, the authors propose modelling the BO optimization iterations as an MDP (at least a POMDP) and learning a sampling neural network by k-step PPO. A series of novel designs have been proposed by the authors to facilitate this technical objective. First, the authors give a reasonable MDP definition that aligns well with the BO optimization process, including the state representation, action, max(.,.) reward, and transition dynamics. With the MDP definition, an attention-based state extraction mechanism is introduced with a weighted scoring function, which could help extract order-invariant and size-invariant information from the gradually enlarged query point set D. The RL agent in this paper serves as a sampling method to sample the next query point, and the reward function rewards actions with the maximum EI value. Moreover, the authors carefully designed the training procedure, which includes two stages: 1) off-policy learning from an advanced BO method; 2) further on-policy fine-tuning of parts of the network parameters. The effectiveness of the learned BO policy is tested on both synthetic and realistic optimization scenarios and shows improved performance against several baselines. ## after the rebuttal Thanks for the responses. Overall, I still think the added validation and further explanation are not convincing enough. I keep my borderline-rejection opinion on this paper. However, I respect the other two reviewers and the AC/PC and will leave the final decision to them. The writing of this paper is clear and well organized. Claims And Evidence: I acknowledge and agree with one of the authors' claims: a) BO has a myopic issue, and using MDP modelling and RL to solve the MDP could address this issue. 
This claim is clearly supported by the elaboration of motivation, related works, and methodology in Sections 2 and 3. There are three claims that I think are not fairly validated and supported: a) On page 2, left column, line 073, the authors claim "An integrated RL-based BO method that efficiently solves the SDP inherent in multi-step BO". However, no empirical evidence is provided to demonstrate that point. Does "efficiency" here mean computational complexity? If so, the method proposed in this paper involves RL training; isn't it less efficient than traditional BO? b) The authors claim that the proposed method aims to address high-dimensional problems. However, the highest-dimensional problem in the experiments is 19-D, which is apparently not a high-dimensional problem. c) On page 4, right column, lines 205-208, the authors claim that "Permutation invariance is crucial in BO, as the order in which data points is acquired should not affect the learning process". I wonder whether this claim is correct, since the sampling trajectories show clear time-dependent features as the GP approximation transforms from one state to another. I strongly request that the authors add an ablation where position encodings are added to the sampling-point trajectories and compared with the current version. This would definitely help back up the claim you have posed and bears seriously on the correctness of your neural-network design. Methods And Evaluation Criteria: N/A Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: The experimental protocols are somewhat insufficient: a) At least one evolution-strategy baseline should be compared, since ES also shows robust optimization performance on expensive and high-dimensional problems. I give two suggestions: Sep-CMAES (https://link.springer.com/chapter/10.1007/978-3-540-87700-4_30) and MMES (https://arxiv.org/pdf/2203.12675). 
b) The authors only present three problem instances from HPO-B, which makes me wonder about the performance of the proposed method on the many other HPO instances in the benchmark. More instances should be involved, and average performance and error bars are needed to demonstrate the effectiveness. Besides, instances in HPO-B cannot fully represent challenging high-dimensional optimization problems; consider NeuroEvolution benchmarks. c) There are no ablation studies to explore the true effectiveness of the proposed designs. Supplementary Material: I have reviewed the supplementary material, which includes an "AAAI" directory with the source code of this paper in it. Relation To Broader Scientific Literature: I notice that this paper relates to a broader research domain: Learning to Optimize (L2O). Within this scope, researchers widely discuss how learning techniques such as reinforcement learning can be leveraged to boost traditional optimization methods. Essential References Not Discussed: As I said above, since the topic discussed in this paper falls into the Learning to Optimize domain, at least some of the latest developments within this scope should be cited at the beginning of the paper. In particular, since this paper focuses on the continuous black-box optimization domain, I suggest the following related papers: [1] https://dl.acm.org/doi/abs/10.1145/3321707.3321813 [2] https://openreview.net/forum?id=h3jZCLjhtmV [3] https://arxiv.org/abs/2402.17423 [4] https://ui.adsabs.harvard.edu/abs/2024arXiv240503728L/abstract [5] https://arxiv.org/abs/2411.00625 Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: a) I suggest shortening the lengthy text about related works in Section 2.1 and about RL and PPO algorithms in Sections 3.1 and 3.2. Instead, provide sufficient ablation study results to reinforce the scientific integrity of this paper. b) At the beginning of Section 2, there are many "K" symbols with different meanings. Please correct them. 
c) Figure 1 is comprehensive, yet perhaps too comprehensive to convey the concept of the proposed work properly. Simplifying this figure would indeed help readers. d) A small question: on page 5, left column, lines 232-233, the authors state that an arithmetic mean could also be used as the aggregation function. However, it might reduce the bias and hence make it hard for RL to learn, wouldn't it? The current design, which is a neural network, could provide sufficient bias, in my opinion. Questions For Authors: See my comments in the other blocks. Code Of Conduct: Affirmed. Overall Recommendation: 2
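For context, the expected-improvement (EI) criterion that the review mentions as the reward signal has a standard closed form under a Gaussian (GP posterior) predictive distribution. The sketch below is illustrative and not the authors' implementation.

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for maximization under a posterior N(mu, sigma^2).

    f_best is the incumbent best observed value; EI is the expected
    positive improvement over it.
    """
    if sigma <= 0.0:
        # Degenerate posterior: improvement is deterministic.
        return max(mu - f_best, 0.0)
    z = (mu - f_best) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # Phi(z)
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # phi(z)
    return (mu - f_best) * cdf + sigma * pdf
```

Note that EI is always non-negative and remains positive even when `mu < f_best`, as long as `sigma > 0`: posterior uncertainty alone can make a point worth querying.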
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their detailed assessment of our work and for raising important questions that help strengthen the paper. We appreciate the time taken to thoroughly review our submission and provide constructive feedback. ### **Regarding the claim about "efficiently solving SDP":** We apologize for the lack of clarity in our terminology. Our claim here refers to the relative efficiency of RL in (approximately) solving SDPs, compared to other *multi-step lookahead* BO methods, as traditional single-step BO approaches do not solve the SDP. For example, our 3-step lookahead EARL-BO demonstrates superior optimization performance across various benchmark functions compared to Rollout_BO, while also requiring less computation time. Efficiency is indeed an important aspect, which we will explore in a revised version. Please see the response to Reviewer eu5H for our preliminary analysis of CPU time for each BO method on the 8D Ackley function. ### **On addressing "high-dimensional problems":** We should clarify that, as we tried to emphasize in our writing, by "high-dimensional" we mean higher-dimensional in comparison to other *non-myopic* BO methods, which typically handle only very low-dimensional problems (≤6D). EARL-BO successfully scales to practically relevant dimensionalities (up to 19D in our experiments), which represents an advancement for non-myopic BO approaches. Moreover, EARL-BO is a modular framework, and scalability depends mostly on the chosen RL method, providing avenues for future improvement. We will revise our claims in the introduction to be more precise about this contribution. ### **Regarding the ablation study on permutation invariance:** We appreciate this excellent suggestion for understanding not only the architecture of EARL-BO, but also the general permutation invariance of Bayesian optimization as an SDP. 
Following your recommendation, we will include an ablation study comparing an attention-only encoder (with position encodings) to our attention-DeepSets based encoder to exactly explore this point. We have preliminarily run this experiment on the 8D Ackley function for both 3-step and 5-step lookahead settings. Results can be found here: https://anonymous.4open.science/r/icml_2025_review-B587. The results confirm that our proposed method achieves significantly better optimization performance across different BO settings compared to the attention-only encoder, supporting our claims about the importance of permutation invariance. We will include these findings, which support our EARL-BO framework, in the revised version. ### **On evolution strategy baselines:** We thank the reviewer for suggesting additional baselines. We want to clarify that our primary aim is to extend the capabilities of non-myopic Bayesian optimization, rather than to compete with methods designed for very high-dimensional problems (100s-1000s of dimensions), and thus large numbers of samples (on the order of 10^6 in the provided references). We will clarify our contributions in the introduction and reference these evolution strategies and Learning to Optimize (L2O) as alternative approaches to BO for different problem settings. This important distinction will help position our work more accurately within the literature. ### **Regarding the minor point about aggregation functions:** This comment raises an important point about the potential trade-off between bias and learnability when choosing aggregation functions. While we mentioned arithmetic mean as a permutation-invariant alternative to summation, we agree that maintaining an appropriate level of inductive bias is crucial for effective RL training. Our neural network-based approach provides sufficient bias to facilitate learning, as you noted. We will clarify this discussion in the revised manuscript. 
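The permutation-invariance property under discussion can be demonstrated with a minimal DeepSets-style sketch (illustrative random weights; not the EARL-BO architecture): embedding each queried point independently and sum-pooling makes the encoding independent of the order in which the points were acquired.

```python
import numpy as np

def deepsets_encode(points, w_phi, w_rho):
    """DeepSets-style set encoder: per-point embedding phi, sum pooling,
    then a readout rho. Sum pooling makes the output order-invariant."""
    h = np.tanh(points @ w_phi)    # phi: embed each point independently
    pooled = h.sum(axis=0)         # permutation-invariant aggregation
    return np.tanh(pooled @ w_rho) # rho: readout over the pooled code

rng = np.random.default_rng(0)
pts = rng.normal(size=(10, 4))     # 10 queried points in a 4-D space
w_phi = rng.normal(size=(4, 8))
w_rho = rng.normal(size=(8, 3))
out = deepsets_encode(pts, w_phi, w_rho)
shuffled = deepsets_encode(pts[rng.permutation(10)], w_phi, w_rho)
```

An attention-only encoder with position encodings, by contrast, would generally produce different outputs for `pts` and its shuffled copy, which is exactly the distinction the proposed ablation probes.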
### **Additional improvements:** We thank the reviewer for the detailed feedback and will address the other suggestions in a revised version, including: fixing the inconsistent "K" symbols in Section 2, correcting the directory titles in the supplementary source code, improving clarity of Figure 1, and expanding our experimental evaluations with the ablation studies suggested throughout the reviews. We sincerely appreciate the reviewer's constructive feedback, which has helped us identify areas for improvement and clarification. We believe these changes will strengthen our paper significantly. --- Rebuttal Comment 1.1: Comment: I am generally satisfied with some responses of the authors. I still have the following concerns: I have read all review comments (the other two reviewers and mine) and the corresponding responses from the authors. In particular, I found that the computational efficiency issue raised by reviewer eu5H is fatal for this paper, since the problem dimension this work could address is less than 20, while the solving time is so large. Regarding the ablation study on permutation invariance, the additional experimental results (https://anonymous.4open.science/r/icml_2025_review-B587) show very close performance with overlapping error bars; how could this be a significant demonstration? --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our efforts and for the continued engagement. ### [Computational Efficiency] Regarding computational efficiency comparisons, please see our discussions with Reviewer eu5H. We reproduce from that discussion a key point on computational time here: Despite strongly outperforming rollout-based methods, we acknowledge the reviewers' concerns regarding computational efficiency compared to the cheaper (myopic) methods.
We do acknowledge this limitation of EARL-BO (and non-myopic methods in general), but note that BO is specifically designed for optimization problems where function evaluations are expensive, such as hyperparameter tuning for ML models [1], engineering design/simulations [2], and chemical laboratory experiments [3]. In these settings, a single function evaluation can take hours or even days, not to mention monetary costs, while the overhead of BO remains relatively small (difficulty of these problems is also mentioned by Reviewer JCQV). Thus, even as EARL-BO introduces additional computational cost relative to *myopic* baseline BO approaches, its significant improvement in performance can justify this expense. Furthermore, the improved sample efficiency of our method means that fewer function evaluations are required to reach an optimal or near-optimal solution. This results in a (significant) net reduction in total cost when considering the entire optimization process. Future works can exploit the modularity of EARL-BO for faster settings, e.g., by warm-starting RL, alternative RL methods/tunings, using the actor for multiple BO iterations, etc. ### [Permutation Invariance] We agree that the performance gap between the DeepSets and Attention-Only frameworks is limited. Nevertheless, despite the standard deviations in this preliminary study, we do already see a *consistent* performance improvement using DeepSets (noting the logarithmic y-axis): after 10 iterations, mean regret is 7.8 vs 9.1; after 20 iterations, 5.3 vs 6.2; after 30 iterations, 4.3 vs 4.9; after 40 iterations, 3.9 vs 4.4; and after 50 iterations, 3.6 vs 3.9. These slight but consistent improvements are non-trivial in expensive settings, e.g., see our discussion above. While we hope to have time to produce more comprehensive results on this study in a revised version, these preliminary results already reveal some interesting conclusions: (1) these results begin to answer ‘*How Markovian is BO?*’.
This is particularly interesting given that EARL-BO may exhibit some *synthetic* non-Markovian behavior, as the samples can become less reliable further into the lookahead horizon (see discussions on planning delusion with Reviewers eu5H and JCQV); (2) practically speaking, noting the modular design of EARL-BO, these results may suggest improvements for the encoder design in future works. We believe these are highly interesting areas to discuss in a revised version and are grateful to the Reviewer for suggesting this ablation study and direction of investigation. ### [Scalability of EARL-BO] To further highlight the relative efficiency of the EARL-BO framework, we will include optimization results on the 30-dimensional Ackley function. A preliminary version of this experiment (two random starts instead of ten for now) can be found at the same anonymous repository https://anonymous.4open.science/r/icml_2025_review-B587. EARL-BO significantly outperforms all comparison methods, again demonstrating its scalability to challenging optimization problems in practice. The computational requirements for this added experiment emphasize our scalability results: the rollout-based method would require even more Monte Carlo iterations to handle this search space, yet using even 1000 iterations already times out in the 3600s budget. On the other hand, EARL-BO runs in approximately 1600s, a moderate increase in time compared to the much less complicated setting of optimization over the 8-D Ackley function. Specifically, compared to 8-D 3-step lookahead EARL-BO, BO iterations in the 30-D problem require approximately 100% more CPU time, while myopic methods, such as EI and TuRBO, require 533% and 215% more CPU time, respectively, to make decisions in 30-D compared to the 8-D optimization problem. We believe these results strengthen our contribution and will include a more comprehensive analysis in a revised version. **References:** [1] Snoek, J., Larochelle, H., & Adams, R. P.
(2012). Practical Bayesian optimization of machine learning algorithms. NeurIPS 2012. [2] Jones, D. R., Schonlau, M., & Welch, W. J. (1998). Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13, 455-492. [3] Shields, B. J., Stevens, J., Li, J., Parasram, M., Damani, F., Alvarado, J. I. M., Janey, J. M., Adams, R. P., & Doyle, A. G. (2021). Bayesian reaction optimization as a tool for chemical synthesis. Nature, 590(7844), 89-96.
Summary: The paper proposes a new framework for combining RL (via meta-learning) with BO to provide effective optimisation. They propose a NN architecture that deals well with the sequential nature of data collection. For each BO step, off-line policy learning attempts to learn how to mimic TuRBO, and then model-based RL is used to fine-tune the policy. Unlike existing RL-based optimisers, the RL agent is trained from scratch at each BO step, allowing a higher level of specificity. In particular, they focus on the task of non-myopic optimisation, providing empirical results. ## update after rebuttal Thanks for your convincing replies to the rebuttal. I maintain my already high score. Claims And Evidence: The paper provides convincing empirical evidence that their proposed method works. Most smaller claims along the way are well supported, except the quick but interesting discussion about "planning delusion". I would like to see some more robust discussion / specific experimentation backing up this point (more than provided in the Appendix), which feels quite weak. Methods And Evaluation Criteria: The benchmark datasets and experimental protocols seem valid + experimental details seem clear. Theoretical Claims: N/A Experimental Designs Or Analyses: Aside from Rollout VR, why are there not more non-myopic algorithms compared against? This requires justification in my eyes. Supplementary Material: Yes. Additional important experimental details are presented clearly here. Relation To Broader Scientific Literature: This paper sits within a popular trend of amortising certain prohibitively expensive operations with GP models, particularly in the context of optimisation. Scholarship is mainly good, but a couple of references are missing (see below). I do feel like this is a substantial step forward, providing a general framework for deploying complicated BO.
I imagine that this work could spin out into lots of interesting future work, applying this framework to other difficult optimisation problems. Essential References Not Discussed: I believe that there are a couple of missing references, but scholarship is on the whole good.
1. The classic meta-learning optimiser paper: https://arxiv.org/abs/1611.03824
2. Work using a very similar NN architecture + meta-training over GP samples, but with the goal of accelerating model fitting: https://arxiv.org/abs/2106.08185
3. Again, similar meta-trained models attempting to mimic GPs through a similar NN architecture: https://arxiv.org/abs/1910.13556
4. A non-myopic meta-learning approach to BO: https://arxiv.org/abs/2302.11533

Other Strengths And Weaknesses: The paper seems novel and makes an important contribution. The paper is also very well written and was a pleasure to read. Other Comments Or Suggestions: Have you thought about building your RL environment using the SAASBO priors to sample your GP trajectories? This baseline seems to do well and it would be interesting if your method could learn to mimic some aspects of its behaviour + make it non-myopic? Questions For Authors: I would be tempted to increase my score if the following two aspects could be addressed: 1) I want to see performance if you just train the RL agent on the GP prior (i.e. no step-specific training) and add this as a baseline. This would really help provide context for how well your step-based episodic environments work; 2) improved exploration of your hypothesis of planning delusion. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thoughtful comments and positive feedback on our work. We appreciate the time taken to understand our contribution and the valuable suggestions to strengthen the paper. ### **Regarding the "planning delusion" concept:** We will conduct further experimentation on this concept as suggested. The text below is repeated from our response to Reviewer euH5: Regarding 'planning delusion,' or the effect of over-reliance on GP with high uncertainty, we agree this is an important component that warrants further exploration. We believe planning delusion occurs because EARL-BO repeatedly samples from the GP model, adds them to the dataset, updates the GP, and so on. This iterative process leads to increasingly less reliable GP models and RL environments further into the lookahead horizon. This explains the sub-optimal performance with excessively long lookahead horizons. We will include additional experiments testing 1 (i.e., standard myopic), 3, 5, and 7-step lookahead EARL-BO variants on the benchmark functions. If our hypothesis is correct, we expect the smaller lookahead horizons to perform better at earlier iterations, when the initial GP is more uncertain, but longer lookahead horizons to dominate at later iterations, when the initial GP is more representative of the black-box function. Our preliminary findings on the 8D Ackley function indeed follow this pattern: in earlier iterations, the 3-step lookahead method performs best; however, as optimization progresses, the 5-step lookahead method shows superior performance. The 7-step lookahead method performs worse consistently, suggesting the GP environment never reaches an accuracy amenable to such a long lookahead horizon. We will include and expand on these ablation study results in a revised version. Incorporating uncertainty-aware lookahead mechanisms is an excellent idea for future expansion. 
We agree that real and virtual data should be treated with different levels of trust. Uncertainty quantification would help mitigate planning delusion by accounting for the reliability of predictions, e.g., with a robust BO approach. Additionally, an adaptive horizon approach (using longer lookahead when uncertainty is low) could be a promising direction for future research. ### **On the comparison with other non-myopic algorithms:** We aimed to compare against N-step lookahead BO methods (N>2) that are general and can be applied to non-trivial black-box optimization problems (e.g., handling ~10 dimensions). The Rollout method with variance reduction represents the most recent approach meeting these criteria. Many other non-myopic methods face scalability challenges in higher dimensional spaces, making fair comparisons difficult (though we appreciate it if the reviewer can point us to suitable methods). Nevertheless, we appreciate this feedback and will clarify this justification in the revised manuscript. ### **Regarding the RL agent trained on GP prior only:** We are admittedly uncertain what the reviewer means by "no step-specific training." We took this to mean RL-based BO without lookahead steps. Our ablation study for planning delusion will include experiments with exactly 1-step RL-based BO. We expect these results to show that performance is comparable to EI-based BO, since both methods effectively optimize for immediate improvement (RL effectively serves as the "optimizer" for the EI acquisition function). Adding this ablation study will support that the multi-step lookahead approach is the main contributor to EARL-BO's enhanced optimization efficiency rather than the RL component. We will include this helpful analysis in a revised version. ### **On using SAASBO priors:** Thank you for this insightful suggestion. We envision EARL-BO as a flexible and modular framework for RL-based BO, which can be fine-tuned to particular problem settings. 
It can conduct off-policy learning with any state-of-the-art BO method, including SAASBO, and suggest points that will have optimal multi-step lookahead effects. Moreover, other aspects could also be switched, e.g., the choice of RL algorithm (see response to Reviewer eu5H). These directions indeed represent exciting avenues for future work, and we will highlight this in the revised manuscript. We thank the reviewer again for their constructive feedback and are pleased that they found our paper well-written and a "substantial step forward" in the field that can help with many difficult optimization problems. --- Rebuttal Comment 1.1: Comment: Please can you discuss how your work relates to the additional references that I provided? I am now increasingly worried about the computational cost issues raised by the other reviewers. This is something I didn't quite notice when I read the paper. These high costs do make this algorithm very difficult to use in practice and somewhat undermine my positive comment "I do feel like this is a substantial step forward, providing a general framework for deploying complicated BO. I imagine that this work could spin out into lots of interesting future work, applying this framework to other difficult optimisation problems." Do the authors have anything to suggest about the settings in which people would likely use this algorithm? --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our scholarship efforts and pointing us to the omitted references. We will indeed expand our discussion of the broader context using these in a revised version. In particular:
- [1] This paper is a seminal effort in meta-learning for black-box optimization and will be discussed when introducing meta-learning for Bayesian optimization (the authors nicely contrast their method with BO).
- [2-3] These works use similar attention-based architectures to encode datasets, although for other applications as noted.
These share commonalities with our method in their encoder, which takes in a full dataset with the aim of learning an encoding of statistics of a full stochastic process (cf. our training is purely task-driven). We will discuss these references when introducing the learned representation of the dataset.
- [4] This recent work discusses how meta-learning can be applied to the slightly different setting of movement-cost constrained BO, which the authors nicely compare to non-myopic BO (Section 4). We will include this line of research in our discussion of meta-learning for BO.

### [Applicability of EARL-BO] In response to the reviewers' concerns, we will highlight the relative efficiency of the EARL-BO framework using the 30-dimensional Ackley function. A preliminary version of this experiment (two random starts instead of ten for now) can be found at https://anonymous.4open.science/r/icml_2025_review-B587. EARL-BO significantly outperforms all comparison methods, again demonstrating its scalability to challenging optimization problems in practice. The computational requirements for this added experiment emphasize our scalability results: the rollout-based method would require even more Monte Carlo iterations to handle this search space, yet using even 1000 iterations already times out in the 3600s budget. We believe non-myopic methods are the best computational comparison point (see discussions with other reviewers). On the other hand, EARL-BO runs in approximately 1600s, a moderate increase in time compared to the much less complicated setting of optimization over the 8-D Ackley function. Specifically, compared to 8-D 3-step lookahead EARL-BO, BO iterations in the 30-D problem require approximately 100% more CPU time, while myopic methods, such as EI and TuRBO, require 533% and 215% more CPU time, respectively, to make decisions in 30-D compared to the 8-D optimization problem.
We believe these results strengthen our contribution and will include a more comprehensive analysis in a revised version. Despite strongly outperforming rollout-based methods in less time, we acknowledge the reviewers’ concerns regarding computational efficiency compared to the cheaper (myopic) methods. We do acknowledge this limitation of EARL-BO (and in fact non-myopic methods in general), but note that **EARL-BO, and BO in general, is specifically designed for optimization problems where function evaluations are expensive,** such as hyperparameter tuning for ML models [1], engineering design/simulations [2], and chemical laboratory experiments [3]. In these settings, a single function evaluation can require hours or even days, not to mention monetary costs, while the overhead of BO remains relatively small (difficulty of these problems is also mentioned by the Reviewer). Thus, even as EARL-BO introduces additional computational cost relative to *myopic* baseline BO approaches, its significant improvement in performance can justify this expense. Furthermore, the improved sample efficiency of our method means that fewer function evaluations are required to reach an optimal or near-optimal solution. This results in a (significant) net reduction in total cost when considering the entire optimization process. Future works can exploit the modularity of EARL-BO for faster settings, e.g., by warm-starting RL, alternative RL methods/tunings, using the actor for multiple BO iterations, etc. **References:** [1] Snoek, J., Larochelle, H., & Adams, R. P. (2012). Practical Bayesian optimization of machine learning algorithms. NeurIPS 2012. [2] Jones, D. R., Schonlau, M., & Welch, W. J. (1998). Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13, 455-492. [3] Shields, B. J., Stevens, J., Li, J., Parasram, M., Damani, F., Alvarado, J. I. M., Janey, J. M., Adams, R. P., & Doyle, A. G. (2021).
Bayesian reaction optimization as a tool for chemical synthesis. Nature, 590(7844), 89-96.
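The total-cost argument in the reply above can be made concrete with a back-of-the-envelope calculation. The evaluation cost and the evaluation counts below are illustrative assumptions, not measurements; only the 840s per-iteration overhead comes from the preliminary runtime table reported elsewhere in this discussion:

```python
# Back-of-the-envelope total-cost comparison (numbers are illustrative assumptions).
eval_cost_s = 4 * 3600      # one black-box evaluation: 4 hours (e.g., training an ML model)

# Myopic BO: tiny per-iteration overhead, but assume it needs more evaluations.
myopic_overhead_s = 0.3
myopic_evals = 80           # assumed evaluation count, for illustration only

# Non-myopic EARL-BO: large per-iteration overhead, assume fewer evaluations.
earlbo_overhead_s = 840     # 3-step lookahead figure from the preliminary runtime table
earlbo_evals = 50           # assumed evaluation count, for illustration only

myopic_total = myopic_evals * (eval_cost_s + myopic_overhead_s)
earlbo_total = earlbo_evals * (eval_cost_s + earlbo_overhead_s)

print(f"myopic : {myopic_total / 3600:.1f} h total")
print(f"EARL-BO: {earlbo_total / 3600:.1f} h total")
```

Whenever the evaluations saved by the non-myopic method outweigh its added per-iteration overhead, i.e. `(myopic_evals - earlbo_evals) * eval_cost_s > earlbo_evals * earlbo_overhead_s`, the total cost drops despite the slower iterations.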
Summary: The paper introduces EARL-BO, a novel reinforcement learning (RL)-based framework for multi-step lookahead Bayesian Optimization (BO) in high-dimensional black-box optimization problems. Claims And Evidence: The paper makes several key claims:
- EARL-BO improves multi-step lookahead BO performance in high-dimensional settings.
  - Evidence: Experiments on synthetic and real-world HPO tasks show that EARL-BO outperforms single-step BO methods and achieves comparable or better results than rollout-based BO in low dimensions.
- EARL-BO is scalable to high-dimensional problems.
  - Evidence: The use of an Attention-DeepSets encoder and RL-based optimization allows it to handle up to 19D problems, unlike traditional rollout-based methods which are limited to low-dimensional cases.
- EARL-BO mitigates myopic behavior in BO.
  - Evidence: Experimental results demonstrate that multi-step lookahead strategies improve long-term optimization results, particularly in complex and higher-dimensional search spaces.
- The GP-based virtual environment enables efficient learning.
  - Evidence: The model-based RL approach significantly improves sample efficiency, reducing reliance on expensive function evaluations.

While these claims are largely supported, some concerns include:
- Planning Delusion: The paper acknowledges that EARL-BO can suffer from poor decision-making when the GP posterior has high uncertainty, especially in high-dimensional settings with sparse data.
- Computational Cost: Although EARL-BO is scalable, the training overhead of RL-based BO is not extensively compared to other methods.

Methods And Evaluation Criteria: The proposed methods are well-suited for multi-step lookahead BO and high-dimensional black-box optimization. The evaluation criteria include:
- Synthetic benchmark functions (Ackley, Levy, Rosenbrock, Sum Squares) tested in 2D, 5D, and 8D.
- Real-world HPO tasks (HPO-B dataset) with search spaces 6D, 8D, and 19D.
- Comparison against state-of-the-art BO methods including EI, TuRBO, SAASBO, and rollout-based methods.

The benchmarks and criteria are appropriate, but additional details on computational efficiency and runtime analysis would strengthen the evaluation. Theoretical Claims: The paper formulates BO as a finite-horizon Markov Decision Process (MDP) and proposes an RL-based solution. The theoretical framework is sound, leveraging:
- Dynamic programming to justify multi-step optimization.
- Attention-DeepSets encoders for permutation invariance.
- PPO-based RL policy optimization.

The theoretical justification appears correct, though more formal convergence guarantees for the RL policy would be beneficial. Experimental Designs Or Analyses: The experiments are well-structured and include:
- Multiple replications (10 runs per experiment).
- Ablation studies (e.g., lookahead horizon effects, learning rate sensitivity).
- Comparisons against strong baselines.

However, there are some limitations:
- Lack of runtime comparisons: How does EARL-BO’s computational cost compare to TuRBO or SAASBO?
- Planning Delusion Analysis: The effect of high uncertainty in GP modeling on EARL-BO's decision-making could be explored further.

Supplementary Material: The supplementary material includes:
- Experimental setup details.
- Hyperparameter settings for PPO and the encoder.
- Additional results (ablation studies on learning rates, planning delusion effects).

The supplement is comprehensive and enhances reproducibility. Relation To Broader Scientific Literature: The paper is well-situated within the Bayesian optimization and reinforcement learning literature. The connections are well-documented, but a deeper discussion on the trade-offs between model-based vs. model-free RL approaches in BO would be valuable. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths:
- Innovative integration of RL with BO for multi-step lookahead.
- Scalability to high-dimensional problems.
- Strong empirical validation across synthetic and real-world tasks.
- Comprehensive ablation studies.

Weaknesses:
- Computational cost is not analyzed in depth.
- Limited discussion on theoretical RL convergence properties.

Other Comments Or Suggestions:
- Provide a runtime comparison with existing BO methods.
- Analyze how EARL-BO scales computationally with increasing dimensions.
- Explore potential improvements to mitigate planning delusion (e.g., uncertainty-aware lookahead adjustments).

Questions For Authors:
- How does EARL-BO’s training cost compare to TuRBO or SAASBO?
  - A direct runtime comparison would clarify its practical applicability.
- Would an uncertainty-aware lookahead mechanism help mitigate planning delusion?
  - Given that long-horizon lookahead can lead to compounding GP errors, would incorporating uncertainty quantification help?
- How does EARL-BO perform when initialized with a poorly trained GP?
  - If the GP model is unreliable in early iterations, does EARL-BO suffer significant performance degradation?
- Could alternative RL methods (e.g., model-free RL) be competitive?
  - PPO is used, but would Q-learning or offline RL approaches provide benefits?

Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your thoughtful review and valuable suggestions for improving our work. Your feedback is extremely helpful, and we are grateful for your careful consideration of our work. ### **Planning Delusion and Uncertainty-Aware Mechanisms** Regarding 'planning delusion,' or the effect of over-reliance on GP with high uncertainty, we agree this is an important component that warrants further exploration. We believe planning delusion occurs because EARL-BO repeatedly samples from the GP model, adds them to the dataset, updates the GP, and so on. This iterative process leads to increasingly less reliable GP models and RL environments further into the lookahead horizon. This explains the sub-optimal performance with excessively long lookahead horizons. We will include additional experiments testing 1 (i.e., standard myopic), 3, 5, and 7-step lookahead EARL-BO variants on the benchmark functions. If our hypothesis is correct, we expect the smaller lookahead horizons to perform better at earlier iterations, when the initial GP is more uncertain, but longer lookahead horizons to dominate at later iterations, when the initial GP is more representative of the black-box function. Our preliminary findings on the 8D Ackley function indeed follow this pattern: in earlier iterations, the 3-step lookahead method performs best; however, as optimization progresses, the 5-step lookahead method shows superior performance. The 7-step lookahead method performs worse consistently, suggesting the GP environment never reaches an accuracy amenable to such a long lookahead horizon. We will include and expand on these ablation study results in a revised version. Incorporating uncertainty-aware lookahead mechanisms is an excellent idea for future expansion. We agree that real and virtual data should be treated with different levels of trust. 
Uncertainty quantification would help mitigate planning delusion by accounting for the reliability of predictions, e.g., with a robust BO approach. Additionally, an adaptive horizon approach (using longer lookahead when uncertainty is low) could be a promising direction for future research. ### **Computational Cost Analysis** We agree that runtime comparisons are also a crucial consideration that warrants detailed investigation. We provide computational cost results for the 8D Ackley function below:

| Method | Average Runtime (s) | Std Dev (s) | Notes |
| --- | --- | --- | --- |
| EI | 0.28 | 0.05 | - |
| TuRBO | 0.27 | 0.20 | - |
| SAASBO | 168.6 | 27.6 | - |
| Rollout_EI (3-step) | >3600 | - | 1000 MC iterations |
| EARL-BO (3-step) | 840 | 147 | 4000 episodes |
| EARL-BO (5-step) | 1075 | 134 | 4000 episodes |

While multi-step lookahead methods inevitably require more computation time, we note that Bayesian optimization is typically applied to problems where function evaluations are expensive (e.g., materials discovery, hyperparameter tuning of large models, etc.). Nevertheless, it is important to highlight these performance tradeoffs, and we will include a comprehensive table of CPU times in the final version. ### **Performance with Poorly Trained GPs** We assume that poorly trained GPs affect all BO methods. However, we hypothesize the effects are more pronounced for multi-step lookahead methods, because looking multiple steps ahead via an untrustworthy environment may cause sub-optimal performance. Nevertheless, as shown in our appendix experiment with the 19D optimization problem, using only 5 initial points (creating a challenging sparse data scenario, which can lead to a poor choice of hyperparameters), EARL-BO initially struggles but eventually finds the global optimum faster than traditional methods such as EI, PI, UCB, and TuRBO. This demonstrates the robustness of our approach even in these challenging scenarios.
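As a companion to the planning-delusion discussion, the fantasy-sampling loop that drives it can be sketched in a deliberately minimal 1-D form. The RBF kernel with unit prior variance, the random candidate policy, and the sine stand-in for the black box are all simplifying assumptions, not our implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, ls=0.5):
    """RBF kernel with unit prior variance (an illustrative choice)."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, x_star, noise=1e-6):
    """Exact 1-D GP posterior mean/variance at x_star given data (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    k_star = rbf(X, np.array([x_star]))
    mu = (k_star.T @ np.linalg.solve(K, y)).item()
    var = (1.0 - k_star.T @ np.linalg.solve(K, k_star)).item()
    return mu, max(var, 0.0)

f = np.sin                       # stand-in for the expensive black box
X = rng.uniform(-2, 2, size=5)   # real observations
y = f(X)

# Fantasy rollout: each lookahead step samples from the *model*, not from f,
# then conditions the next step on that sampled value.
for step in range(3):
    x_next = rng.uniform(-2, 2)                   # a real policy would choose this point
    mu, var = gp_posterior(X, y, x_next)
    y_fantasy = mu + np.sqrt(var) * rng.normal()  # sampled guess, possibly wrong
    X = np.append(X, x_next)
    y = np.append(y, y_fantasy)                   # the model now trusts its own guess
```

Each lookahead step conditions the GP on its own sampled guess rather than a real evaluation, so an error made early in the horizon propagates to every later step; this is the compounding mechanism we believe underlies the degradation observed at long horizons.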
### **Alternative RL Methods** The question about alternative RL methods is rather insightful. In fact, off-policy methods might offer sample efficiency advantages in certain scenarios. As our intention is to present EARL-BO as a novel framework, rather than to tune to a particular application, we select PPO due to its well-documented stability, robustness to hyperparameters, and strong empirical performance in continuous action space tasks similar to our application. We will emphasize that the EARL-BO framework is agnostic to the choice of RL method, and the optimal design of an RL method represents an interesting direction for future research that could yield further improvements. We appreciate your constructive feedback and will incorporate your suggestions to strengthen the paper. --- Rebuttal Comment 1.1: Comment: Computational efficiency remains a major concern for me. The time cost of your EARL-BO is indeed quite high. Please provide a detailed explanation of what *Average Runtime (s)* specifically represents—is it the time for a single estimation, a single iteration, or something else? Comparing the computation time of iterations and episodes seems unfair. Please ensure that all baselines are evaluated under a unified standard for computational efficiency. --- Reply to Comment 1.1.1: Comment: Thank you for the continued engagement and interest in our work. We apologize for the unclear presentation of our preliminary results. We do believe “all baselines are evaluated under a unified standard for computational efficiency,” as explained in the following. ### [Analysis of Runtimes] The table of preliminary computational cost analysis reports the average runtime for *each Bayesian optimization iteration* following the respective frameworks. The notes column was intended to reflect some important hyperparameter choices, rather than to suggest we compare iterations against episodes, estimations, etc. 
We will include more comprehensive results and analyses in a revised version, including:
1. EI, TuRBO, and SAASBO are much faster, but are “myopic” in the sense they do not aim to solve BO as an SDP;
2. Rollout_EI is run with 1000 Monte Carlo iterations, which is still a small number, given we had to use similar numbers for the 2- and 4-D problems presented in the paper. Monte Carlo methods require a number of samples scaling with the search space and scenario tree (here based on dimensionality and lookahead horizon) to achieve reasonable accuracy, which becomes impractical as dimensionality increases;
3. Modern RL methods employ function approximation techniques such as actor-critic networks, which can learn useful policies from far fewer samples by leveraging generalization, rather than exhaustive sampling (see below on scalability). We found PPO to perform well with only 4000 episodes, but again note that our EARL-BO framework can integrate alternative/better RL algorithms seamlessly, e.g., by warm-starting RL, alternative RL methods/tunings, using the actor for multiple BO iterations, etc.

### [Scalability of EARL-BO] To further highlight the relative efficiency of the EARL-BO framework, we will include optimization results on the 30-dimensional Ackley function. A preliminary version of this experiment (two random starts instead of ten for now) can be found at the same anonymous repository https://anonymous.4open.science/r/icml_2025_review-B587. EARL-BO significantly outperforms all comparison methods, again demonstrating its scalability to challenging optimization problems in practice. The computational requirements for this added experiment emphasize our scalability results: the rollout-based method would require even more Monte Carlo iterations to handle this search space, yet using even 1000 iterations already times out in the 3600s budget.
On the other hand, EARL-BO runs in approximately 1600s, a moderate increase compared to the much less complicated setting of optimization over the 8-D Ackley function. Specifically, compared to 8-D 3-step lookahead EARL-BO, BO iterations in the 30-D problem require approximately 100% more CPU time, while myopic methods such as EI and TuRBO require 533% and 215% more CPU time, respectively, to make decisions in 30-D compared to the 8-D optimization problem. We believe these results strengthen our contribution and will include a more comprehensive analysis in a revised version. Although EARL-BO strongly outperforms rollout-based methods in less time, we acknowledge the reviewers’ concerns regarding computational efficiency compared to the cheaper (myopic) methods. We recognize this limitation of EARL-BO (and in fact of non-myopic methods in general), but note that BO is specifically designed for optimization problems where function evaluations are expensive, such as hyperparameter tuning for ML models [1], engineering design/simulations [2], and chemical laboratory experiments [3]. In these settings, a single function evaluation can require hours or even days, not to mention monetary costs, while the overhead of BO remains relatively small (the difficulty of these problems is also mentioned by Reviewer JCQV). Thus, even though EARL-BO introduces additional computational cost relative to *myopic* baseline BO approaches, its significant improvement in performance can justify this expense. Furthermore, the improved sample efficiency of our method means that fewer function evaluations are required to reach an optimal or near-optimal solution. This results in a (significant) net reduction in total cost when considering the entire optimization process. Future work can exploit the modularity of EARL-BO for faster settings, as noted above. **References:** [1] Snoek, J., Larochelle, H., & Adams, R. P. (2012). Practical Bayesian optimization of machine learning algorithms. 
NeurIPS 2012. [2] Jones, D. R., Schonlau, M., & Welch, W. J. (1998). Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13, 455-492. [3] Shields, B. J., Stevens, J., Li, J., Parasram, M., Damani, F., Alvarado, J. I. M., Janey, J. M., Adams, R. P., & Doyle, A. G. (2021). Bayesian reaction optimization as a tool for chemical synthesis. Nature, 590(7844), 89-96.
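For reference, the 30-dimensional Ackley benchmark discussed above is a standard test function. A minimal Python sketch, assuming the conventional parameterization (a = 20, b = 0.2, c = 2π); this is the textbook definition, not the authors' EARL-BO code:

```python
import math

def ackley(x, a=20.0, b=0.2, c=2.0 * math.pi):
    """Standard Ackley test function; global minimum f(0, ..., 0) = 0."""
    d = len(x)
    mean_sq = sum(xi * xi for xi in x) / d
    mean_cos = sum(math.cos(c * xi) for xi in x) / d
    return (-a * math.exp(-b * math.sqrt(mean_sq))
            - math.exp(mean_cos) + a + math.e)

# A 30-D instance, as in the scalability experiment described above:
value_at_optimum = ackley([0.0] * 30)
```

The many shallow local minima produced by the cosine term are what make high-dimensional Ackley a common stress test for lookahead BO methods.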
When Dynamic Data Selection Meets Data Augmentation: Achieving Enhanced Training Acceleration
Accept (poster)
Summary: The paper centers on the idea of combining dynamic data selection and augmentation in order to increase both the quality and diversity of data. The proposed method is applicable specifically to multimodal data. The proposed method selects augmentation candidates that are both low density and do not represent noisy outliers. The latter criterion is fulfilled by focusing on the examples for which the samples' and labels' representations align (cosine similarity of embeddings) according to a CLIP model fine-tuned on the original (pre-selection&augmentation) data. The authors demonstrate the effectiveness and efficiency of the proposed method and its components in comparisons to various baselines. Claims And Evidence: The central claim of the paper is: selecting samples according to the proposed methodology improves model generalization with reduced training cost. The claim seems to be justified overall. Yet, I still have some doubts and questions regarding the experiments presented in the paper; see the following sections. Methods And Evaluation Criteria: Yes, they make sense overall. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: I checked the experimental design and analysis. I have the following questions and I apologize if I have overlooked something and the answers are already in the manuscript: - Is the method presented in this paper the only one in Table 1 that includes augmentation? I would like to see at least the result of Random + augmentation (and ideally InfoBatch + augmentation) on all datasets (Table 4 only shows it for Tiny-ImageNet) - In the presented experiments, did the authors fine-tune a pre-trained CLIP model with the selection&augmentation methods applied? Or was training from scratch used? - If I understand correctly, the authors have to fine-tune a CLIP model that is used for semantic alignment estimation on the data prior to filtering. It would be interesting to include the performance of this model in the results. 
- is the cost of fine-tuning CLIP included in the results presented in Table 2? - for Table 3, I would be interested in seeing the performance of the top 2 methods with augmentation (i.e. in Table 3 as well, is the only method using augmentation the one presented in this work?) - when talking about selection ratio, e.g. Figure 3 x-axis, Table 4 etc., the ratio does not include the augmented examples, does it? i.e. the number of examples the model is trained on is actually larger than e.g. 20% of the full dataset for selection ratio = 20%? What is the augmentation ratio (i.e. how many new examples are generated per selected example in the presented experiments?) Supplementary Material: I did not thoroughly review the supplementary material. Relation To Broader Scientific Literature: While the title and the introduction are formulated rather generally, the paper's focus is very much on multimodal image-text data. Hence, the focus of the related works section is on multimodal or image-only domains. I suggest the authors make it clear already in the introduction that the focus of this work is on multimodal data specifically. Essential References Not Discussed: Given that the focus is image & multimodal image-text domains, the references are discussed sufficiently well to the best of my knowledge. Other Strengths And Weaknesses: I would like to see the experimental section as the main strength of this paper. The experiments tell a nice story and are well designed in order to justify the main claims of the paper. However, given the various questions I had while reading this section (see experimental design section), it is also the main weakness of this paper at this point. Other Comments Or Suggestions: - typo in line 088 second column "riak" -> "risk" - can the authors discuss how applicable this method would be to other domains, e.g. data selection for LLM training? Questions For Authors: See "experimental design" section. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer 3S6N, Thank you for your careful review and constructive suggestions on our work. We appreciate your recognition of our work's strengths, e.g., novelty and well-structured experiments. We provide responses to address the comments as follows: - **Q1: More results in Table 1.** - **A1:** Thank you for your suggestions. We have extended our evaluation to include more results in Tab. 1 (as Tab. 4 shows results on Tiny-ImageNet). As shown in Tab. D-1, while both baselines benefit from augmentation, our method consistently achieves higher accuracy. This indicates that the performance gains are not solely due to augmentation but stem from our method's ability to identify samples suitable for augmentation. Thus, by unifying dynamic data selection and augmentation, our framework achieves both training acceleration and enhanced performance. **Table D-1:** Additional results for Tab. 1. ||C-10|||C-100||| |-|-|-|-|-|-|-| |Selection ratio (%)|30|50|70|30|50|70| |Random*|93.3|94.8|95.2|76.8|77.9|78.5| |Infobatch|94.8|95.3|95.9|77.1|78.5|78.7| |Ours|**94.9**|**95.5**|**96.0**|**77.6**|**78.9**|**79.5**| - **Q2: Clarification on the fine-tuning process.** - **A2:** We would like to clarify that the fine-tuning process does **NOT** use the selection\&augmentation methods. The CLIP backbone is kept frozen; we only train the lightweight adapter from scratch. - **Q3: Performance of the fine-tuned model.** - **A3:** Good question. First, in Tab. D-2, we show the accuracy of CLIP before and after fine-tuning. It shows that the fine-tuned CLIP achieves notable improvements, indicating enhanced dataset-specific alignment ability. Second, we also evaluate our method with the pretrained CLIP without fine-tuning. In Tab. D-3, while the vanilla CLIP achieves high performance, the fine-tuned model further improves the performance. **Table D-2:** Performance of CLIP in classification accuracy. 
||C-10|C-100| |-|-|-| |zero-shot|89.8|65.1| |fine-tuned|**94.8**|**76.8**| **Table D-3:** Performance of CLIP with Tiny-ImageNet. |Selection ratio (%)|30|50|70| |-|-|-|-| |w/o fine-tuning|40.2|45.7|48.5| |Ours|**44.9**|**47.0**|**49.4**| - **Q4: Clarification on the cost in Tab. 2.** - **A4:** Yes, the cost of fine-tuning is **included** in the total costs reported in Tab. 2 to ensure a fair comparison with other methods. We freeze the CLIP backbone and only fine-tune a lightweight adapter (a simple linear layer constituting only 0.04% of CLIP ViT-B/32’s parameters) with minimal training iterations. Thus, the fine-tuning process can be completed efficiently. - **Q5: More results for Tab. 3.** - **A5:** Thank you for the suggestion. As shown in Tab. D-4, our method maintains superior performance across both settings and selection ratios. Notably, Infobatch performs worse than the random baseline in some cases in noisy conditions. This is because it prioritizes high-loss samples, which are often noisy or corrupted under noisy scenarios. Applying augmentation to noisy samples further exacerbates their negative impact as more misleading training signals are introduced, ultimately degrading performance. This highlights that our performance gain is not solely due to augmentation, but rather due to selecting semantically representative samples for training. **Table D-4:** Additional results for Table 3. ||Noisy||Corrupted|| |-|-|-|-|-| ||20|30|20|30| |Random*|34.8|37.9|36.0|39.1| |Infobatch|34.9|37.1|35.1|38.1| |Ours|**35.9**|**39.6**|**39.1**|**42.0**| - **Q6: Clarification on the selection ratio.** - **A6:** We would like to clarify that, following common practice in data augmentation and selection research, our method trains models **only on the augmented data**. In each epoch, we generate **exactly one augmented sample per selected data point**, and only these augmented samples are used in both forward and backward passes. 
Thus, the selection ratio is exactly equal to the proportion of the data used for training. - **Q7: Clarification of multimodal focus in the Introduction.** - **A7:** Thank you for the suggestion. We will revise the intro part, clarifying the multimodal characteristic and focus of our framework. - **Q8: Typo.** - **A8:** We have corrected the mentioned typo and thoroughly reviewed the manuscript to ensure clarity and correctness. - **Q9: Applicability to LLM training.** - **A9:** Applying our method to other domains, such as data selection for LLM training, is a promising direction for future research. Since our current work focuses on the image and image-text domains, extending this framework to LLM training requires adapting the definitions and estimation for both semantic consistency and sample density in a purely textual feature space. Feasible solutions may involve leveraging pre-trained language models (e.g., BERT, GPT) to assess semantic alignment and correctness, and using task-specific metrics such as token-level uncertainty or embedding-space sparsity to approximate density distributions.
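To make the density side of the discussion above concrete (the embedding-space sparsity mentioned in A9, and the low-density selection the framework relies on), here is a minimal sketch of a k-nearest-neighbour density estimate. The function name `knn_density_scores`, the inverse-mean-distance proxy, and the choice of `k` are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def knn_density_scores(feats, k=5):
    """Approximate per-sample density as the inverse mean distance to the
    k nearest neighbours (higher = denser region), and derive a selection
    distribution that puts more mass on low-density samples."""
    diff = feats[:, None, :] - feats[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))        # pairwise Euclidean distances
    np.fill_diagonal(dist, np.inf)             # exclude each sample itself
    knn_dist = np.sort(dist, axis=1)[:, :k]    # distances to k nearest neighbours
    density = 1.0 / (knn_dist.mean(axis=1) + 1e-8)
    p_rho = (1.0 / density) / (1.0 / density).sum()  # mass on sparse samples
    return density, p_rho
```

In the paper's dynamic setting, `feats` would be re-extracted from the evolving task model at each selection round rather than computed once.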
Summary: The authors propose a unified framework combining dynamic data selection and data augmentation to accelerate model training. An online nearest neighbour search is used to find low-density samples, while a semantic consistency score from a pre-trained CLIP model filters out noisy data. The targeted augmentation of the filtered data helps fill data gaps and sharpen decision boundaries in sparse regions, improving model generalization and robustness. The authors claim to achieve lossless training acceleration using fewer data points. ## update after rebuttal The authors have sufficiently addressed all my concerns and included more analyses to show the strength of their method. I have increased my score accordingly. Claims And Evidence: Most of the claims made by the authors are supported by clear and convincing evidence. From the results, integrating dynamic data selection with targeted augmentation seems to improve efficiency and generalization. One claim that needs more validation, though, is the universal applicability of the method to all domains or tasks. The reliance on a pre-trained CLIP model for semantic consistency might not transfer well to domains where such models are less effective. It may also be useful to study the benefits and limits of the method under extreme noise conditions (both label and input noise), which might affect the augmenter part of the framework. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem at hand. The authors use the CIFAR-10, CIFAR-100, Tiny-ImageNet, and also the ImageNet-Hard, ImageNet-A, ImageNet-R, and ImageNet-O datasets. They use the ResNet-18 and ResNet-50 models along with more advanced architectures such as ViT-B, ViT-L, and Swin-T to show the strength of their method. The selection ratio and accuracy are reported across all of these experiments. 
Theoretical Claims: There are no major theoretical claims or proofs in this paper. Experimental Designs Or Analyses: The experimental design and analyses seem sound and valid. The authors include extensive experiments to show evidence that their method outperforms several baselines. They include results showing that models trained across different selection ratios achieve comparable performance to models trained on the full dataset and the proposed method has one of the lowest computational costs. They also report results showing that their method works on several datasets and models, including more advanced architectures. They show the method's robustness to noise and perform an ablation study on the density distribution, consistency distribution, and augmenter to show that all three are necessary for the best results. The authors mention that they leverage a pre-trained CLIP model to embed images and text into a shared multimodal space for semantic alignment assessment. They mention that they use lightweight adapters to adapt embeddings to target domains, however, if there are target domains where CLIP-like models or adapters are less effective, this method may not work too well since this is a major part of the framework. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: The key contributions of this work are in combining the existing ideas of data selection and data augmentation to pick the best data to augment. Previous data selection methods are more static and might not find low-density samples. Previous data augmentation methods are not as targeted and might not fill in sparse gaps in the training distribution. Combining these two areas to both identify underrepresented samples while selecting ones that are best for augmentation fills a gap in the literature while reducing training costs and enhancing robustness to noise. 
Essential References Not Discussed: It may be useful to mention generative methods of data augmentation: Moreno-Barea, F.J., Jerez, J.M. and Franco, L., 2020. Improving classification accuracy using data augmentation on small data sets. Expert Systems with Applications, 161, p.113696. Some more recent works that talk about improving generalization by agreement and the gap between clean and augmented data: Atienza, R., 2022. Improving model generalization by agreement of learned representations from data augmentation. In Proceedings of the IEEE/CVF winter conference on applications of computer vision (pp. 372-381). He, Z., Xie, L., Chen, X., Zhang, Y., Wang, Y. and Tian, Q., 2019. Data augmentation revisited: Rethinking the distribution gap between clean and augmented data. arXiv preprint arXiv:1909.09148. Other Strengths And Weaknesses: Strengths: - Novel combination of the existing ideas of data selection and data augmentation to come up with data selection for data augmentation. - Extensive experiments across various models, datasets, and selection ratios. - Robustness to noise. - Low computational costs. Weaknesses: - The use of CLIP-like models in the framework which might not transfer to all tasks and domains. - Might be good to study the method's limits under extreme noise conditions (label and input noise). - A few typos (mentioned below). Other Comments Or Suggestions: Minor typos: - 012: "lossless performances" -> "lossless performance" - 042: "reinforces the model learning" -> "reinforces model learning" - 088: "riak" -> "risk" - 101: unclear "balances the distribution and the sample distribution and importance in selection" - 104: "Optimization-based methods formulates" -> "Optimization-based methods formulate" - 645: "occlution" -> "occlusion" Questions For Authors: 1. How exactly is $p_{sel}$ understood as the joint distribution that combines both density and consistency distributions? 
The definition and explanation need more clarity - it is defined but never mentioned again. 2. Have you tested the sensitivity of your approach to the choice of the pre-trained model or explored its applicability in domains where CLIP or the available pre-trained models/adapters might not perform well? More details on the pre-trained models and adapters would be very beneficial. Addressing these questions sufficiently would improve the strength of this method significantly. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer GA1s, Thank you for your meticulous review and valuable suggestions on our work. We appreciate your recognition of our work's strengths, e.g., novelty, sound and valid experiments. We provide responses to address the comments as follows: - **Q1: Universal applicability and reliance on CLIP.** - **A1:** Thank you for pointing this out. We agree that the applicability and effectiveness of pretrained CLIP may vary across domains, particularly in some highly specialized fields. However, we note that this concern is not unique to our work but is shared by many state-of-the-art works that leverage pretrained VLM such as CLIP. Since our work focuses on dynamic data selection in general-purpose vision domains, fully addressing CLIP’s applicability across all domains and tasks is, in our opinion, beyond the scope of this work but of interest for our future work. Some feasible solutions may include leveraging fine-tuned VLM tailored to the target domain, or incorporating domain-specific adapters or alignment modules to improve semantic consistency estimation in more specialized scenarios. We will add this discussion to the main paper on page 8 before the Conclusion section. - **Q2: Extreme noise conditions.** - **A2:** Thank you for the suggestion. To assess the robustness of our method under extreme noise conditions, we conducted experiments on Tiny-ImageNet with high label, input noise, and both (with 50% and 70% noise ratio). As shown in Table C-1, while the overall performance naturally degrades under severe noise, our approach consistently achieves significantly higher accuracy, e.g., over 10% accuracy improvement in the presence of input noise. Moreover, we further analyze the average proportion of noisy samples retained during training (Table C-2). It shows that the proportion of noisy samples in our selected datasets remains considerably low even in high-noise conditions. 
These results underscore the benefits of our approach: even in highly noisy environments, the multimodal semantic consistency used for sample selection helps retain informative, high-quality data and maintain model robustness. **Table C-1**: Comparison of accuracy with Tiny-ImageNet in high-noise conditions. |Noise Ratio (%)|50||70|| |-|-|-|-|-| |Selection Ratio (%)|20|30|20|30| |**Label Noise**|||| |Random*|25.5|26.1|17.6|18.0| |Ours|**27.6**|**30.7**|**19.2**|**21.8**| |**Input Noise**|||| |Random*|28.5|28.1|18.6|19.0| |Ours|**38.3**|**40.9**|**35.6**|**39.4**| |**Label\&Input Noise**|||| |Random*|24.1|25.2|14.2|15.9| |Ours|**26.1**|**28.1**|**16.2**|**18.5**| **Table C-2**: Further analysis of the label noise proportion (%). We report the average introduced noise ratio in the selected datasets through the entire training process. |Noise Ratio (%)|50||70|| |-|-|-|-|-| |Selection Ratio (%)|20|30|20|30| |Random|20.3|30.5|20.1|29.9| |Ours|**3.1**|**5.7**|**4.0**|**6.1**| - **Q3: More suggested related works.** - **A3:** We will include the suggested works in Sec. 2.2 of the revised version. Specifically, - Sec 2.2 line 152: add references "*Beyond these, generative data augmentation (Moreno-Barea et al., 2020) ...*" and "*Recent studies also emphasize representation consistency (Atienza, 2022) and address distribution gaps between clean and augmented data (He et al., 2019) ...*". - **Q4: Minor Typos.** - **A4:** We have corrected all the mentioned issues and thoroughly reviewed the entire manuscript to ensure clarity and correctness in the final version. - **Q5: Explanation of $p_{sel}$.** - **A5:** As defined in Eq. (3) of Sec. 3.4, $p_{sel}$ is computed as the product of $p_\rho$, which reflects the density distribution, and $p_{con}$, which captures semantic consistency. Specifically, $p_\rho$ assigns higher values to low-density samples, promoting coverage of underrepresented regions in the feature space. 
$p_{con}$ prioritizes samples with strong semantic alignment, ensuring their informativeness and relevance. By combining the two, $p_{sel}$ favors samples that are both structurally important and semantically meaningful. We appreciate your suggestion and will revise Sec. 3.4 to include this clarification. - **Q6: Sensitivity to the choice of pre-trained model.** - **A6:** Insightful question. To assess the sensitivity of our method to the choice of pre-trained models, we conduct experiments replacing CLIP with another pretrained multimodal model, LanguageBind (LB) [a], which focuses on aligning diverse modalities (e.g., video, audio, infrared, and depth) into a shared language space. While CLIP is highly optimized for image-text alignment estimation, our framework still achieves consistent performance when using LB, with only a marginal drop (< 0.7%) in Table C-3. **Table C-3**: Sensitivity analysis on the pretrained model. |Selection Ratio (%)|30|50|70| |-|-|-|-| |Random*|41.5|42.8|43.1| |CLIP|44.9|47.0|49.4| |LB|44.2|46.5|48.7| [a] Zhu, Bin, et al. Languagebind, ICLR'24. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the thorough response to all of my points! I will increase my recommendation accordingly. --- Reply to Comment 1.1.1: Comment: Dear reviewer GA1s, We would like to express our sincere gratitude to reviewer GA1s for acknowledging our work and providing constructive suggestions. Thanks again for the time and effort in reviewing our work.
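The joint selection distribution explained in A5 can be sketched in a few lines. A minimal illustration of the product form described for Eq. (3), assuming per-sample scores as inputs; the function name `selection_distribution` and the numeric values are hypothetical, and the paper's exact normalization may differ:

```python
import numpy as np

def selection_distribution(p_rho, p_con):
    """p_sel proportional to p_rho * p_con: favour samples that are both
    low-density (high p_rho) and semantically consistent (high p_con)."""
    p = np.asarray(p_rho, dtype=float) * np.asarray(p_con, dtype=float)
    return p / p.sum()

# Index 2: low-density AND aligned -> dominates the selection distribution.
# Index 3: low-density but noisy (low consistency) -> suppressed.
p_rho = np.array([0.10, 0.20, 0.35, 0.35])
p_con = np.array([0.90, 0.80, 0.90, 0.10])
p_sel = selection_distribution(p_rho, p_con)
```

The product acts as a soft AND: a sample with high density-based score but poor semantic consistency (index 3) receives less selection mass than a well-aligned high-density sample (index 0).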
Summary: Data selection (eliminating unhelpful samples) plays a crucial role in machine learning. While selecting high-value samples improves training efficiency without degrading performance, it can reduce data diversity and harm model generalization. To address this, this paper proposes a unified framework that integrates data selection and data augmentation, achieving both reduced computational cost and enhanced performance. Moreover, instead of naively combining these two techniques, the authors estimate each sample's local density and semantic alignment in a joint distribution with the help of the multimodal model CLIP. Empirical results show that their approach accelerates training while improving model generalization. Claims And Evidence: 1. I agree that data selection can improve training efficiency and maintain comparable performance, but it inherently **reduces data diversity, which could impact generalization**. However, the claim that it harms model generalization lacks supporting evidence. Are there any preliminary results to validate this statement? And how would such experiments be set up to present or measure model generalization? 2. In the proposed framework, if a data point is selected for augmentation, is the model trained on both the original and augmented versions, or only on the augmented one? From my understanding, only the augmented version is used, but wouldn’t retaining the original sample help preserve semantic completeness? 3. The proposed method does not explicitly control the outcomes of data augmentation. How can we ensure that post-augmentation samples maintain strong semantic alignment with the original data points and do not negatively impact model training? Introducing a verification mechanism for post-processing might be beneficial. 4. 
Furthermore, if the augmentation process preserves local structure (as mentioned in lines 210–211, right-hand side), is it necessary to continuously update the feature space during training to identify low-density samples? How does the relationship between a sample and its neighbors change before and after augmentation? Methods And Evaluation Criteria: Their evaluation criteria are reasonable to me, and most of the experimental setups align with standard practices in the data selection field. Theoretical Claims: There are no major theoretical claims, but I have walked through their problem setup. It looks correct to me. Experimental Designs Or Analyses: Yes, their experimental design makes sense to me. However, in their setup, the CLIP model and a ResNet-18 model are used as **embedding generators for filtering**, while the model being trained is from the **ViT series**. Could this lead to differences in the embedding space or quality? Additionally, would there be inherent biases due to the different pretraining datasets used for these models? Supplementary Material: Yes, I have reviewed their appendix. A follow-up question regarding Section A.1: How is an augmentation operation chosen for each sample, how many operations are performed per sample, and how do they ensure control over the augmented version after processing? Relation To Broader Scientific Literature: The key contribution of this paper is highlighting the downside of data selection, namely **the reduction of data diversity and its potential impact on model generalization**. To address this, they propose **an integrated framework** that combines data selection and data augmentation in a unified approach, which is novel in the data selection literature. Essential References Not Discussed: There are no major additional references to include, as the baseline methods used for comparison are commonly employed in the data selection literature. 
Other Strengths And Weaknesses: I suggest investigating the domain shift before and after data augmentation. If a significant shift exists, should we consider training with the original data points as well? If there is no significant shift, would it be necessary to repeatedly update the embedding space during training? Other Comments Or Suggestions: N/A Questions For Authors: See my questions above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer bQXA, We sincerely thank you for the careful review and insightful comments/questions. We appreciate your recognition of our work's strengths, e.g., novelty and empirical effectiveness. For the comments and questions, we provide our responses here: - **Q1: The impact of data selection on generalization.** - **A1:** Thank you for your insightful comment. To assess the impact of data selection on generalization, we have evaluated models trained on the selected data across various benchmark datasets (Tab. 1/3/4/5/6, Fig. 3), which are commonly used to assess generalization performance. The results present a consistent trend across all baselines: as the selection ratio decreases, model test performance degrades, indicating reduced generalization. Despite this overall trend, our method effectively alleviates the degradation by integrating data augmentation (DA) and selection. - **Q2: Clarification on the data used for training.** - **A2:** We would like to emphasize that only the augmented data is used for training. This follows common practice in many widely used DA and selection methods (e.g., TrivialAug, AutoAugment, InfoBatch, Moderate), where only the augmented version of selected data is used for training. Retaining both the original and augmented versions would **double** the amount of training data used for model forward and backward passes, thereby doubling the computation costs. This would **substantially undermine the training efficiency** that our method is designed to achieve. While only augmented data is used for training, our augmentation strategy is carefully designed to preserve semantic consistency. Please refer to **A3** and **A4** for details. - **Q3: Semantic alignment before and after augmentation.** - **A3:** Thanks for the insightful comments. While our method does not explicitly verify each augmented sample post-augmentation, our DA strategy is designed to prioritize semantic alignment. 
Specifically, each selected sample is augmented using **only one operation** with its magnitude contained within a **moderate, predefined range** (Sec. A.1). In contrast, widely used methods, such as AutoAug, Fast-AA, and RandAug, typically apply **2-3 operations per sample** with stronger transformation strengths, emphasizing more diverse training data rather than maintaining semantic consistency. To further address your concern, we investigate the semantic alignment with a vanilla pretrained CLIP model using cosine similarity. Tab. B-1 shows that **the augmented data maintains strong semantic consistency** with the original data. While our current method does not include explicit post-augmentation verification to maintain high efficiency, we agree that incorporating a lightweight verification mechanism, without compromising efficiency, is a promising direction for future work, which is also a key challenge in DA research. **Table B-1**: Semantic alignment between original and augmented data points. ||C-10|C-100| |-|-|-| |Avg. Sim.|0.94|0.93| |Std.|0.06|0.05| - **Q4: Clarification on local structure after augmentation.** - **A4:** To further assess the local structural stability, we investigate changes in each sample's local nearest neighbors before and after augmentation. As shown in Tab. B-2, the proportion of altered neighbors is extremely low (<= 0.3%), validating that our DA introduces only minimal changes to local structure. However, it is important to note that the feature space evolves continuously during training as the model updates. Thus, even if the augmented data remains semantically and structurally stable, it is **necessary to update the embedding space** to capture the model's current training state for dynamic data selection. This is also the core principle of dynamic data selection approaches. **Table B-2**: The ratio of changes in local nearest neighbors after augmentation. 
||Change Ratio| |-|-| |C-10|0.2%| |C-100|0.3%| - **Q5: Clarification on the embedding generators.** - **A5:** As clarified in Sec. 3.1, only the CLIP model is used to derive the semantic consistency distribution for filtering, which captures the intrinsic representativeness of training data. Meanwhile, as introduced in lines 207-213, to adapt CLIP to the target domain, we use a lightweight adapter to enable domain-specific knowledge transfer while preserving CLIP's strong alignment capabilities. On the other hand, the task model (e.g., ResNet or ViT series) is used to estimate the evolving density distribution during training. Thus, our framework minimizes the inherent biases. - **Q6: Details of augmentation operations.** - **A6:** For each sample, we apply **only one random augmentation operation** from Sec A.1. Following common practice in data augmentation (e.g., AutoAug, TrivialAug), we control the number of applied operations and the applied strength via a predefined, bounded magnitude range, ensuring consistency and avoiding overly distorted augmentation. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I have carefully reviewed your explanations. 1. Regarding the generalization results, I appreciate you pointing them out. I believe model generalization refers to the ability to handle unseen tasks or transfer to a new domain, rather than evaluating performance on noisy or corrupted samples. The experiments in the paper seem to be more related to assessing a model's robustness to incorrect samples. 2. Post-verification plays a crucial role in this work, and how it is controlled can significantly impact selection performance. Additionally, if semantic alignment remains strong, how does this influence/improve data selection? Thank you for your time and response. --- Reply to Comment 1.1.1: Comment: Dear Reviewer bQXA, Thanks for appreciating our work and giving the valuable comment. It is a point worth discussing. 
We would like to provide more discussion on your comments.

- **Q7: Regarding the impact of data selection research on generalization.**
- **A7:** We would like to emphasize that, beyond evaluating robustness to noisy scenarios, our experiments have also included a dedicated evaluation of the model's **generalization** on ImageNet-A, -O, -R, and -Hard, as shown in Table 4. These benchmarks are specifically constructed to evaluate generalization to **unseen domains and distribution shifts**. All these results present a similar trend: **as the selection ratio decreases, model generalization performance tends to degrade.** To further address your comments, we conducted an additional **cross-domain transfer learning** experiment in Table B-3, where models are pre-trained on ImageNet-1k and fine-tuned on CIFAR-10. These two datasets differ significantly in resolution, label granularity, and visual domain, making this a strong test of generalization across domains. The results show a similar pattern: lower selection ratios lead to reduced performance. Importantly, this trend has also been observed in much of the existing data selection literature, e.g., Moderate, InfoBatch, DP, MoSo, etc. Thus, higher selection ratios are typically applied to ensure generalization.

**Table B-3:** Evaluation of cross-domain generalization.

|Selection Ratio (%)|10|20|30|50|60|70|80|90|
|-|-|-|-|-|-|-|-|-|
|Acc. (%)|85.2|85.6|85.9|86.5|86.8|86.9|87.2|87.7|

We hope these clarifications and additions address your comment.

- **Q8: Clarification on Post-Augmentation and Semantic Alignment.**
- **A8:** Thank you for pointing this out. We address your comments in two parts. **1.
Regarding Post-Augmentation:** Although, in order to maintain high training efficiency, our method does **NOT** include an explicit post-augmentation verification step, following the best practices in recent data augmentation (DA) research, we control the augmentation process by applying only one operation with its strength constrained within a predefined, moderate range. While not an explicit verification mechanism, this augmentation design effectively avoids overly distorted augmentation while preserving semantic consistency (as discussed in **A3** and **A4**). Moreover, this setting is **uniformly applied across all experiments**, and our results consistently show that both the model and selection performance remain stable and consistently superior, validating its effectiveness.

**2. Regarding Benefits of Semantic Alignment:** Since our framework uses augmented data for training and constructing the embedding space, maintaining strong semantic alignment provides several key benefits:

(1) Because semantic alignment is preserved, augmented low-density samples remain within low-density areas of the embedding space. This enables the model to effectively learn from underrepresented or insufficiently learned areas (as discussed in Fig. 2 and Sec. 3.2, paragraph 2).

(2) Strong semantic alignment ensures that augmented samples remain meaningful and unambiguous. This is critical for accurately identifying **truly informative low-density samples**, rather than being misled by distorted or over-augmented inputs.

(3) Prior DA research (TrivialAug, KeepAug, MADAug) has shown that preserving semantic structures enhances DA and model performance. In our framework, this contributes to improved model performance, and further to **more accurate density estimation and sample selection in the dynamic data selection process** during training (as discussed in Sec. 3.4, lines 206-219, page 4).

Thank you again for your time and responses.
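As an illustrative aside, the two quantitative checks reported in A3 and A4 above (cosine similarity between original and augmented embeddings, and the nearest-neighbor change ratio) could be sketched as follows. This is a minimal NumPy sketch over precomputed embedding matrices; the function names, the brute-force kNN, and the choice of k are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def cosine_similarity(a, b):
    """Row-wise cosine similarity between two embedding matrices of shape (n, d)."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

def neighbor_change_ratio(orig, aug, k=5):
    """Fraction of each sample's k-nearest-neighbor set that differs
    between the original and augmented embedding spaces."""
    def knn(x):
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)  # exclude each point from its own neighbor list
        return np.argsort(d, axis=1)[:, :k]
    n0, n1 = knn(orig), knn(aug)
    # symmetric difference of the two neighbor sets, normalized to [0, 1]
    changed = [len(set(a) ^ set(b)) / (2 * k) for a, b in zip(n0, n1)]
    return float(np.mean(changed))
```

A mild augmentation should leave both metrics nearly unchanged: similarity close to 1 and a change ratio close to 0, matching the pattern of Tables B-1 and B-2.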
Summary: This paper combines data augmentation and dynamic data selection. The main idea is to augment examples that the model is uncertain about, while filtering noisy examples using semantic consistency. The experimental results show that with only 50% of the training compute, performance equal to training on the full dataset can be achieved. Claims And Evidence: Yes, the claims are well supported. Methods And Evaluation Criteria: I think a key baseline is missing: (Data-Efficient Augmentation) Liu, Tian Yu, and Baharan Mirzasoleiman. "Data-efficient augmentation for training neural networks." Advances in Neural Information Processing Systems 35 (2022): 5124-5136. This paper is perhaps the most relevant baseline for comparison as it similarly selects a subset of data for augmentation. It proposes a more theoretically rigorous approach to determine which examples are most useful for training. While uncertainty-based methods can sometimes prioritize difficult but unlearnable samples or even noisy samples, and semantic consistency might mitigate this issue, using model uncertainty as a heuristic for sample importance remains limited. The aforementioned paper instead leverages model gradients to identify the samples that contribute most significantly to learning, providing both theoretical guarantees and empirical validation of this approach's effectiveness. Theoretical Claims: N/A. Experimental Designs Or Analyses: Looks reasonable. Supplementary Material: No. Relation To Broader Scientific Literature: Discussed under "Essential References Not Discussed" below. Essential References Not Discussed: Missing a reference to some key works exploring similar ideas: - (Data-Efficient Augmentation) Liu, Tian Yu, and Baharan Mirzasoleiman. "Data-efficient augmentation for training neural networks." Advances in Neural Information Processing Systems 35 (2022): 5124-5136. The paper above is perhaps the most relevant baseline for this paper as it too selects a subset of data to augment.
Other Relevant Work: - (Submodular Data Selection) Joshi, Siddharth, and Baharan Mirzasoleiman. "Data-efficient contrastive self-supervised learning: Most beneficial examples for supervised learning contribute the least." International conference on machine learning. PMLR, 2023. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: I would highly recommend the authors to include the missing baseline and references. I think this would strengthen the paper significantly. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
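For concreteness, the selection scheme summarized in this review (augment samples the model is uncertain about, while filtering via a semantic-consistency score) could be caricatured as below. This is a hypothetical sketch: predictive entropy stands in for the uncertainty signal and `min_consistency` is an assumed threshold, neither of which is claimed to be the paper's actual criterion:

```python
import numpy as np

def predictive_entropy(probs):
    """Per-sample entropy of the predicted class distribution (higher = more uncertain)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def select_for_augmentation(probs, consistency, ratio=0.5, min_consistency=0.8):
    """Pick the most uncertain samples whose semantic-consistency score passes
    a threshold, as candidates for augmentation."""
    candidates = np.where(consistency >= min_consistency)[0]  # drop likely-noisy samples
    entropy = predictive_entropy(probs)
    ranked = candidates[np.argsort(-entropy[candidates])]     # most uncertain first
    n_select = max(1, int(len(probs) * ratio))
    return ranked[:n_select]
```

The filter matters: a highly uncertain but low-consistency sample (a plausible noisy example) is excluded before ranking, which is the failure mode the review notes uncertainty-only heuristics can have.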
Rebuttal 1: Rebuttal: Dear Reviewer JbtQ, We sincerely thank you for the comments and constructive suggestions. We appreciate your recognition of our work's strengths, e.g., reasonable experimental designs and well-supported claims. For the comments, we provide our response as follows.

- **Q1: Comparison with Data-Efficient Augmentation (DEA) [a].**
- **A1**: Thank you for suggesting additional references and comparisons. We acknowledge the theoretical significance of Data-Efficient Augmentation, particularly its principled approach to rigorously determining which samples are most useful for training. In response to your suggestion, we compared our method with the suggested data-efficient augmentation using the same amount of training data and number of training iterations, across standard benchmarks (Table A-1), and under challenging noisy and corrupted scenarios (Table A-2). For ImageNet-1k, we utilized the reported results from [a]. The results show that our method consistently achieves higher accuracy than DEA across selection ratios and datasets, achieving at least 3% higher accuracy on ImageNet-1k, and notable gains on CIFAR-10 and Tiny-ImageNet. Moreover, as shown in Table A-2, our method demonstrates stronger robustness to both noisy and corrupted conditions.

**Table A-1**: Comparison with [a] on CIFAR-10 with ResNet-18, and Tiny-ImageNet and ImageNet-1k with ResNet-50.

|||C-10|||T-IN|||IN1k||
|-|-|-|-|-|-|-|-|-|-|
|Selection Ratio (%)|30|50|70|30|50|70|10|30|50|
|[a]|83.4|88.1|88.6|31.9|36.9|43.5|68.5|72.0|73.3|
|Ours|**94.9**|**95.5**|**96.0**|**44.9**|**47.0**|**49.4**|**71.5**|**75.0**|**76.5**|

**Table A-2**: Comparison with [a] under noisy and corrupted conditions using Tiny-ImageNet with ResNet-50.

||Noisy||Corrupted||
|-|-|-|-|-|
|Selection Ratio (%)|20|30|20|30|
|[a]|25.8|28.2|31.8|31.9|
|Ours|**35.9**|**39.6**|**39.1**|**42.0**|

[a] Liu, Tian Yu, and Baharan Mirzasoleiman. "Data-efficient augmentation for training neural networks." NeurIPS 2022.
- **Q2: Suggested References.**
- **A2:** Thank you for suggesting more related works. Since the revised manuscript cannot be updated at the current stage, we will include these references in Section 2.1 in the final version. Specifically,
  - Sec 2.1, paragraph 3: add references "*The work (Liu \& Mirzasoleiman, 2022) proposes a theoretically rigorous approach to determine impactful data for training. SAS (Joshi \& Mirzasoleiman, 2023) improves data efficiency in SSL by proving and selecting the most beneficial data for contrastive training.*"

We hope these additions and clarifications address your comments.

---

Rebuttal Comment 1.1:

Comment: Thanks for addressing my comments, I'd like to stick with my rating to "accept" this paper!

---

Reply to Comment 1.1.1:

Comment: Dear reviewer JbtQ, We would like to express our sincere gratitude to reviewer JbtQ for acknowledging our work and providing insightful comments. Thanks again for the time and effort in reviewing our work.
Large Continual Instruction Assistant
Accept (poster)
Summary: To address catastrophic forgetting in Large Foundation Models (LFMs), this paper proposes a general continual instruction tuning framework. It introduces a dynamic exponential moving average update method to preserve prior knowledge while assimilating new information. To realize the balance of stability and plasticity in LFMs, this paper derives a self-adaptive EMA weight for each update process. Furthermore, an instruction grouping strategy allows for retraining the parameters of semantically similar instructions and selectively expanding those of semantically divergent ones. Experiments on distinct MLLMs and LLMs, including a multimodal continual instruction benchmark and a language continual instruction benchmark, all demonstrate the framework's strong resistance to forgetting and excellent continual instruction tuning performance. Claims And Evidence: All the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and used evaluation criteria make sense for the problem or application at hand. Theoretical Claims: I checked the correctness of proofs as follows: 1. Equation (3) to Equation (12) and Equation (28) to Equation (39) are based on Taylor expansion and the Lagrange multiplier method, deriving the optimal solution for EMA weights. 2. Equation (13) to Equation (15) and Equation (43) to Equation (45) employ several well-founded mathematical approximations, which obtain a highly accurate approximate solution while effectively reducing both complexity and computational cost. Experimental Designs Or Analyses: Although the experimental designs in this paper are comprehensive and the analyses are sound, the authors may further enhance the completeness of the experiments from the following perspectives: 1. In Table 3, including additional state-of-the-art methods would allow for a more comprehensive evaluation of the proposed method on the QWen-VL architecture. 2.
Compared to a method with a stable EMA weight, the additional dynamic adjustment time required by the proposed self-adaptive EMA weight remains unknown. It is recommended that the authors compare training time on a specific dataset with stable/dynamic EMA update methods to help readers better assess the efficiency of the proposed method. Supplementary Material: I reviewed all the supplementary material, including additional proofs, more experimental details, analyses and conclusions, as well as training algorithm processes. Relation To Broader Scientific Literature: The paper effectively cites research in related fields, particularly the latest developments in continual instruction tuning for Large Foundation Models. In contrast to the related scientific literature, the primary contributions of this paper are as follows: 1. Compared to former methods, this paper examines the catastrophic forgetting issue in continual instruction tuning from the perspective of the plasticity-stability trade-off in Large Foundation Models, which is the core of continual learning. Through rigorous mathematical derivation, the paper presents compelling results. 2. By exploring and exploiting the phenomenon of instruction reuse, this paper employs a lightweight machine learning algorithm (TF-IDF) to cluster semantically similar instructions, enabling limited model expansion. In comparison to other model expansion techniques like L2P [1] and EProj [2], this approach significantly improves training efficiency while reducing computational and memory overhead. [1] Learning to prompt for continual learning [2] Continual instruction tuning for large multimodal models Essential References Not Discussed: This paper has already included all the essential references.
Other Strengths And Weaknesses: Strengths + The experimental results in this paper demonstrate significant improvements in both anti-forgetting and continual tuning performance, especially on the LLaVA-7B and LLaVA-13B models, which outperform existing state-of-the-art methods. + The instruction grouping strategy utilizes the TF-IDF model for instruction matching, avoiding complex embedding calculations and reducing the computational burden and costs. + The proposed method exhibits good robustness and generality, making it applicable to various continual instruction tuning scenarios and large foundation models. Weaknesses - This paper contains several redundant phrases. For example, in lines 412-414, "Additionally, we are surprised to find that our method can spontaneously suppress the occurrence of hallucinations appearing in the continual instruction tuning,", the phrase 'appearing in' is redundant and should be revised to: "Additionally, we are surprised to find that our method can spontaneously suppress the occurrence of hallucinations in the continual instruction tuning.". - This paper contains terminology issues. For example, in lines 90-92, "Our method is model-free and can be easily applied to a wide range of CIT methods." should be modified to "Our method is model-agnostic and can be easily applied to a wide range of CIT methods.", which would be more accurate. Other Comments Or Suggestions: The use of some terms in the paper is not consistent enough, such as "forgetting" sometimes referring to "catastrophic forgetting" and sometimes referring to the "forgetting metric". The authors are suggested to distinguish between these terms for clarity. Questions For Authors: This paper only provides the instruction reuse results on MLLM architectures (Table 10), while the instruction reuse results on LLM are unknown. Code Of Conduct: Affirmed. Overall Recommendation: 4
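For readers unfamiliar with EMA-based continual tuning, the update at the heart of the method discussed in this review can be sketched minimally as follows. The paper's derived self-adaptive β_t (Eqs. 3-15) is not reproduced here, so a caller-supplied `beta_fn` stub stands in for it; only the L1-norm change signal mirrors the paper's stated approximation, and everything else is an illustrative assumption:

```python
import numpy as np

def ema_update(theta_ema, theta_new, beta):
    """One EMA step: theta_ema <- beta * theta_ema + (1 - beta) * theta_new."""
    return beta * theta_ema + (1.0 - beta) * theta_new

def continual_tune(theta_ema, task_params, beta_fn):
    """Fold a stream of task-specific parameter snapshots into the EMA model.

    beta_fn(step, l1_change) -> weight in [0, 1]; a stand-in for the paper's
    self-adaptive beta_t, which balances stability (high beta) and
    plasticity (low beta) per update.
    """
    for t, theta_new in enumerate(task_params):
        l1_change = float(np.abs(theta_new - theta_ema).sum())  # L1 norm of the change
        theta_ema = ema_update(theta_ema, theta_new, beta_fn(t, l1_change))
    return theta_ema
```

With a constant `beta_fn`, this reduces to the fixed-weight EMA baseline the ablations compare against; the dynamic variant differs only in how each step's weight is chosen.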
Rebuttal 1: Rebuttal: Dear reviewer 4GQq, thanks for your valuable suggestions. Here are our responses:

**Response 1 (QWen Baselines)**: We newly implemented another two strong baselines, EProj and EWC, on the QWen-VL architecture. The results are presented as:

|Method|Venue|ScienceQA|TextVQA|ImageNet|GQA|VizWiz|Grounding|VQAv2|OCRVQA|Avg.ACC|Forgetting|New.ACC|
|:------:|:------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:--------:|:-------:|
|LoRA|ICLR’22|31.05|42.45|29.57|55.57|15.30|40.33|67.75|47.80|41.23|19.36|58.17|
|EWC|PNAS’17|64.30|58.67|44.04|57.73|38.16|48.04|66.98|41.76|52.46|8.68|50.67|
|PGP|ICLR’24|66.42|41.33|32.16|49.83|36.05|24.22|58.60|43.96|44.07|5.90|48.30|
|EProj|ArXiv’24|63.19|59.28|62.96|55.51|40.69|41.20|66.89|45.30|54.38|2.52|56.59|
|**Ours**|**-**|**66.52**|**59.44**|**53.56**|**57.81**|**39.57**|**47.44**|**70.36**|**50.44**|**55.64**|**1.62**|**56.19**|

The results demonstrate that our method still achieves the best performance, surpassing other state-of-the-art approaches.

**Response 2 (Resource Consumption)**: Please kindly refer to **Response 1** for the same question of Reviewer pe71.

**Response 3 (Phrase)**: In the revised version, we have modified the sentence in lines 412-414 as follows: "Additionally, we are surprised to find that our method can spontaneously suppress the occurrence of hallucinations in continual instruction tuning." We also conducted a thorough revision of the paper to identify and eliminate other redundant expressions, ensuring the writing remains clear and precise.

**Response 4 (Terminology)**: We have revised lines 90-92 as follows: "Our method is model-agnostic and can be easily applied to a wide range of CIT methods." Additionally, we carefully reviewed the manuscript to ensure consistent and accurate use of technical terminology throughout the paper.
**Response 5 (Expression)**: To address this, we have clarified the distinction between the "forgetting metric" and "catastrophic forgetting" by adding the explanation that, in our paper, "Forgetting" (with an uppercase "F") refers to the "forgetting metric," whereas "forgetting" (with a lowercase "f") denotes "catastrophic forgetting."

**Response 6 (Instruction Grouping Results on LLM)**: To address your concern, we have presented the instruction grouping results on LLM as follows.

**Group 1**
- pubmedqa_classification = ... Output '1' if the passage has a defininte objective/aim/goal and output '0' if the passage does not have a definite objective/aim/goal …
- dstc3_classification = ... output the price range the user if looking for which can take one of four values: Cheap, Moderate, Expensive and Don't Care …
- air_dialogue_classification = ... select the goal of the conversation …
- deal_or_no_dialog_classification = ... answer 'Yes' if both participants agree to the deal, otherwise answer 'No' …
- craigslist_bargains_classification = ... classify the text into one of the labels from the two possible outputs - 'accepted'/'rejected' …

**Group 2**
- mutual_multi_turn_dialogue = ... choose the most reasonable option …
- personachat_choose_next = ... Choose one and answer with the text …

**Group 3**
- circa_answer_generation = ... generate an answer that is relevant to the question …
- convai3_sentence_generation = ... read the input, then generate a valid prediction of the user's response to the computer's clarifying question …
- air_dialogue_sentence_generation = ... find the answer of the previous dialogue …
- diplomacy_text_generation = ... generate the next message …
- curiosity_dialogs_answer_generation = ... find the dialogue that is basically a response given to a question or an aspect of the user ...
- smcalflow_sentence_generation = ... identify what will be users' command for that reply ...
- multi_woz_user_utterance_generation = ... Generate a language query such that it leads to this reply ...
- personachat_generate_next = ... generate the next utterance in a given dialogue ...
- dstc3_answer_generation = ... answer the given question based on the information present in the dialogue ...
- convai3_sentence_generation = ... generate a prediction of what the requester is actually trying to do ...
- smcalflow_sentence_generation = ... return what will be Agent's response/reply for that particular user's command or question ...

**Group 4**
- storycommonsense_motiv_text_generation = ... write the character's motivation by doing a specific job, which is given in the sentence …

It can be seen that **instructions for 19 tasks are categorized into 4 groups**, which reflects the limited model expansion property of our method. Thanks for your detailed and constructive review. Your review has significantly contributed to the refinement of our work, offering new insights and helping us address important aspects that were previously overlooked. We have made extensive revisions based on your suggestions, resulting in a clearer and more robust paper.
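The TF-IDF-based grouping shown in Response 6 could be illustrated with a toy matcher over instruction strings. This is a pure-stdlib sketch under stated assumptions: the whitespace tokenizer, the greedy first-fit assignment, and the 0.25 similarity cut-off are our own simplifications, not the paper's algorithm:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Minimal TF-IDF vectors over a shared vocabulary (idf = log(N / df))."""
    tokenized = [d.lower().split() for d in docs]
    vocab = sorted({w for toks in tokenized for w in toks})
    df = Counter(w for toks in tokenized for w in set(toks))
    n = len(docs)
    return [[(Counter(toks)[w] / len(toks)) * math.log(n / df[w]) for w in vocab]
            for toks in tokenized]

def cosine(u, v):
    """Cosine similarity; zero vectors are treated as dissimilar to everything."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0

def group_instructions(instructions, threshold=0.25):
    """Greedily place each instruction into the first group containing a
    sufficiently similar member; otherwise open a new group."""
    vecs = tfidf_vectors(instructions)
    groups = []
    for i in range(len(vecs)):
        for g in groups:
            if max(cosine(vecs[i], vecs[j]) for j in g) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups
```

On a toy set, classification-style and generation-style instructions fall into separate groups, mirroring the limited-expansion behavior above (each group could then receive one LoRA set).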
Summary: This paper proposes a novel framework to address the catastrophic forgetting in Continual Instruction Tuning (CIT). Starting from the trade-off between the plasticity and stability, the paper introduces an optimal balance weight of Exponential Moving Average (EMA), determined automatically by gradients and learned parameters. Additionally, the framework uses semantic similarity of instructions to decide whether to retrain or expand the model’s training parameters, and allocate the most suitable parameters to testing instances. Extensive experiments across multiple CIT benchmarks (e.g. CoIN, InstrDialog) and Large Foundation Models (LFMs, e.g. LLaVA, QWen-VL, T5) demonstrate that the proposed method not only reduces forgetting but also significantly improves overall continual tuning performance. ## update after rebuttal The authors have adequately addressed my concerns. My overall recommendation remains accept. Claims And Evidence: Yes, all the claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense. Theoretical Claims: I have checked the correctness of all proofs for theoretical claims, including the optimal EMA weight deduction (Eq.(2)-Eq.(15) in the paper and Eq.(28)-Eq.(39) in the supplementary material) and the EMA expansion process (Eq.(16)-Eq.(21) in the supplementary material), etc. Experimental Designs Or Analyses: I have checked the soundness and validity of the experimental designs and analyses. The following two issues can be considered to further improve the paper’s quality. 1. In Table 6, the paper presents an ablation study results. However, it only compares results using a fixed weight of 0.99 to show the superiority of proposed Dynamical EMA method. I suggest adding additional results with other fixed weights and providing more evaluations. 2. 
In Table 3, the baselines compared on the QWen-VL architecture are insufficient; adding more state-of-the-art methods with stronger baselines like PGP would allow further study of the method. Supplementary Material: Yes, I have reviewed all the supplementary material. In the supplementary material, the authors provide more experimental details, theoretical proofs, experimental results on LLMs, and algorithms. Relation To Broader Scientific Literature: Compared to the broader scientific literature on large model continual instruction tuning (e.g., EProj [1], CoIN [2]), the main contributions of this paper lie in: 1. For the first time, this paper addresses the forgetting problem in continual instruction tuning from the perspective of the trade-off between plasticity and stability in Large Foundation Models, providing strong and convincing theoretical proof, which offers insightful ideas. 2. This paper proposes a dynamic exponential moving average update strategy to balance the trade-off between plasticity and stability, achieving significant and surprising performance improvements. Additionally, the method is applicable to various Large Foundation Model architectures, including LLMs and MLLMs (e.g., LLaVA, QWen-VL), demonstrating strong generalization. 3. Based on the phenomenon of instruction reuse and the lightweight TF-IDF model, the paper achieves limited model expansion. Compared to other model expansion methods like L2P [3] and EProj [1], it effectively improves training efficiency and reduces memory usage, providing further insights into instructions for continual instruction tuning. [1] Continual instruction tuning for large multimodal models [2] CoIN: A benchmark of continual instruction tuning for multimodal large language model [3] Learning to prompt for continual learning Essential References Not Discussed: The paper has discussed almost all essential references. Other Strengths And Weaknesses: **Strengths** 1.
This paper presents extensive experiments across multiple Large Foundation Models, several benchmark datasets, and natural language processing tasks. The experimental results demonstrate that the proposed method outperforms existing state-of-the-art methods in terms of anti-forgetting and continual tuning performance. 2. This paper proposes a method for dynamically adjusting EMA weights, theoretically deriving the optimal EMA weights to balance stability and plasticity in the continual learning process. Compared to traditional fixed EMA weight methods, the dynamic adjustment mechanism better adapts to continuously changing datasets and significantly mitigates catastrophic forgetting. 3. This paper proposes a semantic similarity-based instruction grouping strategy that clusters similar instructions using the TF-IDF model and assigns a trainable LoRA set to each instruction group. Compared to existing model expansion methods, this strategy not only limits the size of the expansion parameters but also reduces computational costs. **Weaknesses** 1. Some tables' titles (e.g., Table 6) are too brief; it is recommended to add more detailed explanatory sentences for clarity. 2. The paper introduces the L1-Norm to approximate $\beta_t$. However, in lines 248-251, the authors only state that the motivation for adopting the L1-Norm to approximate $\beta_t$ is that the L1-Norm incurs a small computational load compared to other normalization methods. Providing experimental comparisons with another representative normalization method (e.g., the L2-Norm) would be more convincing. Other Comments Or Suggestions: 1. In lines 96-97, it says "especially significant enhancements in the performance of the baseline (as shown in Figure 1)". Here, the authors may clarify the "baseline" to a more detailed "LoRA Fine-tuning baseline". Questions For Authors: 1.
In Table 5, it can be observed that the Avg.ACC results for Origin and Diverse instruction types are slightly higher than those for the 10Type instruction type. What might explain this discrepancy? Is it related to the type of instruction template? 2. In Table 1-3, the authors utilize the Forgetting as the evaluation metric. However, in Table 8, they switch to the BWT metric. Why do the authors choose different evaluation metrics to assess the anti-forgetting ability of MLLMs and LLMs? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer 9qib, thanks for your valuable suggestions. Here are our responses:

**Response 1 (Fixed EMA)**: To address your concern, we have conducted ablation studies with more fixed EMA weights, *i.e.* 0.993, 0.996, 0.999 (as the EMA weight is usually set in [0.990, 0.999]) [1]. The results are shown as:

|EMA Weight|Avg.ACC|Forgetting|New.ACC|
|:--------------------------------:|:-----:|:--------:|:-----:|
|0.990|48.09|16.24|62.30|
|0.993|48.78|17.89|64.44|
|0.996|51.14|15.56|64.76|
|0.999|51.81|0.85|49.44|
|Dynamical EMA|55.33|7.04|61.49|
|Dynamical EMA+Instruction Grouping|64.64|1.93|66.33|

From the table, our proposed dynamic EMA method consistently achieves the best performance across all three metrics compared to certain fixed EMA weights (e.g., from 0.990 to 0.996). When compared to a fixed EMA weight of 0.999, our method demonstrates superior Avg.ACC and New.ACC but exhibits inferior resistance to forgetting. It is important to clarify that, as described by Equation (1) in our manuscript, a higher EMA weight generally results in lower Forgetting but also lower New.ACC. However, lower Forgetting does not always indicate better overall performance, due to the trade-off between plasticity and stability. In extreme cases, the Avg.ACC metric may be significantly impacted by a reduced New.ACC. As shown in the table, under the 0.999 EMA weight setting, the New.ACC metric only reaches 49.44, which is considerably lower than the New.ACC values of the other methods (all exceeding 60). In summary, the low Forgetting in the 0.999 EMA weight setting comes at the cost of significantly reduced New.ACC.

**Response 2 (QWen Baselines)**: Please kindly refer to **Response 1** for the same question of Reviewer 4GQq.

**Response 3 (Table Titles)**: We recognize that some table titles, including that of Table 6, may be too brief. To improve clarity, we have revised the titles of these tables to include more detailed explanations.
For example, we have updated the title of Table 6 to "Ablation study results for each proposed component".

**Response 4 (L1-Norm)**: We have replaced the L1-Norm in our method with the L2-Norm and reconducted the whole experiment based on the LLaVA-7B architecture with the Origin instruction type. The results are shown as:

|Normalization|Avg.ACC|Forgetting|New.ACC|
|:-----------:|:-----:|:--------:|:-----:|
|L1|64.64|1.93|66.33|
|L2|61.17|4.08|64.73|

We can see that the L1-Norm consistently outperforms the L2-Norm across the Avg.ACC, Forgetting, and New.ACC metrics, which further supports our motivation for adopting the L1-Norm approximation.

**Response 5 (Phrase)**: We have replaced "baseline" with "LoRA Fine-tuning baseline" to specify which baseline is being referred to, making the context clearer for readers. The updated sentence reads: "especially significant enhancements in the performance of the LoRA Fine-tuning baseline (as shown in Figure 1)."

**Response 6 (Avg.ACC Results For Instruction Template)**: As you have pointed out, this phenomenon is closely related to the type of instruction template. As we presented in Table 9, the 10Type instruction type owns 10 different instruction templates for each task, which require the MLLM to memorize a larger amount of information. Consequently, mitigating forgetting is harder compared to the Origin and Diverse instruction types, which utilize only a single instruction template for each task. This conclusion is further supported by the Forgetting values reported in Table 5 (Origin: 1.93, Diverse: 0.45, 10Type: 2.86). Therefore, due to the increased forgetting, the Avg.ACC of the 10Type instruction type is slightly lower than that of the Origin and Diverse instruction types.

**Response 7 (Forgetting Metric)**: In fact, both the Forgetting and BWT metrics can be used to measure catastrophic forgetting.
The key distinction lies in their typical applications: the BWT metric is more commonly employed in Task Incremental Learning (TIL), whereas the Forgetting metric is often used in Class Incremental Learning (CIL) [1,2]. In our paper, the choice of evaluation metric is guided by the need to ensure consistency with the benchmarks utilized [3,4]. Therefore, we follow their original metric setting in our evaluation. [1] Trgp: Trust region gradient projection for continual learning. [2] Learning to prompt for continual learning. [3] CoIN: A benchmark of continual instruction tuning for multimodal large language model. [4] Citb: A benchmark for continual instruction tuning. We would like to express our sincere thanks for your thorough review and valuable suggestions. Your thoughtful comments have been crucial in identifying areas for improvement, allowing us to refine and enhance the quality of our paper. Thank you again for your effort and dedication in helping us improve our work.
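Response 7 contrasts the Forgetting and BWT metrics. Under their standard definitions over an accuracy matrix `acc[t, i]` (accuracy on task i after finishing training on task t), both can be computed as below; the matrix layout and variable names are illustrative conventions, not taken from the paper:

```python
import numpy as np

def forgetting(acc):
    """Mean drop from each earlier task's best accuracy (over all earlier
    checkpoints) to its accuracy after the final task."""
    T = acc.shape[0]
    return float(np.mean([acc[:T - 1, i].max() - acc[T - 1, i] for i in range(T - 1)]))

def bwt(acc):
    """Backward transfer: mean of (final accuracy - accuracy right after
    training that task) over all but the last task; negative means forgetting."""
    T = acc.shape[0]
    return float(np.mean([acc[T - 1, i] - acc[i, i] for i in range(T - 1)]))
```

The two agree in sign convention up to a flip: BWT is negative when performance on old tasks degrades, while Forgetting is reported as a positive drop.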
Summary: This paper addresses an important catastrophic forgetting challenge and proposes a solution in the continual instruction tuning. Based on the ideal conditions of balancing plasticity and stability, meanwhile combined with the Exponential Moving Average (EMA) update, the authors adopt the optimization method to obtain the dynamic EMA weight through gradients (new knowledge) and parameters (old knowledge). The dynamic EMA weight successfully generalizes the model to the new dataset while still retaining knowledge on the old dataset during the continual tuning process. Furthermore, the authors propose a new model-expansion method with instruction grouping strategy based on the phenomenon of instruction reuse, which enables lightweight and limited model expansion on complex and variable datasets. The proposed method could be extended to various Large Foundation Models (LFMs), including MLLMs (LLaVA-7B, LLaVA-13B), LLMs (T5). The authors conduct comprehensive experiments on Multimodal Continual Instruction Tuning (MCIT) benchmark, which consists of Visual Question Answering, Image Classification, OCR, Knowledge Grounding, Reading Comprehension, Visual Reasoning, Visual Grounding etc. The experimental results demonstrate that this method outperforms the previous baselines in terms of Forgetting and Avg.ACC evaluation metrics and achieves the new SOTA results. ## update after rebuttal Thanks to the authors for addressing my concerns and providing additional results. I will keep my score. Claims And Evidence: The paper provides clear and convincing evidence to support the claims made in submission. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem or application at hand. Theoretical Claims: The proofs of the deduction process of the optimal EMA weight and two approximate processes are thoroughly checked. 
Additionally, the complete proof process and further optimization process in the supplementary materials have also been carefully examined. All processes are accurate and convincing. Experimental Designs Or Analyses: In the Experiments section (Section 5), the authors demonstrate the advantages of the proposed method by comparing it to different baselines on various MLLMs, such as LLaVA-7B, LLaVA-13B, and Qwen VL, with the common metrics Forgetting and Avg.ACC. The conducted experiments are sound and convincingly show the method's effectiveness. Although the experimental results are impressive and thorough, further modifications are suggested to improve the content of the paper. In particular, regarding the time and memory consumption of model training, it is recommended to further compare the time consumption of the dynamic EMA update against the fixed EMA method, and the memory saving against the method that does not use the instruction grouping strategy. Discussing these comparisons further would enhance the understanding of the method. The ablation studies in Table 6 are insufficient, as they only compare against a fixed EMA weight of 0.99. Introducing more fixed EMA weights would reduce randomness and increase credibility. Supplementary Material: The entire supplementary material was reviewed. In it, the authors introduce more experimental details and the algorithm process, which improves reproducibility. Moreover, the authors include the detailed theoretical demonstration, and present continual instruction tuning results for LLMs and instruction grouping results for MLLMs. Although the contents of the supplementary material are significant and reasonable, some additional results are missing, such as instruction grouping results on LLMs. Providing these results would further validate the effectiveness of the method. 
Relation To Broader Scientific Literature: Compared with other continual instruction tuning methods, this paper makes key contributions in the context of LLMs. Essential References Not Discussed: The authors are encouraged to include more continual instruction tuning references for LLMs in the Related Work section, such as TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models. Other Strengths And Weaknesses: # Strengths Novelty: The paper proposes a novel framework to address the continual instruction tuning problem of Large Foundation Models, and its effectiveness has been validated through extensive experiments, demonstrating strong anti-forgetting ability and continual instruction tuning performance. By combining the balance conditions of stability and plasticity with the traditional EMA method, the authors derive the optimal solution from an optimization perspective. This paper offers new ideas and insights for the continual tuning of large models, which is highly inspiring. The paper conducts experiments across various large foundation models and continual instruction tuning benchmarks. These experiments show that the proposed method demonstrates excellent generalization ability and can be transferred to many more continual instruction tuning scenarios. The approach presented in this paper is independent of both model architecture and dataset. Therefore, this method can be easily applied to a wider range of models in practical scenarios, further addressing the catastrophic forgetting problem that exists among them. This paper derives the optimal EMA weight to achieve the plasticity-stability balance from an optimization perspective using the Lagrange multiplier method. Furthermore, by utilizing two optimization methods, the computational load and complexity of the approach are further reduced. The mathematical proofs in the paper are rigorous and reliable, providing a strong theoretical guarantee for the method. 
# Weaknesses Experiments: Lack of comparative experiments (see Experimental Designs Or Analyses) and supplementary experiments (see Supplementary Material). References: The literature review on LLM continual instruction tuning is insufficient (see Essential References Not Discussed). Other Comments Or Suggestions: Some terminology is not yet well explained. Providing an explanation of the plasticity-stability balance and its significance in the introduction would increase the readability of the paper. Some phrasing is not entirely appropriate; for example, "knowledge confusion" in line 31 could be changed to "knowledge interference". Questions For Authors: The authors demonstrated in the paper that the dynamic EMA weight may exceed the range of 0-1 (lines 244-246). Therefore, providing the values of the EMA weights at each iteration on a specific dataset (such as TextQA) would offer more profound insights. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Dear reviewer pe71, thanks for your valuable suggestions. Here are our responses: **Response 1 (Resource Consumption)**: We compare the time consumption of the dynamic EMA update with that of the fixed EMA method under the same experimental settings (adopting LLaVA-7B). We measure the time consumption for each task on four NVIDIA H100 GPUs. The results are as follows: |Method|ScienceQA|TextVQA|ImageNet|GQA|VizWiz|Grounding|VQAv2|OCRVQA| |:---------:|:-------:|:-----:|:------:|:-----:|:----:|:-------:|:----:|:-----:| |Dynamic EMA|6 min|25 min|69 min|108 min|12 min|80 min|80 min|112 min| |Fixed EMA|6 min|23 min|62 min|100 min|11 min|76 min|75 min|102 min| We observe that our dynamic EMA update method requires a total of 492 minutes for training across eight tasks. In comparison, the fixed EMA method consumes 455 minutes, indicating that **our method only incurs an 8% increase in training time (37 minutes)**. However, as demonstrated in our experiments, the dynamic EMA update method significantly outperforms the fixed EMA method. Additionally, we compare the memory consumption of our method with that of approaches that do not use the instruction grouping strategy. Specifically, our method introduces two additional components: one for instruction groups and another for the corresponding LoRA parameters of the different instruction groups. Compared to the LoRA parameters, which occupy approximately 1 GB of memory, the storage cost for the instruction groups (small sentences of natural language text) is negligible, as they require only around 1 KB. For simplicity, we define a complete set of LoRA parameters inserted into the LLM as a unit of 1. Under this definition, the method that does not use the instruction grouping strategy shares a single set of LoRA parameters across all tasks, resulting in a quantified memory saving factor of ×1. 
Our method, using the Origin Instruction Type as an example, extends the LoRA parameters across four groups, leading to a quantified memory saving factor of ×4. Given that LoRA is a type of Parameter-Efficient Fine-Tuning (PEFT), even with a fourfold increase in storage, the total memory load remains small compared to the substantial storage demands of LLMs, which often exceed tens of GB [1]. In summary, while adopting the dynamic EMA update and instruction grouping strategy introduces additional time and memory consumption, these costs remain manageable and are significantly outweighed by the substantial performance improvements they provide. [1] Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. **Response 2 (Fixed EMA)**: Please kindly refer to **Response 1** to Reviewer 9qib, who raised the same question. **Response 3 (Instruction Grouping Results on LLM)**: Please kindly refer to **Response 6** to Reviewer 4GQq, who raised the same question. **Response 4 (Reference)**: We have incorporated this reference into the Related Work section and discussed it as "After that, TRACE, another continual instruction tuning benchmark, is designed to evaluate the general ability, instruction following, and safety of LLMs". **Response 5 (Terminology)**: We have added a clearer explanation of the significance of the "plasticity-stability balance" in the Introduction section: "the trade-off dilemma for models to balance new task learning and old task storing, which is the core of continual instruction tuning". In addition, we have checked the whole manuscript to revise other terminology that was not well explained. **Response 6 (Expression)**: We have changed "knowledge confusion" in line 31 to "knowledge interference", which is a more precise and widely accepted term. In addition, we have checked the whole manuscript to revise other unsuitable phrases. 
**Response 7 (Dynamic EMA)**: We have summarized the values of the EMA weights at each iteration on the TextQA dataset with the LLaVA-7B backbone and the Origin instruction type. Due to the rebuttal space limitation, we provide the dynamic EMA weights in the first 10 epochs for the trainable parameters in the first three layers. |Layers|Epoch 1|Epoch 2|Epoch 3|Epoch 4|Epoch 5|Epoch 6|Epoch 7|Epoch 8 |Epoch 9|Epoch 10| |:----:|:-----:|:-----:|:------:|:-----:|:-----:|:------:|:-----:|:------:|:-----:|:------:| |1|0.03|0.65|0.85|0.91|0.93|**-0.28**|0.97|0.95|0.97|0.94| |2|0.41|0.74|**-0.07**|0.93|0.95|0.97|0.98|0.98|0.94|0.95| |3|0.75|0.88|0.84|0.96|0.97|0.98|0.96|**-0.33**|0.96|0.92| As shown, the dynamic EMA weight can indeed exceed the range of 0-1, which supports our theoretical deduction. Thank you for taking the time to review our manuscript and provide such detailed and constructive suggestions. Your insights have significantly improved our paper, making it much stronger overall. We have carefully considered all your suggestions and made the necessary revisions to ensure the research is more precise and accessible. We deeply appreciate your contribution to enhancing the quality of our work.
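For readers less familiar with EMA merging, the mechanics behind these weights can be sketched in a few lines. This is only an illustration of what a weight outside [0, 1] does to the merged parameters; the numbers are invented, and the derivation of the dynamic weight itself (from gradients and parameters via the Lagrange multiplier method) is the paper's contribution and is not reproduced here.

```python
def ema_update(theta_ema, theta_task, alpha):
    # Standard EMA merge: alpha weights the old (stability) parameters,
    # (1 - alpha) weights the newly tuned (plasticity) parameters.
    return [alpha * old + (1.0 - alpha) * new
            for old, new in zip(theta_ema, theta_task)]

theta_ema = [1.0, 1.0]    # parameters carrying old-task knowledge (invented)
theta_task = [2.0, 2.0]   # parameters after tuning on the new task (invented)

# With alpha in [0, 1] the merge interpolates: each component stays
# strictly between the old value 1.0 and the new value 2.0.
interpolated = ema_update(theta_ema, theta_task, 0.9)

# A negative alpha, like the -0.28 reported in the table above, extrapolates
# past the new parameters: each component exceeds 2.0 (a stronger pull
# toward plasticity than any fixed weight in [0, 1] could produce).
extrapolated = ema_update(theta_ema, theta_task, -0.28)
```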
Summary: This paper presents a novel approach to Continual Instruction Tuning for vision-language models, addressing the problem of catastrophic forgetting. The proposed method is built on Exponential Moving Average (EMA)-based updates, dynamically adjusting weight balance based on Taylor expansion and Lagrange multiplier optimization. Additionally, the paper introduces an instruction grouping strategy to minimize redundancy and improve parameter efficiency. The experimental results demonstrate that the method significantly reduces forgetting and enhances continual instruction tuning performance across various vision-language instruction tuning benchmarks. ## update after rebuttal My original assessment was already supportive, so I will maintain it. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes, proofs are mainly checked but other parts are not fully reviewed. Relation To Broader Scientific Literature: This paper improves widely used EMA-based methods for mitigating forgetting via a dynamic weighting strategy based on theoretical analysis. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The paper presents a strong theoretical basis for its EMA-weighted update strategy, providing clear mathematical derivations. Weaknesses: 1. The proposed method provides a general improvement to EMA-based approaches and is not limited to vision-language tasks. However, experiments are conducted solely on vision-language tasks. Additional experiments across diverse task settings would better validate the method's effectiveness. 2. No impact statement is found. Other Comments Or Suggestions: I don’t have additional comments. Questions For Authors: 1. 
Can the authors provide more details about the multi-task setup listed in Table 1 and explain why its performance is unexpectedly low, given that the multi-task setting typically serves as a theoretical upper bound for continual learning? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer cghr, thanks for your valuable suggestions. Here are our responses: **Response 1 (Additional Experiments)**: To address your concerns, we evaluate two additional task settings, conventional **Image Continual Classification** and **NLP Continual Classification**, to further validate the effectiveness of our method. (1). We choose two **Prompt Continual Learning (PCL)** methods based on the ViT backbone, namely L2P [1] and DualPrompt [2]. The **Image Continual Classification Benchmark** divides a whole image classification dataset into several splits, with each split treated as a task. We freeze the ViT backbone and only train the prompts. On top of the original L2P and DualPrompt baselines, we integrate our dynamic EMA update method. The continual learning dataset is 10-Split-CIFAR100 with the **Class Incremental Learning (CIL)** setting. The experimental results are as follows: |**Method**|**Avg.ACC**|**Forgetting**| |:-----------------:|:---------:|:------------:| |L2P|83.77|6.63| |**L2P-Ours**|**86.47**|**4.83**| |DualPrompt|86.50|5.77| |**DualPrompt-Ours**|**88.47**|**3.19**| In conclusion, our method exhibits superior Avg.ACC and enhanced anti-forgetting performance compared to the original L2P and DualPrompt. (2). We implement our method on the **NLP Continual Classification Benchmark** (MRCC->SST-2->Sentiment140) with the **Task Incremental Learning (TIL)** setting, based on the LLM Mistral 7B [3]. The **NLP Continual Classification Benchmark** selects multiple NLP classification datasets, with each dataset treated as a task. We freeze the LLM and insert trainable LoRA modules into each layer of the LLM. For baseline comparisons, we adopt the original LoRA and one state-of-the-art continual learning method, CurLoRA [4]. 
The experimental results are as follows: |**Method**|**Avg.ACC**|**Forgetting**| |:--------: |:---------:|:------------:| |LoRA|60.00|19.00| |CurLoRA|82.00|0.00| |**Ours**|**88.00**|**0.00**| It can be seen that our method exhibits comparable stability while achieving superior plasticity in LLM continual tuning tasks compared to the state-of-the-art method of [4]. Notably, both CurLoRA and our method attain zero forgetting due to the simple TIL setting (the task identifier is known at testing time). The above experiments demonstrate that **our method performs well across diverse task settings** and greatly improves the baselines' performance. These findings further indicate that our method has strong generalization capabilities. [1] Learning to prompt for continual learning. [2] Dualprompt: Complementary prompting for rehearsal-free continual learning. [3] Mistral 7b. [4] Curlora: Stable llm continual fine-tuning and catastrophic forgetting mitigation. **Response 2 (Impact Statement)**: Thank you for the kind reminder. We apologize for the absence of the impact statement. In the revised version, we have added an Impact Statement section to discuss how our approach contributes to continual instruction tuning in LFMs. Specifically, we emphasize: 1. This paper presents work whose goal is to advance the field of continual instruction tuning and mitigate catastrophic forgetting in LFMs. 2. Its relevance to real-world applications, such as lifelong AI assistants and continual MLLM evolution. We sincerely hope this addition provides a clearer perspective on the significance and practical implications of our work. **Response 3 (Multi-Task)**: (1). The multi-task results presented in Table 1 of our paper are taken directly from the original CoIN paper (Table 2 in [5]). To maintain the authority of the experimental results, we cite the multi-task results from the original report as a reference. (2). 
The authors of CoIN found that the performance of the multi-task model is not always higher than that of LoRA continual learning (shown in the following table, copied from [5]). They explain that “unlike traditional continual learning, where the multi-task model often serves as the upper bound, in CoIN, the performance of the multi-task model is not the best due to the influence of task gaps”. Notably, this phenomenon is consistent with our results. (3). Our view is that the theoretical upper bound could be understood as the ideal condition of zero forgetting, which is represented by the New.ACC metric (the result of training on the single task). How to choose the upper bound in continual instruction tuning remains a valuable research topic. |**Method**|**Backbone**|**Avg.ACC**| |:--------:|:--------:|:---------:| |LoRA|QWen|41.23| |Multi-Task|QWen|41.87| |LoRA|MiniGPT|25.45| |Multi-Task|MiniGPT|21.50| [5] Coin: A benchmark of continual instruction tuning for multimodel large language model. We sincerely appreciate your careful review of our paper. Your valuable suggestions have greatly enhanced the quality of our manuscript by pointing out areas that required improvement. We are truly grateful for the time and effort you dedicated to improving our work. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for addressing my concerns and providing additional results. I will keep my score. --- Reply to Comment 1.1.1: Comment: Thanks for your careful review of our paper. We appreciate the effort and time you have spent on improving our work.
Consensus Is All You Get: The Role of Attention in Transformers
Accept (poster)
Summary: In this paper, the authors study the asymptotic properties of the attention mechanism of transformers. To do so, they introduce a continuous differential equation emulating the evolution of tokens across an increasing number of layers, and they show that asymptotically, i.e., as the number of layers involved goes to infinity, all the tokens tend to collapse to a single point under some assumptions on the initial configuration of the tokens and on the values of the Q, K and V matrices across layers. Claims And Evidence: The proofs of the theorems seem solid to me, and the experimental results provided are satisfactory and seem to back the theoretical results. Methods And Evaluation Criteria: The proof techniques employed are reasonable and the experimental setup is sensible. Theoretical Claims: I checked the proofs of Theorems 3.2, 4.2 and 4.3 without going into the details, and I could not spot any serious mistake. Experimental Designs Or Analyses: The experimental setup used for Section 5 is sensible. However, the code for the experiments was not provided, so it is not possible to independently verify the experimental results presented in the paper. Supplementary Material: N/A Relation To Broader Scientific Literature: The main contribution of the paper is to generalize a series of results in the same setting (asymptotic evolution of tokens across attention layers) to broader configurations of the Q, K and V matrices and to multiple heads. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: The paper is generally well written, and the theory is complemented with many experiments. I have a couple of concerns, though: 1. I think the paper lacks motivation: why are the results important for the community? Is there any implication for real-world applications? 2. 
While experiments are important, I believe that the paper wastes too much space on the experimental side, when the main contributions seem to be on the theoretical side. In particular, it seems that the main novelty of the paper is to use different techniques (borrowed from control theory) to prove their results, compared to previous works in the same area. This novelty is completely lost in the paper, as there is no mention of it in the main paper after the introduction. I would have rather used more space to explain the intuition behind the proofs instead of including the random weights (e.g., first column of p. 6) or the random inputs (e.g., Table 2 or second column of p. 7) used in the experiments, which could be moved to the appendix. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the time devoted to reading our paper and providing feedback. **Why are the results important for the community? Is there any implication for real-world applications?** The paper answers the following question, what is the role of attention? The role is to bring the tokens together, i.e., to achieve consensus. In practice, our results show that transformers cannot be too deep since the information contained in the tokens disappears as they converge to a location that is independent of where the tokens start. Our results also show that a transformer has to be deep enough since, otherwise, the last token (used to predict the next token) is not sufficiently influenced by the tokens that precede it which would lead to a next token prediction that is independent from the query/context. Our results then raise the following question, what is the optimal depth of a transformer? This is a question we are trying to answer. **While experiments are important, I believe that the paper wastes too much space on the experimental side.** As can be seen by the different reviews and our replies, there are different opinions about experiments. We hope to reach a compromise by reducing the space devoted to experiments while providing more informative experiments (e.g., randomization over multiple prompts as suggested by other reviewers). **This novelty is completely lost in the paper, as there is no mention of it in the main paper after the introduction.** We will address the reviewer's concern by using the space freed by reducing the number of experimental plots to succinctly describe the techniques used in each proof and highlight their control theoretic origin. --- Rebuttal Comment 1.1: Comment: I thank the authors for their comments. I will maintain my positive score. --- Reply to Comment 1.1.1: Comment: Thank you for the positive score.
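The consensus behavior discussed in this exchange is easy to reproduce numerically. Below is a toy Euler discretization of attention dynamics on the unit sphere, under simplifying assumptions not taken from the paper: a single head, identity value matrix, inverse temperature `beta = 1`, tokens renormalized after every step, and initial tokens drawn from a spherical cap (a stricter version of the hemisphere condition discussed in the reviews). The `spread` function, the maximum pairwise distance, is only a stand-in for the energy E used in the paper's plots.

```python
import math
import random

random.seed(0)

def normalize(x):
    """Project a vector onto the unit sphere (mimics norm rescaling)."""
    n = math.sqrt(sum(v * v for v in x))
    return [v / n for v in x]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def step(tokens, beta=1.0, h=0.1):
    """One Euler step of single-head attention dynamics with identity V."""
    new = []
    for x in tokens:
        weights = softmax([beta * sum(a * b for a, b in zip(x, y))
                           for y in tokens])
        drift = [sum(w * y[k] for w, y in zip(weights, tokens))
                 for k in range(len(x))]
        new.append(normalize([x[k] + h * drift[k] for k in range(len(x))]))
    return new

def spread(tokens):
    """Max pairwise distance: zero iff all tokens have reached consensus."""
    return max(math.dist(a, b)
               for i, a in enumerate(tokens) for b in tokens[i + 1:])

# Six tokens drawn from a cap around the first coordinate axis.
tokens = [normalize([random.uniform(0.5, 1.0),
                     random.uniform(-0.5, 0.5),
                     random.uniform(-0.5, 0.5)]) for _ in range(6)]
initial_spread = spread(tokens)
for _ in range(300):
    tokens = step(tokens)
final_spread = spread(tokens)  # contracts toward zero
```

In this toy run the spread contracts essentially to zero, i.e., the tokens reach consensus, matching the qualitative prediction of the theorems for this restricted setting.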
Summary: This paper theoretically demonstrates that a large language model using Transformers may collapse. The model collapse is analyzed through a mathematical analysis of the asymptotic properties of attention in Transformers. The authors claim that all tokens asymptotically converge to each other. This claim is supported by a simulation experiment in which GPT-2 XL is used iteratively by feeding its output back as the input. ## Update after rebuttal Thank you for the clarification. I revised the final score to 3. This reviewer encourages the authors to include experimental support using larger language models to back up the theory, and believes acceptance should be considered based on the to-be-conducted experiments and some writing clarifications (including the reviewers’ suggestions and the authors’ own suggested to-do list) in the next revision. Claims And Evidence: The claim appears reasonable under certain assumptions; however, the practical evidence supporting it may be insufficient. Methods And Evaluation Criteria: The paper presents a theoretical analysis suggesting that the model could eventually collapse, which could possibly occur under certain conditions. Theoretical Claims: Since my focus was primarily on the practical aspects of this paper, I can only say that the theorems appear mostly correct under the given assumptions. Experimental Designs Or Analyses: Experiments based on simulations are provided, but some aspects are impractical and problematic. Supplementary Material: I was unable to follow all the materials, but some appeared to be particularly important. Relation To Broader Scientific Literature: The theoretical backing for model collapse provided in this paper is an important contribution. Essential References Not Discussed: There is no related work section, despite the existence of many continuous-Transformer papers that could be cited, none of which are referenced. 
While there may be differences in the terminology of "continuous" used in this paper compared to others, the authors should explicitly discuss how this work differs from previous studies. Other Strengths And Weaknesses: - The theoretical approaches in this paper give a sense of direction, but the exact goal is not clearly presented. From my perspective, the theories are limited by the stated assumptions and seem to hold only under specific conditions. As a result, this reviewer feels that the experimental support is limited and not clearly aligned with the theory. - In that regard, the presentation of this paper should be refined to make the authors' claims easier to grasp. Furthermore, all materials should be self-contained; for instance, it took multiple attempts to find that E (which appears on the y-axis of graphs such as Figures 6 and 7) was defined in the Appendix. - The experiment is designed as a simulation, as the authors stated, using generated tokens as input for the next iteration of generation to mimic model collapse. However, this setup does not accurately simulate the behavior of a large language model. It is unclear why the authors did not use larger models, given the availability of many such models up to 70B scale, for example. - This reviewer speculates that the outdated GPT-2 XL, trained on a limited corpus with fixed token lengths, may easily generate meaningless tokens when iteratively fed its own outputs. This may not accurately reflect the model collapse described in the theory. I look forward to seeing similar results with larger models, up to 32B (which I believe would be sufficient for testing). Other Comments Or Suggestions: This paper should be further refined to use more precise terminology and notation. This reviewer believes the paper can be improved by presenting its motivation and goals better. Additionally, scaling up and modernizing the experiments instead of just using GPT-2 XL would support the claims effectively. 
Although this reviewer is not a theorist and only partially follows the derivations, the attempt to address model collapse theoretically is valuable. If the authors address the concerns, I am willing to raise my score. Questions For Authors: - This reviewer could not see exactly why the continuous concept is necessary for deriving equation (5) from (4). Could the authors clarify this reasoning? - Why do the trained weights converge faster than the random ones? - Traditional element-wise layer normalization in Transformers does not follow the norm-scaling formula described in Eq. (2). - Lemma 4.2 does not exist in the reference Geshkovski et al., 2023a. - Should $y_1$ in Eq. (18) be $y_0$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for reading our paper and providing feedback. While there is no section titled "related work", after each theorem there is a remark titled "closest result available in the literature" where we provide a detailed comparison with the most relevant work. We would be happy to consider any paper the reviewer finds relevant. The reviewer wrote "The theoretical approaches in this paper give a sense of direction, but the exact goal is not clearly presented. From my perspective, the theories are limited by the provided assumptions and seem to hold only under specific conditions. As a result, this reviewer feels that the experimental support is limited and not clearly aligned with the theory." In the subsection "contributions of the paper" we wrote "The main contribution of this work is to provide a number of results (...) showing that all tokens converge to a single cluster thereby leading to a collapse of the model." This is the goal of the paper. We are happy to expand the goal's description. We interpret the second sentence as: the stated assumptions are too strong. Could the reviewer tell us which assumptions are too strong so that we can address them? Please see the reply to reviewer aNPn where we explain how specific assumptions can be relaxed. All the experiments support the conclusions of the theorems, i.e., they show evidence of consensus. The experiments either satisfy the theory's assumptions or do not. When the assumptions are satisfied, there is perfect agreement with the theory. When they are not, the experiments show that consensus still holds. This does not mean the results are wrong; it means that consensus can be proved under weaker assumptions. Having the definition of E in the appendix was an oversight that will be corrected. The reviewer wrote "The experiment is designed as a simulation (...) However, this setup does not accurately simulate the behavior of a large language model. 
It is unclear why the authors did not use larger models (...)" The experiments in Figure 3 and Figures 5-7 were performed on the GPT-2 XL model and are not simulations. Since this is a theoretical paper, the experiments do not require large models, as we do not seek to compare performance metrics. The experiments serve two purposes: 1) illustrate the theorems proved in the paper; 2) show that the conclusions hold under weaker assumptions. The reviewer wrote "This reviewer speculates that the outdated GPT-2 XL, trained on a limited corpus with fixed token lengths, may easily generate meaningless tokens when iteratively fed its own outputs. This may not accurately reflect the model collapse described in the theory." The experiments do not show GPT-2 XL producing meaningless tokens; they show that all the tokens converge to the same token, as predicted by the theory. The function E is only zero when all the tokens are the same. In Figures 3, 5, 6, 7, and 8 we see the function E converging to zero, which indicates that all the tokens converge to consensus, i.e., perfect agreement between experiments and theory. The reviewer wrote "This reviewer believes this paper can be improved by presenting its motivation and goals better. Additionally, scaling up and modernizing the experiments instead of just using GPT-2 XL would support the claims effectively (...) If the authors address the concerns, I am willing to raise my score." We will revise the paper to provide more intuitive explanations for the theoretical results and better describe the objectives. We hope to have convinced the reviewer that: 1) theoretical claims are proved theoretically, not experimentally; 2) there is perfect alignment of theory with experiments. We will also provide more experiments to address the concerns raised by multiple reviewers and would be delighted by a raised score. 
**Why the continuous concept is necessary for deriving equation (5) from (4).** We can interpret (4) as a discrete-time dynamical system. Equation (5) is a differential equation, i.e., a continuous-time dynamical system. Continuous-time models enable the use of tools developed in physics, mathematics, and control theory. Corresponding tools for discrete time either do not exist or are much more difficult to use. **Why do the trained weights converge faster than the random ones?** Reviewer aNPn had a similar question; please see the answer we provided. **Traditional element-wise layer normalization in Transformers does not follow the norm-scaling formula described in Eq. (2).** Eq. (2) has been used in concrete transformers; see the citation in the paper. See also the reply to reviewer aNPn, who had a similar question. **Lemma 4.2 does not exist in the reference Geshkovski et al., 2023a.** Lemma 4.2 appears in versions 1-3 of the arXiv preprint, but in the latest version it is now Lemma 6.4. This will be corrected. **Should $y_1$ in Eq. (18) be $y_0$?** We will rectify this typo in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses. Overall, the to-do actions for my concerns presented in the rebuttal look great. Regarding the concerns about strong assumptions, let me clarify my points. For example: 1) $U_{\eta}$ is the identity in Assumptions 3.1 and 4.1; 2) there is only one head in Assumption 4.4. Could these be too strong for the theory to ultimately match practice in Transformers, where these assumptions are not satisfied? I might be wrong, so please let me know if I misunderstood any points. Furthermore, regarding the GPT-2-based experiments, although this paper is mainly theoretical, as long as the authors chose to include experimental support, the experimental setup should be more practical. 
Repeating GPT-2 with the generated tokens may not be a sufficient experiment to support the theory, since the authors seem to aim at showing a case where practical Transformer-based models may collapse. I mean that the experiments should involve deeper models (likely easier to collapse) to show that tokens converge to the same outputs, rather than just repeating a smaller model. Is the theory mainly formulated for the repetitive model? I don’t see that from reading the theory from my perspective. --- Reply to Comment 1.1.1: Comment: To better understand the impact of the different assumptions, it is convenient to refer to Table 1. The third column refers to the case where U does not need to be the identity. In this case, tokens still converge to consensus. But under the stronger assumption that U is the identity, convergence is guaranteed for almost all initial configurations, whereas when U is not the identity we have the additional requirement that the initial tokens belong to a hemisphere. We note that this is the first paper proving convergence to consensus when U is not the identity. We will also include experiments with tokens not starting in a hemisphere to illustrate that convergence still occurs in this case. The results in the last column can be easily extended to multiple heads when all the heads have the same U matrix (but different attention matrices). We will discuss this observation in the revised version of the paper. When the heads have different U matrices, the aggregated effect is a U matrix that is both time and state dependent, and this is much more difficult to analyze. The empirical results show that consensus still occurs in this case, although we do not know yet how to establish this theoretically. We take your point regarding larger models. In the revised version we will include experiments on the largest model that we can run on our lab's machines in the amount of time we have to prepare the final version.
If the reviewer is satisfied with the proposed changes, we would appreciate it if the score could be raised.
Summary: This paper tries to theoretically analyze the phenomenon that, using an auto-regressive attention model, all tokens will asymptotically converge to the "consensus set". The authors construct discrete/continuous-time attention models and show that under full/auto-regressive attention, tokens will converge to some special cases. The authors also support their claim with some experiments. Claims And Evidence: I think the claims are supported by both previous literature and the toy experiments in Section 5 Methods And Evaluation Criteria: there are no criteria for the experimental parts in this paper. For the proposed theoretical model, I think in general it makes sense for the problem and may follow some framework from previous theoretical papers Theoretical Claims: I haven't checked the proof of the main theorem in detail, but I haven't seen critical problems in the claims and lemmas (like App. B, or Lemma C.2) in the appendix. Experimental Designs Or Analyses: I checked the validity of the experimental designs. I think they are toy experiments and that is acceptable for a theory paper. But in the GPT2-XL experiment, the authors should not just try one well-designed prompt, but need to try more diverse prompts randomly sampled from a natural language dataset and report some average performance. Supplementary Material: I briefly saw sections B and F. Relation To Broader Scientific Literature: I think the main contribution of the paper is to prove the "consensus convergence" phenomenon under more relaxed conditions (like not requiring Q^TK to be the identity). Previous work like Karagodin et al. 2024 (L230) or Geshkovski et al. 2023a (L172) has shown similar conclusions, but requires stronger conditions (like P = Q^TK needing to be time-invariant or the identity). It's nice that the authors clearly discuss their contribution in the paper.
Essential References Not Discussed: I think the main techniques of this paper are about analyzing the training dynamics of transformers and examining the properties of tokens, but there are a lot of recent transformer-dynamics related works that may be worth citing/discussing, although they may have different kinds of modeling [1,2,3,4,5]. [1] Nichani E, Damian A, Lee J D. How transformers learn causal structure with gradient descent[J]. arXiv preprint arXiv:2402.14735, 2024. [2] Cheng X, Chen Y, Sra S. Transformers implement functional gradient descent to learn non-linear functions in context[J]. arXiv preprint arXiv:2312.06528, 2023. [3] Huang Y, Cheng Y, Liang Y. In-context convergence of transformers[J]. arXiv preprint arXiv:2310.05249, 2023. [4] Tian Y, Wang Y, Chen B, et al. Scan and snap: Understanding training dynamics and token composition in 1-layer transformer[J]. Advances in neural information processing systems, 2023, 36: 71911-71947. [5] Li Y, Li Y, Risteski A. How do transformers learn topic structure: Towards a mechanistic understanding[C]//International Conference on Machine Learning. PMLR, 2023: 19689-19729. Other Strengths And Weaknesses: Strengths: +: the paper is in general easy to follow +: toy experiments are interesting and support the theoretical findings. Weakness: -: I think one key problem is that the contribution is somewhat too incremental. As shown in the "Relation To Broader Scientific Literature" part, similar claims have been shown in previous papers, although with slightly stronger conditions. However, this paper still requires some strong conditions that are far away from real-world cases (like U still needing to be the identity; this is related to the value matrix, and in the real world we do need to update the value matrix). And there seem to be no additional new theoretical findings in this paper (for example, a convergence rate for the consensus phenomenon).
-: The authors haven't compared in detail the techniques between their work and previous literature to show why they can achieve relaxed conditions. This could be done by adding some discussion or a proof sketch -: The title seems confusing to me, as consensus is an asymptotic phenomenon that in practice does not happen often for SOTA LLMs; it cannot be 'all you get', and the role of attention can be much more complex (as in the previous works mentioned in the additional references above) -: in the GPT2-XL experiment, the authors should not just try one well-designed prompt, but need to try more diverse prompts randomly sampled from a natural language dataset and report some average performance. Other Comments Or Suggestions: 1. I think if the authors don't explain V_\eta(t) in detail in the main paper, (5) had better not use it (like using V_\eta(t)) (L166) 2. Seems all the "[0, \infty]" are "[0, \infty[" 3. I think it's better to include the metrics (like the definition of 'E' in Figure 3/5, i.e., (18)) 4. I think it's necessary to define 'attractive' or 'attractivity' in Theorem 3.2 in the main paper. Questions For Authors: What's the key difference in technique between your work and previous works such that you only need relaxed conditions? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the time devoted to reading our paper and providing feedback. We were not aware of the referenced papers. All but the second paper discuss training dynamics, whereas we study already trained transformers. Hence, they do not seem relevant. The second paper shows that the evolution of tokens along a transformer can be interpreted as an instantiation of gradient descent. We cannot see any direct connection to our work, but we will be studying this paper out of intellectual curiosity. The reviewer states that the contribution is "somewhat too incremental" and supports this opinion with "similar claims have been shown in previous paper" and "still requires some strong condition (...) like U still need to be identity". The latter statement is factually incorrect. Section 4.2 discusses the case where the matrix U is not the identity. The reviewer can find in Assumption 4.4 that we only require U to be symmetric and time-invariant. See also our reply to reviewer aNPn where we explain that the time-invariance assumption can be relaxed. We would also like to politely disagree with the incremental characterization of our work and with "similar claims have been shown in previous paper". Theorems are written as implications, say a implies b, where a is called the antecedent and b the consequent. Two theorems are not similar merely because they have the same consequent. The strength of two theorems offering the same consequent is measured by how weak the antecedent is. A weaker antecedent means the implication applies to a larger class of systems and is thus a stronger result. Here is one example: before people could travel by aircraft, the US could be reached from Europe by boat in several weeks. Once air travel began, the US could be reached from Europe in several hours. In both cases the consequent is the same, the US can be reached from Europe, but these scenarios cannot be characterized as being "similar".
The reviewer writes "The authors haven't compared in detail the techniques between their work and previous literature to show why they can achieve relaxed conditions." It is not clear to the authors what is meant by a detailed comparison of techniques. An intuitive comparison is provided in the 3rd paragraph of the introduction. Intuitively speaking, some prior work used a mean-field model that resulted in a partial differential equation whose solution was interpreted as a distribution. We used ordinary differential equations that enabled us to use control theoretic techniques such as Input-to-State Stability as well as existing results on consensus on spheres. Existing work that also used ordinary differential equations did not use control theoretic techniques. We would be happy to expand this intuitive comparison in the final version of the paper and, in particular, highlight which control theoretic techniques were used in each proof, if this is what the reviewer has in mind. We are more than happy to change the paper's title. We agree with the criticism related to the need to use multiple prompts. We will provide such experiments in the final version. The matrix V_\eta is defined in the second paragraph of Section 2.2 as the value matrix. Please let us know if additional explanations regarding this matrix are needed. **We could not understand the remark «Seems all the "[0, \infty]" are "[0, \infty["».** We should have made this clear in the paper. We follow Bourbaki's notation (introduced in Éléments de mathématique) and write [a,b[ to denote the interval including the point a and excluding the point b. This is done so that there is no confusion between the ordered pair (a,b) and the interval ]a,b[ that excludes the points a and b. We did find a few instances of sets in the appendix using the notation (a,b) and we will correct them. Not including the definition of E in the main part of the paper was an oversight on our part. It will be rectified.
Not defining attractivity was an oversight; it will be corrected. The reviewer asks about the "Key difference in technique". This is the first paper that uses control theoretic techniques such as Input-to-State Stability or draws inspiration from results available in the control literature on consensus on spheres. As we mention in the reply to the last reviewer, we propose to comment on which control theoretic techniques are used in each proof so as to provide deeper insights into our approach. We hope this change addresses the concerns of both reviewers. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification, and sorry for my late reply. The authors' reply (together with the discussion with reviewer aNPn) addresses most of my concerns, and I'm sorry I misunderstood some of the key contributions of this paper before. I agree that a more intuitive comparison will be helpful for readers to understand, and I look forward to the modifications the authors mentioned. I will increase my score. --- Reply to Comment 1.1.1: Comment: Thank you for increasing the score.
Summary: This paper theoretically investigates the phenomenon of token representation collapse in transformers as the number of layers blows up. The authors analyze a continuous-time differential equation of the attention model and show that all tokens converge asymptotically. They present results under different assumptions on the key, query, and value matrices, improving upon previous work that required stricter conditions on the composed key-query matrix, the value matrix, and the number of attention heads. Finally, they provide small-scale experiments to validate their theory. **Update after rebuttal** I thank the authors for engaging during the rebuttal period. They addressed most of my key questions and agreed to make appropriate additions to the paper, including some additional experiments. Overall, I believe that it is a good paper that improves upon the previous theory on representation collapse, and I keep my positive rating. Claims And Evidence: All claims and evidence seem to be well-supported. The key contribution—demonstrating that the continuous-time differential equation model collapses asymptotically under different sets of assumptions on the model parameters—is backed by theoretical results. While limited, the toy experiments provided support for these claims. Methods And Evaluation Criteria: It’s mainly a theory paper. I don’t think this question is very valid, but whatever toy experiments they have make sense. Theoretical Claims: No, I did not read the proofs of the theorems. Experimental Designs Or Analyses: Since this is a theory paper, the experiments are primarily toy examples designed to validate the theory. They involve prompting a pre-trained GPT-2 XL and analyzing representation collapse, which the results confirm and align with previous findings. From this perspective, the experiments appear sound, though I have some questions and suggestions (see the questions section).
Supplementary Material: No, I did not check the supplementary material. Relation To Broader Scientific Literature: The paper theoretically examines representation collapse in tokens as the number of layers in transformers increases. While previous work has studied this using mean-field techniques, this paper appears to apply ideas from control theory to analyze the asymptotic behavior. Additionally, the theoretical insights developed here could potentially be used in the future to identify architectural components that contribute to representation collapse. Essential References Not Discussed: I believe the recent work by Wu et al. (2024) is highly relevant and missing from the discussion. They also investigate representation collapse, focusing on the role of layer normalization and attention masks. While their approach differs, it should be addressed in this paper, along with a comparison of their theoretical results. For instance, Wu et al.'s results appear to be finite-time, whereas this paper's findings are asymptotic. A comparison of their respective assumptions would also be valuable. Wu et al. (2024) On the role of attention masks and layer norm in transformers. Other Strengths And Weaknesses: The paper studies an important problem from a theoretical point of view, is well-written in general, and is interesting to read. Other Comments Or Suggestions: Please add a small conclusion/discussion section with open questions and drawbacks of the results presented in the paper. Questions For Authors: 1. The theory is developed for the ellipsoid projection which is playing the role of layer-normalization in a standard transformer. Do you think it's possible to extend these results to the standard layer-norm? 2. General question for all results: the value matrix $U$ is time-independent (which also appears to be the case in previous works). I understand the assumptions are less strict than in previous works, but it seems a little strange.
Do you believe the results still hold if $U$ is time-dependent, similar to the composed key-query matrix $P$? 3. A few questions on the assumptions: For Thm 3.2, what happens if the initial positions of the tokens do not lie in some hemisphere? Related to this, for the experiment in figure 2, do you still see consensus if the tokens do not start in a hemisphere? 4. Re the experiments: The evaluation metric used in all plots should be discussed in the main body; it is just referred to as some equation in the appendix. I like the toy experiments in general, but I think the results should be averaged over multiple prompts (for example figure 5 with random prompts), and an average with some confidence intervals should be reported. This is especially important given the variance we see for different prompts in figure 7. 5a. Why do you think collapse is less prominent for random weights in almost all the figures? The theory does not seem to distinguish between different sets of weights. 5b. Comparing Figure 6 to Figure 5, removing periodicity seems to reduce collapse. Why do you think this happens? Again, the theory does not appear to separate these cases. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the time devoted to reading our paper and providing feedback. We were not aware of the paper Wu et al. (2024). Upon reading it we found that it confirms the results in our paper, since it provides several scenarios of rank degeneration, i.e., consensus. It also provides an example where consensus does not occur, but this example requires the query and key matrices to be constant and equal to zero, which does not happen in practice. We will be happy to add a conclusion/discussion section as suggested. **Layer normalization** Thank you for this question, we should have included a discussion about this in the paper. At an intuitive level, any normalization technique has the objective of restricting the tokens to a compact space; hence the results should not depend on how the normalization is done. At the technical level, we can show that layer normalization (removing the mean, dividing by the standard deviation, multiplying by a learned scale, and adding a shift) corresponds to projecting on a suitably translated sphere when the learned scale parameters are all equal and non-zero. Hence, our results apply to this case. We suspect the same holds for arbitrary scale parameters, but we need to perform a more careful analysis to ensure the topology/geometry of the resulting compact space is homeomorphic/diffeomorphic to that of a sphere. **Matrix U time-varying** Our proof technique is based on projecting the dynamics along the eigenvector of U corresponding to the largest eigenvalue to obtain a scalar differential equation that is easier to analyze. When U is time-varying, the eigenvectors of U are time-varying. This has two consequences: 1) the consensus point is now time-varying; 2) the scalar ODE has one additional term coming from the time derivative of the eigenvector (this derivative is zero in the time-invariant case).
Provided the eigenvector changes slowly, all the tokens will asymptotically converge to the time-varying eigenvector and consensus is still reached. We are happy to discuss this extension in the final version if the reviewer finds it useful. **Theorem 3.2, tokens starting outside a hemisphere** The conclusions of Theorem 3.2 still hold when the tokens don't start in a hemisphere but enter a hemisphere at some future time. We proved that hemispheres are invariant (once the tokens enter one, they cannot leave) and thus the conclusion follows. In our experiments we always observed consensus, independently of the initial condition. **Experiments** Defining the function E in the appendix was an oversight that we will correct. We will also be happy to average the results and provide confidence intervals in the final version. **Collapse is less prominent for random weights/Removing periodicity seems to reduce collapse.** This is an interesting question. Our educated guess is that convergence is exponential when the matrices are constant (this is supported, under additional assumptions, by Theorem 6.3 in version 4 of https://arxiv.org/abs/2312.10794). One may speculate that the rate of convergence is related to the time derivative of the matrices: constant matrices have zero derivative and the fastest convergence, while random matrices have the largest derivative and the slowest convergence. The periodic case seems to lie in between the constant and random cases, but we do not have a good explanation for this. --- Rebuttal Comment 1.1: Comment: Thanks for the response to my questions, I will keep my positive score. I think it would be useful to discuss the time-varying $\mathbf{U}$ scenario in the paper, and, regarding that, how do you ensure that the eigenvectors change slowly through layers? In general, it is obviously not true, but perhaps there you can make some argument at initialization (for standard initialization techniques), using RMT.
Other than that, maybe it is possible to track it across different layers at different training steps, empirically. It would also be useful to add the LN comment in the paper; the appendix is fine, but the connection will be useful for the reader. Lastly, do you have any experiments where you see consensus when not starting in a hemisphere, as you say? I just see Fig. 2 where you do start inside; the other case should be there in the paper. --- Reply to Comment 1.1.1: Comment: Thank you for the positive score. We will make the discussed changes, including an empirical analysis of the eigenvalues and experiments with tokens starting outside a hemisphere.
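[Editor's note] The projection argument discussed in this thread — tokens reaching consensus at the eigenvector of U associated with the largest eigenvalue — can be illustrated numerically. The following is a toy sketch of our own, not the paper's model or code: it uses uniform attention weights, a diagonal symmetric U, and an explicit Euler discretization of the tangential (sphere-constrained) dynamics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Symmetric, time-invariant value matrix with a unique top eigenvector e1.
U = np.diag([2.0, 1.0, 1.0])

# Eight unit-norm tokens started in the hemisphere x[0] > 0.
X = rng.normal(size=(8, 3))
X[:, 0] = np.abs(X[:, 0]) + 0.1
X /= np.linalg.norm(X, axis=1, keepdims=True)

dt = 0.1
for _ in range(2000):
    m = X.mean(axis=0)                 # uniform attention weights, for simplicity
    drift = m @ U                      # value matrix applied to the mean token
    # Keep only the component tangential to the sphere, then re-project.
    X = X + dt * (drift - (X @ drift)[:, None] * X)
    X /= np.linalg.norm(X, axis=1, keepdims=True)

# All tokens agree, and the consensus point is the top eigenvector of U.
assert np.allclose(X, X[0], atol=1e-3)
assert abs(X[0, 0]) > 0.99
```

With the tokens started in a hemisphere, they reach consensus at the top eigenvector of U; the time-varying-U case discussed above would amount to replacing U with U(t) inside the loop.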
SOLD: Slot Object-Centric Latent Dynamics Models for Relational Manipulation Learning from Pixels
Accept (poster)
Summary: The paper proposes a model-based RL (Dreamer-like) algorithm that utilizes pre-trained (and then fine-tuned) slot-attention based object-centric representations as the underlying state representation, in contrast to the standard single-vector representation (holistic) typically employed when learning from pixels. The proposed approach outperforms or performs comparably to recent model-based RL approaches. Claims And Evidence: * The paper claims model-based RL can improve sample efficiency over model-free RL, yet does not compare with one. In addition, there are no sample-efficiency analyses. * There are no comparisons with model-free RL object-centric approaches. While the authors cited several works like SMORL (https://arxiv.org/abs/2011.14381) and Haramati et al. (https://arxiv.org/abs/2404.01220v1), I believe an empirical comparison should be done, especially if the authors wish to claim their method is more sample-efficient (I’ll note that both methods are goal-conditioned, but it seems like it should be possible to apply them to the environments in this paper, and to apply SOLD in their environments as well). I also believe that the environments in both mentioned works are more challenging than the ones used in this work. * Generalizability: I’m worried about the generalizability of this method to more pixel-wise challenging tasks where SAVi (DDLP - https://arxiv.org/abs/2306.05957), and slot-attention in general (DINOSAUR - https://arxiv.org/abs/2209.14860), do not work well and fail to decompose the scenes. * The claim (line 326) “This part-whole segmentation highlights the ability of slots to meaningfully identify and represent separate parts of a larger object, such as the gripper jaws of the robot” is not well-positioned. Do the authors claim that the ability to segment objects like humans (e.g., whole gripper instead of joints) is better for control?
Usually state-based representations in robotics include the positions of the joints, so one could claim that aggregating all the joints into one object might actually hurt the performance. Methods And Evaluation Criteria: * The method makes sense for the problem, as do the datasets. * However, I find the environments and tasks used in this work quite simple (also visually simple), and more challenging environments/tasks would help provide more evidence for the claims in this work. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: My only issue, as mentioned above, is that I find the experimental benchmark not challenging and that the chosen benchmark does not shed much light with respect to sample-efficiency. Supplementary Material: I reviewed the supplementary material and I appreciate the authors’ effort in making it detailed. Relation To Broader Scientific Literature: * The main contribution of this work is extending slot-based video prediction models (OCVP) with actions and rewards to make them world models. * While I find this a solid contribution, I don’t think the performance of the proposed method, nor the chosen benchmark, is impressive enough to be convinced regarding the claims and the promise of the method. Essential References Not Discussed: I would like to point out a concurrent work on object-centric world models: OC-STORM (https://arxiv.org/abs/2501.16443). However, the methods are very different. Other Strengths And Weaknesses: **Strengths**: * Open-source code (!) * Very detailed appendix and a nice project site. * The paper reads well and is easy to follow. **Weaknesses/Limitations**: * Reading all the training details, it seems the method is very brittle and requires a lot of tuning (e.g., learning rates and clipping values for each component).
* Performance: I like the idea of the paper and I believe there is a lot of promise in unsupervised object-centric representations, especially for control tasks, and I was surprised that the performance is not much better compared to non-object-centric baselines in most environments. I do wonder if the problem is the choice of easy/simple tasks/environments or the choice of underlying object-centric representations (slot-attention). Other Comments Or Suggestions: None Questions For Authors: * (Repeated from above) The claim (line 326) “This part-whole segmentation highlights the ability of slots to meaningfully identify and represent separate parts of a larger object, such as the gripper jaws of the robot” is not well-positioned. Do the authors claim that the ability to segment objects like humans (e.g., whole gripper instead of joints) is better for control? Usually state-based representations in robotics include the positions of the joints, so one could claim that aggregating all the joints into one object might actually hurt the performance. * Line 401: the authors report the returns on Cartpole-Balance and Finger-Spin, but without any comparison these numbers are meaningless. Should the reader have knowledge of other algorithms’ performance on these tasks? * Appendix, line 737: what does “whose initialization is learned via backpropagation” mean? How does one learn an initialization? * Ablations: have the authors ablated the choice of positional encoding, or just used ALiBi by default? Similarly, the number of register tokens? * What is the effect of freezing/fine-tuning the SAVi backbone quantitatively? * Stability: from personal experience, Slot-Attention can be very hard to get working (as has also been shown in many previous works, like DINOSAUR and DDLP mentioned above), and not very consistent (i.e., different runs with the same hyper-parameters, but sometimes Slot-Attention does not provide a good decomposition into slots).
This is more evident on more visually complex/real-world datasets. I’m curious regarding the authors’ experience with SAVi’s stability in that sense, was it common to have runs where SAVi did not provide good decompositions (e.g., multiple objects in a single slot)? * How long does it take to train the model (roughly, wall-clock hours)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We want to thank the reviewer for appreciating the potential of our idea and the clarity of the paper. We address your questions below. ***"I’m worried about the generalizability of this method to more pixel-wise challenging tasks where SAVi fails to decompose the scenes."*** Please see our response to Reviewer tPrN (Question 2) regarding challenges with OCRs in complex scenes and potential future work. SOLD itself is designed to be agnostic to the specific OC model, allowing it to benefit from future OCR improvements. ***"The claim (line 326) 'This part-whole segmentation highlights the ability of slots to meaningfully identify and represent separate parts of a larger object, such as the gripper jaws of the robot' is not well-positioned."*** Our intention with the phrasing “part-whole segmentation” was to highlight SAVi's ability to learn nuanced representations by assigning distinct slots to the gripper jaws (visualized in brown/gray in Figure 3) separate from the main robot arm body (orange). We view this emergent separation as beneficial, particularly as the gripper jaws have independent actuation via a dedicated component of the action vector, making their distinct representation potentially valuable for learning control. We did not intend to claim this specific granularity is universally superior, and indeed, as shown in Figures 10 and 11 in the Appendix, SAVi can represent the robot arm at varying levels of subdivision depending on the visual prominence and context. We will revise the phrasing in the paper for better clarity. 
***“I find the environments and tasks used in this work quite simple (also visually simple) and more challenging environments/tasks would help provide more evidence for the claims in this work.”*** While we intentionally limit the visual complexity to study control learning when OCRs are obtainable with current methods (see also our response to Question 1 of Reviewer o1iz), we agree that addressing more visually complex scenarios will be an interesting avenue for future work. **Why is the performance ***“not much better compared to non-OC baselines in most environments”***?** While SOLD outperforms baselines overall and especially on distinct tasks (Table 1), we deliberately included easier tasks (Reach/Push-Specific) to paint a fuller picture of the level of complexity at which OCRs yield payoffs and to be fair to the baseline methods. We found that the upfront cost of pre-training OCRs pays off as complexity increases. On the easiest Reach-Specific task, TD-MPC2 converges quickly (within the SAVi pre-training window), making the effort to acquire OCRs less warranted *for that specific task.* TD-MPC2 (which operates reconstruction-free) can rapidly extract the minimal necessary information (target/end-effector position) to solve the task. We feel that leaving out such configurations would distort the picture of where OCRs, accounting for the cost of both pre-training and behavior learning, yield significant improvements. ***“What does 'whose initialization is learned via backpropagation' mean?”*** This refers to a learned parameter vector that serves as the slots' initial value and is trained with back-propagation like any other model parameter.
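[Editor's note] For readers unfamiliar with learned slot initializations, the following is a minimal sketch of the general idea, not the code in the linked repository: the initial slots form a single (num_slots, slot_dim) parameter tensor that is broadcast across the batch, and gradients from every batch element flow back into that one shared parameter. The quadratic toy loss, manual gradient, and learning rate below are hypothetical stand-ins for the full model's loss and optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)

num_slots, slot_dim, batch_size, lr = 4, 8, 16, 8.0  # lr tuned for this toy loss

# The learned initialization: one parameter tensor shared by all inputs.
init_slots = rng.normal(size=(num_slots, slot_dim))

# Toy targets standing in for whatever the downstream loss prefers the
# initial slots to be (in the real model, the training signal reaches
# init_slots by backpropagation through slot attention).
targets = rng.normal(size=(num_slots, slot_dim))

for _ in range(100):
    # Broadcast the shared initialization across the batch.
    slots = np.broadcast_to(init_slots, (batch_size, num_slots, slot_dim))
    # Toy loss: mean squared error to the targets; gradient written by hand.
    grad_slots = 2 * (slots - targets) / slots.size
    # Gradients from every batch element accumulate into the one shared
    # parameter (sum over the batch dimension), i.e., a manual SGD step.
    init_slots = init_slots - lr * grad_slots.sum(axis=0)

assert np.allclose(init_slots, targets, atol=1e-3)
```

The point of the sketch is only that the initialization is itself a trainable parameter, optimized jointly with the rest of the model rather than fixed or sampled anew per input.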
We link the implementation at https://anonymous.4open.science/r/sold-rebuttal/slot_initialization/learned.py ***“Ablations: have the authors ablated the choice of positional encoding, or just used ALiBi by default?”*** We added ablations at https://anonymous.4open.science/r/sold-rebuttal/ABLATIONS.md ***“I’m curious regarding the authors’ experience with SAVi’s stability [...]?”*** Please see our answer to Reviewer tPrN (Question 3). **How long does it take to train the model?** On a single A100 (40GB): SAVi pre-training takes ~2-3 days. SOLD training takes up to ~10 days for the longest tasks (Push/Pick, 50 steps per episode). For comparison, DreamerV3 (optimized JAX code, holistic representation) took ~4.5 days. **What is the effect of freezing/fine-tuning the SAVi backbone quantitatively?** We added experiments comparing these two settings quantitatively: https://anonymous.4open.science/r/sold-rebuttal/SAVI_FINETUNING.md Interestingly, there is only a small drop in performance from the fine-tuned to the frozen SAVi model. We hypothesize this to be due to the SAT model, which can leverage a long history of slots, compensating for deteriorating representations. However, we chose to emphasize again the strong qualitative results in Figures 14 and 15 because they clearly indicate a potential failure mode of current OCRL approaches (e.g., SMORL, EIT), which pre-train the OC model and freeze it for the downstream RL task. We believe highlighting this issue and exploring fine-tuning as a strategy to maintain representation quality under distribution shift is an important discussion point for the OCRL community. --- We hope our responses have adequately addressed your concerns, and we would be grateful if you would consider updating your score. --- Rebuttal Comment 1.1: Comment: Thank you for your response and clarifications.
After reading the response, the other reviews and responses, some of my concerns regarding ablations have been addressed, but the concerns regarding the choice of the object-centric algorithm and performance remain. In addition, it seems that there is a general agreement between reviewers regarding the weaknesses, and specifically, it seems that reviewer tPrN and I are aligned. As such, I'm going to keep my score for now. --- Reply to Comment 1.1.1: Comment: We are pleased to have addressed your concerns regarding the SAT architecture ablations and SAVi fine-tuning. We appreciate the opportunity to address the performance of SOLD compared to the baseline methods, as well as the choice of the OC algorithm below. **Regarding the choice of the OC encoder-decoder framework.** In the following we outline why we deliberately chose to use SAVi as the OC encoder-decoder method in our work. The goal of our research is to provide insight into the potential of MBRL from OCRs when such representations can be well learned with current methods. In addition to SAVi, we have experimented with the use of STEVE (https://arxiv.org/pdf/2205.14065) and VideoSAUR (https://arxiv.org/pdf/2306.04829). However, we found that the size of the pre-training dataset was the central factor in determining the quality of the decompositions for all three methods. Because STEVE and VideoSAUR add complexity to the OCR learning problem and make it difficult to verify the isolation of object-specific information in each slot (this is because STEVE uses an autoregressive transformer decoder instead of predicting per-slot alpha masks, and VideoSAUR predicts ViT embeddings at the patch level), we found SAVi to be the most appropriate model for achieving the goals of our investigation, namely, to study behavior learning from OCRs that sufficiently disentangle visual scenes, and to explore interpretable patterns in the learned attention weights of the model.
To this end, we achieved the goal of demonstrating improved relational reasoning and interpretability. In summary, while SOLD itself is designed to be agnostic to the specific OCR learning method used, beyond the ability to convert visual observations into a set of vectors, we explicitly considered this decision and are convinced that SAVi is the best method for the goals of this investigation. However, we agree that extending our work to real-world and visually complex data is an interesting avenue where the agnostic design of SOLD will be beneficial. Nevertheless, we consider this to be a distinct investigation of learning from complex, realistic visual data, separate from the goals of this work. **Regarding the performance compared to the baseline methods.** The overall performance gap between SOLD (82% success rate) and the second best method DreamerV3 (56.4% success rate) is just over 25%, which we consider a significant improvement. This performance gap is achieved despite our choice to include simpler problems, such as the Specific task variants, where limited OC reasoning is required to solve the control problem. Nevertheless, we are able to show that SOLD significantly outperforms SoTA non-OC methods across the considered task suite. Moreover, SOLD is able to learn the Reach-Distinct-Groups task, where no baseline method currently makes progress, which speaks to its difficulty and the ability of SOLD to open up new relational reasoning capabilities. While we agree that considering even more complex tasks is a valuable investigation for future work, we believe that this outperformance is significant and demonstrates the utility of SOLD’s OC structure. --- Finally, we want to thank the reviewer for their detailed feedback on our work, which has helped us to clarify the rationale for design and evaluation decisions regarding both the OC encoder-decoder model and the subsequent behavior learning. 
Since the reviewer mentions to be in agreement with reviewer tPrN regarding remaining weaknesses, we also kindly point to our reply to the rebuttal comment of reviewer tPrN, which discusses remaining limitations and open questions in detail. We sincerely hope that our response addresses your remaining concerns. If it does, we would greatly appreciate your consideration in updating your score.
Summary: The manuscript introduces the slot object-centric latent dynamics (SOLD) model, a reinforcement learning (RL) algorithm that leverages an object-centric latent world model, which is learned directly from pixels, for behavior learning. The world model is an extension of object-centric video prediction (OCVP), a model that leverages slot attention for video (SAVi) for object-centric representation discovery and a transformer-based prediction model with two attention mechanisms for separately capturing temporal information of the same object and relational information about interactions between objects. The world model in SOLD differs from OCVP in that 1) it conditions the transformer’s predictions on past and current actions in addition to the observation history, 2) SAVi is pre-trained on random observation sequences from random episodes, and 3) a reward prediction model is added to enable behavior learning purely from imagined trajectories. For behavior learning, SOLD uses an approach very similar to DreamerV3. The proposed method is evaluated against DreamerV3, TD-MPC2, and SOLD w/ a CNN encoder instead of SAVi on a suite of eight object-centric robotic control environments, exhibiting superior performance. The method is also shown to work on environments that are not designed with object-centric learning in mind. ## update after rebuttal The authors' rebuttal addressed some of my concerns. However, I share the concern with other reviewers that the evaluation is not thorough enough. For instance, the authors mentioned that hyperparameters for baselines were copied from prior work which used different environments, while for SOLD some hyperparameters are tuned. Comparison on more established benchmarks would make the results more convincing. I still believe the work contributes valuable insight for the community and lean towards acceptance. Claims And Evidence: * The model is shown to work well on the considered relational reasoning tasks. 
* “We introduce SOLD, the first object-centric MBRL algorithm to learn directly from pixel inputs [...]” I believe several works mentioned in the “Essential References Not Discussed” section of this review are implementing object centric world models that are trained with pixel supervision. Please clarify or adjust this claim. * Regarding SOLD outperforming state-of-the-art methods (SOTA): Those methods are SOTA on other benchmarks than the ones used in this study. The claim should be more narrow (i.e. on a new benchmark suite focusing on relational reasoning) or supported by additional comparisons on widely-used benchmarks. Methods And Evaluation Criteria: At a high level SOLD replaces the DreamerV3 world model with one based on OCVP. The method is compared with DreamerV3, TD-MPC2 and SOLD w/o object-centric representation on a suite of robotic control tasks that require basic relational reasoning skills. The level of relational reasoning is limited to one-vs-all comparisons of color or spatial relation between the arm and one or two objects. How does SOLD compare with the baselines on the two tasks from the Meta-World benchmark or other more commonly used tasks? Theoretical Claims: N/A Experimental Designs Or Analyses: * I could not find a description of how hyperparameters were selected * Appendix E mentions that the capacity of the Non-Object-Centric Baseline was increased for fair comparison. Did you evaluate whether that actually helps its performance? Model size or capacity is not always the best criterion for fairness. Is the CNN a standard architecture used in similar environments? Supplementary Material: I had a brief look at the provided behavior and open loop prediction videos. Relation To Broader Scientific Literature: According to my understanding the present manuscript is not the first to propose slot-based world models. The method’s differences to other object-centric world models should be discussed in (more) detail. 
Essential References Not Discussed: Given the state of object-centric representations being underexplored in RL, as mentioned in the paper, I’d suggest to briefly discuss the relation to the following works: 1. Biza, O., Platt, R., van de Meent, J. W., Wong, L. L., & Kipf, T. Binding Actions to Objects in World Models. In ICLR2022 Workshop on the Elements of Reasoning: Objects, Structure and Causality. 2. Collu, J., Majellaro, R., Plaat, A., & Moerland, T. M. (2024). Slot Structured World Models. arXiv preprint arXiv:2402.03326. 3. Heravi, N., Wahid, A., Lynch, C., Florence, P., Armstrong, T., Tompson, J., ... & Dwibedi, D. (2023, May). Visuomotor control in multi-object scenes using object-aware representations. In 2023 IEEE International Conference on Robotics and Automation (ICRA) (pp. 9515-9522). IEEE. 4. van Bergen, R. S., & Lanillos, P. (2022, September). Object-based active inference. In International Workshop on Active Inference (pp. 50-64). Cham: Springer Nature Switzerland. 5. Zadaianchuk, A., Seitzer, M., & Martius, G. (2020). Self-supervised visual reinforcement learning with object-centric representations. arXiv preprint arXiv:2011.14381. Other Strengths And Weaknesses: * The paper is very well organized and explains motivation, methodology and experimental setup clearly. * Providing performance comparisons with the baselines on more established benchmarks and a discussion of more works in the intersection of object-centric representations and RL would significantly improve the paper and I'd probably increase the score. Other Comments Or Suggestions: The below citation only shows the first of multiple authors without “et al.”: Cho, K. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014. 
Questions For Authors: On potential limitations: * How could one address the problem of slot attention potentially not separating objects very well in more complex environments? What other methods for object-centric representations have you considered? * Does slot attention occasionally collapse to degenerate solutions? * Have you considered partially observable environments (beyond simple occlusions)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We want to thank the reviewer for acknowledging the clarity of our paper and methodology and for the valuable references, which we will discuss in the final version. In the following, we aim to address your questions. ***"The method’s differences to other OC world models should be discussed in (more) detail."*** While we discuss these references in detail in the final version, we want to briefly point out differences to our method here: - While Biza et al. (https://arxiv.org/pdf/2204.13022) consider the problem of adding actions to an OC world model, they do so only for action-cond. prediction, not for behavior learning with MBRL (“A dataset of expert demonstrations is available [...] and the model is evaluated on its ability to predict the block positions”, Section 4.3). - Collu et al. (https://arxiv.org/pdf/2402.03326) learn a slot-attention (SA)-based OC dynamics model. While it is referred to as a world model, no reward prediction, action-conditioning or behavior learning is done. - Heravi et al. (https://arxiv.org/pdf/2205.06333) use OCR learning on robotic manipulation tasks, showing that SA-based representations yield performance improvements for object location prediction. However, they consider neither world models nor RL (“We only consider the imitation learning setup and learn a policy from a dataset of demonstrations”, Section IV.E). - van Bergen et al. (https://arxiv.org/pdf/2209.01258) consider the prediction of video frames on the basis of OCRs but follow the active inference framework. - While Zadaianchuk et al. (https://arxiv.org/pdf/2011.14381) use SCALOR to infer OCRs from pixel inputs, they use model-free RL to learn their policies (“our policy can be trained with any goal-conditional model-free RL algorithm”, Section 4.2). 
We want to make sure to depict the novelty of our method accurately and to give credit to all prior works, so we would greatly appreciate the reviewer’s opinion on the rephrased contribution to differentiate SOLD from prior works given in response to Reviewer kT4P (Question 1). ***"How could one address the problem of SA potentially not separating objects well in more complex environments? What other methods for OCRs have you considered?"*** While the objective of our research is to provide insights into the potential of MBRL from OCRs when such representations can be learned well with current methods, rather than to improve the OCR learning method itself, we concur that learning meaningfully separated objects on complex visual data is a central point for expanding the range of applications of our method. In addition to SAVi, we have experimented with using STEVE (https://arxiv.org/pdf/2205.14065) and VideoSAUR (https://arxiv.org/pdf/2306.04829). However, we observed that the size of the pre-training dataset was the central factor in determining the quality of the decompositions for all three methods. Because STEVE and VideoSAUR add complexity to the OCR learning problem and make it challenging to verify the isolation of object-specific information in each slot (this is because STEVE uses an autoregressive transformer decoder instead of predicting per-slot alpha masks, and VideoSAUR predicts ViT embeddings at the patch level), we found SAVi to be the most suitable model to achieve the goals of our investigation. Namely, to study behavior learning from OCRs that sufficiently disentangle visual scenes, and to explore interpretable patterns in the model's learned attention weights. However, there are several avenues to extend our work in the future to apply it to visually complex data, including the prediction of ViT embeddings instead of pixels (especially for larger images), and the use of vision-foundation models to guide slot initializations. 
**Does SA occasionally collapse to degenerate solutions?** SAVi training can occasionally collapse (e.g., poor object separation despite good reconstruction) due to misconfiguration (not enough slots), insufficient pre-training data, or seed dependence. See examples: https://anonymous.4open.science/r/sold-rebuttal/SAVI_FAILURE_CASES.md. However, we found SAVi robust during fine-tuning once converged, which is valuable for MBRL. **Have you considered partially observable environments?** Occlusions create the need for challenging long-horizon reasoning in the considered tasks. A key objective of our work was to ascertain whether the proposed models are able to successfully resolve the resulting ambiguities. Additionally, occlusion, as an object-dependent source of partial observability, enables interpretable visualization of the attention patterns to inspect how the model retrieves missing information. Investigating other sources of partial observability is important future work, especially for real-robot deployment. --- We hope that we have been able to address open questions and better situate SOLD in the OC-RL landscape. If so, we would be thankful if you considered updating your score. --- Rebuttal Comment 1.1: Comment: Thank you for providing these thoughtful clarifications! I'm still hesitant to increase the score, since the authors did not comment on the following items mentioned in my review: 1. Regarding SOLD outperforming state-of-the-art methods (SOTA): Those methods are SOTA on other benchmarks than the ones used in this study. The claim should be more narrow (i.e. on a new benchmark suite focusing on relational reasoning) or supported by additional comparisons on widely-used benchmarks. 2. Providing performance comparisons with the baselines on more established benchmarks 3. I could not find a description of how hyperparameters were selected 4. 
Appendix E mentions that the capacity of the Non-Object-Centric Baseline was increased for fair comparison. Did you evaluate whether that actually helps its performance? Model size or capacity is not always the best criterion for fairness. Is the CNN a standard architecture used in similar environments? --- Reply to Comment 1.1.1: Comment: We are glad the reviewer found our clarifications helpful. We were constrained by space and appreciate the opportunity to address the remaining questions. **The claim regarding outperformance of SoTA methods should be more narrow or supported by additional comparisons on widely used benchmarks.** We want to thank the reviewer for raising this important point, helping us to ensure that our claims reflect the obtained results accurately. We do not mean to claim that SOLD is universally superior to SoTA RL methods. Instead, akin to prior works investigating OCRs in RL (https://arxiv.org/abs/2302.04419, https://arxiv.org/abs/2011.14381, https://arxiv.org/pdf/2404.01220), SOLD is able to make progress on *specific but crucial* abilities, namely relational reasoning and interpretability. While interpretability is not easily quantifiable, we are encouraged that all reviewers highlighted and appreciated this aspect of our work - an aspect that is rarely considered in RL approaches and where we believe SOLD adds value to the current MBRL landscape. Relational reasoning, on the other hand, is a measurable skill and long recognized as a cornerstone of human intelligence. To evaluate this, we introduced the proposed benchmark. Importantly, for tasks that require reasoning over multiple objects, we are able to make improvements to the efficiency with which such tasks can be learned, which become more pronounced as the difficulty of the task increases. To accurately reflect these results, we will revise the description of our contribution according to the reviewer's suggestions to make these aspects more explicit in the final version. 
In addition, we will ensure that the role of the non-OC tasks studied is more clearly outlined as a means of demonstrating the potential of our OC dynamics model to generalize to such scenes, not that OC reasoning generally is superior for solving the associated control problems. **How were the hyperparameters (HPs) selected?** Since an exhaustive HP search is not feasible, we indicate values chosen by us through experimentation in square brackets, with bold numbers indicating the chosen values, while other HPs were adopted based on previous work, with the rationale explained following the respective tables.

| SAVi HPs | Value |
|:---|:---|
| Slot Dim. $D_Z$ | [64, **128**, 256] |
| Num. Slots $N$ | 2-10 |
| Slot Initialization | [Learned-Random, **Learned**] |
| SA Iterations | 3/1 |

We consider both SAVi and OCVP to select standard HPs. These include the number of SA iterations, the architecture of the models employed by SAVi, and the batch size used for training. We run experiments with both initialization strategies that SAVi introduces and find that Learned leads to better specialization of the slots and thus better decomposition of the visual scene on our problems, i.e., better isolation of information in the slots, a property we rely on to study control learning and to interpret how the model retrieves information.

| Dynamics Model HPs | Value |
|:---|:---|
| Num. Layers | 4 |
| Token Dim. | 256 |
| Residual | True |
| Teacher-forcing | False |

For the OC dynamics model, we select HPs on the basis of ablations already performed in the OCVP paper, which evaluated residual vs absolute prediction of slots, the use of teacher-forcing, and the number of transformer layers (Table 3 in the OCVP paper), which we use as a reference for our act-cond. prediction model. Since we found this model to be capable of accurately modeling OC dynamics, no further HP search was performed on its architecture. For the SAT, we use the OC dynamics model as the reference for the standard HPs. 
Additionally, we have added ablations regarding the employed positional encoding and the prediction of scalar values. ***”[...] the capacity of the Non-OC Baseline was increased for fair comparison. Did you evaluate whether that actually helps its performance? Is the CNN a standard architecture used in similar environments?”*** Yes, the baseline is a standard CNN autoencoder architecture similar to that used in SAVi or DreamerV3. While the performance difference to SOLD is substantial, we agree that it is helpful to ablate whether capacity is the best metric for a fair comparison and we are running ablations to compare the performance of using a larger vs standard SAT architecture. At this time, it is too early to evaluate the exact results, but the performance appears to be similar. For the final version, we will add whichever method performs better and explain this ablation in the Appendix. Finally, we want to thank the reviewer for their detailed feedback which has been substantially helpful to better situate SOLD in the full landscape of related work, ensure our claims precisely reflect the obtained results, and add experiments and details to clarify questions. We sincerely hope that our response has addressed your remaining concerns, and we would greatly appreciate your consideration in updating your score.
Summary: The paper proposes SOLD, a method for model-based reinforcement learning (MBRL) with object-centric world models. The paper is mainly a combination of ideas from OCVP (object-centric video prediction) and DreamerV3-style MBRL, with the latent dynamics model shaped to be object-centric. The main motivation is to use object-centric representations as an inductive bias to improve sample efficiency and interpretability. For evaluation, the authors compare their methods with DreamerV3, TD-MPC2 and a variant of their architecture that excludes the object-centric representation. They build their own environments on top of what seems to be Gymnasium-Robotics to create object-centric manipulation tasks. Their approach proves to be equal to or better than the baselines and also successful on some select benchmark tasks from DMC and MetaWorld. ## update after rebuttal The authors addressed some of my concerns, but not all (missing MFRL baselines and standardized benchmarks). Nonetheless, their arguments and new evaluations are sufficient to raise my score. Claims And Evidence: The paper claims to be the first to do object-centric MBRL, which to the best of my knowledge is true. In addition, the authors claim that object-centric representations are a good inductive bias to improve 1) interpretability and 2) sample efficiency of MBRL. The qualitative results shown by the authors in terms of slot rollouts do support the claim of improved interpretability. However, sample efficiency is hard to judge given that the authors only demonstrate this feature on environments that they made. It is then hard to say whether the environment was created in a way that favors the method or if the method is generally more sample efficient. Methods And Evaluation Criteria: The proposed method makes absolute sense. Object-centric representations are long overdue for MBRL and are a great idea to improve interpretability, performance, and sample efficiency of RL methods. 
The evaluation methods are not sufficient. The authors did a good job on the research direction and method implementation side of things. However, they fail to properly validate their proposed method. It is crucial to evaluate the proposed method on standard RL and manipulation benchmarks (e.g., MetaWorld, Robosuite, RLBench...). Most manipulation benchmarks require object-centric representations since manipulation by definition requires the robot to manipulate some objects in its environment. The choice of baselines is adequate, though it is quite minimal (check my comment in the weakness section). Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Experimental design lacks using standard benchmarks from the manipulation literature. Supplementary Material: The additional open-loop rollout figures are quite nice to showcase the inner workings of the method. In addition, the implementation details are good for reproducibility. My only concern in the supplementary material is how shallow the related work on MBRL is; it only looks at very recent works from the field, while MBRL is way richer than these new methods. Relation To Broader Scientific Literature: The paper is a great combination of OCVP and DreamerV3, bringing recent ideas from video prediction to model-based RL for visual environments. Essential References Not Discussed: The related work section is missing some key works from the MBRL literature, to name a few: [1] Deisenroth, Marc, and Carl E. Rasmussen. "PILCO: A model-based and data-efficient approach to policy search." Proceedings of the 28th International Conference on machine learning (ICML-11). 2011. [2] Nagabandi, Anusha, et al. "Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning." 2018 IEEE international conference on robotics and automation (ICRA). IEEE, 2018. 
Other Strengths And Weaknesses: **Strengths:** - Using object-centric representations for MBRL is a great idea and a very promising direction. - The paper is well-written and easy to follow. - The method includes multiple key design choices that enabled its performance on the target tasks. - The proposed approach improves interpretability of MBRL, which is a major claim of the work that is well-supported by the experiments. - The evaluation showcases the improvement in sample efficiency on the chosen tasks. **Weaknesses:** - My main concern is with the evaluation of the work. The authors should include some evaluation on more standard benchmarks that are also object-centric, such as the ones from RLBench, MetaWorld, and Robosuite. The authors show the success of their method on some select standardized benchmarks but they do not compare their method to baselines in these domains. - The paper lacks ablations of the key design choices proposed in the paper. - The choice of baselines is minimal (sufficient to validate the claims, but not sufficient to highlight the benefits of the paper). I would suggest adding a baseline on model-free RL with the object-centric representations. - While the method is interesting, the novelty of the work is quite limited by the fact that it is a mere combination of OCVP and DreamerV3. Other Comments Or Suggestions: None Questions For Authors: - Given that existing benchmarks include object-centric tasks, what was the main motivation for building custom environments to evaluate the method? From an outsider's perspective, and without this choice being properly motivated, it sounds like the environments were built to favor the proposed method. - How would your method perform on environments with more complex action spaces? Since action-conditioning is one novelty from the side of object-centric video prediction, it would be interesting to see this working on more complex action spaces. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We want to thank the reviewer for the detailed feedback. We are encouraged that they found our idea strong and clear, and that they appreciate the improvement in interpretability and sample efficiency. We aim to answer remaining questions and concerns below, but will incorporate all feedback for the paper (such as the discussion of key works from the model-based RL (MBRL) literature) into the final version. **Why are the evaluations performed on non-standard benchmarks/ custom environments?** We agree that comparative evaluation on diverse standard benchmarks, such as RLBench, is a crucial direction to explore. Our motivation for the selected evaluation is twofold: (1) feasibility with current OC representation (OCR) learning methods and (2) explicit investigation into relational reasoning capabilities. At the moment, many of these benchmarks are still beyond the capabilities of OC methods. The main purpose of our work is to evaluate the benefit OCRs can bring to learning control policies on environments where current methods can learn sufficiently disentangled representations to accurately investigate this question. Moreover, while we agree that all robotic manipulation is at its core object-centric, we are interested in specifically studying skills that go beyond the abilities of current methods operating on holistic representations (akin to prior work investigating OCRs in RL (https://arxiv.org/abs/2302.04419, https://arxiv.org/abs/2011.14381, https://arxiv.org/abs/2404.01220)). Improved relational reasoning, characterized as a cornerstone of human intelligence, is such a skill that is insufficiently challenged in standard benchmarks and that becomes increasingly important as RL agents are tasked to perform tasks based on high-level semantic instruction. 
**Did you perform ablations of key design choices?** We have added ablations to the key design choices here: https://anonymous.4open.science/r/sold-rebuttal/ABLATIONS.md **I would suggest adding a baseline on model-free RL with the OCRs.** We thank the reviewer for characterizing the comparative baselines we employed as adequate, but agree that adding an OC model-free (MF) RL method would add an interesting dimension to compare the performance of MB vs. MF methods operating on OCRs. Given the one-week timeframe for the rebuttal, implementing and evaluating such a baseline is beyond our means, and we have focused our efforts on the extensive ablations and additional experiments requested by the reviewers that were feasible within this time frame. We thank the reviewer for this suggestion and consider the comparison to OC-MFRL to be an interesting direction for future work. **Is the method ***"a mere combination of OCVP and DreamerV3"***?** We agree that extending OCVP to an action-conditional setting and designing an architecture to perform MBRL on the basis of this representation is central to our work. However, we believe that *“a mere combination of OCVP and DreamerV3”* undervalues the contribution. Both the action-conditional OC prediction and the MBRL on the basis of sequences of object slots in latent imagination (enabled by the SAT architecture) are novel contributions that address non-trivial challenges in OC-MBRL, which we believe to be of interest to the community. ***"How would your method perform in environments with more complex action spaces?"*** We appreciate this question, since accurately modeling the effects of actions is crucial for learning control. While we have included predictions on DM-Control environments, where the action-spaces are substantially different from the position + gripper control of our task suite, we aim to show the feasibility of our world model to apply to more complex action-spaces. 
Therefore, we have performed additional experiments on: 1. The Sketchy dataset (https://arxiv.org/pdf/1909.12200), which features 7-dimensional actions, extending the action-space of our environments by a rotational component, and 2. A custom Moving Shapes environment featuring multiple shapes of distinct colors, where the action-space encodes information about both *what* happens (e.g., move upwards) and *to which* object (e.g., the red cube) and allows for actions to simultaneously apply to all objects. On the Sketchy dataset, results show that our prediction model is capable of accurately predicting the movement of the robot even under gripper rotations. For the Moving Shapes environment, we observe that the dynamics model is able to correctly associate actions given as a single vector to the different objects they apply to. We encourage you to view the results of this investigation here: https://anonymous.4open.science/r/sold-rebuttal/COMPLEX_ACTION_SPACES.md --- We appreciate the reviewer's characterization of OCRs as long overdue for MBRL and their recognition of our idea. We hope that we have been able to address outstanding questions and concerns, and would be very grateful if you would consider updating your score. --- Rebuttal Comment 1.1: Comment: The authors addressed some of my concerns, but not all (missing MFRL baselines and standardized benchmarks). Nonetheless, their arguments and new evaluations are sufficient to raise my score. --- Reply to Comment 1.1.1: Comment: We are glad that we were able to clarify questions and address some of the concerns with the additional evaluations. We agree that evaluating on standard benchmarks and more visually complex data is an important consideration. However, as noted also by other reviewers, it is currently challenging to evaluate across standard RL benchmarks, largely due to the limitations of slot attention and OCR learning methods in general. 
Despite this challenging setting, we believe there is significant value in exploring the combination of RL and OCRs, as we are glad all reviewers noted, and we believe SOLD makes an important contribution in this direction. While we do not claim that, with current OCR learning methods, our approach is universally superior, we have been able to demonstrate the potential for both improved relational reasoning and interpretability, which we believe are crucial capabilities for embodied agents. Exploring further how OCRs can generalize to arbitrary visual RL settings is a fascinating problem for future work, but we believe it lies beyond the self-contained insights that SOLD can bring to the underexplored research area of OC-MBRL.

Finally, we would like to thank the reviewer for their insightful feedback, both for more clearly positioning our current work and contributions, and for outlining crucial points for investigation in future work.
Summary: The work focuses on the development of a novel model-based reinforcement learning (RL) algorithm that utilizes an object-centric representation of visual scenes. The authors extend the standard model-based approach of Dreamer by incorporating a slot representation generated using the SAVi transformer model. Instead of relying on the traditional Dreamer recurrent state-space model (RSSM), the authors employ the object-centric dynamics model OCVP. One of the key innovations introduced by the authors is a unique model for the reward function: the slot aggregation transformer (SAT). This model does not directly generate a scalar reward, but instead produces the logits of a softmax distribution from which the actual reward is sampled. Additionally, register tokens and ALiBi-based position encoding are employed. The authors conduct experiments in their custom environment involving a robotic arm and larger objects, comparing their approach with Dreamer and TD-MPC. They also extend their experiments to other platforms such as MetaWorld and simpler versions of DMControl.

## update after rebuttal

During the rebuttal phase, the authors conducted further experiments to evaluate the impact of SAT on the overall architecture and performed preliminary experiments on generalization. I agree with the authors that they have shown the promise of an object-centric approach that works at the level of monolithic approaches. However, this is not the first work in this field to demonstrate this. In general, I believe the proposed method has novelty, so I leave my assessment at Weak Accept.

Claims And Evidence: The work is written in clear and easy-to-understand language, although there are some issues with the notation and the introduction of the RL problem. The authors acknowledge some advantages of object-centric representations over monolithic ones, but they only discuss these advantages in a specific environment.
While it is possible to agree with some of the claims made by the authors, it is difficult to accept their assertion that this is the first approach based on an object-centric world model (see the discussion of the literature in this area below).

Methods And Evaluation Criteria: The authors use a combination of existing models, including Dreamer, SAVi, and OCVP. However, the main new contribution of their work is the SAT reward model, for which they have applied a number of novel techniques, resulting in a high-quality model. One challenge they faced was using their own environment, as it is common in the research community to use more established benchmarks such as RoboSuite, CausalWorld, and Crafter. Additionally, the experiments with MetaWorld and DM-Control were not thoroughly conducted and were not discussed in detail.

Theoretical Claims: The authors do not offer any theoretical analysis of the proposed model, but they generally justify the loss functions used.

Experimental Designs Or Analyses: The experiments conducted are not very convincing. Even in a setting designed specifically for their method, with large, clearly defined objects and without noise, the SOLD approach does not show a significant advantage over monolithic models in a standard setting, actually demonstrating slower learning. Additionally, the authors did not report the number of steps used to pre-train the OCVP model. Good results appear only in a very specific context. Analysis of the results and comparisons with other methods in the MetaWorld and DM-Control environments were not provided. Another major drawback is the lack of comparison to other object-centric model-free and MBRL approaches (as described in the literature).

Supplementary Material: Additional materials include videos of the resulting model working in different environments. The code is missing, which makes it impossible to evaluate the correctness of the implementation and experiments.
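For readers unfamiliar with this style of reward head, the categorical reward parameterization described in the summary can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the five-bin support, bin values, and logits are illustrative choices, not the paper's actual configuration.

```python
import numpy as np

# Numerically stable softmax over a 1-D logit vector.
def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

# Hypothetical discrete reward support and logits; in the described model the
# logits would come from a linear head on top of aggregated slot features.
reward_bins = np.linspace(-1.0, 1.0, 5)        # [-1.0, -0.5, 0.0, 0.5, 1.0]
logits = np.array([0.1, 0.2, 3.0, 0.2, 0.1])   # illustrative head output

probs = softmax(logits)                         # categorical distribution over bins
expected_reward = float(probs @ reward_bins)    # mean of the distribution
sampled_reward = np.random.default_rng(0).choice(reward_bins, p=probs)
```

Predicting a distribution over reward bins rather than regressing a scalar lets the model represent multimodal or uncertain rewards; a scalar estimate can still be recovered as the distribution's mean.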
Relation To Broader Scientific Literature: The authors do not mention all the available work on object-centric representations in RL. The authors especially miss such works in the field of MBRL, for example, GOCA (see the references below). At a minimum, these need to be discussed, rather than claiming that such works do not exist.

Essential References Not Discussed: The authors are encouraged to discuss and conduct an experimental comparison with the following approaches:

1) Model-based: GOCA (https://arxiv.org/abs/2310.17178)
2) Model-free: OCRL (cited as Yoon et al., 2023), OC-SA (https://arxiv.org/pdf/2208.03374), and OCARL (https://arxiv.org/abs/2210.07802)

Other Strengths And Weaknesses: I am very impressed by the object-centric approach and agree with the authors that it is a promising area of research. However, there are currently some difficulties in applying it in interesting and acceptable environments within the RL community, largely due to limitations of slot attention and visual self-supervised learning methods in general. In this regard, it is important to mention the work done in this area and to avoid expanding the range of specific environments. Instead, it would be better to focus on standard environments like CausalWorld or MetaWorld, which would allow for more reliable comparisons and evaluations. I would recommend that the authors conduct their experiments using these environments as a basis, keeping the custom environment as a demonstration and additional reference. Additionally, I would like to see evidence of the benefits of object-based learning, such as improved generalization and transferability to other tasks.

Other Comments Or Suggestions: The authors did not formally state the RL problem itself, and part of the notation is not introduced or explained, which makes the article difficult to read. For example, the concepts of $R_t$, $e_\eta$, etc. are not introduced.
Also, when describing the method, it is difficult to separate the authors' contributions from previously used techniques introduced back in DreamerV3.

Questions For Authors:
1) How do the authors assess the possibilities of generalizing object-centric models, for example, to changed colors?
2) How is the required amount of data for OCVP pre-training estimated?
3) I would like to see an ablation study on the SAT-related part of the model - how will it work with and without it?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We want to thank the reviewer for acknowledging that our approach, and the combination of object-centric (OC) representations and RL generally, is interesting and a promising research area. Further, we want to thank them for acknowledging the SAT architecture as one of the key novel contributions of our work. We address the raised questions below and will incorporate all the feedback for the paper, and the provided references to related work, into the final version.

**Is this ***"the first approach based on an OC world model"***?** We want to thank the reviewer for raising this important point and providing a reference to the GOCA paper. While GOCA (called ROCA in the newest version) is specifically focused on value-based MBRL, using the world model to improve return estimates of the critic instead of learning through imagination inside world model rollouts, we want to make sure to characterize the novelty of our method accurately and not overlook any prior contributions. To this end, we add a more detailed discussion of how our method differs from prior work on OC representations (OCRs) in RL (including all references provided by the reviewers) to the Appendix of our paper. Moreover, we would appreciate the reviewer's feedback on the characterization of SOLD as the first method to learn purely inside imagined rollouts of an object-centric world model trained from pixel inputs.

**Why are the evaluations performed on non-standard benchmarks/custom environments?** To make the most of the limited space, we kindly refer to our reply to this question in the response to o1iz.

**Why does SOLD not show a significant advantage over monolithic models in a standard setting?** We assume the "standard setting" refers to tasks that only consider distractor objects, without requiring explicit relational reasoning between those objects (Reach-, Push-, and Pick-Specific).
While we want to point out that SOLD still outperforms state-of-the-art baseline methods on the more challenging Push and Pick instantiations of this task, TD-MPC2 performs best on the Reach-Specific environment. The experience used in pre-training to learn OCRs is a cost in the SOLD algorithm that is "paid" upfront and becomes valuable through improved representations over the course of training. On the easiest Reach-Specific task, TD-MPC2 has converged even before the experience required for pre-training SAVi is used up. To us, this is neither unexpected nor troubling, since the problem is too easy to warrant the extra effort of acquiring strong representations upfront. Specifically, TD-MPC2, which operates in a reconstruction-free manner, can quickly extract the only relevant information from the visual representation, which in this case is the position of the green sphere and the robot's end-effector. We still include this easier task to paint a fuller picture of when the use of OCRs pays off in total sample complexity and when baseline methods already perform well.

**How many frames are used to pre-train SAVi?** We have included this in the details about SAVi pre-training in Section D.4 of the Appendix (Line 797). We pre-train SAVi for approximately 1 million frames (including the full episode in which the count of 1 million frames is reached) on all tasks, which we have found empirically to provide a sufficient basis to learn disentangled OC representations.

***"The code is missing, which makes it impossible to evaluate the correctness of the implementation and experiments."*** The source code is available on the project website linked at the end of the abstract.

**What do $R_t$, $e_\eta$, etc. refer to?** $R_t$ represents the return from time-step $t$. $e_\eta$ and $d_\eta$ represent a SAVi encoder and decoder model with parameters $\eta$, respectively.
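As a side note for readers, the return notation admits the standard textbook definition (assuming the conventional discounted formulation; Dreamer-style agents typically estimate it in practice with bootstrapped $\lambda$-returns):

$$R_t = \sum_{k=0}^{\infty} \gamma^k \, r_{t+k},$$

where $r_{t+k}$ denotes the reward received at step $t+k$ and $\gamma \in [0, 1)$ is the discount factor.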
Due to the limited space in the main text, we have added explanations for $R_t$, $e_\eta$, and $d_\eta$ to the Notation in Section B of the Appendix.

***"I would like to see an ablation study on the SAT-related part of the model - how will it work with and without it?"*** We have added ablations of key design choices of our method to better justify these decisions. The results of all experiments are available at https://anonymous.4open.science/r/sold-rebuttal/ABLATIONS.md

**How do the authors assess the possibilities of generalizing OC models, for example, to changed colors?** This is an interesting point for further investigation, and we have conducted a preliminary experiment changing the set of colors from the original 16 to 32 novel values (see https://anonymous.4open.science/r/sold-rebuttal/COLOR_CHANGE.md). The results indicate that both DreamerV3 and SOLD generalize well to this setting, and that stronger out-of-distribution experiments are required to inspect this property in a dedicated investigation.

---

We hope our response has clarified your concerns, and we would appreciate it if you considered updating your score.

---

Rebuttal Comment 1.1:

Comment: I would like to express my gratitude to the authors for responding to my comments and questions. Additionally, I want to mention that the authors have conducted further experiments to evaluate the impact of SAT on the overall architecture and performed preliminary experiments on generalization. Overall, I appreciate the work, but the experimental results regarding monolithic versions may still not be sufficiently convincing to bring the community closer to understanding the usefulness of object-based representations.

---

Reply to Comment 1.1.1:

Comment: We are pleased that we were able to address the reviewer's comments, and that they appreciate the additional experiments we performed.
We are glad that we have been able to clarify technical questions, and we want to take this opportunity to comment on the outlook-oriented question of why we *do* believe that our work helps bring the community closer to understanding the utility of OC representations for control.

While we do not claim that our learning method is universally superior to SoTA methods, we make important progress on *specific but crucial* properties, namely relational reasoning capabilities and interpretability of the agent's decision-making. While interpretability is not straightforwardly quantifiable, we are encouraged that all reviewers commented positively on this aspect of our work. Investigating attention weights to understand the workings of transformer models, while prominent in self-supervised CV and NLP research, is rarely considered in behavior learning. Visualizing which specific parts of a visual scene the agent considers relevant for decision-making is, to our knowledge, a novel feature in RL, and we believe it adds an interesting dimension to the current MBRL landscape. Moreover, as OC representation learning methods improve and generalize to larger images of higher visual complexity, the ability to selectively attend to task-relevant information while ignoring other visual elements will only become more important.

Relational reasoning skills, on the other hand, are explicitly measurable through problems like ranking (Specific-Relative) or odd-one-out (Distinct), and we aim to assess them through the introduced task suite (which we will also make publicly available). In this respect, we are able to improve the efficiency with which these problems can be learned. Specifically, SOLD, with an overall success rate of 82%, outperforms the second-best method DreamerV3 (56.4% success rate) by just over 25 percentage points, which we do consider a significant improvement.
This performance gap is achieved despite our choice to include simpler problems, such as Reach-Specific (to which we assume the reviewer's comment *"the SOLD approach does not show a significant advantage over monolithic models in a standard setting, actually demonstrating slower learning"* refers). We would like to emphasize that this outcome is neither unexpected nor worrisome to us, as the entirety of our evaluation clearly illustrates how the benefits of learning OC representations become increasingly pronounced as task complexity grows. We are convinced that these results, combined with the improved interpretability, are compelling in highlighting the utility and potential of OCRs for control.

Finally, we would like to express our gratitude to the reviewer for providing detailed feedback on our work, including adding relevant related work, helping us clarify the intent and claims of our work, and improving the clarity of the notation. We are encouraged that they highlight the potential of our approach and of OCRs for control in general.