# Zero-Shot Learning of Causal Models

Paper Decision: Reject

---

Review 1:

Summary: Learning the causal generative process from observational data is a challenging problem bottlenecked by the necessity of learning a separate causal model for each dataset. This paper studies a unifying framework to enable zero-shot inference of causal generative processes of arbitrary datasets by training a single model. The authors adapt a recent advancement in causal generative modeling (FiP) to infer generative SCMs conditional on empirical dataset representations in a supervised setup, where the SCM is reformulated as a fixed-point problem. They propose an amortized procedure that takes in a dataset and its causal graph and learns a dataset representation. Then, the authors train a model conditioned on dataset embeddings to learn the functional mechanisms of the generative SCM. This framework enables both observational and interventional sample generation in a zero-shot manner. Empirical results show that the method performs competitively with baseline models for in-distribution and out-of-distribution settings.
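The fixed-point reformulation can be illustrated with a toy example: under a DAG, the observations solve $x = f(x) + e$, where $f$ reads only each node's parents, so iterating the map from the noise converges in at most $d$ steps. A minimal sketch with hypothetical linear mechanisms (an illustration of the idea, not the paper's implementation):

```python
# Toy SCM viewed as a fixed-point problem: x = f(x) + e, where f(x)_i
# depends only on the parents of node i. For a DAG on d nodes, iterating
# the map d times from the noise converges exactly to the observations.

def f(x):
    # Hypothetical mechanisms over 3 nodes: x0 <- e0, x1 <- 2*x0 + e1,
    # x2 <- x0 + x1 + e2 (indices already in topological order).
    return [0.0, 2.0 * x[0], x[0] + x[1]]

def solve_scm(e, d=3):
    x = list(e)
    for _ in range(d):  # d iterations suffice for a DAG with d nodes
        x = [fi + ei for fi, ei in zip(f(x), e)]
    return x

x = solve_scm([1.0, 0.5, -1.0])
# x0 = 1.0, x1 = 2*1.0 + 0.5 = 2.5, x2 = 1.0 + 2.5 - 1.0 = 2.5
```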
## Update After Rebuttal
The authors have done a great job in addressing all of my questions and concerns about this work. Therefore, I am strongly in favor of **acceptance**.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes, I checked the results for noise prediction, sample generation, and interventional sampling for all datasets. Furthermore, I checked the out-of-distribution performance.
Supplementary Material: Yes, I reviewed all the additional empirical results.
Relation To Broader Scientific Literature: This paper is one of the first to consider generalizing the learning of functional mechanisms of structural causal models from arbitrary datasets and causal graphs and is a significant step toward building causally-aware foundation models.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: ## Strengths
- The paper is written well with clear intuitions and explanations as to how it relates to similar work (e.g., FiP).
- Although the assumptions are a bit strong (additive noise model, causal graphs known, noise variables known), the general idea of using an amortized procedure to approximate SCM distributions in a zero-shot manner is quite interesting.
- The empirical results are convincing and show interesting observations, especially the performance of sample generation under distribution shifts and as the causal graphs scale up. It is certainly impressive that Cond-FiP can approximate the SCM distribution for 50- and 100-node graphs quite well given that it was trained only on datasets with 20-node graphs.
## Weaknesses
- In a synthetic data scenario, assuming access to the noise samples is a feasible assumption, but for real-world datasets, this will not typically hold. Using the noise samples as the supervision for the dataset embedding model may easily become unrealistic. The authors have an evaluation on a real-world benchmark (Sachs) in the appendix where they fit a Gaussian model. However, interventional sample results are not provided.
- The idea to directly infer the SCM distribution under the additive noise model assumption is interesting. However, the feasibility of this assumption may not always hold. It is true that we often parameterize causal models as linear/nonlinear additive noise models, but this can be violated in practice. It seems that this approach would only hold under the strict assumption of additive noise models.
- Knowledge of the causal graph for several datasets can potentially be a strong assumption. In real-world datasets, the causal graph may be unknown and must be discovered. However, for the sake of this work, the synthetic scenarios are good for proof of concept.
Other Comments Or Suggestions: N/A
Questions For Authors: - Could the authors explain why Cond-FiP performs similar to some baselines in noise prediction and sample generation, especially when the node scale is the same or smaller than used in training? How is the FiP model implemented? In the original paper, it seems that the task is the recovery of the topological ordering. Is the FiP baseline here aware of the causal structure of datasets?
- How does the alternating application of transformer blocks E work? Is this just an alternating optimization method where you optimize for samples when nodes are fixed and optimize for nodes when samples are fixed?
- The main capabilities of the proposed framework are noise prediction and observational/interventional sample generation. However, individual counterfactual sample generation is also important in many applications. Can this framework enable counterfactual inference?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

---

Rebuttal 1:
Rebuttal: We thank the reviewer for their positive and insightful feedback! We appreciate your recognition of the soundness of our framework and the diversity of our experiments. We now address the concerns raised by the reviewer below.
> Access to noise samples
Thank you for raising this point. We agree with the reviewer that most real-world problems do not provide such supervised signals. However, it is important to note that during inference, Cond-FiP does not require access to noise samples. Instead, it only needs the observations (and a predicted or true causal graph) to infer the functional mechanisms. This allows Cond-FiP to be applied to real-world datasets, as demonstrated in our experiment on the Sachs dataset (Appendix C).
An interesting extension of this work would be to explore a semi-supervised setting, where synthetic and real-world data are mixed during training. However, we believe this is outside the scope of the current paper.
> Interventional results on Sachs
Regarding the lack of interventional generation results on the Sachs dataset, the main issue is that Cond-FiP (along with the other baselines considered in this work) only supports hard interventions, whereas the interventional data available for Sachs involves soft interventions (i.e., the exact interventional operations are unknown). As a result, we are unable to provide a comprehensive evaluation of Cond-FiP, or the other baselines, for interventional predictions on Sachs.
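To make the hard/soft distinction concrete: a hard intervention replaces a node's mechanism with a constant, which is straightforward to simulate once the mechanisms are known, whereas under a soft intervention the replacement mechanism itself is unknown. A minimal sketch on a hypothetical toy SCM (for illustration only, not the paper's implementation):

```python
def sample(mechanisms, noise, do=None):
    # mechanisms[i](x) returns the functional part for node i (parents only);
    # a hard intervention do={i: c} clamps node i to c, cutting its mechanism.
    d = len(noise)
    x = [0.0] * d
    for i in range(d):  # assume indices are already in topological order
        if do and i in do:
            x[i] = do[i]
        else:
            x[i] = mechanisms[i](x) + noise[i]
    return x

# Hypothetical 3-node SCM: x0 <- e0, x1 <- 2*x0 + e1, x2 <- x0 + x1 + e2.
mech = [lambda x: 0.0, lambda x: 2.0 * x[0], lambda x: x[0] + x[1]]
obs = sample(mech, [1.0, 0.0, 0.0])                 # [1.0, 2.0, 3.0]
hard = sample(mech, [1.0, 0.0, 0.0], do={1: 5.0})   # [1.0, 5.0, 6.0]
```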
> Additive Noise (ANM) Assumption
While the ANM assumption may be seen as a limitation, we would like to clarify that our method relies on the ANM assumption only for training the encoder. This is because we need the encoder to predict the noise from the data in order to obtain embeddings, which is simplified under the ANM assumption, as explained in Appendix A.2. However, it is important to emphasize that, while the ANM assumption is required for training the encoder, it is not necessary for training the decoder.
An interesting avenue for future work would be to explore a more general dataset encoding approach, potentially using self-supervised techniques. However, we believe this falls outside the scope of the current work.
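To make the residual structure explicit: under an ANM, the noise at each node is exactly the residual of its mechanism, which is what makes noise a well-defined supervision target for the encoder. A toy sketch (hypothetical mechanism, for illustration only):

```python
import random

# Under an additive noise model x_i = f_i(parents) + e_i, the noise is the
# residual e_i = x_i - f_i(parents), so an encoder trained to predict noise
# has a well-defined target. Toy 2-node example: x0 = e0, x1 = 2*x0 + e1.

def f1(x0):
    return 2.0 * x0

random.seed(0)
e = [random.gauss(0, 1) for _ in range(2)]
x0 = e[0]
x1 = f1(x0) + e[1]

e1_recovered = x1 - f1(x0)   # exact residual under the ANM
```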
> Knowledge of causal graph
We agree with the reviewer that assuming knowledge of the true causal graphs is a strong assumption. However, as outlined in the manuscript (line 406), we can relax this assumption by inferring the causal graphs in a zero-shot manner via state-of-the-art prior works, such as AVICI. In Appendix D, we provide experimental results where we do not assume prior knowledge of the true graphs and infer them via AVICI. These results demonstrate that Cond-FiP can be extended to infer the full SCMs in a zero-shot manner using only observational data.
> Cond-FiP performance against baselines
First, it is important to note that all baselines use the true causal graphs for a fair comparison with Cond-FiP, and we employ their original implementations. Additionally, the baselines are trained on each test dataset, serving as the gold standard that our zero-shot approach aims to match. In contrast, although Cond-FiP is trained on specific scales, it generalizes well to both smaller and, more importantly, larger instance problems, while maintaining performance comparable to the other baselines. Furthermore, in scarce data regimes, Cond-FiP demonstrates superior generalization (Appendix E).
> FiP implementation
We use the original code provided by the authors from their paper. In their implementation, the authors offer an adaptation of FiP when the causal graph is known. For a fair comparison, we evaluate Cond-FiP against this variant of FiP in our work.
> Alternating application of transformer blocks E
The alternating block transformer is a feedforward neural network that takes as input a tensor of shape $(B,n,d,d_h)$ where $B$ is the batch size, $n$ is the number of samples, $d$ is the number of nodes and $d_h$ is the hidden dimension. In practice, we first permute the second and third dimensions before applying an attention mechanism to perform attention over the sample dimension. Afterward, we permute them back to apply an attention layer over the node dimension.
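A stripped-down sketch of this axis alternation (our hypothetical pure-Python illustration with identity Q/K/V projections and the batch dimension omitted for brevity, not the authors' implementation):

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [v / z for v in exps]

def self_attention(seq):
    # Minimal single-head attention with identity Q/K/V projections:
    # each position attends over the whole sequence of vectors.
    out = []
    for q in seq:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in seq]
        w = softmax(scores)
        out.append([sum(wj * v[i] for wj, v in zip(w, seq))
                    for i in range(len(q))])
    return out

def alternating_block(t):
    # t has shape (n, d, h): n samples, d nodes, h hidden dims.
    # 1) attend over the sample axis, independently per node
    n, d = len(t), len(t[0])
    per_node = [self_attention([t[s][j] for s in range(n)]) for j in range(d)]
    t = [[per_node[j][s] for j in range(d)] for s in range(n)]
    # 2) attend over the node axis, independently per sample
    return [self_attention(row) for row in t]

t = [[[1.0, 0.0], [0.0, 1.0]],
     [[0.5, 0.5], [1.0, 1.0]]]   # (n=2 samples, d=2 nodes, h=2)
out = alternating_block(t)       # output keeps the (n, d, h) shape
```

In a real tensor implementation this corresponds to permuting the sample and node axes before each attention call, as the rebuttal describes.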
> Counterfactual inference results
Thank you for raising this point. Cond-FiP is indeed capable of performing counterfactual generation, and we have conducted an experiment to evaluate this. The results can be found via the [anonymous link](https://anonymous.4open.science/r/icml_2025_cond_fip_rebuttal-27D2/Counterfactual_Generation_Exps.pdf). We observe that Cond-FiP performs slightly worse than the baselines in this task, and we believe that improving its performance in counterfactual generation will be a valuable direction for future work.
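For context, counterfactual generation in an ANM follows the standard abduction-action-prediction recipe, sketched here on a hypothetical 2-node toy SCM (for illustration only, not Cond-FiP's procedure):

```python
# Counterfactuals in an ANM via abduction-action-prediction:
# 1) abduction: recover the noise from the factual observation,
# 2) action: apply the (hard) intervention,
# 3) prediction: re-solve the SCM with the recovered noise.
# Toy SCM: x0 = e0, x1 = 2*x0 + e1 (hypothetical, for illustration).

def counterfactual(x0, x1, do_x0):
    e0 = x0                   # abduction for the root node
    e1 = x1 - 2.0 * x0        # abduction: residual under the ANM
    cf_x0 = do_x0             # action: do(x0 = c)
    cf_x1 = 2.0 * cf_x0 + e1  # prediction with the factual noise
    return cf_x0, cf_x1

# Factual: x0 = 1, x1 = 2*1 + 0.5 = 2.5; counterfactual under do(x0 = 3):
# cf_x1 = 2*3 + 0.5 = 6.5
```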
Thank you once again for your constructive comments! We are open to further discussion and would be happy to address any remaining concerns.
---
Rebuttal Comment 1.1:
Comment: The authors have given satisfactory clarifications and have provided some new experimental results to evaluate counterfactual inference to address my questions and concerns. Overall, I believe this is a well-written and well-motivated paper that sets forth some interesting ideas for the future of developing robust causally-aware foundation models. Therefore, I keep my rating as Accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your support and constructive feedback! We really appreciate your thoughtful comments. Based on the discussion during the rebuttal, we will revise the manuscript accordingly. In particular, we will include the counterfactual inference results in the updated draft, as well as provide more clarifications regarding the other points (interventional results on the Sachs dataset, access to noise variables during training, etc.).

---

Review 2:

Summary: This paper introduces a method called Cond-FiP for transfer learning of causal mechanisms in causal systems, specifically Structural Causal Models (SCMs). Given the causal variables and their graph, the approach aims to learn a single model capable of inferring the distributions of causal variables without dataset-specific training. Cond-FiP utilizes an encoder to create embeddings of datasets based on observations and causal graphs, and then conditions a fixed-point approach (FiP) decoder on these embeddings to infer the functional mechanisms of SCMs. Experiments are presented to show that Cond-FiP can perform similarly to state-of-the-art methods trained for individual datasets.
## Update after rebuttal
The rebuttal and further discussion with the authors have addressed some of my concerns. However, the specific practical application targeted by this setup remains unclear and the assumptions taken are very strong. Thus, I maintain borderline on this paper.
Claims And Evidence: The paper makes several claims regarding the capabilities of Cond-FiP, and these are generally supported by the evidence presented, primarily through empirical evaluations. The central claim of zero-shot causal mechanism inference is addressed in the experiments by demonstrating that Cond-FiP, trained on a distribution of SCMs, performs sufficiently well on unseen datasets. The claim of achieving performance on par with SoTA methods is supported by the comparative results against baselines like FiP, DECI, and DoWhy across various tasks (noise prediction, sample generation, intervention) and datasets (AVICI, CSuite, Sachs).
The claim of generalizing to out-of-distribution graphs is not well supported. As stated in Section B.1, the noise variables are limited to Gaussian or Laplace distributions. Both distributions have very similar patterns, but no more complex distributions like bi-modal Gaussians, distributions with complex random transformations, etc. have been tested. Thus, the claims are effectively limited to fixed, known noise distributions.
Methods And Evaluation Criteria: Cond-FiP appears to be a sensible approach for the setting considered in the paper. The benchmarks cover standard synthetic and real-world inspired settings.
Theoretical Claims: No theoretical claims in the paper.
Experimental Designs Or Analyses: I have checked the experiments presented in the main paper, most carefully the synthetic data generation. It is generally sound, but limiting in its diversity and out-of-distribution consideration. This limits the claims as mentioned above.
Supplementary Material: I skimmed the appendix section A and B. However, I have not studied in detail all additional experimental results listed.
Relation To Broader Scientific Literature: The definition of "zero-shot" deviates from standard literature and makes the paper's claims confusing. In this paper, zero-shot is defined in line 51 as "zero-shot inference (without updating parameters)", which does not fit in current literature. "Shots" commonly refer to examples that the model sees. In current regimes for LLMs and foundation models, few-shot learning rarely updates the parameters and instead inputs the examples as context. Thus, this paper does not perform "zero-shot" under the current literature. The authors should reconsider whether zero-shot is the best way of terming this setup.
Essential References Not Discussed: Most essential references have been discussed to my knowledge.
Other Strengths And Weaknesses: The paper fails to make a strong case for the practical application and relevance of the proposed setup. One requires a lot of prior knowledge of the system one is interested in learning (the causal graph, the general distributions needed to apply the right model, etc.), and such systems are often ones from which it is difficult to obtain samples. For instance, the paper discussed the possible setup of first learning the causal graph with a standard causal discovery approach before applying their method. However, most standard causal discovery approaches require a sufficient number of samples to accurately estimate independence tests or learn the mechanisms themselves.
Further, it is a strong assumption to have access to the noise variables during training. This is generally not possible in the real-world, so the training must be solely performed on simulation data. This requires the simulation to be very accurately matching the distribution and causal relations of the GT model.
The method is restricted to additive noise models. This restricts its applicability to more complex settings. It is unclear how important this assumption is and whether it could be relaxed.
As mentioned above, the term “zero-shot” is oddly used in the context and needs to be justified.
Finally, the prediction of the noise variables introduces problems that the paper does not discuss. In particular, to predict the noise variables from causal variable observations, the map between noise variables and causal variables must be invertible; otherwise, the noise variables are not unique. Further, no assumption is made that the noise must follow a certain distribution. Since, in this setup, any arbitrary invertible transformation can be applied to the noise variables, it is unclear how the model should be able to predict them.
Other Comments Or Suggestions: Typos:
- Line 155: missing punctuation in "nodes on the current one However, the"
Questions For Authors: Why are the noise variables predicted? There has been no explicit assumption taken that the map between noise variables and causal variables must be invertible. Further, no assumption is taken that the noise must follow a certain distribution. Since, in this setup, any arbitrary invertible transformation can be applied to the noise variables, it is unclear how the model should be able to predict them. How would your method behave on diverse noise distributions with varying complexity?
Code Of Conduct: Affirmed.
Overall Recommendation: 2

---

Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful feedback! We appreciate that they found our claim regarding Cond-FiP’s performance relative to state-of-the-art methods well justified. Additionally, we highlight our experiments in scarce data regimes (Appendix E), where Cond-FiP demonstrates superior generalization compared to baselines.
We now address the reviewer’s concerns below.
> Diverse noise distributions
Thank you for raising this point! Following the reviewer’s recommendation, we experiment with a mixture of gaussian noise for the Large Backdoor and Weak Arrow datasets from CSuite. Specifically, the noise is sampled with equal probability from either $N(0,1)$ or $N(0,2)$. Results accessible via the [anonymous link](https://anonymous.4open.science/r/icml_2025_cond_fip_rebuttal-27D2/GMM_Exps.pdf) show that Cond-FiP is competitive with baselines for sample generation, and slightly worse on other tasks (still competitive with DECI). We emphasize that the baselines are trained from scratch specifically for the mixture of gaussian noise, while Cond-FiP has been pretrained only on gaussian noise.
We also want to clarify the noise distribution choices for the main experiments follow the prior works (Lorch et al. 2022, Scetbon et al. 2024). Finally, our ablation studies in Appendix F.3 also evaluate Cond-FiP’s performance under varying noise distribution complexity. We assess sensitivity to distribution shifts by adjusting noise parameters, controlling the shift magnitude. Results in Tables 25–27 show Cond-FiP’s OOD generalization drops (as expected) as the severity of the shift increases.
> Assumption of known causal graphs
We agree that assuming knowledge of the true causal graphs is a strong assumption. However, standard causal discovery methods are not necessary for inferring causal graphs at inference time. As outlined in the manuscript (line 406), we can relax this assumption by inferring the causal graphs via state-of-the-art amortized causal discovery techniques, such as AVICI. In Appendix D, we provide experimental results where we do not assume prior knowledge of the true graphs and infer them via AVICI. These results demonstrate that Cond-FiP can be extended to infer full SCMs (without updating parameters) using only observational data.
> Additive Noise Model (ANM) assumption
While the ANM assumption may be seen as a limitation, we would like to clarify that our method relies on the ANM assumption only for training the encoder. This is because we need the encoder to predict the noise from the data in order to obtain embeddings, which is simplified under the ANM assumption, as explained in Appendix A.2. However, it is important to emphasize that, while the ANM assumption is required for training the encoder, it is not necessary for training the decoder.
An interesting avenue for future work would be to explore a more general dataset encoding approach, potentially using self-supervised techniques. However, we believe this falls outside the scope of the current work.
> Justification behind predicting noise variables
We agree with the reviewer that the map between the noise variables and causal variables must be invertible. In our setting, we adopt the ANM assumption, which ensures invertibility since the Jacobian w.r.t. the noise is a triangular matrix with a nonzero diagonal. Please check Appendix A.1 for more details. Notably, this does not require assuming a specific noise distribution.
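Spelled out (a short version of this argument, with $\mathrm{pa}(i)$ denoting the parents of node $i$): under the ANM,

$$x_i = f_i(x_{\mathrm{pa}(i)}) + e_i \;\Rightarrow\; \frac{\partial x_i}{\partial e_i} = 1, \qquad \frac{\partial x_i}{\partial e_j} = 0 \ \text{unless } j \text{ is } i \text{ or an ancestor of } i,$$

so in any topological ordering the Jacobian of the map $e \mapsto x$ is triangular with unit diagonal, hence invertible, and abduction reduces to the residual $e_i = x_i - f_i(x_{\mathrm{pa}(i)})$.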
> Assumption of noise variables during training
Thank you for raising this point. We agree with the reviewer that most real-world problems do not provide such supervised signals. However, it is important to note that during inference, Cond-FiP does not require access to noise samples. Instead, it only needs the observations (and a predicted or true causal graph) to infer the functional mechanisms. This allows Cond-FiP to be applied to real-world datasets, as demonstrated in our experiment on the Sachs dataset (Appendix C).
An interesting extension of this work would be to explore a semi-supervised setting, where synthetic and real-world data are mixed during training. However, we believe this is outside the scope of the current paper.
> Zero-shot terminology
We agree with the reviewer that the term "zero-shot" in the context of in-context learning and LLM literature typically refers to the number of examples a model sees. However, in our work, we adopt this terminology following the literature on amortized causal learning (Zhang et al. 2023, Nilforoshan et al. 2023, Gupta et al. 2023), where "zero-shot" refers to making predictions without updating the model parameters. We are open to adjusting our notation and adopting the terminology "amortized causal learning" if the reviewer prefers this.
Thank you once again for your constructive comments! We are open to further discussion and would be happy to address any remaining concerns. If you believe your concerns have been addressed, kindly increase your score.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers.
> Diverse Noise Distributions
The chosen mixture of Gaussians is still very similar to a single Gaussian. My question was targeting more complex distributions that significantly differ from the standard Gaussian shape, like a mixture of N(-2,1) and N(2,1)?
> Zero-Shot Terminology
Thank you, I believe using "amortized causal learning" would be more widely fitting for the goal of this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response to our rebuttal! We appreciate your feedback and have taken your concerns into account.
> Diverse Noise Distributions
Thank you for highlighting this point! Following your recommendation, we have conducted additional experiments with various noise distributions, each modeled as a multi-modal gaussian mixture. Specifically, we considered the following cases:
- Noise is sampled with equal probability from either $N(-2, 1)$ or $N(2, 1)$.
- Noise is sampled with equal probability from either $N(-2, 2)$ or $N(2, 2)$.
- Noise is sampled with equal probability from either $N(-2, 1)$ or $N(2, 2)$.
- Noise is sampled with equal probability from either $N(-5, 1)$ or $N(5, 1)$.
- Noise is sampled with equal probability from either $N(-5, 2)$ or $N(5, 2)$.
- Noise is sampled with equal probability from either $N(-5, 1)$ or $N(5, 2)$.
We ran experiments using these $6$ noise distributions on both the Large Backdoor and Weak Arrow datasets from the CSuite benchmarks, leading to a total of $12$ experimental settings. The results, available via this [anonymous link](https://anonymous.4open.science/r/icml_2025_cond_fip_rebuttal-27D2/Multi_Modal_GMM_Experiments.pdf) , demonstrate that Cond-FiP remains competitive with baselines across all tasks. Importantly, while baselines were trained from scratch for each specific gaussian mixture noise distribution, Cond-FiP was pretrained only on gaussian noise and generalizes effectively to these settings.
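For reference, such equal-probability two-component mixtures can be sampled as follows (a minimal sketch with a hypothetical helper, not the actual data pipeline):

```python
import random

def mixture_noise(mu1, s1, mu2, s2, n, seed=0):
    # Equal-probability two-component Gaussian mixture, e.g. N(-2,1) vs N(2,1):
    # flip a fair coin per sample, then draw from the chosen component.
    rng = random.Random(seed)
    return [rng.gauss(mu1, s1) if rng.random() < 0.5 else rng.gauss(mu2, s2)
            for _ in range(n)]

samples = mixture_noise(-2.0, 1.0, 2.0, 1.0, 10000)
# clearly bimodal: roughly half the mass near -2 and half near +2
```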
> Zero-Shot Terminology
We appreciate your suggestion and agree that "amortized causal learning" better captures our approach. Since changes to the draft cannot be reflected during the rebuttal phase, we outline below the planned revisions:
- The title will be updated to "Amortized Learning of Structural Causal Models."
- Phrases such as "we zero-shot infer causal mechanisms" will be revised to "we infer causal mechanisms without any parameter updates."
Thank you again for your thoughtful and constructive feedback! We will incorporate the gaussian mixture model experiments into the final draft and update our terminology accordingly. If you believe we have satisfactorily addressed your concerns, we would greatly appreciate an increase in your score.

---

Review 3:

Summary: This paper addresses the problem of inferring structural causal models (SCMs) from observational data. Unlike previous approaches that train separate models for each observational dataset, this work proposes learning a single model across a distribution of problem instances, enabling zero-shot inference of the underlying SCM. This pipeline comprises of an encoding network that encodes the observed dataset and the underlying graph and a conditional fixed point method that infers SCM conditioned on the observed dataset. Experimental results indicate that the proposed method performs comparably to existing approaches that train individual models for each dataset.
Claims And Evidence: The paper's central claim is that a single model, trained using the proposed pipeline, can perform comparably to training separate models for each dataset. The authors validate this claim through extensive experiments across various problem instances. In each experiment, the reported results indicate that the proposed pipeline performs on par with existing methods (DoWhy, DECI, and FiP) that train distinct models for each dataset.
Methods And Evaluation Criteria: The proposed method consists of two key components: (1) an encoder that captures information from observations and the underlying causal graph, and (2) a conditional variant of the Fixed-Point Approach (FiP), called Cond-FiP, which infers the SCM conditioned on the encoding from the first step. This approach is benchmarked against existing methods that train separate models for each dataset, namely DoWhy, DECI, and FiP. All methods are evaluated using the MSE metric across three tasks: noise prediction, sample generation, and interventional generation, for both in-distribution and out-of-distribution problem instances.
The proposed methods and/or evaluation criteria makes sense for the problem at hand - this pipeline learns a single model that can perform zero-shot inference of SCMs given observational data and the causal graph for a variety of problem types; the benchmarks cover a number of techniques used in learning distinct models for each dataset; and the various metrics measure how well the SCM was learned based on both the function and noise approximation.
Theoretical Claims: There are no theoretical claims in this paper.
Experimental Designs Or Analyses: The experiments cover a number of problem instances (linear/non-linear causal relationships, different numbers of nodes, diverse graph structures etc.), a number of relevant benchmarks and relevant metrics for both in and out of distribution evaluation samples. The experiments in the Appendix, particularly the real-world experiment further validates the usefulness of the proposed method. See the questions sections for some concerns regarding the sparse data tables.
Supplementary Material: I reviewed some of the experimental set-up and results presented in the supplementary materials, namely the AVICI benchmark, the flow cytometry experiment, and results with less data or no access to the true causal graph.
Relation To Broader Scientific Literature: The main contribution of this paper is amortizing the learning of the functional relationships to directly infer the SCMs. Other works either learn a separate model per observational dataset or propose techniques for tasks like amortized causal structure learning, average treatment effect (ATE) estimation etc.
Essential References Not Discussed: All essential references are discussed to the best of my knowledge.
Other Strengths And Weaknesses: This paper introduces a novel framework that enables amortized learning of causal mechanisms across different instances within the functional class of SCMs. This can help in leveraging shared information between datasets and also has the added benefit of training and storing just a single model for a variety of tasks requiring SCM inference. The paper is very well written, well positioned and the experiments (including the appendix section) is thorough.
The paper demonstrates an average level of originality as it builds on pre-existing ideas like the transformer architecture for SCM inference and FiP. See the questions section for more concerns, particularly about measuring the benefits of this new approach.
Other Comments Or Suggestions: See questions section.
Questions For Authors: 1. The main benefits of the proposed pipeline lie in learning a single model for inferring various SCMs, as well as the advantages of leveraging shared information. I have two questions regarding the benefits of the framework:
1.1. The paper claims that the proposed method performs well in low-data scenarios. However, Table 11 (Appendix E) suggests that DoWhy often matches or outperforms the proposed approach. Could you provide some insights into why this happens?
1.2. A key advantage of the proposed method is that it learns a single model instead of multiple models. How does this compare in terms of memory requirements, inference time, and computational efficiency?
2. Given that the primary evidence supporting the claims of the paper comes from experiments, are there any plans to open-source the code to facilitate replicability and transparency?
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---

Rebuttal 1:
Rebuttal: We thank the reviewer for their positive and insightful feedback! Thank you for acknowledging the soundness of our framework and diverse experiments. We now address the concerns raised by the reviewer below.
> The paper claims that the proposed method performs well in low-data scenarios. Table 11 (Appendix E) suggests that DoWhy often matches or outperforms the proposed approach. Could you provide some insights into why this happens?
We agree with the reviewer's observation that Cond-FiP performs comparably to DoWhy in Table 11 for the LIN IN and LIN OUT cases. However, we emphasize that Cond-FiP significantly outperforms DoWhy in the RFF IN and RFF OUT cases, particularly when the total number of nodes is 50 or 100. This strong performance in non-linear settings reinforces our claim that Cond-FiP exhibits superior generalization in scarce data regimes. Note that learning effective solutions with limited data is more challenging for the non-linear functional relationships (RFF IN/OUT) scenario. As a result, DoWhy struggles in these scenarios, while Cond-FiP demonstrates a clear advantage.
> A key advantage of the proposed method is that it learns a single model instead of multiple models. How does this compare in terms of memory requirements, inference time, and computational efficiency?
*Memory Requirements.* We trained Cond-FiP on a single L40 GPU with 48GB of memory (see line 323 in the paper), using an effective batch size of $8$ with gradient accumulation. Below, we outline the detailed memory computation:
- Each batch consists of $n=400$ samples with dimension $d=20$, requiring less than $1$ MiB of data in FP32 precision.
- Storing the model on the GPU requires under 100 MiB.
- Our transformer architecture has 4 attention layers, a 256-dimensional embedding space, and a 512-dimensional feedforward network. Using a standard (non-flash) attention implementation, a forward pass consumes approximately 30 GiB of GPU memory.
Compared to the baselines, Cond-FiP has similar memory requirements to DECI and FiP, as all three train neural networks of comparable size. The main exception is DoWhy, which fits simpler models for each node, but this approach does not scale well as the graph size increases.
*Computational Cost.* Like other amortized approaches, Cond-FiP has a higher training cost than the baselines, as it is trained across multiple datasets. While the cost of each forward pass is comparable to FiP, we trained Cond-FiP over approximately 4M datasets in an amortized manner. However, Cond-FiP offers a significant advantage at inference time since it requires only a single forward pass to generate predictions, whereas the baselines must be retrained from scratch for each new dataset. Thus, while Cond-FiP incurs a higher one-time training cost, it is substantially faster at inference.
> Given that the primary evidence supporting the claims of the paper comes from experiments, are there any plans to open-source the code to facilitate replicability and transparency?
Thanks for this point! We plan to open-source the code along with comprehensive documentation to facilitate reproducibility of our experiments. For the rebuttal phase, we have prepared an anonymized version of the codebase, which can be accessed via this [link](https://anonymous.4open.science/r/icml_2025_cond_fip_rebuttal-27D2/).
Please note that while the codebase is not directly executable, it provides full access to the implementation of all components of our framework:
- `cond_fip/models` contains the implementation of the transformer-based encoder and the Cond-FIP architecture.
- `cond_fip/tasks` includes the training and inference methods associated with our framework.
> The paper demonstrates an average level of originality as it builds on pre-existing ideas like the transformer architecture for SCM inference and FiP.
We agree with the reviewer that our framework builds upon prior works, specifically AVICI (Lorch et al. 2022) and FiP (Scetbon et al. 2024). However, we want to clarify that our main contribution lies in integrating these two frameworks to enable zero-shot inference of generative SCMs, a problem not previously addressed. To achieve this, we made substantial modifications to the FiP architecture, as it needed to be conditioned on datasets. Additionally, to facilitate this dataset conditioning, we propose learning dataset embeddings via the noise prediction task.
While most existing studies on amortized causal learning focus on treatment effect estimation or causal discovery (as discussed in Section 2), our work tackles the novel task of amortized learning to infer the causal mechanisms of SCMs.
Thanks again for your constructive comments! We are open to further discussion and would be happy to address any remaining concerns. | null | null | null | null | null | null | null | null |
Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss | Accept (spotlight poster) | Summary: Lipschitz neural networks are either trained or constructed such that their Lipschitz constant is small, enabling easy verification of the network to adversarial perturbations. Ways of obtaining networks with small Lipschitz constants include a) regularising the network at training time or b) designing layers which have a small Lipschitz constant by construction while maintaining expressiveness. This work introduces Block Reflector Orthogonal (BRO) layers which are a new type of orthogonal layer designed to have a Lipschitz constant of 1 using a new low-rank parameterisation. Such orthogonal layers are developed for both linear and convolutional operators. The authors further propose a new Logit Annealing (LA) loss for training Lipschitz neural networks which aims at balancing robustness considerations for different data points better by focusing on improving robustness for data points with a small margin. The evaluation on a number of datasets including the very large ImageNet shows that the new BRONet architecture outperforms existing approaches in terms of l2-certified and standard accuracy while being computationally efficient.
Claims And Evidence: - The authors state that the BRO layers "unlock the potential of applying orthogonal layers to more advanced architectures". However, as far as I understand the paper, they present an orthogonal formulation for dense and convolutional layers, both of which were already supported by other methods in the literature.
- The authors claim that they achieve state-of-the-art performance on a number of datasets, which is mostly supported by the experimental results. However, especially for CIFAR10 and CIFAR100 in Table 1, the gains of the proposed method seem relatively small when compared to the literature. For TinyImageNet and ImageNet, the gains compared to the baselines are larger, but it should be noted that significantly fewer baselines were run for these datasets. In Table 4 the performance gain when using the proposed BRO backbone is relatively small in most cases.
- On large perturbations the performance of BRO seems to decline sharply (see Table 1), though this is acknowledged in the limitations section.
Methods And Evaluation Criteria: - Proposing layers with a small Lipschitz constant to obtain networks that are easier to certify is a well-known approach and therefore makes sense
- The benchmark datasets are standard ones that are used in a variety of previous works. The fact that the authors run experiments on Imagenet is commendable, given the substantial resources that are required for this.
Theoretical Claims: I checked the proofs for the orthogonality of both the dense and convolutional layers proposed in the work (Appendix A1 and A2) which appear to be the most important contribution of the work. These look correct to me and I couldn't find any issues with them. The theoretical claims made in the paper appear sound to me. I found the Rademacher-complexity-based argument that is used to motivate the logit annealing loss function somewhat difficult to understand, but the theoretical analysis is accompanied by a more intuitive motivation which I find convincing.
Experimental Designs Or Analyses: The design of the experiments is sound, the approach is compared to all (to the best of my knowledge) relevant previous works on networks robust to perturbations in the $\ell_2$ norm and the evaluation is conducted across a variety of datasets which are normally used in the certified training literature.
Supplementary Material: I reviewed the proofs in appendix A1/A2 and also appendix B, D, E, F
Relation To Broader Scientific Literature: The key contributions of the work, namely the BRO layer and the related network architectures, are related to a number of previous works on designing neural network layers which exhibit small Lipschitz constants while also preserving sufficient expressivity. The BRO approach differs in that the layer itself is not a universal approximator, but the authors demonstrate that it is nevertheless expressive enough to outperform competing approaches in a number of cases. The orthogonal convolution layer introduced by the authors builds on a number of insights from the paper by Trockman and Kolter, but the specific parameterisation used for both the convolutional and the dense layers is novel.
Essential References Not Discussed: To the best of my knowledge, all relevant related works in the field of $\ell_2$ robustness certification are discussed by the authors. There is also a large body of work on networks that are robust to $\ell_\infty$ perturbations (such as [1-3]) which could briefly be discussed. Other works in the field such as [4] also discuss these and the relation of their work on $\ell_2$ robustness to $\ell_\infty$ robustness.
[1] Mao, Y., Müller, M.N., Fischer, M. & Vechev, M. (2023) TAPS: Connecting Certified and Adversarial Training. doi:10.48550/arXiv.2305.04574.
[2] Mueller, M.N., Eckert, F., Fischer, M. & Vechev, M. (2023) Certified Training: Small Boxes are All You Need. In: 1 February 2023 p. https://openreview.net/forum?id=7oFuxtJtUMH.
[3] De Palma, A., Bunel, R., Dvijotham, K., Kumar, M. P., Stanforth, R., & Lomuscio, A. (2023). Expressive Losses for Verified Robustness via Convex Combinations. arXiv preprint arXiv:2305.13991.
[4] Xu, X., Li, L. & Li, B. (2022) LOT: Layer-wise Orthogonal Training on Improving l2 Certified Robustness. Advances in Neural Information Processing Systems. 35, 18904–18915.
Other Strengths And Weaknesses: A weakness I noticed is that there are a number of parameters which need to be tuned, for example, the LA loss introduces three parameters and for the BRO layer the rank $n$ needs to be chosen. The authors seem to explicitly tune this parameter on some datasets (see Table 9) while choosing certain values for other datasets. For example, it was unclear to me how the authors arrived at choosing $n=\frac{m}{2}$ for ImageNet but then at $n=\frac{m}{8}$ for TinyImageNet. The fact that the values of $n$ that are used differ not only between different datasets, but also between different network architectures (see Appendix D2) raises some concerns about how easy it would be to choose this parameter in practice without having to conduct a number of tuning runs.
The tuning of the three hyperparameters that the LA loss introduces seems somewhat unclear to me, an ablation study for $\beta$ is presented in Table 14 but for $T$ and $\xi$ the authors state that they "slightly adjust the values used in Prach & Lampert (2022) to find a better trade-off position" - how this is done remains unclear.
Other Comments Or Suggestions: Small typo: Line 68, right column: Assuming $f(x)$ is the output logits of a neural network --> Assuming $f(x)$ **are** the output logits of a neural network
Overall, the paper is well-written. One thing I noticed is that articles are often omitted in places where they should be present. Some examples:
- Line 90f: We construct various Lipschitz networks using BRO method --> We construct various Lipschitz networks using **the** BRO method
- Line 308: the annealing mechanism, draws inspiration from Focal Loss --> the annealing mechanism, draws inspiration from **the** Focal Loss
- Line 309: During training, LA loss initially promotes --> During training, **the** LA loss initially promotes
- Line 315: Consequently, LA loss allows Lipschitz models to learn --> Consequently, **the** LA loss allows Lipschitz models to learn
I did not write all of the cases in which this is done down, but to improve the reading flow it would be nice if the authors could do a round on the paper where they focus on eliminating these kinds of mistakes.
Questions For Authors: 1. Could the authors share any insights they might have regarding the performance of BRO on large perturbations? I find it surprising that other methods would suddenly perform so much better than BRO on these.
2. Do the authors have any heuristics or similar that they would use for selecting hyperparameters such as the rank $n$? How would they propose their training approach is used on new/unknown datasets or with modified variants of the BRONet architectures?
3. Could the way in which $T$ and $\xi$ were chosen be clarified by the authors?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q: Regarding performance on large perturbations**
BRONet indeed achieves the best performance on both clean and certified accuracy at $\varepsilon = 36/255$, but is less consistent for larger perturbations $\varepsilon$. Interestingly, we have observed that less expressive Lipschitz models tend to yield slightly higher certified accuracy at large $\varepsilon$, but at the cost of lower clean accuracy and certified accuracy at smaller $\varepsilon$. This tendency to fit only certain examples well to achieve large certified radii is not ideal, as our ultimate goal is to develop a robust model that maintains strong natural classification performance while simultaneously achieving favorable certified robustness as an additional benefit.
---
**Q: Selecting the rank**
For selecting the rank $n$, we recommend starting with $n = m/2$ and iteratively reducing it by half if needed, as supported by the experiments in Table 9. Intuitively, since $n$ determines the proportion of $+1$ and $-1$ eigenvalues in the orthogonal matrix $W$ (Proposition 1), an imbalanced distribution may reduce the diversity of $W$.
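This eigenvalue structure is straightforward to verify numerically. Below is a minimal numpy sketch (illustrative values for $m$ and $n$, not the exact implementation in our codebase): the block reflector $W = I - 2V(V^\top V)^{-1}V^\top$ for a full-column-rank $V \in \mathbb{R}^{m \times n}$ is orthogonal and symmetric, with exactly $n$ eigenvalues equal to $-1$ and $m - n$ equal to $+1$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 16, 4  # feature dimension m, rank n (illustrative values)

# Block reflector: W = I - 2 V (V^T V)^{-1} V^T, with V of full column rank.
V = rng.standard_normal((m, n))
W = np.eye(m) - 2 * V @ np.linalg.inv(V.T @ V) @ V.T

# W is orthogonal (it is a reflection across the complement of range(V))...
assert np.allclose(W @ W.T, np.eye(m), atol=1e-6)

# ...and symmetric, with exactly n eigenvalues -1 and m - n eigenvalues +1.
eig = np.sort(np.linalg.eigvalsh(W))
assert np.allclose(eig[:n], -1) and np.allclose(eig[n:], 1)
print("orthogonal block reflector with", n, "eigenvalues at -1")
```

In this light, $n$ controls the balance of $-1$ and $+1$ eigenvalues, which is why a strongly imbalanced choice can limit the diversity of realizable $W$.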
---
**Q: Hyperparameters in the LA loss**
We perform a hyperparameter grid search around the values recommended by AOL (Prach & Lampert, 2022). Specifically, we evaluated the temperature $T \in \\{ 0.25, 0.5, 0.75, 1.0 \\}$ and the offset parameter $\xi \in \\{ 0.5, 1.0, 1.5, 2.0, 2.5, 3.0 \\}$ with LipConvNet-10-32 on CIFAR-100.
When used on other datasets and architectures, these hyperparameters performed very well, so we did not fine-tune them further to reduce computational cost. We will include this information in Appendix D.4 in the revision.
---
**Q: Regarding the statement "unlock the potential..."**
We agree with your observation. Our intent was to highlight that BRO provides a promising parameterization that enhances robustness while reducing resource requirements for orthogonal layers. This, in turn, improves their applicability in more advanced architectures. We will revise the sentence for clarity.
---
**Q: Performance in Table 1&4**
For Table 4, we present an ablation study on different Lipschitz layers. The improvements are relatively small because all methods incorporate the proposed LA loss, which already narrows performance differences across backbones. Since Lipschitz networks have long faced scalability issues, there are still few baselines in the literature for TinyImageNet and ImageNet. Nevertheless, we would like to emphasize that combining both BRO and LA leads to a notable improvement on these more challenging datasets, outperforming the state-of-the-art LiResNet.
---
**Q: Regarding typos**
We appreciate your effort in identifying the typos. We have revised them accordingly.
---
**Q: $\ell_\infty$-norm robustness**
We appreciate your suggestion and will include a brief discussion on the literature regarding $\ell_\infty$-norm robustness.
Additionally, we conducted an empirical test against $\ell_{\infty}$ AutoAttack [1] on CIFAR-10, which we will include in the revision. The results for the $\ell_{\infty}$-certified baselines are from the literature [2][3].
| Method | Clean | Adv. Acc. $\ell_{\infty}=2/255$ | Adv. Acc. $\ell_{\infty}=8/255$ | Certified Acc. |
| - | - | - | - | - |
| STAPS $\ell_{\infty}=2/255$ | 79.75 | 65.91 | N/A | 62.72 ($\ell_{\infty}=2/255$) |
| SABR $\ell_{\infty}=2/255$ | 79.52 | 65.76 | N/A | 62.57 ($\ell_{\infty}=2/255$) |
| IBP $\ell_{\infty}=8/255$ | 48.94 | N/A | 35.43 | 35.30 ($\ell_{\infty}=8/255$) |
| TAPS $\ell_{\infty}=8/255$ | 49.07 | N/A | 34.75 | 34.57 ($\ell_{\infty}=8/255$) |
| SABR $\ell_{\infty}=8/255$ | 52.00 | N/A | **35.70** | 35.25 ($\ell_{\infty}=8/255$) |
| BRONet-L (Ours) | **81.57** | **68.76** | 21.02 | 70.59, 57.15, 42.53 ($\ell_{2}=36/255, 72/255, 108/255$) |
[1] Croce, et al. "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks." International Conference on Machine Learning (ICML), 2020.
[2] Mueller, et al."Certified training: Small boxes are all you need." International Conference on Learning Representations (ICLR), 2023.
[3] Mao, et al. "Connecting certified and adversarial training." Advances in Neural Information Processing Systems (NeurIPS), 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal, I believe that all the questions that I raised were sufficiently addressed. While the performance gains achieved by BRO are probably not groundbreaking, I am aware of the fact that gains in this area are becoming increasingly difficult to achieve. I do think that this is a sound paper which proposes a novel approach that achieves better results, so in light of this, I will raise my score. | Summary: Lipschitz neural networks allow certified robustness without inference overhead; they are built by composing constrained layers. In this paper, the authors propose two improvements over the previous state of the art: they introduce a novel parametrization to construct orthogonal convolutions (BRO convolution), which aims to achieve a good performance/cost tradeoff thanks to a low rank parametrization that achieves orthogonalization without the need for an iterative algorithm. They also introduce a "Logit annealing loss function" that does not suffer the gradient issue of the previous losses.
Claims And Evidence: This paper's approach performs well on standard benchmarks. It improved the previous state of the art by a decent margin. (However, an absolute comparison is subject to questions; see Supplementary material section.)
About LA loss:
Thanks to both theoretical and experimental work, the motivations behind LA loss are clear, and the experiments show a small but consistent improvement. This opens interesting perspectives about the properties of a good loss for robustness. Also, they explore the impact of the newly introduced hyper-parameter.
About BRO:
The extensive experiments cover many facets, such as speed/memory, robustness performance, and numerous ablation studies. This substantial body of work could be more impactful with a presentation centered on the paper's central claims. (See Experimental Designs Or Analyses.)
Methods And Evaluation Criteria: yes
Theoretical Claims: - Appendix A.4: although I agree with the reasoning, the padding must account for the fact that once a $k\times k$ kernel is orthogonalized in the frequency domain, its spatial domain equivalent is not a $k\times k$ convolution anymore (it becomes an $s\times s$ convolution). Under this light, the input should be padded with $s$ values instead of $k$ values.
- Although the proof in A.1 is likely true, I did not understand the part showing that the parametrization results in real weights, especially since it is unclear how this conclusion is drawn from equation 16 ( line 792 ). An explanation of what $F$ and $\bigotimes$ are could alleviate this issue (equation 13 and line 754).
Experimental Designs Or Analyses: Experiments do not efficiently show that BRO reduces resource requirements. In particular, resource requirements might not correlate with the number of parameters, especially if BRO is under-parametrized and based on FFT (which scales differently with respect to the number of channels and input size compared to usual convolution). The performance should be seen through a tradeoff between the number of parameters <-> performance <-> training cost.
Also, Cayley is missing in performance evaluation (Fig 2 and 5), which is surprising given the close proximity between the two methods (BRO can be seen as an under-parametrized version of Cayley).
Supplementary Material: A quick review of the code showed it to be clear and well-structured. However, I have a doubt about how inputs are normalized: previous papers report robustness radii assuming non-standardized inputs in the [0,1] range, but the dataloaders return standardized inputs in a larger range (roughly x2), without the adequate correction in the reported epsilons (ie. robustness certificate for $\epsilon = \frac{36}{255} $ in your standardized range is equivalent to a robustness radius of $ \epsilon = 0.4465*\frac{36}{255} \approx \frac{16}{255} $ ). **I might be wrong, but if true, it may result in robustness performances that cannot be compared with previous papers (in terms of absolute values). Please clarify if I missed the certificate correction in the code.** I also want to emphasize that this issue could already be present in LiResNet paper (I didn't check this). Also, Tables 2 and 4 show *a relative* comparison with other layers. **Finally, I think BRO layers can be of great interest even though their performance does not achieve absolute state-of-the-art.**
Relation To Broader Scientific Literature: The paper situates itself well within the current literature, clearly identifying its unique contributions relative to existing methods.
Essential References Not Discussed: I did not noticed any missing essential references.
Other Strengths And Weaknesses: My main concern is the certificate correction discussed in the supplementary material. *I will raise my score if the authors show me the part of the code I may have missed or if their results have been corrected*. I think those results can be impactful even without the method being state-of-the-art.
My second main concern is about stating more clearly how the layer scales with respect to the number of channels and the input size, which would help a user select the adequate architecture.
Besides these two issues, I think this paper can be impactful for two reasons:
- it shows that under-parametrization can create a regime where the reduced resource cost can outweigh the reduced expressiveness.
- a careful analysis of the loss's gradient can lead to more effective losses for robustness
Other Comments Or Suggestions: typos:
- l761 Proposition -> lemma
- l753: define $F$ and $\bigotimes$
Suggestion:
- Figure 2 uses the number of layers as the x-axis. This is pretty uninformative about how the layers scale (as expected, it shows a linear increase for both runtime and memory, which is displayed on a log scale for runtime), especially since the displayed layers scale very differently with respect to the image size. I do expect BRO > SOC for small inputs like 16x16, but expect the opposite for large inputs like 224x224 (explaining why BRONet starts with a stem of an unconstrained convolution with stride 7 on ImageNet).
Questions For Authors: I'm curious about the use of the BRO parametrization for dense layers (which seems implemented in your code). Does the gain in terms of runtime outweigh the under-parametrization?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q: Regarding Standardization and Certification**
For the BRONet/LiResNet experiments, the dataloader outputs data in the [0,1] range without any additional standardization or normalization, as could be confirmed by the dataset functions in the `bronet/tools/dataset/` folder.
For the LipConvNet experiments, the images are standardized in the dataloader, but the certification process has been adjusted to ensure that the reported budgets are correctly aligned with the [0,1] range. Specifically, as shown in line 202 of `lipconvnet/trainer/trainer.py`, `L = 1 / torch.max(std)`, which is then passed to `evaluate_certificates()` in line 265.
We appreciate your diligence in reviewing the implementations and believe this explanation resolves your concerns.
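For readers who wish to sanity-check this kind of correction themselves, here is a minimal, self-contained sketch (with illustrative numbers, not the repository code): channel-wise standardization $x \mapsto (x - \mu)/\sigma$ is a linear map whose $\ell_2$ Lipschitz constant is bounded by $1/\min(\sigma)$, so a margin-based certificate must use the product of this constant and the network's Lipschitz constant.

```python
import math

# Channel-wise standardization x -> (x - mean) / std is linear; its l2
# Lipschitz constant is bounded by 1 / min(std). Composing it with a
# 1-Lipschitz network gives a (1 / min(std))-Lipschitz map w.r.t. the
# raw [0, 1] inputs, which is what the certificate must account for.
std = (0.2470, 0.2435, 0.2616)   # CIFAR-10 stds, illustrative only
L_net = 1.0
L_total = L_net / min(std)

def certified_radius(logits, label, lipschitz):
    """Standard margin-based l2 certificate: radius = margin / (sqrt(2) * L)."""
    runner_up = max(v for i, v in enumerate(logits) if i != label)
    return max(logits[label] - runner_up, 0.0) / (math.sqrt(2) * lipschitz)

logits = [3.0, 1.0, 0.5]
corrected = certified_radius(logits, label=0, lipschitz=L_total)
naive = certified_radius(logits, label=0, lipschitz=L_net)
# Skipping the correction would overstate the radius by 1 / min(std) (~4x here).
assert corrected < naive
print(f"corrected radius: {corrected:.4f} (vs naive {naive:.4f})")
```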
---
**Q: Regarding Scaling behavior and comparative analysis**
We provide new runtime and memory comparison plots for LipConvNet-20 via the anonymous links below.
[Link 1](https://ibb.co/9mJntHvw)
[Link 2](https://ibb.co/HfsKGXbN)
The x-axis represents the initial channels, while the y-axis shows runtime and memory consumption. The plots also include experiments with various input sizes, denoted by $s$. A missing dot indicates that an experiment encountered an out-of-memory error. The results demonstrate the scalability of BRO with respect to both channel and input size. We will include the figures in the next revision to help users select an adequate architecture. The experimental settings are consistent with those used in Figure 5 of the main paper.
We also include the results of Cayley layer in the revised Figure 2 via the below anonymous link.
[Link 3](https://ibb.co/jYD2Gm1)
---
**Q: Regarding zero-padding**
Thank you for pointing this out. We will revise our description on line 214. For clarity, the current implementation reduces the effect of convolution across edges but does not fully eliminate it. Since zero-padding does not affect the validity of the certification, one may also consider applying it post-orthogonalization, albeit at an increased computational cost. We will include this note in the revision.
---
**Q: Regarding the proof in A.1**
We apologize for the unclear notations and have revised them. For $\mathcal{F}\_c = S_{c, s^2} \left(I\_c \otimes (F \otimes F)\right)$ (Line 754), $F$ is the DFT matrix, and $\otimes$ is Kronecker product.
Regarding Equation (13), it is a result from Trockman & Kolter (2020), showing that any 2D circular convolution operation can be block-diagonalized by a Fourier-related matrix $\mathcal{F}.$
As for Equation (16), it is intended to explicitly demonstrate the orthogonality of BRO and its real-preserving property. The right-most term explicitly shows that the BRO convolution we use is orthogonal, as it consists of three unitary matrices. The middle term shows that $\text{BRO}(C)$ is real if $C$ is real-valued, because $\text{BRO}(\cdot)$ applied to a real matrix always produces a real matrix. As we parameterize $C$ in the real domain, the output of the BRO convolution is guaranteed to be real-valued.
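To make the real-preserving and orthogonality arguments concrete, here is a small numpy sketch (an illustrative reconstruction; in particular, taking the first $n$ columns of each frequency's matrix as $V$ is an assumption for this sketch, not necessarily our exact parameterization). Because conjugation commutes with the block reflector map, applying it per frequency to the conjugate-symmetric spectrum of a real kernel yields a real, norm-preserving circular convolution.

```python
import numpy as np

rng = np.random.default_rng(0)
c, s, n = 4, 8, 2   # channels, spatial size, BRO rank (illustrative)

# Real convolution parameters on the full s x s spatial grid.
C = rng.standard_normal((c, c, s, s))
C_hat = np.fft.fft2(C, axes=(-2, -1))   # conjugate-symmetric spectrum

def bro(V):
    """Complex block reflector: I - 2 V (V^H V)^{-1} V^H (unitary)."""
    Vh = V.conj().T
    return np.eye(V.shape[0]) - 2 * V @ np.linalg.inv(Vh @ V) @ Vh

# Apply the reflector independently to every frequency's c x c matrix,
# using its first n columns as V (assumption for this sketch).
W_hat = np.empty_like(C_hat)
for u in range(s):
    for v in range(s):
        W_hat[:, :, u, v] = bro(C_hat[:, :, u, v][:, :n])

# 1) Real-preserving: the spatial-domain kernel has no imaginary part,
#    since bro(conj(V)) = conj(bro(V)) preserves conjugate symmetry.
W = np.fft.ifft2(W_hat, axes=(-2, -1))
assert np.max(np.abs(W.imag)) < 1e-8

# 2) Orthogonal: the circular convolution preserves the l2 norm.
x = rng.standard_normal((c, s, s))
x_hat = np.fft.fft2(x, axes=(-2, -1))
y_hat = np.einsum('ijuv,juv->iuv', W_hat, x_hat)
y = np.fft.ifft2(y_hat, axes=(-2, -1)).real
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))
print("real kernel, norm-preserving circular convolution")
```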
---
**Q: Regarding parameterization for dense layers**
For ConvNets, the computational cost is primarily dominated by convolution operations, so we did not find much advantage in swapping out the dense layers. That being said, it could be worth exploring in other architectures that rely more heavily on large dense layers and could benefit from the properties of orthogonal parameterizations.
---
Rebuttal Comment 1.1:
Comment: The author's answer alleviated my main concerns (mainly about the certificate correction). I will raise my score. | Summary: This paper introduces a new 1-Lipshitz layer using the Block Reflector Orthogonal (BRO) parameterization of low-rank orthogonal matrices for constructing image classifiers with certified robustness. In addition a new logit annealing loss function is developed to balance margin learning across data points, addressing the limited capacity issue inherent in Lipschitz networks. The resulting architecture, BRONet, is demonstrated to achieve state‐of‐the‐art l2 certified robustness on benchmarks including CIFAR‑10, CIFAR‑100, Tiny‑ImageNet, and ImageNet.
Claims And Evidence: The paper claims significant gains in certified robustness and computational efficiency over prior methods (e.g., LOT, SOC) through the novel BRO layer and LA loss.
To my knowledge, the use of the block reflector parameterization has not been used before for 1-Lipschitz layers.
Evidence includes runtime and memory comparisons, as well as ablation studies showing marginal increases (around 1–2 percentage points) in certified accuracy.
Although the improvements are modest, the architecture is technically sound and empirical evidence is quite thorough.
Methods And Evaluation Criteria: The main application of 1-Lipschitz layers is certified robustness of classifiers using a margin argument.
The method is evaluated on benchmarks for certified robustness of image classifiers with respect to the l2-norm perturbation threat model. Although this l2 threat model is not a very practical model of adversarial attacks on images, it is widely studied in the certified robustness literature. Other common models such as the l1 and l-infinity threat models are not addressed by this work.
Theoretical Claims: The main theoretical claims of the architecture are based on well-established parameterizations of orthogonal matrices which I believe to be correct.
The risk bound derived in Theorem 1 and Proposition 3 seem to be correct and based on established results and are used to motivate the design of the logit-annealing (LA) loss. In particular it justifies the intuition that we should not maximize the margin uniformly in the presence of Lipschitz constraints. I think this is a good motivation for the LA loss.
I do think the proof for Theorem 1 could be stated more clearly in a proof environment or in a subsection "Proof of Theorem 1". I couldn't find a proof for Proposition 3 in the paper. Even if it is immediate from previous work (I assume it's Ledoux and Talagrand 2013), it's worth stating briefly. If it's being cited from another paper, it would be better to cite the result explicitly.
Experimental Designs Or Analyses: The experimental design is comprehensive, with comparisons across multiple architectures and datasets, including ablation studies that assess the effect of the LA loss.
Regarding runtime shown in Figure 2, I would also be curious to compare the runtime and memory efficiency of methods which do not require matrix inversion in their forward pass such as AOL or SLL. They may not be as competitive for certified accuracy, but will scale better.
Supplementary Material: I reviewed the additional results on certified robustness and run-times. I also reviewed the proofs for Theorem 1 and Prop 1 and Prop 3.
Relation To Broader Scientific Literature: The work builds on established concepts such as orthogonal parameterizations (e.g., via the Cayley transform) and Lipschitz network designs. The paper does a pretty good job of contextualizing and benchmarking many other relevant methods in the literature (figure 1 is quite nice in this regard). The low-rank block reflector approach and the new loss function are valuable incremental contributions. Additionally, I can see the LA loss becoming a standard ablation in the area of certified robustness (I say ablation because sometimes the CR loss is still better).
However, many other papers have pursued similar incremental improvements of certified robustness in these l2-norm threat models, and the overall contribution is not paradigm-shifting. It's not easy to make even small improvements in certified robustness, but I hope the community will begin to consider more realistic threat models, as image classifiers with a clean accuracy of 50% are not very useful.
Essential References Not Discussed: I think they have covered the literature of 1-Lipschitz layers and certified robustness well.
Other Strengths And Weaknesses: Strengths:
I think the low-rank orthogonal architecture is a nice way to balance efficiency and performance, avoiding the iterative matrix-inversion schemes of SOC and LOT.
The theoretical justification behind the LA loss is fairly interesting and seems to make improvements experimentally in most cases. I can see this loss being a common ablation in future works on certified robustness.
Weaknesses:
BRO still seems to have a significant computational overhead when computing the sparse matrix inverse that may limit its scalability.
Although making even incremental improvements in certified l2 robustness (as many other papers have pursued) is challenging, there is not really anything paradigm-shifting here. These parameterizations still perform very poorly on CIFAR100 and ImageNet clean accuracy, where 50% accuracy is far from acceptable in practice. I hope the community will begin to consider more realistic threat models and benchmarks, as even the maximum l2 perturbation size considered, $\epsilon = 108/255$, cannot saturate a single pixel.
Other Comments Or Suggestions: None
Questions For Authors: How sensitive is the LA loss to hyperparameter settings across different architectures? It seems $\beta=5$ is used in all benchmarks. I see in Table 14 that this is about optimal for CIFAR100, but is it generally best across the other datasets as well? How do you recommend tuning these parameters?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **Q: Regarding Theorem 1 and Proposition 3**
Thank you for your feedback. We will enhance the clarity of Theorem 1 by presenting its proof in a dedicated subsection. For Proposition 3, it is from (Ledoux and Talagrand 2013) and we will explicitly cite it in the statement for clarity.
---
**Q: Regarding Figure 2**
We appreciate the reviewer's curiosity. Indeed, 1-Lipschitz non-orthogonal layers could offer better computational efficiency, but they are less competitive for certified robustness. As shown in Table 1, the large-scale SLL models do not achieve the same level of certified accuracy. This highlights the limitations of these layers when it comes to scaling for robustness.
---
**Q: The hyperparameters in the LA loss**
We selected the hyperparameters in the LA loss by testing them on LipConvNet-10-32 and applied the same values across all experiments. When used on other datasets and architectures, we found these hyperparameters to perform well. Therefore, we did not fine-tune them further in order to reduce computational cost. In practical applications, users can further refine the hyperparameters using grid search or random search, which may lead to even better results.
---
Rebuttal Comment 1.1:
Comment: My concerns have been addressed. I have adjusted my score accordingly. | Summary: The paper proposes a new method to construct 1-Lipschitz neural networks, namely, the L2 norm Lipschitz constant for each layer is 1. A 1-Lipschitz network is very useful for guaranteeing the robustness of neural networks. The paper claims to outperform existing 1-Lipschitz network designs such as SOC and Cayley layers.
Claims And Evidence: The author claims that the proposed parametrization for 1-Lipschitz neural networks is efficient and outperforms existing methods. Experiments are conducted on multiple datasets, multiple perturbation epsilons, and multiple baseline methods.
Methods And Evaluation Criteria: The main method is based on the block reflector construction of orthogonal matrices. This construction is not discussed in prior works (in the context of 1-Lipschitz networks), although some techniques used are similar (e.g., the use of FFTs for orthogonal convolution). The authors also propose to adjust the commonly used cross-entropy loss for 1-Lipschitz networks, inspired by its limited capacity to learn large margins.
Theoretical Claims: Rademacher complexity is used to prove that the capacity of a 1-Lipschitz network is limited and may not fit large margins well. The analysis is based on the standard learning-theory approach and inspires the development of the LA loss.
Experimental Designs Or Analyses: The experimental design follows existing work in this field: clean accuracy and certified accuracy comparisons on a few datasets on different perturbation epsilons. Many baselines are included. The work in general performs well, although the improvements are relatively marginal and inconsistent in some settings.
Supplementary Material: Supplementary materials contain code, but I did not try to reproduce the results using the provided code.
Relation To Broader Scientific Literature: The design and creation of 1-Lipschitz network can be an important building block for creating neural networks with provable guarantees such as robustness and safety. In general, the techniques proposed by this may be valuable for the scientific community.
Essential References Not Discussed: Essential references are discussed and experimentally compared.
Other Strengths And Weaknesses: The main strength of the paper is the novel formulation of 1-Lipschitz neural network layer, and the adjusted cross-entropy loss for 1-Lipschitz training.
My biggest confusion is about the evaluation - reading the paper, it sounds like Table 1 includes BRO layers only and does not mention the LA loss. However, Appendix B.5 mentions that the LA loss is used in all BRONet models. The LA loss is a generic approach that can be applied to any 1-Lipschitz network - it does not require any special features of BRO. Technically, it can be added to any existing Lipschitz approach, and it is uncertain whether the major improvements in Table 1 come from BRO or actually from the LA loss. There are also many newly introduced hyperparameters, such as the rank and the logit-annealing hyperparameters. It is possible that the LA loss is the main contributor and that BRO may not outperform existing parameterizations in a fair setting, since it is not a universal approximator.
To support the claim that the “expressive power of deep neural networks constructed using BRO is competitive with that of LOT and SOC” (line 243), we hope to see Table 1 listing both BRO and BRO + LA loss numbers.
Other Comments Or Suggestions: None
Questions For Authors: Unlike many prior works on orthogonal layers, the construction presented in this paper is low-rank. Is a low rank necessary for this parameterization? For the scenarios where we need more model capacity, how can we get a full-rank orthogonal matrix?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Q: Regarding Table 1 Description**
Thank you for your feedback. Table 1 indeed presents the combined results with the LA loss, as stated in the Appendix. To ensure clarity, we will explicitly indicate this by adding a (+LA) notation in the revised version.
---
**Q: Fair Comparison of Different Parameterizations**
We understand your concerns about fairness in comparison. To clarify, our comparisons in Table 2 (LipConvNet Benchmark) and Table 4 (Backbone Comparison) are conducted under fair conditions, with all baselines incorporating the LA loss. To ensure transparency and avoid any confusion, we will explicitly state this in the table descriptions.
---
**Q: Regarding Low-rank parameterization**
The orthogonal matrix $W$ in BRO ($W=I-2V(V^TV)^{-1}V^T$) is consistently full-rank, and the trainable parameters $V$ are constrained to be low-rank to prevent $W$ from degenerating into a negative identity matrix, as shown in Proposition 1. To increase model capacity, experiments (Appendix D.1, Table 9) suggest that enlarging $W$ and setting $V$ to half its rank improve data fitting.
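The block-reflector construction described above is easy to sanity-check numerically. A minimal NumPy sketch (not the authors' implementation; sizes are arbitrary) confirming that $W$ stays orthogonal for any full-column-rank $V$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3                      # W is n x n; V has k < n columns (low rank)
V = rng.standard_normal((n, k))  # full column rank almost surely

# Block reflector: W = I - 2 V (V^T V)^{-1} V^T
P = V @ np.linalg.inv(V.T @ V) @ V.T  # orthogonal projector onto col(V)
W = np.eye(n) - 2 * P

# W is symmetric and orthogonal (W^T W = I), hence 1-Lipschitz in l2
print(np.allclose(W.T @ W, np.eye(n)), np.allclose(W, W.T))  # True True
```

Since $P$ is an orthogonal projector ($P^2 = P$, $P^T = P$), $W^T W = (I-2P)^2 = I - 4P + 4P^2 = I$, which is why $W$ is full-rank and orthogonal regardless of the rank of $V$.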
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions. My main concern is still about the fair comparison, where we ideally want to present the main table (Table 1) with LA loss on and off. That will truly compare the effectiveness of the proposed parameterization and give readers a true understanding of the expressiveness of the proposed parameterization.
The results in Table 2 and Table 4 are too limited and may not sufficiently support the claim that BRO is better than existing methods, since the benefits may come from LA loss, which can be directly combined to many existing methods.
Thus, I believe this submission is borderline and cannot strongly support its acceptance.
---
Reply to Comment 1.1.1:
Comment: **Table 1 Revision**
We appreciate your concern and have taken steps to provide a more comprehensive clarification. To explicitly address this, we will include the results both with and without the LA loss in Table 1. The performance of BRONet, both with and without LA, is presented below alongside the baseline LiResNet for comparison. On average, BRONet and LA contribute improvements of +0.32/+0.15 (CIFAR-10), +0.22/+0.57 (CIFAR-100), +0.72/+1.67 (Tiny-ImageNet), and +0.97/+1.47 (ImageNet), respectively, across different evaluation metrics.
| CIFAR-10 | Clean | $\ell_{2}=36/255$ | $\ell_{2}=72/255$ | $\ell_{2}=108/255$ |
| ------------------ | -------- | ----------------- | ----------------- | ------------------ |
| LiResNet | 81.0 | 69.8 | 56.3 | 42.9 |
| **BRONet-L** | 81.0 | 70.2 | 57.1 | **43.0** |
| **BRONet-L (+LA)** | **81.6** | **70.6** | **57.2** | 42.5 |
| CIFAR-100 | Clean | $\ell_{2}=36/255$ | $\ell_{2}=72/255$ | $\ell_{2}=108/255$ |
| ------------------ | -------- | ----------------- | ----------------- | ------------------ |
| LiResNet | 53.0 | 40.2 | 28.3 | 19.2 |
| **BRONet-L** | 53.6 | 40.2 | 28.6 | 19.2 |
| **BRONet-L (+LA)** | **54.3** | 40.2 | **29.1** | **20.3** |
| Tiny-ImageNet | Clean | $\ell_{2}=36/255$ | $\ell_{2}=72/255$ | $\ell_{2}=108/255$ |
| ---------------- | -------- | ----------------- | ----------------- | ------------------ |
| LiResNet | 40.9 | 26.2 | 15.7 | 8.9 |
| **BRONet** | 40.5 | 26.9 | 17.1 | 10.1 |
| **BRONet (+LA)** | **41.2** | **29.0** | **19.0** | **12.1** |
| ImageNet | Clean | $\ell_{2}=36/255$ | $\ell_{2}=72/255$ | $\ell_{2}=108/255$ |
| ---------------- | -------- | ----------------- | ----------------- | ------------------ |
| LiResNet | 47.3 | 35.3 | 25.1 | 16.9 |
| **BRONet** | 48.8 | 36.4 | 25.8 | 17.5 |
| **BRONet (+LA)** | **49.3** | **37.6** | **27.9** | **19.6** |
We believe that these, combined with Table 2 and 4 (comparing with baselines all using the LA loss), should help the reader fully understand the expressiveness of the proposed parameterization, independent of the LA loss. | null | null | null | null | null | null |
Selective Preference Aggregation | Accept (poster) | Summary: This paper proposes aggregating ordinal preferences by producing selective rankings. The proposed selective aggregation framework explicitly reveals and controls dissent. The authors develop efficient graph-based algorithms (Algorithm 1 and Algorithm 2) with theoretical guarantees on correctness, uniqueness, and runtime. Experiments across diverse real‐world datasets demonstrate that selective rankings are more robust, transparent, and fair compared to standard voting rules and ranking algorithms.
Claims And Evidence: - The paper asserts that standard aggregation methods hide disagreement by forcing a total order, while selective aggregation naturally reveals dissent. Although the paper provides theoretical definitions and proofs, the empirical evaluation is questionable. The experiments appear contrived and lack a convincing demonstration that selective rankings offer a meaningful advantage in real-world scenarios. Moreover, the claim of “extensive” evaluation is overstated given the limited scope and sometimes arbitrary selection of datasets.
- The algorithms (Algorithm 1 and Algorithm 2) are presented as efficient and unique (optimal) solutions for the selective aggregation problem.
- The selective rankings are claimed to be more robust and transparent compared to traditional methods (Borda, Copeland, Kemeny, MC4). The robustness metrics show some promise, yet the overall performance differences are marginal and not convincingly argued to translate into practical benefits. Furthermore, the paper glosses over the trade-offs involved in choosing the dissent parameter $\tau$.
Methods And Evaluation Criteria: - The main contribution, the idea of using selective rankings to “agree to disagree” is not clearly motivated by practical needs. In many applications, a total order is still necessary, and the paper does not adequately address how its partial order can be converted or used in those contexts.
- The algorithms are described in mathematical detail.
- The evaluation is carried out on several datasets converted into pairwise preferences.
Theoretical Claims: The paper’s theoretical results (e.g., Theorems on uniqueness and robustness) are mathematically rigorous under ideal assumptions.
Experimental Designs Or Analyses: - The experiments do not convincingly show that selective aggregation improves decision quality; instead, they often merely illustrate that the method can “abstain” from making comparisons—a trivial consequence of the design.
- The paper fails to explore how sensitive the results are to the choice of $\tau$.
Supplementary Material: .
Relation To Broader Scientific Literature: - The paper is grounded in classic social choice theory and traditional ranking algorithms.
- The paper also relates to contemporary research in machine learning where transparency and robustness of aggregated annotations (e.g., in RLHF) are increasingly important.
- There is little discussion of alternative robust ranking methods, and key references (e.g., for baselines like ORPO or CPO) are not adequately introduced when they first appear.
- The approach of simply abstaining from comparisons does not address situations where a complete ranking is essential, and the paper lacks discussion on how its method compares to other state-of-the-art aggregation techniques under realistic conditions.
Essential References Not Discussed: .
Other Strengths And Weaknesses: ### Strengths
- The paper is mathematically detailed, providing rigorous proofs and a comprehensive supplementary section.
- It introduces the idea of selective aggregation, which is novel in its explicit treatment of dissent.
### Weaknesses
- The overall novelty is limited; the idea of abstaining from forced arbitration is not revolutionary and may offer little benefit in many practical applications.
- The method’s sensitivity to the dissent parameter $\tau$ is underexplored, and the paper does not offer practical guidance on setting this parameter.
- Presentation issues (such as unclear figures and poor integration of citations) further detract from the work.
Other Comments Or Suggestions: - The paper would benefit from a clearer statement of its primary contribution. Is the focus on the algorithmic framework for selective aggregation, or on its application to a specific problem domain?
- More extensive and realistic experiments are needed to justify claims of improved transparency and robustness.
- A discussion on how to convert a selective (partial) order into a total order when required would strengthen the practical relevance of the approach.
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Response
Thank you for your feedback! We address them and include tables at https://tinyurl.com/2ybsfs95
> Is the focus on the algorithmic framework for selective aggregation, or on its application to a specific problem domain?
The primary contribution is the proposed algorithmic framework. We will revise the text to state this. The explored problem domains serve as illustrations of the framework's capabilities in different contexts.
> The approach of simply abstaining from comparisons does not address situations where a complete ranking is essential …
We appreciate this concern. Many relevant real-world tasks (e.g. RLHF, content moderation, search ranking) do not require total orders. When necessary, the dissent parameter can be raised to 0.5, and further until the graph disconnects (with loss of guarantees). If there is still no total order, then there is no majority preference between certain items or a cycle is formed.
If a total ordering is truly necessary, other methods can be applied within tiers to produce local complete orderings. This process distinguishes comparisons supported by the algorithm's guarantees (between tiers) from those less well-founded. Online, the approach highlights where more information could resolve disagreement. SPA serves as a robust first step: identify consensus comparisons first, then apply targeted methods (experts, more users) where disagreement persists. We’re happy to add this to the text.
> The paper glosses over the trade-offs involved in choosing the dissent parameter.
We have several strong points, already in the paper, that we will consolidate into a new subsection of Section 3 (Algorithms):
The path algorithm (Appendix) allows us to avoid preselecting a $\tau$ value. Importantly, computing all possible rankings for any $\tau < 0.5$ incurs no additional asymptotic cost.
Theoretical guarantees can guide its selection. The discussion starting at line 281 gives an example of how we might select $\tau$ based on assumptions about noise or missing preferences.
> There is little discussion of alternative robust ranking methods, and key references (e.g., for baselines like ORPO or CPO) are not adequately introduced when they first appear.
We're happy to add discussion of ORPO/CPO/other methods. ORPO focuses on alignment during the training process. SPA is less constrained - it may be possible to use it in tandem; SPA could filter for high-quality responses with ORPO used for fine-tuning.
CPO more closely resembles SPA, but its purpose is to contrast “near-perfect but flawed translations”. While comparing top tiers in SPA could have a similar purpose, our method captures preferences across all items. As a result, we feel rank aggregation methods are more appropriate baselines.
> The selective rankings are claimed to be more robust and transparent … yet the overall performance differences are marginal and not convincingly argued.
We appreciate this and will highlight SPA's benefits. SPA provides practical benefits through transparency and robustness. It reveals underlying consensus strength (e.g., Sushi's weak majority via path, App D.3). Existing methods obscure this. As noted, SPA offers stronger robustness guarantees (see dGCS). SPA exhibits no inversions while others vary significantly - this stability is crucial in domains like RLHF, preventing reward signals from flipping due to small sample changes. SPA consistently yields lower disagreement rates (Table 1) compared to baselines (often 0-6% for SPA vs. 4-12%+ for baselines).
> The overall novelty is limited; the idea of abstaining from forced arbitration is not revolutionary and may offer little benefit in many practical applications.
> The main contribution, the idea of using selective rankings to “agree to disagree” is not clearly motivated ….
Beyond abstaining, SPA reveals preference strength, crucial for trustworthy AI. For instance, methodologies aiming to learn from demonstrations [Brown et al. 2019] or generative reward models [Mahan et al. 2024] use steps with simple majority rules. Our rankings are always supported by a majority of users, clarify where preferences conflict, remain stable under noise, and can be adjusted. Our response to sHCm further details future use cases in ML workflows like RLAIF and personalized models.
* Mahan, et al. "Generative reward models." arXiv preprint arXiv:2410.12832 (2024).
* Brown, et al. "Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations." ICML. PMLR, 2019.
> More extensive and realistic experiments are needed ….
We appreciate the feedback. Table 8 (in our response) shows performance in scenarios with and without consistent user preferences. Only SPA reveals user contradictions. See also our responses to dGCS (ablation) and fgRQ (dropped comparisons).
We hope that our response resolves any misunderstandings! Thank you again for your time, and we look forward to resolving any remaining concerns that you may have. | Summary: The paper introduces a new preference aggregation solution, called Selective Preference Aggregation (SPA). Its essential feature is to return a partial order of items based on beyond-majority principles. More precisely, for any $\tau \in [0, 0.5)$, SPA constructs a total order over the finest partition of the items such a derived order $i \succ j$ cannot occur if more than a fraction $1-\tau$ of users disagree with $i \succ j$. In a sense, the partial order is "safe", i.e. it only makes ordering decision if a large, potentially overwhelming, majority agree with the decision. The paper proves desirable property of SPA and evaluates it empirically on several datasets.
Claims And Evidence: The paper shows that:
- The values of all SPAs for $\tau \in [0, 0.5)$ can be computed efficiently, using graph algorithms.
- For $\tau \to 0.5$, SPA isolates an existing Condorcet winner.
- SPA is "safe": under reasonable conditions, adding missing preferences cannot modify previously derived orders.
- Adding a new item to SPA (with comparisons to other items) will not invert orders (though it may cancel some).
While I have not carefully analyzed the provided proofs, I am fully convinced that the theorems hold.
The method is also evaluated on 5 datasets, with findings consistent with the intuition, i.e. SPA makes fewer mistakes by abstaining from returning a total order, though this can lead to a very coarse partition (and thus a very sparse comparison graph). The results seem reasonable.
Finally, the paper shows how SPA can be used for preference learning in a machine learning context (and thus, in principle, with generalizations to non-evaluated items). I am less convinced by the value of this experimental setup (see below).
Methods And Evaluation Criteria: The experiments in Section 5 make sense to me, and I am satisfied with their design, their presentation and the results.
However, the experiment in Section 6 is less compelling. In particular, the value of doing machine learning is to allow for generalization to non-evaluated items. Yet, as far as I understand, the trained models are evaluated on the training set (line 422, left column). I would suggest that they instead separate the DiCES dataset into two subsets, one for training and the other for evaluation. The reported prediction error should then correspond to the evaluation set.
Also, I am not sure I fully understood how SPA was adapted to Section 6.
Theoretical Claims: While I have not carefully analyzed the provided proofs, I have overviewed the Appendix.
I am fully convinced that the theorems hold, as they leverage classical graph theoretical constructions.
Experimental Designs Or Analyses: As I said above, I believe that Section 6 would gain by separating the dataset into a training and an evaluation set, to evaluate the out-of-training-set predictions of the SPA-based trained model.
Supplementary Material: I did not thoroughly review the supplementary material. However, the results seem reasonable to me.
Relation To Broader Scientific Literature: The literature review seems satisfactory to me.
Perhaps the authors could add references to more datasets that fit their algorithms (see suggestions below).
Essential References Not Discussed: The paper does seem to be missing any key reference.
Other Strengths And Weaknesses: I really appreciated the paper's motivation section, especially with respect to the limitations of the principle of majority and the need to go beyond this, not only for legitimacy reasons, but also because of security. This yields an originality that is of great strength to the paper.
The paper is also very well written, with an exception for Section 6 which I found confusing at times (see below).
I think that, especially in the current age of irreconcilable judgments on AIs' preferred behaviors, especially in the context of social media moderation and content amplification, the paper is of very high significance.
Other Comments Or Suggestions: I did not understand how SPA was precisely defined in Section 6 (line 383, right column). The paper writes "SPA for the largest value of dissent, which leads to the clearest distinctions among conversations". How is "distinctions among conversations" measured? Moreover SPA creates a partition. Are the authors guaranteeing a partition into two subsets (toxic and non-toxic)? What if there are more subsets? I would appreciate clarifications on this definition.
Additionally (and perhaps relatedly), the definition of label error (line 420, left column) introduces a variable $t$ which is never defined (except using the word "consensus" which I do not seem to understand, see below), and whose value in the experiments does not seem to be given. Could the authors clarify?
Less importantly, I would urge the authors to look into the data of https://pol.is [1]. There has been recent efforts to make such data openly available, and they really match the DiCES dataset structure. But there are many more such data. Most importantly, I believe that SPA would be a great addition to the pol.is website.
The authors could also be interested to look into the Tournesol dataset [2], though the comparisons in this dataset are a lot more sparse, which raises additional challenges.
Finally, I am confused about the use of the word "consensus" throughout the paper. In plain English, "consensus" seems synonymous with "quasi-unanimity", which suggests using $\tau \to 0$. But this is not how it seems to be used, e.g. lines 323, right column, or line 420, left column.
[1] Polis: Scaling Deliberation by Mapping High-Dimensional Opinion Spaces. Christopher Small, Michael Bjorkegren, Timo Erkkilä, Lynette Shaw, Colin Megill (2021). Departament de Filosofia, Sociologia i Comunicació Audiovisual i Publicitat.
[2] The Tournesol dataset: Which videos should be more largely recommended? Lê-Nguyên Hoang, Romain Beylerian, Julien Fageot, Louis Faucon, Aidan Jungo, Adrien Matissart, Nathaël Noguès (2024). https://openreview.net/forum?id=5WFzk0H27p
Questions For Authors: Could the authors more rigorously define the adaptation of SPA in Section 6?
I would also be interested in the authors' thoughts on how SPA could be adapted for highly sparse comparisons, as is the case in RLHF and recommendation AIs (see e.g. [3]).
Additionally, one interesting feature of pol.is is to leverage community detection to find agreement across (unbalanced) communities. How could SPA be adapted to such settings (especially when the communities are not predefined, and have instead to be learned)?
(note that combining the two problems, namely sparsity and cross-community agreements, makes the problem even more challenging, as the sparsity may be adversarial, i.e. some community overrates some items rather than others)
[3] Plackett-luce regression mixture model for heterogeneous rankings. M Tkachenko, HW Lauw (2016). https://dl.acm.org/doi/abs/10.1145/2983323.2983763
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your time and feedback! We appreciate your feedback and your detailed suggestions for improvement, including further datasets to improve our work. We provide tables at https://tinyurl.com/2ybsfs95
> However, the experiment in Section 6 is less compelling … The reported prediction error should then correspond to the evaluation set.
In our revised setup, we use only a subset of users and items (80/20 train/test split) to create rankings for the given items. We switch to using a pre-trained model (bert-mini) as a starting point, and replace pairwise majority with the Copeland method for consistency with our experiments. We then note the total per-user error, as well as how well a model trained on these rankings generalizes to new users in Table 10.
We also provide a table of model generalization to new items (with train users), new users (on train items), and new users on new items. We binarize by picking the threshold with the maximum TPR such that FPR is capped at 10% (Table 11). We hope to spend further time exploring other pre-trained setups and architectures to ensure higher performance for test-set items and users, and to demonstrate SPA's performance advantages with other choices of threshold.
> Also, I am not sure I fully understood how SPA was adapted to Section 6.
We made an error in our description of the adaptations, now corrected. DICES uses annotators who rate an item toxic/non-toxic, but the level of toxicity beyond that is not clarified. In our new setup, in order to avoid excessive (and incorrect) levels of “ties”, we have only included preferences where there is distinction (toxic vs non-toxic). We then scale the weights of each preference pair to the same total weight, to make each item-pair equally important.
>How is "distinctions among conversations" measured?
“Distinctions between conversations” is used to specify the greatest number of tiers (greatest comparability). We have added text to our manuscript to make that distinction clear. There are several tiers created - the chosen threshold determines which are grouped under toxic/non-toxic.
> The definition of label error (line 420, left column) introduces a variable t … Could the authors clarify?
$t$ in this instance represents the majority threshold, at which point a majority of users rate a conversation toxic. We have revised the text to make this clear as well.
> Finally, I am confused about the use of the word "consensus" throughout the paper. In plain English, "consensus" seems synonymous with "quasi-unanimity", which suggests using $\tau \to 0$. But this is not how it seems to be used, e.g. lines 323, right column, or line 420, left column.
We agree that the term was not well-defined and should be clarified. In this setting, we use “consensus” to refer to the true majority vote of annotators.
> Additionally, one interesting feature of pol.is … especially when the communities are not predefined, and have instead to be learned)?
This is a fascinating direction for potential future work! We take the question to mean that we want to find cases where multiple distinct communities agree (please correct us if needed!). One could imagine applying this approach hierarchically in such a setting - finding a selective preference aggregation for each community individually, then treating each of those tiered rankings as an individual “judge” and finding a preference aggregation across communities. If communities need to be learned, one could imagine assigning each individual a weight based on an estimated probability of belonging to each community (see our response to reviewer dGCS for further details on adding weights to our approach). SPA could also be used as part of the identification process; identifying groups of users who agree with each other in areas where overall disagreement is high (within a tier, for instance) can be used to help determine these communities.
> I would also be interested in the authors' thoughts on how SPA could be adapted for highly sparse comparisons, as is the case in RLHF and recommendation AIs (see e.g. [3]).
In future work, we plan to explore different ways of adapting SPA for use in larger/sparser datasets. The most straightforward is to build on existing work using LLMs to make judgments (RLAIF). We could use multiple LLMs to make judgments to avoid amplifying model biases; one could also use lightweight models and "recruit" larger models only when preferences conflict, to save on computational cost.
SPA could also be used with modeling. SPA could be used to limit uncertain responses from existing models, or could be used to help create models that better model users. Comparisons within tiers could be used to train a base model for accuracy, and identified groups (see previous response) could be used to train personalized models with high accuracy for each group.
We hope our responses clarify your points! Thank you for your feedback and suggestions. | Summary: The paper introduces a new framework for ranking via preference aggregation while allowing for disagreement of the voters. Unlike many traditional methods that enforce a total order, the approach aims to construct a partial ranking, only comparing items where a sufficient majority agrees. The paper proposes an algorithm graph based algorithm to construct ordering relationships of the items. They provide a correctness analysis and asymtotical runtime of it. The authors also apply their method on small datasets.
Claims And Evidence: - The authors claim that the algorithm is fast and scalable; however, this is not convincing, especially from an experimental perspective.
Indeed:
- The datasets are way too small. The highest number of items is 175. Since the runtime seems to be quadratic (in the number of items), I don't see a reason not to apply it to thousands of items (also because other methods can handle such dimensions)
- Often, there are (on the order of) n*log n pairwise comparisons available (because this many comparisons suffice to rank the n items), but the only case analyzed in the paper with a nonzero percentage of missing pairwise comparisons is the one with 7 items. It would be interesting to see the behaviour of the algorithm in other cases, i.e., with missing pairwise comparisons on bigger datasets.
- SPA_0 and SPA_min very often do not produce actual rankings. Often the number of tiers is 1, which would mean that all items are incomparable, which is not very informative
- Figure 4 does not report the unit of measure for the runtime. I think it is seconds, but 60-100 seconds for 500 items seems rather slow.
From a theoretical perspective, the runtime of O(n^2 * p) is reasonable.
Methods And Evaluation Criteria: See the issues above on the datasets and experiments.
Theoretical Claims: The theoretical analysis seems rigorous and sound. I checked the algorithm 1 and its runtime.
Experimental Designs Or Analyses: Besides the issues previously mentioned in Claims And Evidence, the rest is valid and clear.
Supplementary Material: I review the runtime of algorithm 1. And read the additional theoretical results.
Relation To Broader Scientific Literature: The distinction with ranking with ties is not very clear. The reason why the method would deem items incomparable rather than equivalent is not well explained. In particular, the items deemed incomparable might actually be evaluated and deemed equal by some individuals/evaluators. I think this distinction needs to be clarified theoretically.
The empirical analysis would also benefit from direct comparisons with ranking-with-ties methods.
Essential References Not Discussed: Perhaps the Bradley-Terry model, which is commonly used to rank from pairwise comparisons, could be mentioned:
Bradley, Ralph Allan; Terry, Milton E. (1952). "Rank Analysis of Incomplete Block Designs: I. The Method of Paired Comparisons". Biometrika. 39 (3/4): 324–345.
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: Typos and similar:
- line 48R, the sentence seems incomplete
- appendix A2: issues with the references and an extra comma
Questions For Authors: - In the problem statement $SPA_\tau$, what exactly is Comparisons(T)? Also, $\mathbb{T}$ is undefined; I guess it is the family of tiered rankings.
- What do the colors in Figure 2 represent?
- What is the unit of measure for the runtimes in Figure 4? Seconds?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Response
Thank you for your time and feedback! We include supplementary tables here: https://tinyurl.com/2ybsfs95
> The authors claim that the algorithm is fast and scalable, but this is not convincing …
> … 60-100 seconds for 500 items seems rather slow.
As you’ve noted, our empirical runtimes were slower than our asymptotic analysis suggests. That is due to some inefficiencies that we have since resolved in our implementation. We provide a table of updated runtimes (in seconds) in Table 6 and have updated the figure in our manuscript. (Note that the naive method remains slow because it does not leverage the path approach’s tricks.)
We’re happy that you agree our proven asymptotic runtime of O(n^2*p) is reasonable. Indeed, many competing methods can be quite computationally intensive - for instance, Kemeny-Young is NP-Hard and must be solved with integer programming in many contexts. Our algorithm’s asymptotic scaling - which is linear in the number of input pairwise preferences - gives a strong characterization of our algorithm’s scalability, independent of any quirks in implementation or hardware.
> Often, there are (of the order of) n*log n pairwise comparisons available…, for missing pairwise comparisons for bigger datasets.
We appreciate your point regarding the lack of missing pairwise comparisons. We provide Table 7 highlighting the number of comparisons at $\tau$_max with 5% of all pairwise comparisons dropped. We also refer you to the Δ-sampling values, which report the median change with 10% of samples dropped, equivalent to 10% missing comparisons.
> SPA_0 and SPA_min very often do not produce actual rankings …
To clarify, we do feel these values (even when creating minimal tiers) highlight key information. At SPA_0, any tiers > 1 indicate unanimous preference among users. The dissent rate at which we see SPA_min can be informative as well - in certain datasets such as Sushi, this value is high (> 0.4), which reveals high levels of underlying disagreement. This information can also be found essentially for free, since Algorithm 2 gives these tiered rankings as part of the process of finding more granular rankings like SPA_max.
> The datasets are way too small …
We appreciate the reviewer's suggestion. Existing larger datasets are generally sparse; we discuss future adaptations with reviewer sHCm that would enable SPA. We note that the number of potential pairwise preferences processed in existing scenarios is already substantial - over 7.5 million for DICES (Section 6) – demonstrating SPA's ability to handle a considerable volume of preference data. SPA’s design and theoretical guarantees (Section 4) ensure predictable behavior at any scale. Several applications remain at small values of n; reviewer dGCS points out potential applications of SPA in RLHF at n < 5, and we note applications that exist at the current scale. If the reviewer wants further clarity on larger scale data, we are willing to conduct experiments on synthetic data.
> The distinction with ranking with ties is not very clear … this distinction theorically.
We use 'incomparable' when no strict pairwise preference exists between items. Items in the same tier are thus 'incomparable', which can arise from various situations like cycles, evenly split preferences, or judges explicitly marking items as equivalent (see Fig 3). Therefore, 'incomparable' (in the same tier) does not necessarily mean 'equivalent'. For example, if ⅔ of judges think A > B, ⅔ think B > C, and ⅔ think C > A, that does not necessarily mean A = B = C. Perhaps 0 judges think any of those items are equivalent. Users can state equivalence, and our ranking considers it. However, the distinction between abstention/disagreement and asserted equality is a key benefit within our tiers.
> Also the empirical analysis should benefit from a direct comparisons with ranking with ties methods.
Regarding comparison to tie-handling methods: standard baselines like Borda or MC4 do produce ties when scores are equivalent, but by design these instances are unlikely. Our mechanism —abstaining based on exceeding a pairwise dissent threshold — differs from methods inducing ties via score equivalence.
> what is exactly Comparisons(T)? Also $\mathbb{T}$ is undefined, I guess it is the family of tiered rankings
We define $\mathrm{Comparisons}(T) := \sum_{i,j \in [n]} \mathbb{1}[\pi_{i,j}(T) \neq \mathrm{DNC}]$ in Section 2, where $\mathrm{DNC}$ denotes the "do not compare" value; we acknowledge issues with formatting in our submission that we have now fixed. Please let us know if you would like further clarification on our definition of tiered rankings ($\mathbb{T}$) in Definition 2.1.
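To illustrate Comparisons(T) concretely, here is a minimal sketch under the reading that a tiered ranking admits a strict pairwise comparison exactly when two items sit in different tiers (our assumption, not the authors' code; `tier_of` is a hypothetical encoding of a tiered ranking as a tier index per item):

```python
def comparisons(tier_of):
    """Count ordered pairs (i, j) on which a tiered ranking commits to a
    strict preference, i.e., the two items are in different tiers."""
    n = len(tier_of)
    return sum(1 for i in range(n) for j in range(n) if tier_of[i] != tier_of[j])
```

Under this reading, a single tier yields zero comparisons (total abstention), while assigning every item its own tier yields n(n-1) comparisons (a total order).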
> What the colors in figure 2 represent?
The colors denote items in the same tier.
> Perhaps the Bradley-Terry model….
We have added a reference, and include an additional experiment in our response to reviewer mXsg.
We hope these clarifications address your concerns! We are happy to resolve any further questions.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. However, several of my concerns have not been addressed.
1) the method has not been shown to work with n*log n pairwise comparisons (10% missing comparisons is almost like having all of them). Collecting n^2 comparisons is almost always unavailable; indeed, the authors admit that 'Existing larger datasets are generally sparse', but there is no test on these real datasets
2) the datasets are way too small, with way too few items
3) Minor, but I feel some baselines against methods with ties are needed. Bradley-Terry has extensions with ties (Rao and Kupper, 1967: Rao, P. and Kupper, L. L. Ties in paired-comparison experiments: A generalization of the Bradley-Terry model. Journal of the American Statistical Association, 62(317): 194–204, 1967), but one could also simply deem items tied when the difference between two scores of a method is below a certain threshold. You could easily make tiers based on the ties.
4) Furthermore, Reviewer mXsg is on point with 'The experiments do not convincingly show that selective aggregation improves decision quality; instead, they often merely illustrate that the method can “abstain” from making comparisons—a trivial consequence of the design.' Did you consider datasets with a ground truth ranking, besides the observed pairwise comparisons to rank from?
---
Reply to Comment 1.1.1:
Comment: Thanks! We've included some responses below – but we’d like to start with a broader misunderstanding.
> Collecting n^2 is almost always unavailable… but there is no test on these real datasets
Our work focuses on a large set of real-world tasks like college rankings, subjective recommendations, and toxicity prediction (Sec 5). These tasks cover major applications of preference aggregation and benchmark our approach across domains, preference types, disagreement, missingness, and noise. The datasets in our paper are smaller because they are cases where we can gather sufficient pairwise comparisons to make reliable claims.
It seems like you are concerned we did not test our method on a sparse dataset with millions of items and/or users. This is not because our method doesn’t scale but because we know how it would behave given the degree of sparsity. In such tasks, where we are missing so many preferences, every selective ranking would have a single tier. To be clear, **this is a feature, not a bug.** There is no collective preference claim that we can make that is robust.
We want to be clear that we see the problems as important, but out-of-scope. In this case, we can extend our paradigm to handle them in several ways (e.g., by imputing missing preferences from a plausible distribution and constructing selective rankings). We see them as out of scope since it requires fundamentally different approaches and detracts from the fact that we need a different paradigm in the first place. Here, it is important to establish the foundations of the method and highlight that it works correctly for important problems.
> method has not been shown to work n*log n number of pairwise comparisons
There is no reason that it would not. We can scale up the synthetic datasets.
If you meant collecting ~n log(n) comparisons per judge (assuming transitivity), our approach is fully compatible via preprocessing, although this assumption may not hold in the real world.
> 10% missing comparisons is almost like having all of them.
We can include ablation studies where we drop more comparisons (or report the threshold fraction of dropped preferences at which we obtain a single tier). Again, this is a feature, not a bug.
> ..baselines against methods with ties are needed.
We note that the methods do have ties. The broader issue is that they do not arise often. We are happy to include these in the table and discuss them in a revision.
> Did you consider datasets with a ground truth ranking?
> You could easily make tiers based on the ties.
Thanks for bringing this up. We’d like to use this as an opportunity to address an important point: **there is no ground truth for many tasks where we aggregate preferences** – i.e., what is the ground truth when we vote, rank colleges, or rank sushi? In such tasks, we’d view ground truth as the set of individual preferences. Standard aggregation will distort the ground truth as a result of reconciliation. In contrast, selective aggregation would return "as much ground truth as possible".
Our algorithm identifies exactly the subset of items where sufficient users agree to disagree. We want to highlight that this behavior is the result of deliberate algorithm design. Thresholding doesn't inherently capture preference cycles or the disagreement depth revealed by dissent levels.
That said, we do have cases where we show that we return a ground-truth ranking. Specifically, in Section 6, we return a ranking that minimizes per-user disagreement relative to other methods and generalizes to new users (see sHCm). See our response to reviewer dCGS for recovery of rankings under user noise.
> The experiments do not convincingly show that selective aggregation improves decision quality; instead, they often merely illustrate that the method can “abstain” from making comparisons—a trivial consequence of the design.
We agree that the discussion points do not clearly articulate this. We have been revising the paper and this should come across far more clearly now.
Our experiments show that selective aggregation leads to better decisions because:
- It only highlights where people agree
- It is robust by design, when other methods are brittle
In comparison, existing methods lead to "bad decisions" because:
- They overrule users (because they aim to return complete orders)
- Their output changes drastically under different conditions
The first point shows that existing methods will not lead to "good decisions" because they inherently overrule users. The second shows that – even in settings where they are willing to tolerate disagreement – existing methods may still lead to bad decisions because their output is sensitive to the realities of preference data and aggregation. Specifically, we show that their output will change dramatically when we drop a little data or add a little noise. These are all realistic scenarios that would lead existing methods to fail. However, our method is robust by design. | Summary: This paper introduces selective preference aggregation (SPA), a framework that aggregates ordinal preferences into partial orders (tiered rankings) to avoid arbitrating disagreements. The core contributions include a graph-based algorithm, theoretical guarantees (e.g., stability under missing data), and empirical validation across datasets like NBA rankings and toxicity detection. SPA demonstrates improved transparency and robustness compared to traditional methods (e.g., Borda, Kemeny).
Claims And Evidence: Most claims are supported by evidence, but critical gaps in statistical rigor and baseline comparisons weaken their persuasiveness.
1. SPA’s 0% inversion rate under Δ-Gaming (Table 1) supports robustness claims, but the absence of p-values or confidence intervals undermines statistical significance.
2. SPA’s 18.4% label error in toxicity detection (Figure 5) is compelling, but the expert baseline’s high error (43%) limits its interpretability.
3. RLHF Applicability: The claim that SPA generalizes to RLHF with small n ($<5$) lacks empirical support.
Methods And Evaluation Criteria: The proposed methods are theoretically sound, but evaluation criteria and baseline selections are outdated and contextually limited.
Theoretical Claims: Key theorems are valid under stated assumptions but require clarification to resolve inconsistencies.
- Proposition 4.2: Stability under missing data relies on imputing preferences as indifference ($\pi$=0), which may not hold for non-random missingness.
Experimental Designs Or Analyses: Experimental designs are generally rigorous but lack critical controls and scalability tests.
1. SPA’s linear runtime is untested on large n (>10^4), limiting real-world applicability.
2. Robustness claims (e.g., Δ-Gaming) lack significance tests, making it hard to validate improvements.
Supplementary Material: Yes. I have reviewed Appendix D to understand the experiment sections (Section 5/6).
Relation To Broader Scientific Literature: The paper situates itself within social choice theory and machine learning.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The paper’s originality and practical impact are strong, but structural and clarity issues hinder accessibility.
**Weaknesses**
1. Related work is embedded in the Introduction, lacking a dedicated section. This prevents readers from gaining a systematic understanding of the field’s context.
2. Non-essential theorems (e.g., Section 4) and complex proofs in appendices impede readability.
Other Comments Or Suggestions: **Typos**
- Missing notation of "$\Delta$-Gaming" in Table 1. It should be "$\Delta$-Adversarial", based on the caption.
- L349, "We randomly sample or flip" should be "We randomly drop or flip"
Questions For Authors: 1. In RLHF settings where the number of items n is typically small (<5), SPA’s tiered rankings may collapse to trivial solutions (e.g., all items in a single tier due to insufficient data). To validate SPA’s utility in AI alignment tasks, could you provide ablation studies or theoretical analysis demonstrating its behavior for n < 5? This would clarify whether SPA’s advantages (e.g., robustness, transparency) persist in the small-scale preference comparisons characteristic of RLHF.
2. SPA assumes uniform weighting of user preferences. However, in real-world applications (e.g., expert-driven labeling), users may have heterogeneous weights (e.g., experts’ preferences matter more). Can SPA be directly applied in such a scenario? And can you analyze how this affects tiered rankings and the guarantees (e.g., stability under missing data)? This would enhance SPA’s applicability to scenarios where users have varying levels of credibility or expertise.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank you for your response! We provide tables at https://tinyurl.com/2ybsfs95
> In RLHF settings where the number of items n is typically small (<5) …This would clarify whether SPA’s advantages (e.g., robustness, transparency) persist in the small-scale preference comparisons characteristic of RLHF.
Certainly! SPA's ranking robustness primarily hinges on the number of judges (p) and their agreement (1 - $\tau$). High consensus is needed for admitted pairwise preferences. More judges clarify patterns and enable rankings at lower dissent.
Low granularity suggests the need for more judges. Deep disagreement might warrant other methods (like expert input), highlighting SPA's transparency. (See reviewer responses mXsg/FgRQ for details).
We provide an example in Table 9. Increasing p (10 -> 30) allows more tiers (3 -> 4) at lower dissent (0.5 -> 0.333). Limited rankings (dissent < 0.5) can signal the need for more judges and identify items with strong agreement.
> SPA assumes uniform weighting of user preferences. However, in real-world applications (e.g., expert-driven labeling), users may have heterogeneous weights (e.g., experts’ preferences matter more). Can SPA be directly applied in such a scenario, and how does this affect tiered rankings and guarantees (e.g., stability under missing data)?
This is possible. The most trivial way is to include expert preferences as equivalent to multiple judges. We can also use arbitrary nonnegative weights on each judge. The guarantees then become relative to not overruling a certain sum over weighted preferences of judges.
Just set
$\textrm{Disagreements}(T) := \max_{i,j \in [n]} \sum_{k=1}^p w_k \, \mathbb{1}[\pi_{i,j}^{k} \neq 1,\ \pi_{i,j}(T) = 1]$
where $w_k$ is the weight of a given judge, renormalized so that $\sum_{k \in [p]} w_k = p$. We can maintain many of the guarantees premised on the disagreement constraint - for example, stability under missing data. Some theory does change; for example, $|\mathcal{W}|$ is no longer necessarily linear in p.
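As a concrete illustration of this weighted generalization, the quantity above can be sketched in a few lines. This is our hypothetical sketch, not the paper's implementation: `prefs[k, i, j] = 1` is assumed to encode that judge k strictly prefers item i over item j, and `tier_of` is an assumed encoding of a tiered ranking.

```python
import numpy as np

def weighted_disagreements(prefs, tier_of, weights=None):
    """Max over item pairs of the total judge weight overruled by a tiered
    ranking: max_{i,j} sum_k w_k * 1[judge k does not prefer i over j,
    ranking asserts i over j].

    prefs:   (p, n, n) array; prefs[k, i, j] = 1 iff judge k prefers i over j.
    tier_of: length-n tier indices (lower tier = ranked higher).
    weights: length-p nonnegative judge weights (uniform if None),
             renormalized so they sum to p.
    """
    p, n, _ = prefs.shape
    w = np.ones(p) if weights is None else np.asarray(weights, dtype=float)
    w = w * p / w.sum()                      # renormalize: sum of weights = p
    worst = 0.0
    for i in range(n):
        for j in range(n):
            if tier_of[i] < tier_of[j]:      # the ranking asserts i over j
                overruled = w[prefs[:, i, j] != 1].sum()
                worst = max(worst, overruled)
    return worst
```

With uniform weights this reduces to counting overruled judges, so guarantees stated in terms of the disagreement constraint carry over by replacing counts with weighted sums.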
> Key theorems are valid under stated assumptions but require clarification to resolve inconsistencies …may not hold for non-random missingness.
Proposition 4.2 does hold even for adversarial, non-random missingness - that’s one of the more fascinating ramifications of our approach, actually. The proposition holds because indifference is a conservative assumption to make - it counts as the same or more disagreement with any possible comparison than would any other possible value. We never make comparisons that would be invalid given the true values, regardless of the missingness mechanism.
It may be possible to model missing preferences to extend coverage. See our response to reviewer sHCm (future work).
> SPA’s linear runtime is untested on large n (>10^4), limiting real-world applicability.
We appreciate your prior note that there are applications for n<5; please also see our response to FgRQ and sHCm, which address the scale of experiments and use cases. We would like to note that in many domains (e.g., fine-tuning), a limited amount of examples (50-100) is enough to improve performance. From Meta: “A general trend we’ve seen is that quality is more important than quantity … documentation suggests even a 50- to 100-example dataset can potentially make a difference.”
* https://ai.meta.com/blog/how-to-fine-tune-llms-peft-dataset-curation/
> SPA’s 0% inversion rate under Δ-Gaming (Table 1) supports robustness claims, but the absence of p-values or confidence intervals undermines statistical significance.
> Robustness claims (e.g., Δ-Gaming) lack significance tests, making it hard to validate improvements.
We would like to clarify the nature of the Δ-Gaming (Δ-Adversarial) metric reported in Table 1. SPA exhibited zero inversions across 100 simulations, each with 10% adversarial preference flips. This 0% reflects the maximum observed inversion rate under the worst-case condition, demonstrating strong stability against manipulation. As such, CIs/p-values on this reported maximum are not directly applicable. We refer you to Appendix C for guarantees on stability and robustness.
> The proposed methods are theoretically sound, but evaluation criteria and baseline selections are outdated and contextually limited.
We wish to clarify our intent with the choice of metrics. Our chosen evaluation criteria (e.g., Disagreement Rate, Abstention Rate, Δ-Adversarial robustness) directly quantify the core characteristics of SPA – its ability to manage the trade-off between coverage and dissent, and the resulting robustness gained through abstention. If the reviewer feels specific metrics would enhance the evaluation, we are open to revising/adding these where possible.
We hope our responses clarify the points raised and are happy to address any remaining questions. | null | null | null | null | null | null |
Targeted Low-rank Refinement: Enhancing Sparse Language Models with Precision | Accept (poster) | Summary: The paper introduces a novel method to improve the performance of pruned large language models (LLMs) by combining sparsity with a low-rank approximation. The authors propose an iterative refinement algorithm that updates the sparse weight matrix while incorporating a low-rank component to approximate the difference between the original dense matrix and the pruned sparse matrix. This approach aims to recover information lost during pruning without requiring extensive retraining or large datasets, maintaining the sparsity pattern for hardware efficiency. Key contributions include:
1. An iterative weight update method (Algorithm 1) that refines the sparse matrix and adds a low-rank patch, progressively increasing the rank from 2 to a target \( k \) over \( T \) iterations, preserving the sparsity pattern using a binary mask.
2. The method reduces perplexity compared to baseline pruning techniques (e.g., magnitude pruning and Wanda) across various sparsity levels.
3. It also achieves competitive performance on benchmark datasets like TruthfulQA, GSM8K, ARC-c, and MMLU w.r.t. Magnitude and Zero-shot SVD
4. The paper provides a theoretical analysis proving sparsity preservation (Theorem 4.1), convergence to a solution (Theorem 4.2), and monotonic error reduction after a certain iteration (Theorem 4.3).
Claims And Evidence: - **Claim 1**: The method bridges the gap between dense and sparse models using a low-rank component.
- **Assessment**: Well-supported
- **Claim 2**: The iterative algorithm enhances precision by prioritizing larger singular values.
- **Assessment**: Well-supported, though the empirical comparison with PCP could be expanded to quantify precision gains more explicitly.
- **Claim 3**: Significant perplexity improvements, especially at high sparsity levels (e.g., 99.6% reduction at 70% sparsity).
- **Assessment**: There is evidence, though the lack of zero-shot task performance (e.g., accuracy on downstream tasks) limits the scope of evaluation compared to Wanda’s broader benchmarks (e.g., Table 2 in Wanda).
- **Claim 4**: The method enables a reduction in model parameters while maintaining 50% sparsity and meeting a specific performance target.
- **Issue**: Lacks details and remains vague. The paper does not specify the performance target or provide a direct comparison showing parameter reduction versus performance trade-offs.
Methods And Evaluation Criteria: - Methodology yes.
- Evaluation criteria is appropriate for language modeling (perplexity) and generalizability (benchmarks). However, unlike Wanda, which includes zero-shot task accuracies (e.g., Table 23) and few-shot results (e.g., MMLU in Table 21), this paper lacks zero-shot accuracy metrics (and few-shots on MMLU), limiting its comparability on downstream tasks. Adding such evaluations would strengthen the assessment of practical utility.
Theoretical Claims: The proofs generally seem to be correct but rely on assumptions about singular value decay that could be sensitive to matrix properties.
Experimental Designs Or Analyses: - **Perplexity Comparison (Tables 1, 2)**: Tests LLaMa-7B and LLaMa-13B across sparsity levels with \( k=128 \).
- **Soundness**: The design is valid, using WikiText-2 perplexity as a standard metric, consistent with Wanda and SparseGPT. Results are reproducible with a fixed \( k \) and \( T=50 \).
- **Issues**: The paper lacks details on calibration data (e.g., size, source), unlike Wanda (128 sequences from C4). This affects reproducibility. Additionally, only magnitude pruning and Wanda are baselines, omitting SparseGPT (a key competitor in Wanda’s Table 3).
- **Benchmark Evaluation (Table 3)**: Assesses performance on four datasets.
- **Soundness**: The choice of datasets is reasonable for LLM evaluation, and comparisons with dense and sparse models are fair.
- **Issues**: Sample sizes and statistical significance (e.g., variance across runs) are not reported, unlike Wanda’s robustness analysis (Table 18). This reduces confidence in the results’ stability.
- **Iterative Analysis (Figures 4, 5)**: Visualizes singular value spectra, energy retention, and convergence.
- **Soundness**: The synthetic and real-world (LLaMa-7B) analyses are well-designed to support theoretical claims.
- **Issues**: Limited to one sparsity level (50%) in Figure 4, missing higher-sparsity cases (e.g., 70%) where Wanda shows larger gaps.
The experiments are sound but incomplete compared to Wanda, which includes zero-shot accuracies, few-shot tasks, and robustness analysis, highlighting gaps in broader evaluation.
Supplementary Material: Section A
Relation To Broader Scientific Literature: - **LLM Pruning**: Extends magnitude pruning (Han et al., 2015), SparseGPT (Frantar & Alistarh, 2023), and Wanda (Sun et al., 2023) by adding low-rank refinement, addressing the performance gap noted in the Junk DNA Hypothesis (Yin et al., 2024).
- **Low-rank Approximation**: Leverages SVD and iterative refinement, drawing from matrix completion (Chandrasekaran et al., 2011) and robust PCA (Candès et al., 2011), adapting them for sparsity preservation.
Essential References Not Discussed: - **SparseGPT (Frantar & Alistarh, 2023)**: Cited but not experimentally compared in Tables 1–3, despite being a key baseline in Wanda (Tables 2, 3). Its inclusion would contextualize the method’s superiority over second-order pruning approaches.
- **LoRA (Hu et al., 2021)**: Wanda uses LoRA for fine-tuning (Section 5), showing performance recovery. The absence of fine-tuning comparisons here misses a practical recovery baseline.
- **Yin et al. (2024) - Junk DNA Hypothesis**: Cited, but its implications (irreversible loss at high sparsity) could be explored more deeply with zero-shot task evaluations, as Wanda does.
These omissions limit the paper’s positioning against state-of-the-art recovery and pruning methods.
Other Strengths And Weaknesses: - **Strengths**:
- **Originality**: Creative integration of sparsity and low-rank refinement, distinct from Wanda’s metric-based pruning.
- **Significance**: Addresses high-sparsity performance degradation, a critical issue for LLM deployment.
- **Clarity**: Well-written with clear figures (e.g., Figure 1) and algorithmic exposition.
- **Weaknesses**:
- **Evaluation Scope**: Lacks zero-shot and few-shot task evaluations (cf. Wanda’s Tables 2, 21), limiting practical relevance.
- **Reproducibility**: Missing details on calibration data and iteration specifics (e.g., \( T \) selection).
- **Comparison Depth**: Omits SparseGPT and fine-tuning baselines, reducing comparative strength against Wanda.
Other Comments Or Suggestions: - **Typos**:
- Page 7, Table 2: "Dense" perplexity should be 5.09 (per Wanda), not 4.57.
- **Suggestion**: Include a runtime comparison (e.g., vs. Wanda’s Table 4) to quantify “computationally efficient.”
Questions For Authors: 1. **Calibration Data Details**: What calibration data (size, source) was used for perplexity experiments? Wanda specifies 128 C4 sequences; this omission affects reproducibility. A response detailing this would strengthen confidence in the results.
2. **Performance Target for Parameter Reduction**: The abstract claims an 8.6% parameter reduction for a specific target at 50% sparsity. What is this target, and can you provide supporting data? Without this, the claim feels unsubstantiated.
3. **Zero-shot Task Evaluation**: Why were zero-shot accuracies (e.g., as in Wanda’s Table 2, also Tables in the appendix e.g., Table 23) not included? Adding these could align your evaluation with Wanda’s, enhancing practical relevance. A justification or additional results would influence my view on the method’s applicability.
4. **SparseGPT Comparison**: Why was SparseGPT excluded from experimental comparisons despite its relevance (cf. Wanda’s Table 3)? Including it could better position your method; its absence weakens the competitive analysis.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We authors greatly thank the reviewer for constructive comments on this work. We would like to clarify the following points:
**W1: Evaluation Scope: Lacks zero-shot and few-shot task evaluations (cf. Wanda’s Tables 2, 21), limiting practical relevance.**
**Q3: Zero-shot Task Evaluation: Why were zero-shot accuracies (e.g., as in Wanda’s Table 2, also Tables in the appendix e.g., Table 23) not included? Adding these could align your evaluation with Wanda’s, enhancing practical relevance. A justification or additional results would influence my view on the method’s applicability.**
Below, we provide additional results on more benchmarks.
Table: Performance of Llama-7B on three additional benchmarks, along with the results from Table 3.
| Model | *HellaSwag* | *WinoGrande* | *ARC-e* | TruthfulQA | GSM8K | ARC-c | MMLU |
| ----------------------------- | ----------- | ------------ | -------- | ---------- | ------- | -------- | -------- |
| Dense baseline | 76.2 | 70.0 | 72.8 | 34.1 | 10.3 | 44.7 | 32.1 |
| | | | | | | | |
| Magnitude 50% | 60.9 | 59.3 | 54.3 | **35.3** | 1.0 | 33.5 | 24.6 |
| Magnitude 50% + Zero-shot SVD | 69.2 | **65.5** | 63.6 | 34.3 | 1.5 | 36.9 | **26.0** |
| **Magnitude 50% + Ours** | **69.8** | 65.3 | **64.3** | 34.2 | **3.4** | **41.5** | **26.0** |
**W2: Missing details on calibration data and iteration specifics (e.g., ( T ) selection).**
1. The proposed iterative refinement method is entirely data-free and does not require calibration data.
As shown in Algorithm 1, the only inputs are:
- Dense weight matrix $W$
- Binary mask $P$ (from pruning)
- Target rank $k$
- Number of iterations $T$
We use WikiText-2 for perplexity evaluation and 'allenai/c4' as the calibration data for Wanda pruning as well as Wanda + Ours.
2. We consistently use T=50 across experiments, which is sufficient for achieving most of the potential error reduction while maintaining computational efficiency. The overall time complexity is $O(T \cdot \min(mn^2, m^2n))$, where $m$, $n$ are the numbers of rows and columns of $W$.
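To make the data-free nature concrete, here is a minimal NumPy sketch of an alternating sparse-plus-low-rank update consistent with these inputs. This is our assumed reconstruction of the general idea, not the paper's exact Algorithm 1: the low-rank patch absorbs the residual between the dense matrix and the sparse one, the binary mask P keeps the sparsity pattern fixed, and the patch rank grows from 2 toward k over T iterations.

```python
import numpy as np

def refine(W, P, k, T=50):
    """Sketch of iterative sparse + low-rank refinement (assumed scheme).

    W: dense weight matrix; P: binary pruning mask (1 = kept entry);
    k: target rank of the patch; T: number of refinement iterations.
    Returns a sparse matrix S supported on P and a rank-<=k patch L.
    """
    S = P * W                                         # start from the pruned matrix
    for t in range(T):
        r = min(k, 2 + (k - 2) * t // max(T - 1, 1))  # rank schedule: 2 -> k
        U, s, Vt = np.linalg.svd(W - S, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r]               # rank-r patch for the residual
        S = P * (W - L)                               # refine S; sparsity pattern kept
    return S, L
```

Note that no calibration data enters the loop: only W, P, k, and T are used, which matches the data-free claim above.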
**W3: Omits SparseGPT and fine-tuning baselines, reducing comparative strength against Wanda.**
**Q4: SparseGPT Comparison: Why was SparseGPT excluded from experimental comparisons despite its relevance (cf. Wanda’s Table 3)? Including it could better position your method; its absence weakens the competitive analysis.**
Thank you for the suggestion.
We provide the results of SparseGPT 2:4 and SparseGPT 2:4 + Ours in the following table.
Table: Performance of SparseGPT 2:4 and SparseGPT 2:4 + Ours using Llama-7B on three additional benchmarks, along with the results from Table 3.
| Method | *HellaSwag* | *WinoGrande* | *ARC-e* | TruthfulQA | GSM8K | ARC-c | MMLU |
| -------------------- | ----------- | ------------ | -------- | ---------- | ------- | -------- | -------- |
| SparseGPT 2:4 | 58.6 | 63.9 | 56.6 | **36.5** | 2.0 | 33.1 | 25.4 |
| SparseGPT 2:4 + Ours | **65.1** | **66.9** | **59.9** | 33.8 | **2.7** | **36.1** | **29.1** |
**Q1: Calibration Data Details: What calibration data (size, source) was used for perplexity experiments? Wanda specifies 128 C4 sequences; this omission affects reproducibility. A response detailing this would strengthen confidence in the results.**
Thank you for your suggestion; we will add these details in the revised version.
We also use 128 C4 sequences as the calibration data for Wanda pruning as well as Wanda + Ours. For perplexity evaluation, we use 128 sequences from WikiText-2 dataset.
**Q2: Performance Target for Parameter Reduction: The abstract claims an 8.6% parameter reduction for a specific target at 50% sparsity. What is this target, and can you provide supporting data? Without this, the claim feels unsubstantiated.**
The performance target is the perplexity on the WikiText-2 dataset; we will revise the abstract to clarify this. We provide the detailed results in Figure 3(a).
---
Rebuttal Comment 1.1:
Comment: Thank you for adding the new results. What is the latency for this method? Compared to the other methods, what is the computational time?
---
Reply to Comment 1.1.1:
Comment: # Inference Latency Analysis
Thank you for raising this important question about comparing inference latency with other methods.
## Without Hardware Acceleration
First, to ensure a fair comparison with the dense baseline model, we adopted a conservative evaluation strategy. This involved storing all matrices in their full dense format (as torch.Tensor objects) and retaining all zero elements without utilizing sparse matrix representations or hardware-specific optimizations for speed or memory footprint improvements.
The overhead is only slightly higher than that of the dense models, which suggests that the low-rank refinement is computationally efficient despite the additional parameters.
Table: Parameter Count and Evaluation Time for Complete ARC-Challenge Benchmark.
| Model Configuration | Non-Zero Parameter Count | Parameter Count | ARC-C | Relative Time |
| ---------------------------- | ------------------------ | --------------- | ----- | ------------- |
| *Llama-2-7B* | | | | |
| Dense baseline | | | 50.5s | 1.0x |
| Magnitude 50% sparse | 3.5B | 6.7B | 50.7s | ~1.0x |
| Proposed Method (k=128, 50%) | 3.8B | 7.1B | 50.5s | ~1.0x |
| *Llama-2-13B* | | | | |
| Dense baseline | 13.0B | 13.0B | 73.2s | 1.0× |
| Magnitude 2:4 sparse | 6.7B | 13.0B | 73.9s | ~1.01× |
| Proposed Method (k=128, 2:4) | 7.2B | 13.5B | 77.9s | ~1.06× |
## With Sparse Matrix Formats and Hardware Acceleration
To leverage the benefits of sparsity, we convert the weight matrices into a compressed sparse format. Specifically, we apply `torch.sparse.to_sparse_semi_structured` to transform the weights into `torch.sparse.SparseSemiStructuredTensor` objects, which are optimized for efficient storage and computation. The table below compares the memory footprint of our method with and without sparse matrix representations.
Inference performance also varies depending on workload characteristics such as batch size and input sequence length. When using hardware-accelerated N:M structured sparsity (supported on NVIDIA Ampere GPUs and later architectures), we observe an average inference speedup of ~1.1x in wall-clock time over dense models and a 38% reduction in GPU memory consumption.
However, this acceleration is highly dependent on hardware support—unstructured sparsity patterns or GPUs lacking dedicated sparse tensor cores may lead to slower inference compared to dense matrix operations. Thus, the efficiency gains depend on both the pruning structure and the underlying hardware capabilities.
Table: A comparison of memory usage with and without end-to-end inference acceleration.
| Model Configuration | Memory Usage | Relative Memory Usage |
| ------------------------------------------------------------------------------------------- | ------------ | --------------------- |
| *Llama-2-13B* | | |
| Proposed Method (k=128, 2:4), weight matrices are `torch.Tensor` | 2x14.6GB | 1.0x |
| Proposed Method (k=128, 2:4), weight matrices are `torch.sparse.SparseSemiStructuredTensor` | 2x9.1GB | ~0.62x |

---

Summary: In this work, the authors propose a low-rank refinement method that factorizes a dense matrix into a sparse matrix plus a low-rank matrix, bridging the performance gap between dense and sparse models. This approach iteratively improves the sparse weight matrix through a low-rank adjustment, thereby increasing model accuracy, particularly at higher levels of sparsity.
## update after rebuttal
I will keep my score.
Claims And Evidence: Yes, the claims are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, in this work, the proposed methods and evaluation criteria make sense.
Theoretical Claims: Yes. I checked their correctness and found no issues. The iterative refinement process looks right. The theoretical analysis of the convergence property and the error bound is well-established.
Experimental Designs Or Analyses: Yes, I have reviewed the soundness and validity of the experimental designs and analyses related to large language model pruning. The experiments are carried out on the well-known Llama 7B and 13B models at various levels of sparsity. In addition to evaluating PPL, several standard benchmarks are also assessed. The experiments are comprehensive.
Supplementary Material: Yes, I reviewed all the supplementary materials, including additional theoretical analysis and experimental results.
Relation To Broader Scientific Literature: The key contributions of the paper on Targeted Low-rank Refinement are closely related to prior findings in pruning, low-rank approximation, and post-pruning performance recovery.
The study builds on the extensive body of research on pruning techniques, particularly magnitude pruning, which removes low-magnitude weights to reduce model size [1]. It addresses a known limitation of pruning: performance degradation due to the loss of important information, especially at high sparsity levels, as discussed in works such as the Junk DNA Hypothesis[2].
The idea of using low-rank approximations to restore lost model capacity has been explored in previous research [3], but prior methods often struggle with maintaining the structured sparsity patterns needed for hardware efficiency, which this paper explicitly addresses.
[1] Han, S., et al. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding.
[2] Yin, L., et al. Junk DNA hypothesis: Pruning small pre-trained weights irreversibly and monotonically impairs "difficult" downstream tasks in llms. ICML 2024.
[3] Zhou, T. and Tao, D. Godec: Randomized low-rank & sparse matrix decomposition in noisy case. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011.
Essential References Not Discussed: No. The authors have discussed almost all essential references in this work.
Other Strengths And Weaknesses: Strengths:
1. The proposed approach is a data-free and plug-in-and-play method, orthogonal to existing pruning methods.
2. The proposed iterative refinement method addresses a key limitation of existing low-rank refinement techniques: prior methods often struggle to maintain the structured sparsity patterns needed for hardware efficiency.
3. Experiments on LLaMa models demonstrate improvements over conventional magnitude pruning and Wanda pruning.
Weaknesses:
1. While the paper discusses parameter efficiency of low-rank refinement, it does not thoroughly analyze the computational cost of the iterative update algorithm compared to alternative approaches such as the optimization-based PCP method.
2. The proposed method is theoretically hardware-efficient as it enables structured N:M pruning. However, the inference latency and memory consumption of low-rank refinement is not measured.
Other Comments Or Suggestions: In abstract “Nonetheless, these methods often create a gap between the original dense and the pruned sparse model, …”, “create a gap” is slightly awkward, “introduce” is a smoother expression.
Questions For Authors: 1. The rank k is manually set in the paper, will a poor choice of k lead to unnecessary computational overhead or insufficient recovery of pruned weights?
2. If given fixed inputs, does the iterative refinement method always converge to a stable solution?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

---

Rebuttal 1:
Rebuttal: Thank you for your time and effort in reviewing our paper. We appreciate your constructive feedback and suggestions.
**W1 (Computational Complexity Analysis): While the paper discusses parameter efficiency of low-rank refinement, it does not thoroughly analyze the computational cost of the iterative update algorithm compared to alternative approaches such as the optimization-based PCP method.**
The primary computational bottleneck in the proposed algorithm is the SVD computation performed in each iteration (line 171, page 4). For a weight matrix $\mathbf{W} \in \mathbb{R}^{m \times n}$, the time complexity of SVD is $O(mn^2)$ (assuming $m \geq n$). With $T$ iterations, the overall time complexity is $O(Tmn^2)$.
In contrast, the PCP baseline would typically use an iterative optimization method (like Adam) that also requires SVD computations in each iteration to compute the nuclear norm. Furthermore, the PCP baseline requires substantially more iterations ($T=5000$) to achieve comparable results to the proposed method ($T=50$).
Table: Computational Efficiency Comparison
| Method | Time Complexity | Typical Iterations | Relative Computational Cost |
| ------------------ | --------------- | ------------------ | --------------------------- |
| Proposed Algorithm | $O(Tmn^2)$ | 50 | 1× |
| PCP Baseline | $O(Tmn^2)$ | 5000 | ~100× |
| Zero-shot SVD | $O(mn^2)$ | 1 | ~0.02× |
**W2 (Inference Latency and Memory Consumption): The proposed method is theoretically hardware-efficient as it enables structured N:M pruning. However, the inference latency and memory consumption of low-rank refinement is not measured.**
Below, we provide some missing measurements and analyses. Note that we deliberately employed a conservative evaluation approach to ensure fair comparison with the dense baseline model. We maintained all matrices in their dense representation format, preserving zero elements rather than utilizing sparse matrix formats or specialized hardware acceleration.
Table: Parameter Count and Evaluation Time for Complete ARC-Challenge Benchmark.
| Model Configuration | Non-Zero Parameter Count | Parameter Count | ARC-C | Relative Time |
| ---------------------------- | ------------------------ | --------------- | ----- | ------------- |
| *Llama-2-7B* | | | | |
| Dense baseline | | | 50.5s | 1.0x |
| Magnitude 50% sparse | 3.5B | 6.7B | 50.7s | ~1.0x |
| Proposed Method (k=128, 50%) | 3.8B | 7.1B | 50.5s | ~1.0x |
| *Llama-2-13B* | | | | |
| Dense baseline | 13.0B | 13.0B | 73.2s | 1.0× |
| Magnitude 2:4 sparse | 6.7B | 13.0B | 73.9s | ~1.01× |
| Proposed Method (k=128, 2:4) | 7.2B | 13.5B | 77.9s | ~1.06× |
**Q1: The rank k is manually set in the paper, will a poor choice of k lead to unnecessary computational overhead or insufficient recovery of pruned weights?**
Yes. The choice of k leads to a trade-off between the performance recovery and the computational cost. The low-rank component adds $k(m+n)$ parameters and $2k(m+n)$ FLOPs. Therefore, an unnecessarily high k value directly increases these costs with diminishing returns. Figure 2(b) in the paper shows the cumulative energy retention for different k values, and the curve begins to flatten as k increases, indicating diminishing returns.
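As a quick sanity check of that cost, assuming an illustrative square $4096 \times 4096$ projection (roughly Llama-7B sized; the shapes are our assumption, not taken from the paper):

```python
m = n = 4096              # illustrative hidden size
k = 128                   # rank used throughout the paper's experiments
dense_params = m * n
added_params = k * (m + n)        # rank-k factors: (m x k) plus (k x n)
added_flops = 2 * k * (m + n)     # one multiply-add per factor entry
assert added_params == 1_048_576
assert added_params / dense_params == 0.0625   # ~6% overhead at k=128
```

So at $k=128$ the low-rank correction adds only about 6% to the parameter count of each such matrix, consistent with the diminishing-returns picture in Figure 2(b).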
**Q2: If given fixed inputs, does the iterative refinement method always converge to a stable solution?**
Yes. As shown in Theorem 4.2, Theorem 4.3 and Corollary 4.4 in the manuscript, the iterative refinement method always converges to a stable solution as $T \to \infty$ and the approximation error is bounded by:
$$
\left\|W - \left(S^{(t)} + L_k^{(t)}\right)\right\|_F \leq \sqrt{r-k}\,\sigma_{k+1}\left(L^{(t)}\right),
$$
where $r$ is the rank of the weight matrix $W$ and $\sigma_{k+1}\left(L^{(t)}\right)$ is the $(k+1)$-th largest singular value of $L^{(t)}$.

---

Summary: This paper introduces a novel approach to improving the performance of sparse language models through low-rank refinement. The main contribution is a method that refines sparse models with a low-rank correction, leading to improved precision. The approach is theoretically grounded, with proofs and additional lemmas provided in the appendix to support the claims.
## update after rebuttal
I will keep my ratings since most of my concerns are solved.
Claims And Evidence: The claims made in the submission appear to be supported by theoretical analysis and experimental results.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria appear to be well-suited for the problem of enhancing sparse language models.
Theoretical Claims: Yes. The paper includes theoretical analysis with proofs and additional lemmas to support its claims, particularly regarding the iterative weight update algorithm. The theoretical analysis demonstrates the favorable convergence properties of the proposed method and provides a rigorous foundation for its effectiveness.
However, this paper does not include the full details of lemma A4.2 in the Appendix A.
Experimental Designs Or Analyses: Yes, I checked the soundness and validity of the experimental designs. The paper presents experimental results to validate the proposed method, particularly focusing on the Llama models.
Supplementary Material: Yes. Particularly the proofs for the theoretical claims.
Relation To Broader Scientific Literature: This paper addresses the challenge of improving the performance of sparse language models, a well-studied problem in machine learning. Prior work has explored techniques like unstructured pruning and N:M structured pruning to reduce parameter count while maintain performance. The paper’s experimental results on the Llama model, align with prior findings that higher sparsity levels can lead to more severe performance degradation. This observation is consistent with the literature on sparse model performance, where sparsity is often traded off against computational cost and precision.
Essential References Not Discussed: While the paper mentions magnitude pruning and N:M structured pruning, it does not discuss structured sparsity techniques, such as block sparsity or channel pruning, which have been shown to improve hardware efficiency and model performance.
Other Strengths And Weaknesses: Strengths:
1. This paper is well motivated and this approach effectively bridges the gap between dense and sparse models.
2. The paper is well-structured, with a clear presentation.
3. The theoretical analysis is solid.
4. Code is provided for reproduction.
Weaknesses:
1. While the paper mentions magnitude pruning and N:M structured pruning, it does not discuss structured sparsity techniques, such as block sparsity or channel pruning, which have been shown to improve hardware efficiency and model performance.
2. This paper does not include the full details of lemma A4.2 in the Appendix A.
3. The iterative refinement process introduces new hyperparameters (e.g., rank k, number of iterations T), but the paper does not provide a clear guideline on how these should be selected across different models and sparsity levels.
4. The computational cost of iterative updates may be high as the size of the weight matrix increases, which may limit the applicability of the method to very large models.
Other Comments Or Suggestions: The proposed method is effective at high sparsity levels, but at low sparsity levels, the performance gain is not as significant.
Questions For Authors: 1. In line 6 of Algorithm 1 $r(t) = 1 + \frac{k-1}{T-1}$, what is the intuition behind this equation?
2. How to select the hyperparameters? See weaknesses 3.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---

Rebuttal 1:
Rebuttal: **W1: While the paper mentions magnitude pruning and N:M structured pruning, it does not discuss structured sparsity techniques, such as block sparsity or channel pruning, which have been shown to improve hardware efficiency and model performance.**
We thank the reviewer for pointing out the importance of structured sparsity techniques.
It is straightforward to apply the proposed method to structured sparsity techniques such as block sparsity or channel pruning by simply changing the binary mask matrix P.
The core algorithm is compatible with any arbitrary binary mask pattern, including structured ones. For structured patterns like block sparsity (e.g., 4×4 blocks) or channel pruning, P would have the corresponding pattern of 0s and 1s.
Theoretically, Theorem 4.1 (Sparsity Preservation) guarantees that the method preserves whatever sparsity pattern is defined by $P$.
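For instance, the binary mask for the 2:4 structured pattern discussed elsewhere in the rebuttal can be built by magnitude and then used wherever $P$ appears (a sketch; `mask_2_4` is our illustrative helper, not from the paper):

```python
import numpy as np

def mask_2_4(W):
    """2:4 structured mask: in each group of 4 consecutive weights
    along a row, keep the 2 largest-magnitude entries."""
    groups = np.abs(W).reshape(-1, 4)
    drop = np.argsort(groups, axis=1)[:, :2]   # 2 smallest per group
    P = np.ones_like(groups)
    np.put_along_axis(P, drop, 0.0, axis=1)
    return P.reshape(W.shape)

W = np.arange(8, dtype=float).reshape(2, 4) - 3.5
P = mask_2_4(W)
# exactly 2 of every 4 consecutive weights survive
assert np.all(P.reshape(-1, 4).sum(axis=1) == 2)
```

Because the refinement always re-applies the mask, the 2:4 pattern of $S$ survives every iteration, which is exactly what Theorem 4.1 guarantees.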
**W2: This paper does not include the full details of lemma A4.2 in the Appendix A.**
The proof of Lemma A4.2 currently appears before the lemma itself. In the revised version, we will relocate the proof to its proper position following Lemma A4.2.
*proof of Lemma A4.2*: Let $m_{ij}$ and $p_{ij}$ be the elements of $M$ and $P$ respectively. By definition of the Frobenius inner product and element-wise multiplication:
$$
\langle M, P \odot M \rangle_F = \sum_{i=1}^m \sum_{j=1}^n m_{ij} (p_{ij} m_{ij}) = \sum_{i=1}^m \sum_{j=1}^n p_{ij} m_{ij}^2\leq \sum_{i=1}^m \sum_{j=1}^n m_{ij}^2 = \langle M, M \rangle_F
$$
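A quick numerical spot-check of the inequality (illustrative only; any real matrix $M$ and binary mask $P$ will do):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(5, 7))
P = (rng.random(M.shape) > 0.3).astype(float)   # arbitrary binary mask
lhs = np.sum(M * (P * M))   # <M, P (.) M>_F = sum of p_ij * m_ij^2
rhs = np.sum(M * M)         # <M, M>_F = ||M||_F^2
assert lhs <= rhs           # each term p_ij * m_ij^2 <= m_ij^2
```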
**W3: The iterative refinement process introduces new hyperparameters (e.g., rank k, number of iterations T), but the paper does not provide a clear guideline on how these should be selected across different models and sparsity levels.**
- **Rank parameter k**:
We consistently set $k=128$ across the experiments to demonstrate the effectiveness of low-rank refinement.
In Figure 4(a), we visualize the singular value spectrum of the residual matrix $L=W-S'$ for different values of $k$ (from 64 to 512) and $T=50$.
As $k$ increases from 64 to 512, we see higher magnitudes for the top singular values. This indicates that larger $k$ values allow the method to retain more information.
But when $k$ is too large, the performance gain diminishes and the computational cost increases.
Therefore, the choice of $k$ should strike a balance between the performance and the computational cost.
- **Number of iterations T**:
The primary computational bottleneck in the proposed algorithm is the SVD computation performed in each iteration (line 171, page 4). For a weight matrix $\mathbf{W} \in \mathbb{R}^{m \times n}$, the time complexity of SVD is $O(mn^2)$ (assuming $m \geq n$). With $T$ iterations, the overall time complexity is $O(Tmn^2)$.
On the other hand, as shown in Figure 4(d) and the diminishing convergence speed as stated by Theorem 4.5 (Error Bound), $T=50$ appears to be a reasonable choice for the number of iterations and further iterations yield diminishing returns in terms of error reduction.
**W4: The computational cost of iterative updates may be high as the size of the weight matrix increases, which may limit the applicability of the method to very large models.**
As the size of the weight matrix increases, the computational cost of the proposed method increases accordingly.
The primary computational bottleneck in the proposed algorithm is the SVD computation performed in each iteration (line 171, page 4). For a weight matrix $\mathbf{W} \in \mathbb{R}^{m \times n}$, the time complexity of SVD is $O(mn^2)$ (assuming $m \geq n$). With $T$ iterations, the overall time complexity is $O(Tmn^2)$.
In contrast, the PCP baseline would typically use an iterative optimization method (like Adam) that also requires SVD computations in each iteration to compute the nuclear norm. Furthermore, the PCP baseline requires substantially more iterations ($T=5000$) to achieve comparable results to the proposed method ($T=50$).
Table: Computational Efficiency Comparison
| Method | Time Complexity | Typical Iterations | Relative Computational Cost |
| ------------------ | --------------- | ------------------ | --------------------------- |
| Proposed Algorithm | $O(Tmn^2)$ | 50 | 1× |
| PCP Baseline | $O(Tmn^2)$ | 5000 | ~100× |
| Zero-shot SVD | $O(mn^2)$ | 1 | ~0.02× |
**Q1: In line 6 of Algorithm 1 $r(t) = 1 + (k-1) / (T-1)$, what is the intuition behind this equation?**
This equation in Algorithm 1 defines a linear schedule for increasing the rank from 2 to k across T iterations.
Starting with low rank forces the algorithm to capture the most important singular components first, and each iteration gradually incorporates more subtle details from higher singular components.
The gradual increase in rank helps maintain numerical stability and helps the algorithm to converge faster.
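One hypothetical way to realize such a linear ramp is sketched below. Note the reviewer-quoted expression $r(t) = 1 + \frac{k-1}{T-1}$ contains no $t$, so the iteration index appears to have been dropped in transcription; this reconstruction is ours, not the authors', and the paper's exact formula may differ.

```python
def rank_schedule(t, T, k):
    """Illustrative linear ramp of the working rank from 2 up to k
    over T iterations; the paper's exact r(t) may differ."""
    return 2 + round((t - 1) * (k - 2) / (T - 1))

T, k = 50, 128
assert rank_schedule(1, T, k) == 2    # start with the top components
assert rank_schedule(T, T, k) == k    # end at the full target rank
assert all(rank_schedule(t, T, k) <= rank_schedule(t + 1, T, k)
           for t in range(1, T))      # monotonically non-decreasing
```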
**Q2: How to select the hyperparameters? See weaknesses 3.**
Please refer to the response to W3.
---
Rebuttal Comment 1.1:
Comment: Solved most of my concerns. I will keep my ratings.

---

Summary: Magnitude pruning removes the weights with the smallest absolute values. However, traditional pruning methods require re-training the model to recover performance, which is computationally expensive and requires extensive data or a teacher model. To address this, the authors approximate the dense matrix as the sum of a sparse matrix with a maintained sparsity pattern and a low-rank matrix. The main contributions are as follows:
- Weight $W$ is dismantled into sparse part $S$ and low rank part $L$. The authors transform searching $S$ and $L$ into an optimization problem as Eq. (3).
- The authors propose to incorporate the binary mask $P$ into the optimization process to ensure that the sparsity pattern of $S^′$ is fixed as $S$.
- Iterative refine sparse weight with adaptive rank increase.
In the experiments, the proposed method is validated on WikiText-2, TruthfulQA, GSM8K, ARC-C and MMLU with Llama models. Experimental results demonstrate that low-rank refinement significantly enhances model performance, particularly at high sparsity levels.
Claims And Evidence: Most of the claims are clear and convincing.
Methods And Evaluation Criteria: Most of the methods and evaluation criteria make sense.
Theoretical Claims: Most of the proofs make sense.
Experimental Designs Or Analyses: Most of the experimental designs are valid.
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: This study is closely related to previous post-training pruning works, such as SparseGPT and Wanda.
Essential References Not Discussed: No missing related works.
Other Strengths And Weaknesses: Strengths:
1. The paper is well organized and well written. The technical content is explained in sufficient details. The equations are very clear and easy to understand.
2. Comprehensive experiments are performed in this paper. The reviewer appreciates the authors' effort in validating on generation tasks such as GSM8K, rather than simply providing perplexity results. Since generation tasks are usually harder, the improvement is significant.
Weaknesses:
1. End-to-end inference acceleration is missing. It's better to report speedup for completeness.
Other Comments Or Suggestions: Comments:
1. I suggest that the authors add some frontier models, such as Llama 3.1 8B. Llama is quite old and may lack reasoning and mathematics abilities. This may affect the GSM8K results.
Questions For Authors: In general, this paper performs well in its clarity, structure and completeness. The idea is innovative and the improvement is significant. However, end-to-end inference acceleration is missing. I will recommend a weak accept, and I suggest the authors pay attention to the inference part, as well as results on latest language models. Should those weaknesses and questions be addressed I will raise my scores accordingly.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---

Rebuttal 1:
Rebuttal: Thank you for the insightful feedback and constructive suggestions.
**W1: End-to-end inference acceleration is missing. It's better to report speedup for completeness.**
1. Without sparse matrix formats or specialized hardware acceleration, the inference time is as follows:
Firstly, for a fair comparison with the dense baseline model, we took a conservative approach in our evaluation.
Specifically, we stored all matrices in their full dense format and kept all zero elements (the weight matrices are `torch.Tensor` objects), without leveraging any sparse matrix formats or hardware-specific optimizations for acceleration.
Table: Parameter Count and Evaluation Time for Complete ARC-Challenge Benchmark.
| Model Configuration | Non-Zero Parameter Count | Parameter Count | ARC-C | Relative Time |
| ---------------------------- | ------------------------ | --------------- | ----- | ------------- |
| *Llama-2-7B* | | | | |
| Dense baseline | | | 50.5s | 1.0x |
| Magnitude 50% sparse | 3.5B | 6.7B | 50.7s | ~1.0x |
| Proposed Method (k=128, 50%) | 3.8B | 7.1B | 50.5s | ~1.0x |
| *Llama-2-13B* | | | | |
| Dense baseline | 13.0B | 13.0B | 73.2s | 1.0× |
| Magnitude 2:4 sparse | 6.7B | 13.0B | 73.9s | ~1.01× |
| Proposed Method (k=128, 2:4) | 7.2B | 13.5B | 77.9s | ~1.06× |
2. With sparse matrix formats and hardware acceleration, we convert the weight matrices to their sparse format. Specifically, we use `torch.sparse.to_sparse_semi_structured` to convert the weight matrices to `torch.sparse.SparseSemiStructuredTensor` objects. The following table shows the memory usage of the proposed method with and without sparse matrix formats.
The actual inference speedup varies based on factors like batch size and input sequence length. With hardware-accelerated N:M structured pruning (on NVIDIA Ampere GPUs and newer), we can observe approximately ~1.1x faster inference in wall-clock time compared to dense models. However, it's important to note that for unstructured pruning patterns or on GPUs without dedicated sparse acceleration support, using sparse operations can actually result in slower inference times compared to dense computation.
Table: A comparison of memory usage with and without end-to-end inference acceleration.
| Model Configuration | Memory Usage | Relative Memory Usage |
| ------------------------------------------------------------------------------------------- | ------------ | --------------------- |
| *Llama-2-13B* | | |
| Proposed Method (k=128, 2:4), weight matrices are `torch.Tensor` | 2x14.6GB | 1.0x |
| Proposed Method (k=128, 2:4), weight matrices are `torch.sparse.SparseSemiStructuredTensor` | 2x9.1GB | ~0.62x |
**W2: I suggest that the authors add some frontier models, such as Llama 3.1 8B. Llama is quite old and may lack reasoning and mathematics abilities. This may affect the GSM8K results.**
Thank you for the suggestion. We have added Llama-3.1-8B to the experiments. The results are shown in the following table.
Table: Llama-3.1-8B results.
| Model | HellaSwag | WinoGrande | ARC-e | TruthfulQA | GSM8K | ARC-c | MMLU |
| -------------------- | --------- | ---------- | -------- | ---------- | ------- | -------- | -------- |
| Dense baseline | 78.9 | 73.6 | 81.1 | 45.2 | 49.8 | 53.4 | 63.5 |
| *Pruning Method* | | | | | | | |
| Magnitude 50% | 56.4 | 57.6 | 56.7 | **42.9** | 1.3 | 35.8 | 35.3 |
| Magnitude 50% + Ours | **66.8** | **67.8** | **68.7** | 38.9 | **6.5** | **42.6** | **45.7** |
---
Rebuttal Comment 1.1:
Comment: Thank you for adding inference acceleration results and Llama 3.1 results. The results are reasonable, and I will recommend a weak accept for your paper.
---

Stable Offline Value Function Learning with Bisimulation-based Representations | Accept (poster)

Summary: The paper tackles the field of offline policy evaluation (OPE) and addresses methodology for finding good state-action-pair representations. It introduces a kernel-based state-action representation and establishes theoretical properties for it. It then presents experimental results for the introduced KROPE method on different benchmarks.
## update after rebuttal
Score increased. See rebuttal comment.
Claims And Evidence: - The statement in the abstract "Therefore, it is critical to stabilize value function learning by explicitly shaping the state-action representations." is not supported by the experiments presented. It is shown in the paper that for some cases value function learning was successful without learning any state-action representation.
- The statement "In this work, we investigate how to explicitly learn state-action representations to stabilize value function learning." is misleading. The work introduces one way to do so, but does not investigate different approaches in my understanding.
- There is a list of 5 contributions under the headline "Can bisimulation-based representation learning stabilize offline value function learning?". The fourth and fifth claim are slightly misleading. The fifth claim can not be part of the main paper since its contents are inside the appendix. The claim that KROPE representations can be successfully used in OPE is shown in the paper. To the best of my understanding the evidence is not convincing that KROPE representations always lead to more stable and accurate offline value function learning.
- The Takeaway #1 is not true in general and thus, it needs clarification.
- Takeaway #2 is not true in general. At least from my understanding this is not given in general.
Methods And Evaluation Criteria: The proposed methods and benchmarks seem reasonable for the scope of the work.
Theoretical Claims: - All theoretical claims are based on the standard coverage assumption that gives a non-zero probability of state-action pairs appearing in the offline dataset for finite state spaces.
- The LSPE stabilization looks fine to me.
- The Bellman completeness proof looks fine as well.
Experimental Designs Or Analyses: Table 1 is not informative and needs thorough revision. Stating numbers in a table is a good idea in general, but reporting the mean ± standard error should be standard practice.
Supplementary Material: I did not review supplementary material. I did review parts of the appendix.
Relation To Broader Scientific Literature: The paper contributes to the OPE literature.
Essential References Not Discussed: The reference to Van Hasselt et al., 2018 that coined the term "deadly triad" which is used throughout the paper is missing.
Other Strengths And Weaknesses: - The paper lacks clarity in some parts of the submission.
- Formatting is not ICML compliant, e.g., the uppercased abbreviations.
- General formatting needs streamlining.
- In 209 right side it says that continuity is important. But why?
- The ylabels in several plots are very hard to read and leave room for improvement.
- Overall this work is an interesting read with valuable contents in need of a major revision and re-evaluation of the claims made.
Other Comments Or Suggestions: - Paragraph title "Remarks..." in line 190 is weirdly formatted, as is the subsection title of 4.2
- fitted *Q*-evaluation should have at least an uppercased Q
Questions For Authors: - Can you elaborate on my concerns regarding the claims made in the paper?
- In the experiments in the appendix, you contradict the statement that KROPE always leads to more stable results, or do I misinterpret the presented results?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful comments and feedback. Thank you for acknowledging that the work was an interesting read with valuable contents.
Your comments are helpful in making our paper precise. We do believe, however, that these adjustments involve minor reframings/edits. We address your concerns below.
**Claims and evidence**
All the concerns in this subsection are easily addressable since we acknowledge the shortcomings of KROPE explicitly in Section 5 and Appendix C.3.1. All changes are sentence-level clarifications and involve bringing discussions from Section 5 earlier in the paper.
**Re: “’Therefore, it is critical to stabilize value function learning by explicitly shaping the state-action representations.’ is not supported by the experiments presented.”**
That statement is a general desired property we want from representation learning algorithms for offline policy evaluation. With regards to KROPE, we already discuss KROPE’s inability to stabilize in all settings in Section 5 and Appendix C.3.1. That said, we understand the concern and will discuss the limitation earlier in the paper.
**Re: “"In this work, we investigate how to explicitly learn state-action representations to stabilize value function learning." is misleading.”**
The statement means we show how one may go about learning KROPE representations. But we understand that this can be potentially confusing. We do not investigate different approaches to shape the representations. We propose only one way to do so, and we will clarify that.
**Re: “The fourth and fifth claim are slightly misleading."**
We will adjust contribution 4 based on our discussion in Section 5 and Appendix C.3.1 to state that it does indeed improve the stability of OPE compared to 7 other baselines on 10/13 datasets (note: 10 does not include just highlighted blue errors in table 1). Regarding Contribution 5, we will try to move it to the main paper.
**Re: “The Takeaway #1 is not true in general and thus, it needs clarification.”**
We understand the point, we will rephrase the takeaway to explicitly say that the statement is true under theoretical assumptions.
**Re: “Takeaway #2 is not true in general. At least from my understanding this is not given in general.”**
Takeaway #2 is true in an aggregate sense across datasets (10/13) and compared to the 7 other baselines. But we understand the point. We will state that KROPE improves the chances of stability for OPE compared to other representation learning baselines.
The more accurate characterization of our work is the following: *our theoretical results prove that if state-action features satisfy the KROPE relation (Definition 2), then they will lead to stable value function learning. Practically, since KROPE relies on a semi-gradient method (like DQN/fitted Q-evaluation; see Section 5 and Appendix C.3.1), the algorithm may still lead to divergence. Empirically, KROPE improves stability compared to 7 other baselines, and leads to stable and accurate OPE in 10/13 cases (note: 10 does not include just highlighted blue errors in table 1). Therefore, KROPE improves upon other baselines in learning stable and accurate representations for OPE.*
We will accordingly discuss this earlier in the paper instead of waiting till Section 5 and modify the claims/takeaways.
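For intuition on the spectral-radius stability criterion invoked here, below is a minimal synthetic sketch (not the paper's code; the random features, transition matrix, seed, and uniform data distribution are all made-up placeholders) of checking whether fixed features yield a contractive LSPE-style projected iteration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, gamma = 6, 3, 0.9  # n state-action pairs, d features, discount

# Hypothetical fixed features (rows = state-action pairs), a random
# target-policy transition matrix P, and a uniform data distribution D.
Phi = rng.normal(size=(n, d))
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)
D = np.diag(np.full(n, 1.0 / n))

# Projected value iteration with fixed features updates weights as
# w_{t+1} = (Phi^T D Phi)^{-1} Phi^T D (r + gamma * P Phi w_t),
# so stability is governed by the spectral radius of the matrix A below.
A = np.linalg.solve(Phi.T @ D @ Phi, Phi.T @ D @ (gamma * P @ Phi))
rho = max(abs(np.linalg.eigvals(A)))
print(f"spectral radius: {rho:.3f} -> {'stable' if rho < 1 else 'may diverge'}")
```

When the spectral radius of `A` is below 1, repeated updates with these fixed features converge; a value at or above 1 signals possible divergence.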
**Re: “Table 1 is not informative. This needs thorough revision. Stating numbers in a table is a good idea in general, but the mean +/- the standard error should be the standard to do so.”**
Thanks for your feedback. However, the table does include information for statistical rigor. Each value is the MSE over 20 trials and the range shows the 95% confidence interval, which are all important measures of statistical rigor (see caption of Table 1) [1]. We also include learning curves in Figures 6/7 (appendix).
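For concreteness, a mean with a 95% confidence interval over 20 trials can be computed as in this small sketch (the per-trial MSE values are synthetic placeholders, and the normal-approximation half-width `1.96 * SE` is an assumption about the exact CI construction used in the table):

```python
import numpy as np

rng = np.random.default_rng(1)
# 20 hypothetical per-trial MSE values (one per trial).
trial_mse = rng.gamma(shape=2.0, scale=0.05, size=20)

mean = trial_mse.mean()
# 95% CI half-width via the normal approximation: 1.96 * standard error.
half_width = 1.96 * trial_mse.std(ddof=1) / np.sqrt(len(trial_mse))
print(f"MSE over {len(trial_mse)} trials: {mean:.3f} +/- {half_width:.3f}")
```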
**Deadly triad reference**
Thanks for the reference. We will include this.
**Formatting issues: ICML compliant, ylabels, line 190 format, FQE upper case**
Thanks and we will update this and the graphs to be clearer.
**Meaning of continuity**
We will add clarification on this and we refer the reviewer to the work of Le Lan et al. [2] for continuity of metrics in the context of RL (see their Figure 1 in Le Lan [2]). Briefly, ideally, states that have similar values are close to each other in feature space. This statement is true for general machine learning too: inputs with similar outputs should ideally be close to each other in feature space.
Once again thank you for making our work more precise. We believe the main concern on claims is easily addressable. Please let us know if we have addressed your concerns. If we have, we would greatly appreciate it if you could re-evaluate your review.
---
[1] Patterson et al. 2024. Empirical Design in Reinforcement Learning.
[2] Le Lan et al. 2021. Metrics and continuity in reinforcement learning.
---
Rebuttal Comment 1.1:
Comment: Thank you for your comprehensive rebuttal and for addressing the points raised in my initial review.
After reading the rebuttal and the other reviews, I am considering raising my score from 2 to 3.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for raising the score and for your response! We appreciate your effort in making the paper stronger. When you get a chance, please do update the score in the main review. We would greatly appreciate it. Thanks again! | Summary: This paper introduces Kernel Representations for Offline Policy Evaluation (KROPE), a kernel-based representation learning algorithm based on bisimulation metric-like ideas. They study a class of representations which emerge as the solution to the representation learning loss, and prove that it has desirable theoretical properties, in particular being stable for off-policy value function learning and is Bellman-complete under additional assumptions. They then evaluate across a range of both tabular and larger-scale environments, and both validate their theoretical results and compare KROPE across a range of baselines for OPE.
Claims And Evidence: All claims made in the submission are supported by proper evidence.
Methods And Evaluation Criteria: The methods and evaluation done make sense for the problem at hand.
Theoretical Claims: I checked all proofs, and did not find any issues.
Experimental Designs Or Analyses: The experimental design and validity seems good to me (I only very quickly skimmed the code provided).
Supplementary Material: I have reviewed the entirety of the supplementary material.
Relation To Broader Scientific Literature: In a sense the key contributions of this paper can be seen as extending the analysis/understanding of bisimulation-based representation learning to stability-type results, which I think is an important contribution on its own, and I expect it to lead to further research.
Essential References Not Discussed: I don't believe any essential references are not discussed.
Other Strengths And Weaknesses: **Strengths**
- The application of stability analysis to bisimilarity metric-type representations is a nice perspective and I expect it to be built upon in the future. Additionally the proof of Theorem 1 appears rather general (I think it should apply with at least any reasonable choice of immediate similarity kernel).
- The paper is well-written and easy to follow.
- The choice of experiments in Section 4.2. nicely complement the theoretical results of section 3.
**Weaknesses**
- Theorem 2 is dependent on a very strong assumption (that the reward function $r^\phi$ is injective), without any discussion around the assumption or what it may entail. I think that this assumption is violated in almost any non-contrived setting I can think of (from large-scale complex ones like Atari/MuJoCo to gridworlds, maze-based, CartPole, etc), which I believe limits the impact of the result.
Other Comments Or Suggestions: - I quite like the introduction and discussion of the KROPE representation $\Phi$ in Definition 2. To reassure the reader, can you perhaps add a minor remark which states that if $\Phi$ is a KROPE representation, then the inner product between two state-action pairs is equal to the kernel evaluated at them?
- It could strengthen the paper to shorten sections 1 & 2 and move some of Appendix C.3. to the main text. There is quite a bit of background/discussion before the novel contributions begin (halfway through page 4), and the main takeaways should be in the main body (takeaway #3 is currently in the appendix).
- In section C.2., in the description of ROPE, minor nit: "Its additional learning rate is the output dimension of $\phi$."
Questions For Authors: - I'm a bit confused by Definition 2 -- in particular what does $\mathbb{E}_{\mathcal{D}} [\Phi\Phi^T]$ represent for a matrix $\Phi \in \mathbb{R}^{|\mathcal{X}| \times d}$ (what depends on $\mathcal{D}$)?
- Similar to my first comment in the previous section -- is it possible for there to be a feature mapping $\Phi$ such that $\langle \phi(x,a), \phi(y,b)\rangle = k^{\pi_e}(x,a;y,b)$ but $\Phi$ is not a KROPE representation?
- Can there be a weaker statement similar to Theorem 2, without assuming the fact that $r^\phi$ is injective (likely the statement would depend on $r^\phi$, and the current theorem might appear as a special case).
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for appreciating our work and the clarity of our writing. Thank you for mentioning the strengths of our empirical and theoretical work, especially Theorem 1 and our choice of experiments. We also appreciate your acknowledging the significance of our results and potential for further research.
**Theorem 2 is dependent on a very strong assumption (that the reward function is injective)**
The injective reward assumption simply means that each abstract state-action group will have a distinct associated reward from every other abstract state-action group. While we agree that it is strong, it comes as a tradeoff. Chen and Jiang [1] proved that bisimulation abstractions are Bellman complete. Instead of assuming injective rewards, they assumed that two states were grouped together if each state’s transition dynamics led to next states that are also grouped together (one of the conditions for exact bisimulations). This condition is also considered strict and inefficient to compute [2].
In our work, we relax this exact transition dynamics equality by considering independent couplings between next state distributions, thereby making the KROPE algorithm efficient to compute. However, the drawback is that preserving distinctness between abstract state-actions may be lost (see page 4 Section 3.1 on remarks on $k^{\pi_e}$). To ensure the distinctness between state-action abstractions, we assumed that the reward function is injective. This then allowed us to show Bellman completeness in a similar way to that shown in Chen and Jiang [1].
Regarding a weaker version: it may be possible to relax this assumption and consider the error induced in Bellman completeness. Ultimately, what is needed is the ability to preserve distinctiveness between state-action features/abstractions. Ensuring such a property with a weaker condition would be interesting to investigate.
Note that this assumption is only for the BC proof. Theorem 1 does not make this assumption. We will include the rationale behind the injective reward assumption for Theorem 2 in the appendix.
**Definition 2 clarification**
Yes, we will add the point that the inner product equals the kernel evaluated for those features to the camera ready. Regarding your other question, each row in the $\Phi$ matrix corresponds to the feature vector for a state-action pair, so the dependence on $\mathcal{D}$ means that the state-actions (i.e., the features) are sampled according to their appearance in the batch of data.
**Other inner products/feature maps for KROPE kernel**
This is a really interesting question that is worth looking into. The features are shaped based on the PSD kernel used. Currently, Definition 2 defines a KROPE representation to be one that satisfies the relationship in Definition 2 in terms of linear dot products (which is a function of $k_1$, which is PSD (see Lemma 4 in Appendix)), which are easier to reason about. That said, Definition 2 can be broadened to just include short-term ($k_1$, which is PSD) and long-term similarity, where long-term similarity is determined by, say, a Gaussian kernel. This could also be a valid KROPE representation, but unclear if it will also be a stable representation as defined in our work (in terms of the spectral radius of the features).
**Deferring background in Sections 1/2 to Appendix and moving Appendix C.3 up**
We will definitely re-review Sections 1 and 2 and see what we can defer to the Appendix. We do believe that it is important to be complete, even if slightly redundant for readers who may already be familiar with the background knowledge. If possible, we will then move Appendix C.3 to the main text since we also agree that Appendix C.3 brings useful insights regarding the potential instability of KROPE.
---
[1] Chen and Jiang. 2019. Information-Theoretic Considerations in Batch Reinforcement Learning
[2] Castro. 2020. Scalable methods for computing state similarity in deterministic Markov Decision Processes
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal, and I maintain my positive rating of the paper.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your appreciation of our paper, your feedback, and your responding to our rebuttal. | Summary: The paper introduces Kernel Representations for Offline Policy Evaluation (KROPE), a novel algorithm designed to stabilize offline value function learning in reinforcement learning. KROPE leverages π-bisimulation to shape state-action representations, ensuring that similar state-action pairs are represented consistently. This approach enhances convergence and reliability. The authors provide theoretical foundations that demonstrate KROPE's stability through non-expansiveness and Bellman completeness. Empirical results indicate that KROPE outperforms other baselines in terms of stability and accuracy.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria do make sense for the problem.
Theoretical Claims: Partially. We have carefully examined some of the theoretical proofs, including Lemma 1. However, because the paper's theoretical direction differs from our own area of expertise, we are unable to determine whether the remaining proofs are correct.
Experimental Designs Or Analyses: We have confirmed the soundness of the experimental design in the revised paper. The experiments are divided into two parts. The first part verifies the stability and $q^{\pi_e}$-consistency of the KROPE algorithm in the Garnet MDPs environment. The second part verifies whether the KROPE algorithm yields low and stable MSVE, and whether it is sensitive to hyperparameters, on 4 tasks in DMC and 9 datasets in D4RL.
Supplementary Material: Partially. We have reviewed supplementary material, including background, partial theoretical results, and partial empirical details.
Relation To Broader Scientific Literature: This paper focuses on learning a value network that accurately estimates the expected discounted return for each state. The author enhances the estimation accuracy by learning a more effective representation, setting this approach apart from previous studies like FQE [1]. However, the paper does not further elaborate on the advantages or potential applications of learning such a value network.
[1] Le, Hoang, Cameron Voloshin, and Yisong Yue. "Batch policy learning under constraints." International Conference on Machine Learning. PMLR, 2019.
Essential References Not Discussed: Related works that are essential to understanding the key contributions of the paper are currently cited/discussed in the paper.
Other Strengths And Weaknesses: Strengths:
1. The author introduces a novel approach for evaluating the value function, with experimental results demonstrating its superior accuracy and stability in estimation compared to other baseline methods.
2. The theoretical proof of this paper is rigorous, and the proof ideas are also clear.
Weaknesses:
1. This paper demonstrates that the KROPE algorithm can learn a value network that accurately estimates the expected discounted return for a given state. However, it does not further elaborate on the role or potential applications of this learned value network.
Other Comments Or Suggestions: Overall, the paper is well-organized, with clear ideas and compelling theoretical and experimental evidence. However, the paper's starting point could be articulated more clearly. For instance, how do value networks further contribute to accurately predicting the expected discounted return from a given state?
Questions For Authors: 1. We aim to gain insights into the particular implementation of representing the (s, a) pair. Specifically, we are curious about whether states and actions should be concatenated prior to being fed into the network for representation, or if they should be processed by separate networks and concatenated subsequently.
2. Currently, this work has been focused on the D4RL dataset, which comprises physical states. We are interested in exploring the effectiveness of the method on the V-D4RL dataset, wherein states are depicted as images.
3. We are curious about the potential applications of value networks when they can accurately estimate the expected discounted return for each state. We believe that such an accurate value network could facilitate learning an effective policy. In this context, we hope the author can provide a comparison between the performance of the policy learned through KROPE and classical offline RL algorithms (such as TD3-BC [1], CQL [2], etc.). If the author can provide the above experimental results and demonstrate that the policy learned by the KROPE algorithm performs comparably to or better than classical offline RL algorithms, we will consider increasing the paper's score.
[1] Fujimoto, Scott, and Shixiang Shane Gu. "A minimalist approach to offline reinforcement learning." Advances in neural information processing systems 34 (2021): 20132-20145.
[2] Kumar, Aviral, et al. "Conservative q-learning for offline reinforcement learning." Advances in neural information processing systems 33 (2020): 1179-1191.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging that our empirical results and theoretical results are rigorous, and that the paper is clear and well-organized.
Below we address your concerns.
**Re: reason to learn the value network (“the paper does not further elaborate on the advantages or potential applications of learning such a value network.”)**
In our context, our goal is to estimate the value of a policy. Our contribution is particularly relevant to the off-policy evaluation (OPE) literature [1, 4, 5]. In OPE, we want to use an offline dataset generated by different policies to estimate the performance of another target policy since deploying the target policy directly in the environment may be risky or costly. One way to evaluate the performance of this target policy is to compute its value function. Therefore, accurately estimating the value function for a target policy becomes important.
OPE is particularly important in safety-critical tasks such as healthcare [2] or in situations where it may be monetarily costly to deploy a potentially poor performing policy such as in recommendation systems [3].
**Re: state-action input (“we are curious about whether states and actions should be concatenated prior to being fed into the network for representation, or if they should be processed by separate networks and concatenated subsequently.”)**
This is an interesting question. In our work, we concatenate the state-action pair and feed it directly into the network (i.e., we do not process them separately). This practice is fairly common (e.g., [6, 7]). Further investigation, which is beyond the scope of this work, would be required to determine how this alternative approach performs for OPE.
**Re: image states (“We are interested in exploring the effectiveness of the method on the V-D4RL dataset, wherein states are depicted as images”)**
This is an interesting future direction. In this present version of KROPE, we focussed on illustrating that KROPE improved the stability of OPE theoretically and on D4RL and DeepMind Control Suite environments (across 13 datasets). A next step would be to apply these ideas to visual domains.
**Re: using KROPE for offline control (“could facilitate learning an effective policy” and “demonstrate that the policy learned by the KROPE algorithm”)**
Thanks for this suggestion. We also expect KROPE to help in the offline control setting and it is an interesting direction to explore. In the current scope, however, we focussed on OPE. While control is interesting, studying OPE independently is important due to: 1) its practical significance in AI safety and building trustworthy RL agents and 2) prediction being a fundamental part of RL that is worth studying in isolation.
Please let us know if we have addressed your concerns. If we have, we would greatly appreciate it if you could re-evaluate your review.
---
[1] Voloshin et al. Empirical Study of Off-Policy Policy Evaluation for Reinforcement Learning. 2021
[2] Gottesman et al. 2018. Evaluating Reinforcement Learning Algorithms in Observational Health Setting
[3] Li et al. 2011. Unbiased offline evaluation of contextual-bandit- based news article recommendation algorithms
[4] Fu et al. 2021. Benchmarks for Deep Off-Policy Evaluation
[5] Uehara et al. 2022. A Review of Off-Policy Evaluation in Reinforcement Learning
[6] Chang et al. 2022. Learning Bellman Complete Representations for Offline Policy Evaluation
[7] Pavse et al. 2023. State-Action Similarity-Based Representations for Off-Policy Evaluation
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. It has addressed most of my concerns. I have decided to increase my score to 3.
---
Reply to Comment 1.1.1:
Comment: We are glad that we were able to address your concerns. Thank you for your response, increasing your score, and making our paper stronger! We will incorporate all the clarifications in the camera ready. | Summary: This paper addresses offline policy evaluation in offline RL, which involves estimating expected returns of state-action pairs under a fixed policy using offline datasets. Stability in this estimation process is critical for accurate evaluation. The authors propose KROPE, a new method combining bisimulation-based representation learning (ROPE) with kernel methods. KROPE constructs state-action representations such that pairs with similar immediate rewards and subsequent policy-induced states share similar representations. Experimental results suggest that KROPE enhances the stability of offline value function learning compared to baseline methods.
Claims And Evidence: The paper provides robust theoretical and experimental support for its claims, detailed in Sections 3 and 4, respectively. The theoretical part are solid.
Methods And Evaluation Criteria: While fundamentally sound, the methodology could be strengthened by addressing the evaluation score choice. The experimental section primarily presents squared value error, whereas the DOPE benchmark suggests using MAE for such evaluations (Fu et al., 2021). An explanation for this deviation or inclusion of MAE results would enhance the paper's credibility and relevance.
----------
Fu et al., 2021. Benchmarks for Deep Off-Policy Evaluation. ICLR 2021
Theoretical Claims: The theoretical framework is well-constructed with no apparent flaws in the claims or proofs provided.
Experimental Designs Or Analyses: The presentation and clarity of results in Table 1 need substantial improvement for supporting the claims regarding KROPE's stability:
1. Numerical results are currently presented with only one decimal place, making it difficult to discern clear differences among methods, e.g., 0.0 vs 0.0 or 0.1 vs 1.0. Higher numerical precision is necessary for meaningful comparisons.
2. The absence of MAE results (as recommended by the DOPE benchmark) further complicates interpreting and validating the results.
3. Notably, ROPE appears to diverge in the current experiments, contradicting convergence reported in the original ROPE paper (Pavse & Hanna, 2023a). This discrepancy raises concerns regarding experimental consistency and soundness. The authors should clearly explain this difference.
Learning curves illustrating training stability should be included prominently in the main text, given the paper's central claim regarding stability.
Additionally, some learning curves presented in Figure 7 (Appendix) reveal that KROPE does not converge in certain environments (e.g., Walker and Hopper). Besides, the large squared value errors from certain baseline methods obscure meaningful comparisons with more stable methods. Addressing this visualization issue is recommended.
--------
Pavse & Hanna, 2023a. State-action similarity-based representations for off-policy evaluation. NeurIPS 2023.
Supplementary Material: I have reviewed the Appendix.
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: 1. The addition of algorithm pseudocode specific to the KROPE method would enhance clarity.
2. I would raise my score if the concerns related to the experimental results are thoroughly addressed.
Questions For Authors: 1. Could the authors clarify whether any experimental settings differ from those used in the original ROPE paper?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for acknowledging that our empirical results and theoretical results are solid. Below we address the concerns you raise.
**Use of Mean Absolute Error vs. MSE**
This suggestion is valid since we understand the MAE may be more robust to outliers. However, it does not diminish the validity of the results since the MSE is a common metric used in the OPE literature, see this OPE benchmark paper [1]. The MSE is considered to be a valid metric due to its relation to the variance and bias of estimators (even though bias/variance may not be reported). We also refer the reviewer to many fundamental and prominent works that evaluate OPE algorithms based on the MSE [3,4,5,6,7].
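The stated relation between the MSE and the variance and bias of an estimator is the exact sample identity MSE = bias^2 + variance, which this tiny synthetic sketch verifies (the estimator, its bias, and its noise level are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
true_value = 10.0
# A hypothetical estimator with bias 0.5 and Gaussian noise.
estimates = true_value + 0.5 + rng.normal(scale=2.0, size=100_000)

mse = np.mean((estimates - true_value) ** 2)
bias_sq = (estimates.mean() - true_value) ** 2
var = estimates.var()  # ddof=0 makes the decomposition exact

# The sample MSE decomposes exactly into squared bias plus variance.
print(f"MSE={mse:.4f}  bias^2+var={bias_sq + var:.4f}")
```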
**Numerical precision in Table 1**
After analyzing the raw data, we found that including additional precision did not alter the big picture. Moreover, we were tight on horizontal space, so we truncated it at 1 decimal.
**Relations to Pavse et al. 2023**
This is a great point, which we will make clear in the paper. Below we state why we opted to use this setup and then explain the differences. Briefly, the cited ROPE paper: 1) used FQE as the OPE algorithm and 2) used a tanh activation function on the last layer of the encoder.
Instead of adopting Pavse et al.’s approach, we opted to use our alternative setup for two reasons: 1) By using LSPE as the OPE algorithm, instead of FQE, we can precisely quantify the stability properties of the representations in terms of the spectral radius (Theorem 1 and Section 4.2), which is harder to do when using FQE as the OPE algorithm; 2) while a valid architectural choice, for this current work, we viewed the use of the tanh function as obfuscating the true stability properties of the representations and so opted to avoid it. More practically, it is reasonable to use the tanh as part of the architecture.
In more detail, the main differences are:
1. The original ROPE paper used ROPE as a pre-training step, fixed the representations, and then fed them into FQE for OPE. In our case, we also pre-train the representations but with FQE as a representation learning algorithm (for value predictive representation [8]) along with other representation learning algorithms as auxiliary tasks. The fixed learned representations are then fed into LSPE for OPE.
2. The original ROPE encoder architecture (in the cited paper) had a tanh activation function on the output layer, which effectively serves as a clipping mechanism for the features [9], similar to how public implementations of FQE clip the return to avoid divergence [2].
We will make these differences explicit in the Appendix of the camera ready, where we discuss the different baselines.
**Including learning curves**
Thanks for the suggestion. These are included in the Appendix (Figure 6 and 7) due to lack of space in the main paper.
**KROPE does not always converge**
Yes, it is more accurate to say that KROPE improves the stability of OPE compared to other baselines, and we will reframe the paper as such. KROPE may not converge since KROPE relies on a semi-gradient learning algorithm. We discuss this limitation in Section 5 and provide insight into when this might occur in Appendix C.3.1.
Briefly, it may be possible to leverage the Legendre-Fenchel transformation and replace the fixed-point loss function of semi-gradient methods with an equivalent expression that avoids semi-gradient learning. However, a drawback with this approach is that the new learning objective is a minimax procedure, which can be challenging to optimize in practice [10].
We will modify our claim to say that *KROPE improves the stability of OPE compared to 7 other baselines on 10/13 datasets.* (not just highlighted blue errors in table 1).
**Pseudo-code**
Appendix (page 15) includes one, but we will make it clearer.
Please let us know if we have addressed your concerns. If we have, we would greatly appreciate it if you could re-evaluate your review.
---
[1] Voloshin et al. Empirical Study of Off-Policy Policy Evaluation for Reinforcement Learning. 2021
[2] https://github.com/google-research/google-research/blob/master/policy_eval/q_fitter.py#L101
[3] Chaudhari et al. 2024. Abstract Reward Processes: Leveraging State Abstraction for Consistent Off-Policy Evaluation
[4] Liu et al. 2018. Breaking the Curse of Horizon: Infinite-Horizon Off-Policy Estimation
[5] Thomas et al. 2016. Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning
[6] Hanna et al. 2019. Importance Sampling Policy Evaluation with an Estimated Behavior Policy
[7] Sachdeva et al. 2023. Off-Policy Evaluation for Large Action Spaces via Policy Convolution
[8] Lehnert and Littman. 2020. Successor features combine elements of model-free and model-based reinforcement learning
[9] Bhatt et al. 2024. CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity
[10] Feng et al. 2019. A Kernel Loss for Solving the Bellman Equation
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the rebuttal. Most of my concerns are addressed. I’m raising my score to 3.
---
Reply to Comment 1.1.1:
Comment: We are glad that we were able to address your concerns. Thank you for your response, increasing your score, and making our paper stronger! We will incorporate all the clarifications in the camera ready. | null | null | null | null | null | null |
ZipAR: Parallel Autoregressive Image Generation through Spatial Locality | Accept (poster) | Summary: The paper proposes ZipAR, a training-free plug-and-play decoding method for accelerating auto-regressive visual generation models. It decodes spatially adjacent tokens in the column dimension in parallel. It employs an adaptive local window assignment scheme with a rejection sampling strategy. Experimental results demonstrate that ZipAR can decrease the number of required forward passes without compromising the quality of generation.
## Update after rebuttal
The rebuttal satisfactorily addresses my concerns regarding the evaluation and theoretical justification. However, I remain concerned about the noticeable artifacts in the images. As a result, I am increasing my score to Weak Accept.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: 1. I think the evaluation is not sufficient. The authors only use CLIP to evaluate the quality of text-to-image generation, which fails to assess the image appearance. I would suggest the authors consider Aesthetic Score, Human Preference Score v2, and ImageReward to fully evaluate the method.
2. I see that ZipAR with diverse window sizes (for example from 3 to 15 in LlamaGen-XL) achieves similar CLIP score. Can you also show some qualitative comparison to see if different window sizes will lead to different behaviors?
Theoretical Claims: A theoretical proof demonstrating the effectiveness of speculative decoding in identifying short local window sizes that lead to insufficient information is currently lacking.
Experimental Designs Or Analyses: I check the soundness/validity of experimental designs and analyses. Please find my concerns in Evaluation Criteria setting.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: As the number of steps decreases, some artifacts become noticeable in Fig. 8. Does ZipAR enhance efficiency at the cost of visual quality? I recommend that the authors utilize additional metrics, as outlined in the Methods and Evaluation Criteria section, to further investigate this trade-off.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks to the reviewer for the valuable comments.
**Q1: Utilize additional metrics to fully evaluate the method.**
To address this concern, we have expanded our evaluation by assessing ZipAR's performance using multiple metrics, including VQAScore, Human Preference Score v2, ImageReward, and Aesthetic Score, across three models: LlamaGen-XL-512, Lumina-mGPT-768, and Lumina-mGPT-1024. The results presented below demonstrate that our method significantly improves generation efficiency with little impact on output quality across various benchmarks.
| Model | Method | Steps | VQAScore | HPSv2 | Image Reward | Aesthetic Score |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| LlamaGen-XL | NTP | 1024 | 0.6439 | **0.2647** | -0.0818 | 5.38 |
| LlamaGen-XL | ZipAR-15 | 562 | 0.6534 | 0.2637 | **-0.0690** | **5.39** |
| LlamaGen-XL | ZipAR-11 | 451 | **0.6581** | 0.2630 | -0.0982 | 5.37 |
| LlamaGen-XL | ZipAR-7 | 324 | 0.6410 | 0.2625 | -0.1683 | 5.33 |
| LlamaGen-XL | ZipAR-3 | 185 | 0.6343 | 0.2599 | -0.3121 | 5.32 |
| Lumina-mGPT-768 | NTP | 2352 | 0.6579 | 0.2743 | **0.4164** | 6.10 |
| Lumina-mGPT-768 | ZipAR-20 | 1063 | **0.6595** | **0.2747** | 0.3971 | **6.13** |
| Lumina-mGPT-768 | ZipAR-17 | 915 | 0.6433 | 0.2732 | 0.3049 | 6.12 |
| Lumina-mGPT-768 | ZipAR-14 | 740 | 0.6589 | 0.2739 | 0.3646 | 6.10 |
| Lumina-mGPT-768 | ZipAR-11 | 588 | 0.6490 | 0.2730 | 0.2861 | 6.10 |
| Lumina-mGPT-1024 | NTP | 4160 | 0.6718 | **0.2762** | **0.4232** | **5.97** |
| Lumina-mGPT-1024 | ZipAR-20 | 1331 | 0.6705 | 0.2761 | 0.3913 | 5.95 |
| Lumina-mGPT-1024 | ZipAR-17 | 1150 | **0.6797** | 0.2761 | 0.4018 | 5.94 |
| Lumina-mGPT-1024 | ZipAR-14 | 964 | 0.6732 | 0.2747 | 0.3298 | 5.94 |
| Lumina-mGPT-1024 | ZipAR-11 | 772 | 0.6723 | 0.2746 | 0.3222 | 5.95 |
**Q2: Show some qualitative comparisons of different window sizes.**
We have provided qualitative visualizations of different window sizes, as shown in Figures 1 and 8 in the paper and Figures 9 and 10 in the supplementary material. Since we cannot update the PDF version in the current phase, we will add more qualitative comparisons of different window sizes in the revised version.
**Q3: The effectiveness of speculative decoding in identifying short local window sizes that lead to insufficient information.**
Let $p_s$ and $p_{s+1}$ be the token distributions for window sizes $s$ and $s+1$. As given in Eq. 2 of the paper, the acceptance probability for a candidate $x_s \sim p_s$ is defined as
$$
\alpha(x_s)= \min\left(1, \frac{p_{s+1}(x_s)}{p_s(x_s)}\right)
$$
The expected acceptance rate can be formulated as:
$$
\mathbb{E}_{x_s \sim p_s}\left[\alpha(x_s)\right] = \sum_x p_s(x) \min\left(1, \frac{p_{s+1}(x)}{p_s(x)}\right) = \sum_x \min(p_s(x), p_{s+1}(x))
$$
**Theorem 1** (Relationship between Pairwise Minimum and Total Variation).
Let $p$ and $q$ be two probability distributions over the same discrete support $\mathcal{X}$. The sum of their element-wise minima satisfies:
$$
\sum_{x \in \mathcal{X}} \min(p(x), q(x)) = 1 - \text{TV}(p, q),
$$
where $\text{TV}(p, q) = \frac{1}{2} \|p - q\|_1$ is the total variation distance between $p$ and $q$.
**Proof**
For any $x \in \mathcal{X}$, observe that:
$$
\max(p(x), q(x)) + \min(p(x), q(x)) = p(x) + q(x).
$$
Summing over all $x$ yields:
$$
\sum_{x} \max(p(x), q(x)) + \sum_{x} \min(p(x), q(x)) = \sum_{x} p(x) + \sum_{x} q(x) = 2.
\quad (1)
$$
By the definition of the $L_1$-norm, we have:
$$
\|p - q\|_1 = \sum_{x} |p(x) - q(x)| = \sum_{x} \max(p(x), q(x)) - \sum_{x} \min(p(x), q(x)).
\quad (2)
$$
Let $S_{\min} = \sum_{x} \min(p(x), q(x))$ and $S_{\max} = \sum_{x} \max(p(x), q(x))$.
Then Equation (1) can be reformulated as:
$$
S_{\max} = 2 - S_{\min}. \quad (3)
$$
By substituting Equation (3) into (2), we have:
$$
\|p - q\|_1 = (2 - S_{\min}) - S_{\min} = 2 - 2S_{\min}.
$$
Rearranging and using $\text{TV}(p, q) = \frac{1}{2} \|p - q\|_1$:
$$
S_{\min} = 1 - \text{TV}(p, q).
$$
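As a quick numerical sanity check of the identity in Theorem 1, here is a toy example (the distributions below are illustrative, not from the paper):

```python
# Verify sum_x min(p(x), q(x)) = 1 - TV(p, q) on two small distributions.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

sum_min = sum(min(pi, qi) for pi, qi in zip(p, q))    # sum_x min(p(x), q(x))
tv = 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))  # total variation distance

# Both sides equal 0.9 here, up to floating-point rounding.
assert abs(sum_min - (1.0 - tv)) < 1e-12
```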
Therefore, the expected acceptance rate is formulated as:
$$
\mathbb{E}_{x_s \sim p_s}\left[\alpha(x_s)\right] = \sum_x \min(p_s(x), p_{s+1}(x)) = 1-\text{TV}(p_{s+1},p_s) \quad (4)
$$
If $s$ is insufficient, the distributions $p_s$ and $p_{s+1}$ diverge significantly, implying $\text{TV}(p_s, p_{s+1}) \geq \Delta$ for a threshold $\Delta > 0$. By (4), the expected acceptance rate is upper bounded by:
$$
\mathbb{E}[\alpha] \leq 1 - \Delta.
$$
A low acceptance rate ($\leq 1 - \Delta$) prompts the algorithm to increase the window size to $s+1$.
If $s$ is sufficient, $p_s$ and $p_{s+1}$ are statistically indistinguishable ($\text{TV}(p_s, p_{s+1}) \approx 0$). By (4), the expected acceptance rate approaches 1:
$$
\mathbb{E}[\alpha] \geq 1 - \epsilon \quad (\epsilon \approx 0),
$$
allowing immediate sampling from $p_{s+1}$ without window expansion. | Summary: This paper presents ZipAR, a training-free framework for accelerating autoregressive visual generation. It leverages the local structure of images by allowing parallel decoding of spatially adjacent tokens, alongside the standard next-token prediction. An adaptive local window assignment with rejection sampling ensures contextual alignment. This method significantly reduces the number of forward passes needed for image generation—by up to 91% on the Emu3-Gen model—without requiring retraining, thus enhancing efficiency in visual generation tasks.
Claims And Evidence: The claims made in the paper are clear and supported by experiments.
Methods And Evaluation Criteria: Perhaps we can add experiments with ImageNet 512 × 512, because generating images with higher resolutions often requires more acceleration. In addition, we can add more AR models to the evaluation, such as MAR.
Theoretical Claims: There is nothing wrong with the theoretical statement.
Experimental Designs Or Analyses: I think the experiments are a bit too few, because for the accelerated experiments, it would be better to add higher resolution tests to prove the effectiveness of the method in high resolution experiments. In addition, it would be good if the method can be effectively applied to other AR models.
Supplementary Material: I reviewed the appendices of the paper.
Relation To Broader Scientific Literature: The paper's use of rejection sampling and adaptive local window assignment draws from the concept of speculative decoding, which has been explored in natural language processing (NLP) to improve efficiency. Previous approaches, such as those used in language models, have shown that generating multiple tokens simultaneously can lead to significant efficiency gains. ZipAR applies this principle to visual generation, demonstrating that similar techniques can be effective beyond text.
Essential References Not Discussed: I think related work has been discussed in the paper.
Other Strengths And Weaknesses: I think the advantage of this paper is that it provides motivation for the acceleration of AR model reasoning through methods such as attention map visualization analysis. And the acceleration module designed in this paper is reasonable and the effect is significant.
I think the disadvantage of this paper is mainly the lack of experiments with 512 resolution, which cannot illustrate the acceleration ability of the model at higher resolutions. In addition, the experiments in this paper are mainly implemented using the llama gen model, and lack verification of other models.
Other Comments Or Suggestions: No other suggestions.
Questions For Authors: Please provide a brief pseudo code to facilitate understanding of the sampling process.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks to the reviewer for the valuable comments.
**Q1: The lack of experiments with 512 or higher resolution.**
For clarity, we would like to highlight that our experimental results already include higher-resolution evaluations, as referred to Table 2 in the paper. Specifically, the LlamaGen-XL model operates at 512x512 resolution and the Lumina-mGPT model operates at 768x768 resolution. To improve transparency, we will explicitly state these resolutions in the revised Table 2 caption. Moreover, to further address your point, we have conducted additional experiments using the Lumina-mGPT-1024 model at 1024x1024 resolution on various benchmarks. Due to the character limit here, please refer to Q1 in our response to Reviewer ELf9 for the evaluation results. These results demonstrate our method's effectiveness across multiple resolution scales.
**Q2: Experiments are mainly implemented using the LlamaGen model.**
As referred to lines 299-300 in the paper, we integrate ZipAR with three state-of-the-art next-token AR visual generation models: LlamaGen, Lumina-mGPT and Emu3-Gen. Quantitative results can be found in Table 1-2 in the paper and the visualization results of these models can be found in Figures 1, 8 in the paper and Figures 9, 10 in the supplementary material.
**Q3: Lack verification of other models, such as MAR.**
It should be noted that ZipAR is a training-free, plug-and-play parallel decoding framework for **vanilla next-token AR** visual generation models. However, MAR does not follow a next-token prediction generation paradigm.
**Q4: Provide a brief pseudo code for the sampling process.**
Thanks for your valuable comment. We have provided a pseudo code for the sampling process, as shown below
```python
# Pytorch-style Pseudo Code for ZipAR Sampling Process
# Image latent dimensions: H x W
# Minimum window size: s_min
# Initialize variables
total_columns = W # Number of columns
total_rows = H # Number of rows
decoding_rows = [0] # Rows actively being decoded
decoded_tokens = [[] for _ in range(H)] # Decoded tokens for each row
pending_starts = [] # Tentative new rows awaiting validation
while decoding_rows or pending_starts:
# --- Step 1: Decode one token in each active row ---
for row in decoding_rows:
if len(decoded_tokens[row]) < total_columns:
# Decode next token using AR model
new_token = generate_token(row, len(decoded_tokens[row])) ## Generate token at position (row, len(decoded_tokens[row]))
decoded_tokens[row].append(new_token)
# --- Step 2: Process pending_rows
new_pending = []
for (new_row, old_token, old_prob) in pending_starts:
new_prob, new_token = speculative_generate(new_row, 0) ## Tentative generation of token at position (new_row, 0)
# Calculate acceptance probability
r = uniform_sample()
if r < min(1, new_prob[old_token] / old_prob[old_token]): ## Eq. 2 in the paper
            # Accept the tentative token and start decoding this row
            decoding_rows.append(new_row)
            decoded_tokens[new_row].append(old_token)  ## Keep the accepted candidate x_s (Eq. 2)
else:
# Resample from difference distribution
resampled_prob, resampled_token = resample_distribution(new_prob, old_prob) ## Eq. 3 in the paper
new_pending.append( (new_row, resampled_token, resampled_prob) )
pending_starts = new_pending
    # --- Step 3: Check for rows that have decoded s_min tokens to initiate new rows ---
for row in list(decoding_rows):
if len(decoded_tokens[row]) == s_min and \
row+1 < total_rows and \
row+1 not in decoding_rows and \
row+1 not in [p[0] for p in pending_starts]:
# Generate tentative token with window size s_min
prob, pend_token = speculative_generate(row+1, 0) ## Tentative generation of token at position (row+1, 0)
pending_starts.append( (row+1, pend_token, prob) )
# --- Step 4: Cleanup completed rows ---
    decoding_rows = [r for r in decoding_rows
                     if len(decoded_tokens[r]) < total_columns]
``` | Summary: This paper proposes ZipAR, a training-free method to accelerate the decoding speed of the AR image generation model. They first show that significant attention scores are allocated to tokens in the same column of previous rows. Therefore, decoding the next row is not necessary to wait for the finishing of the last row. Based on this idea, they design the ZipAR method and use an adaptive window size to control the number of tokens in one step. The results show that this method can accelerate the generation speed without a significant performance drop.
Claims And Evidence: Yes
Methods And Evaluation Criteria: I think only evaluating text-to-image on MSCOCO with CLIP-Score is not robust enough. The quality results show no distinct difference between different models.
Theoretical Claims: There is no proof of theoretical claims.
Experimental Designs Or Analyses: 1. Missing speed comparison with VAR and MaskGIT.
2. Need more benchmark to prove the robustness of ZipAR
Supplementary Material: Sec A
Relation To Broader Scientific Literature: Accelerating current AR image generation
Essential References Not Discussed: VAR published in NeurIPS 2024.
Other Strengths And Weaknesses: Sec 3.3 is a little hard to follow; the details are not easy to grasp intuitively.
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks to the reviewer for the valuable comments.
**Q1: Essential reference VAR is not discussed.**
As noted in our related work section (lines 157-160 in the paper), we do discuss VAR and its approach to visual generation. Moreover, it should be noted that VAR requires specialized multi-scale tokenizers and must be trained from scratch as a complete generation framework. In contrast, our proposed ZipAR is a training-free, plug-and-play parallel decoding solution for existing vanilla (raster order) next-token autoregressive visual generation models without any architectural modifications or retraining.
**Q2: Missing speed comparison with VAR and MaskGIT.**
As noted in Q1, ZipAR aims to accelerate existing vanilla next-token AR visual generation models, which is not directly comparable with VAR or MaskGIT. However, to address this concern, we have evaluated the generation efficiency of ZipAR, VAR and MaskGIT with similar model sizes. The results are presented below. Compared with vanilla next-token AR models, ZipAR greatly improves generation efficiency and narrows the efficiency gap with VAR and MaskGIT without any additional training.
| Resolution | Model | Throughput (img/s) |
| ---- | ---- | ---- |
| 256x256 | LlamaGen-L | 40.9 |
| 256x256 | ZipAR-11 | 47.0 |
| 256x256 | ZipAR-7 | 58.1 |
| 256x256 | ZipAR-3 | 80.8 |
| 256x256 | MaskGIT | 120.0 |
| 256x256 | VAR-d16 | 126.7 |
| 512x512 | LlamaGen-L | 6.1 |
| 512x512 | ZipAR-11 | 12.4 |
| 512x512 | ZipAR-7 | 16.5 |
| 512x512 | ZipAR-3 | 22.9 |
| 512x512 | MaskGIT | 50.8 |
| 512x512 | VAR-d16 | 55.3 |
**Q3: More benchmarks are needed to prove the robustness of ZipAR.**
To address this concern, we have expanded our evaluation by assessing ZipAR’s performance using multiple metrics, including VQAScore, Human Preference Score v2, ImageReward, and Aesthetic Score, across three models: LlamaGen-XL-512, Lumina-mGPT-768, and Lumina-mGPT-1024. The results presented below demonstrate that our method significantly improves generation efficiency with little impact on output quality across various benchmarks.
| Model | Method | Steps | VQAScore | HPSv2 | Image Reward | Aesthetic Score |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| LlamaGen-XL | NTP | 1024 | 0.6439 | **0.2647** | -0.0818 | 5.38 |
| LlamaGen-XL | ZipAR-15 | 562 | 0.6534 | 0.2637 | **-0.0690** | **5.39** |
| LlamaGen-XL | ZipAR-11 | 451 | **0.6581** | 0.2630 | -0.0982 | 5.37 |
| LlamaGen-XL | ZipAR-7 | 324 | 0.6410 | 0.2625 | -0.1683 | 5.33 |
| LlamaGen-XL | ZipAR-3 | 185 | 0.6343 | 0.2599 | -0.3121 | 5.32 |
| Lumina-mGPT-768 | NTP | 2352 | 0.6579 | 0.2743 | **0.4164** | 6.10 |
| Lumina-mGPT-768 | ZipAR-20 | 1063 | **0.6595** | **0.2747** | 0.3971 | **6.13** |
| Lumina-mGPT-768 | ZipAR-17 | 915 | 0.6433 | 0.2732 | 0.3049 | 6.12 |
| Lumina-mGPT-768 | ZipAR-14 | 740 | 0.6589 | 0.2739 | 0.3646 | 6.10 |
| Lumina-mGPT-768 | ZipAR-11 | 588 | 0.6490 | 0.2730 | 0.2861 | 6.10 |
| Lumina-mGPT-1024 | NTP | 4160 | 0.6718 | **0.2762** | **0.4232** | **5.97** |
| Lumina-mGPT-1024 | ZipAR-20 | 1331 | 0.6705 | 0.2761 | 0.3913 | 5.95 |
| Lumina-mGPT-1024 | ZipAR-17 | 1150 | **0.6797** | 0.2761 | 0.4018 | 5.94 |
| Lumina-mGPT-1024 | ZipAR-14 | 964 | 0.6732 | 0.2747 | 0.3298 | 5.94 |
| Lumina-mGPT-1024 | ZipAR-11 | 772 | 0.6723 | 0.2746 | 0.3222 | 5.95 |
**Q4: Sec 3.3 can be more informative.**
To clearly demonstrate the details of ZipAR, we have provided a pseudo code for ZipAR's sampling process. Please refer to Q4 in our response to Reviewer CC9X due to the character limit here. We will include it in the revised version. | Summary: This paper introduces a novel technique to conduct parallel decoding in AR-based image generation. The proposed approach can be directly applied to off-the-shelf pretrained AR-based image generation models, speeding up the generation with small performance drop.
## update after rebuttal
Given the updated results with more evaluation metrics, I would like to keep my score of weak accept.
Claims And Evidence: Yes, claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation makes sense.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: Yes, I have checked the soundness of all experimental designs and analyses. Overall, the experiments can validate the effectiveness of the proposed method. However, one issue is that this paper could benefit from more numerical results. Currently, only FID and CLIP-scores are provided.
The reviewer believe that some human evaluation results would enhance the significance of the paper. If human evaluation is not feasible, then at least more diverse automatic evaluation approach such as VQA-score [1], image reward [2] should be considered.
[1] Evaluating Text-to-Visual Generation with Image-to-Text Generation. Lin. et al.
[2] ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation. Xu. et al.
Supplementary Material: Yes. The appendix.
Relation To Broader Scientific Literature: The key contributions can be related to the autoregressive-based image generation models. These models are known for their low generation speed. The proposed approach could alleviate such problem, and thus incentivize more researchers to explore AR-based image generation.
Essential References Not Discussed: Related works are properly discussed.
Other Strengths And Weaknesses: Strength:
1) The proposed algorithm is simple and can be applied without the need of retraining.
2) The proposed approach is well-motivated and demonstrate promising results.
Weakness:
1) Please see "Experimental Designs Or Analyses".
2) The paper could also benefit from more ablation studies or discussions. For example, the author could study/discuss whether the proposed approach affects the optimal token-sampling-hyperparameters such as sampling temperature or CFG scale.
Other Comments Or Suggestions: Not applicable.
Questions For Authors: Please see weakness (1), (2)
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks to the reviewer for the valuable comments.
**Q1:More diverse automatic evaluation approach should be considered.**
To address this concern, we have expanded our evaluation by assessing ZipAR’s performance using multiple metrics, including VQAScore, Human Preference Score v2, ImageReward, and Aesthetic Score, across three models: LlamaGen-XL-512, Lumina-mGPT-768, and Lumina-mGPT-1024. The results presented below demonstrate that our method significantly improves generation efficiency with little impact on output quality across various benchmarks.
| Model | Method | Steps | VQAScore | HPSv2 | Image Reward | Aesthetic Score |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| LlamaGen-XL | NTP | 1024 | 0.6439 | **0.2647** | -0.0818 | 5.38 |
| LlamaGen-XL | ZipAR-15 | 562 | 0.6534 | 0.2637 | **-0.0690** | **5.39** |
| LlamaGen-XL | ZipAR-11 | 451 | **0.6581** | 0.2630 | -0.0982 | 5.37 |
| LlamaGen-XL | ZipAR-7 | 324 | 0.6410 | 0.2625 | -0.1683 | 5.33 |
| LlamaGen-XL | ZipAR-3 | 185 | 0.6343 | 0.2599 | -0.3121 | 5.32 |
| Lumina-mGPT-768 | NTP | 2352 | 0.6579 | 0.2743 | **0.4164** | 6.10 |
| Lumina-mGPT-768 | ZipAR-20 | 1063 | **0.6595** | **0.2747** | 0.3971 | **6.13** |
| Lumina-mGPT-768 | ZipAR-17 | 915 | 0.6433 | 0.2732 | 0.3049 | 6.12 |
| Lumina-mGPT-768 | ZipAR-14 | 740 | 0.6589 | 0.2739 | 0.3646 | 6.10 |
| Lumina-mGPT-768 | ZipAR-11 | 588 | 0.6490 | 0.2730 | 0.2861 | 6.10 |
| Lumina-mGPT-1024 | NTP | 4160 | 0.6718 | **0.2762** | **0.4232** | **5.97** |
| Lumina-mGPT-1024 | ZipAR-20 | 1331 | 0.6705 | 0.2761 | 0.3913 | 5.95 |
| Lumina-mGPT-1024 | ZipAR-17 | 1150 | **0.6797** | 0.2761 | 0.4018 | 5.94 |
| Lumina-mGPT-1024 | ZipAR-14 | 964 | 0.6732 | 0.2747 | 0.3298 | 5.94 |
| Lumina-mGPT-1024 | ZipAR-11 | 772 | 0.6723 | 0.2746 | 0.3222 | 5.95 |
**Q2: Ablation studies on whether ZipAR affects the optimal token-sampling-hyperparameters.**
We performed a grid search to determine the optimal token-sampling hyperparameters, namely, sampling temperature and classifier-free guidance scale, for ZipAR. The results are shown below. Here, "*" denotes the results obtained from LlamaGen's paper. These results indicate that ZipAR sampling does not alter the optimal sampling temperature and classifier-free guidance scale.
| model | cfg | FID |
| ---- | ---- | ---- |
| LlamaGen-L* | 1.5 | 4.74 |
| LlamaGen-L* | 1.75 | 3.15 |
| LlamaGen-L* | 2.0 | **3.07** |
| LlamaGen-L* | 2.25 | 3.62 |
| ZipAR-16 | 1.5 | 6.18 |
| ZipAR-16 | 1.75 | 3.72 |
| ZipAR-16 | 2.0 | **3.14** |
| ZipAR-16 | 2.25 | 3.44 |
| model | Temperature | FID |
| ---- | ---- | ---- |
| LlamaGen-L | 0.96 | 3.53 |
| LlamaGen-L | 0.98 | 3.24 |
| LlamaGen-L* | 1.0 | **3.07** |
| LlamaGen-L | 1.02 | 3.14 |
| ZipAR-16 | 0.96 | 3.35 |
| ZipAR-16 | 0.98 | 3.25 |
| ZipAR-16 | 1.0 | **3.14** |
| ZipAR-16 | 1.02 | 3.34 | | null | null | null | null | null | null |
Dual Feature Reduction for the Sparse-group Lasso and its Adaptive Variant | Accept (poster) | Summary: Paper #6131 presents the dual feature reduction (DFR) framework, a novel bilevel screening method specifically for the sparse-group Lasso (SGL) and its adaptive variant (aSGL). SGL works by applying $\ell\_1$ (variable-level) and $\ell_2$ (group-level) shrinkage, and the paper's problem setting minimizes a convex differentiable loss function $f(\beta)$ with a sparse-group penalty $\\|\beta\\|\_{\mathrm{sgl}}=\alpha\\|\beta\\|\_1+(1-\alpha) \sum\_{g=1}^m \sqrt{p\_g}\left\\|\beta^{(g)}\right\\|\_2$. The proposed dual reduction technique reduces computational complexity by pre-screening and eliminating inactive groups and variables before optimization begins. It achieves this using strong screening rules derived from dual norms and Lipschitz assumptions on the gradients, specifically leveraging the subdifferential characterizations of the SGL norm. DFR performs bi-level screening, i.e., initially at the group level (discarding groups satisfying $\left\\|\nabla\_g f\left(\hat{\beta}\left(\lambda\_k\right)\right)\right\\|\_{\epsilon\_g} \leq \tau\_g\left(2 \lambda\_{k+1}-\lambda\_k\right)$ ) and then at the variable level within active groups (discarding variables when $\left|\nabla\_i f\left(\hat{\beta}\left(\lambda\_k\right)\right)\right| \leq$ $\alpha\left(2 \lambda\_{k+1}-\lambda\_k\right)$ ). KKT conditions are checked to correct any screening violations, ensuring optimality.
Lastly, their numerical experiments (synthetic and real datasets) show that DFR significantly reduces computational cost while maintaining robustness and achieving identical optimal solutions to standard SGL methods.
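The bi-level screening logic summarized above can be sketched in a few lines. This is a schematic illustration only: all names are hypothetical, and the plain group gradient $\ell_2$ norm stands in for the paper's $\epsilon$-norm.

```python
import numpy as np

def sgl_penalty(beta, groups, alpha):
    # alpha * ||beta||_1 + (1 - alpha) * sum_g sqrt(p_g) * ||beta^(g)||_2
    l1 = np.abs(beta).sum()
    l2 = sum(np.sqrt(len(g)) * np.linalg.norm(beta[g]) for g in groups)
    return alpha * l1 + (1 - alpha) * l2

def bilevel_screen(grad, groups, lam_k, lam_next, alpha, tau):
    # Bi-level strong screening sketch: a group is kept only if its score
    # exceeds tau_g * (2*lam_{k+1} - lam_k); within kept groups, a variable
    # is kept only if |grad_i| > alpha * (2*lam_{k+1} - lam_k).
    # (The simple group gradient norm below replaces the paper's eps-norm.)
    thresh = 2 * lam_next - lam_k
    keep = []
    for g, tau_g in zip(groups, tau):
        if np.linalg.norm(grad[g]) <= tau_g * thresh:
            continue  # group-level discard
        keep.extend(i for i in g if abs(grad[i]) > alpha * thresh)  # variable-level
    return keep
```

In the actual method, any discards made this way would subsequently be confirmed via the KKT checks the summary mentions.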
Claims And Evidence: The evidence is generally convincing.
Methods And Evaluation Criteria: Yes the evaluation criteria are appropriate. The work uses metrics like the improvement factor (the ratio of computational time with and without screening) and input proportion (the fraction of variables retained after screening), which directly measure the efficiency gains from the proposed method.
Theoretical Claims: I did not find any major mathematical mistakes.
The proof (as with many strong rules) relies on a Lipschitz assumption for the gradients. In practice, if the loss function $f$ does not have a Lipschitz-continuous gradient (or if the constant is underestimated), the assumption might fail. The authors are aware of this issue and use KKT checks to guard against potential violations. The derivation of the KKT check conditions (expressed in terms of a soft-thresholding operator), as well as many of the tools used, is standard in the literature.
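For context, the soft-thresholding operator mentioned here is the standard proximal map of the $\ell_1$ penalty, $S(x, t) = \operatorname{sign}(x)\max(|x| - t, 0)$; a minimal implementation:

```python
import numpy as np

def soft_threshold(x, t):
    # S(x, t) = sign(x) * max(|x| - t, 0): shrinks entries toward zero by t,
    # setting entries with |x| <= t exactly to zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

out = soft_threshold(np.array([-3.0, 0.5, 2.0]), 1.0)  # -> [-2., 0., 1.]
```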
Experimental Designs Or Analyses: They are reasonable from my perspective.
Supplementary Material: I thoroughly examined the derivations in the appendix and most of the experimental results there, but I did not have the time to verify the authors' codebase.
Relation To Broader Scientific Literature: The paper’s key contributions build on a rich literature in sparse estimation and feature screening rules. Prior work on screening rules for the lasso (most notably the strong rules by Tibshirani et al. 2010) and safe screening techniques (like those by El Ghaoui et al. 2010) provided the conceptual and mathematical framework for discarding inactive features prior to optimization.
Essential References Not Discussed: There do not appear to be any significant missing references.
Other Strengths And Weaknesses: - Although KKT checks prevent incorrect feature elimination, they introduce additional computational costs that are not explicitly benchmarked in isolation. The paper notes increased KKT violations for DFR-aSGL compared to DFR-SGL, suggesting the Lipschitz assumptions are less robust when adaptive penalties are introduced.
- While DFR is empirically faster, the computation of the $\epsilon$-norm has a worst-case complexity of $O(p_g log p_g)$, which could become a bottleneck for very large group sizes.
Other Comments Or Suggestions: Integrating the pseudocode from the appendix into the main body could improve readability and help readers grasp the key ideas more effectively.
Questions For Authors: 1. Can your DFR be extended to handle nonlinear models like kernel-based methods or neural networks, or other losses beyond simple square and logistic loss? If not, what are the fundamental barriers to applying this method beyond convex optimization problems?
2. The approach is developed specifically for convex SGL penalties, and it's unclear if it would extend to non-convex sparse-group penalties like sparse-group SCAD.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and thought they invested in our manuscript and in providing helpful feedback. In the camera-ready version, we will add the pseudocode from the appendix into the main text to improve readability. In response to specific points raised:
>Although KKT checks prevent incorrect feature elimination, they introduce additional computational costs that are not explicitly benchmarked in isolation. The paper notes increased KKT violations for DFR-aSGL compared to DFR-SGL, suggesting the Lipschitz assumptions are less robust when adaptive penalties are introduced.
We did not provide a detailed timing breakdown as **our focus was on end-to-end efficiency for model users**, which is most relevant in practice. We agree that a detailed runtime breakdown would be insightful. We have added an analysis below, and we will add a more comprehensive analysis with figures in the camera-ready version.
*Breakdown comparison*: We ran a breakdown analysis for Figure 1 and found that on average across all values of $\alpha$ the following breakdown occurred for fitting a full path (as a percentage of the total runtime):
* **Fitting algorithm.** DFR-SGL: 88% ($133$s), DFR-aSGL: 86% ($134$s).
* **$\epsilon$-norm evaluation.** DFR-SGL: 3.9% ($3$s), DFR-aSGL: 3.6% ($3$s).
* **Group screening.** DFR-SGL: 3.9% ($3$s), DFR-aSGL: 3.6% ($3$s).
* **Variable screening.** DFR-SGL: 0.01% ($0.01$s), DFR-aSGL: 0.01% ($0.01$s).
* **KKT checks.** DFR-SGL: 0.6% ($0.4$s), DFR-aSGL: 0.6% ($0.4$s).
Additionally, we ran a breakdown analysis on the real dataset *scheetz* for fitting a full path:
* **Fitting algorithm.** DFR-SGL: 77% ($275$s), DFR-aSGL: 65% ($244$s).
* **$\epsilon$-norm evaluation.** DFR-SGL: 0.46% ($1.64$s), DFR-aSGL: 0.57% ($2$s).
* **Group screening.** DFR-SGL: 0.46% ($1.66$s), DFR-aSGL: 0.58% ($2.18$s).
* **Variable screening.** DFR-SGL: 0.01% ($0.01$s), DFR-aSGL: 0.02% ($0.08$s).
* **KKT checks.** DFR-SGL: 0.2% ($0.68$s), DFR-aSGL: 0.3% ($1.14$s).
The analysis shows that screening adds minimal overhead while significantly improving fitting efficiency. In Figure 1, without screening, the average fitting time was $544$s for SGL and $603$s for aSGL.
> Can your DFR be extended to handle nonlinear models like kernel-based methods or neural networks, or other losses beyond simple square and logistic loss? If not, what are the fundamental barriers to applying this method beyond convex optimization problems?
The core assumptions are that the **loss function is convex and differentiable**, so DFR can be applied to any loss function satisfying these. Convexity prevents multiple optimal solutions that could complicate screening, and we require differentiability to derive the screening rules and KKT checks (we need access to $\nabla f$).
In the manuscript, we have focused on linear and logistic regression, as the solver used, ATOS, has the additional assumption that the loss must also have a Lipschitz gradient (also known as $L$-smooth). However, we have also showcased that DFR can be efficiently implemented using BCD, which does not have these limitations. Therefore, a potential future direction is the exploration of applying DFR to other loss functions.
>The approach is developed specifically for convex SGL penalties, and it's unclear if it would extend to non-convex sparse-group penalties like sparse-group SCAD.
DFR, like any screening rule, is derived using the subdifferentials of SGL. Extending it to other penalties requires deriving their subdifferentials. Strong rules work best with uniqueness properties on subgradients, well-behaved KKT conditions, and strong duality—properties that are often absent in non-convex cases. Thus, DFR is not inherently limited to SGL, aside from the usual challenges of screening non-convex penalties, as discussed.
The two-layer screening framework used for DFR applies to any sparse-group model, but literature on screening rules applied to non-convex penalties is relatively light (see (a)).
**Please don't hesitate to ask us any additional questions about our work.**
*References*
(a) Alain Rakotomamonjy, et al. “Screening rules for Lasso with non-convex Sparse Regularizers”. PMLR, 2019.
---
Rebuttal Comment 1.1:
Comment: Thank you for providing additional simulations. I reviewed this work at a previous conference, and based on the rebuttal I have no other concerns.
---
Summary: This paper introduces Dual Feature Reduction (DFR), a novel screening method to enhance the computational efficiency of Sparse-Group Lasso (SGL) and its adaptive variant (aSGL).
DFR applies two-layer screening:
- Group Reduction eliminates inactive groups using a strong screening rule based on dual norms and KKT conditions.
- Variable Reduction further removes inactive features within active groups.
DFR is the first bi-level strong screening method for SGL and the first screening rule for aSGL, producing the same optimal solution with significantly lower computational cost than GAP Safe and sparsegl.
Experiments on synthetic and real-world datasets show that DFR achieves significant speedup while maintaining selection accuracy. It enables expanded hyperparameter tuning and makes SGL more scalable for high-dimensional learning tasks, particularly in ML modeling in genetics.
Claims And Evidence: **Well-Supported Claims**
1. DFR reduces computational cost while preserving solution optimality.
- Evidence: The theoretical analysis shows that DFR maintains the same optimal solution by leveraging dual norms and KKT-based strong screening rules.
- Experimental Support: DFR substantially lowers the number of variables to be optimized, resulting in significant computational savings.
2. DFR outperforms existing screening methods.
- Evidence: Experiments indicate that DFR reduces the input dimensionality more effectively than both GAP and sparsegl, yielding a lower optimization cost.
3. DFR enables expanded hyperparameter tuning.
- Evidence: Because DFR significantly cuts computation time, it becomes feasible to jointly tune \(\lambda\) and \(\alpha\) in SGL-based models.
**Claims Requiring More Evidence**
1. The “Improvement Factor” (IF) is a fair and reliable efficiency metric.
- Issue: While IF appears to be based on overall runtime, the paper does not explicitly confirm this. Its fairness is uncertain for several reasons:
- Baseline Dependence: If the no-screening solver is suboptimal, IF might overstate DFR’s efficiency gain.
- KKT Overhead: Without a runtime breakdown for screening, KKT checks, and optimization, it’s unclear whether KKT overhead negates the benefit of reduced dimensionality.
- Comparison with GAP Safe: IF could favor heuristic methods like DFR over exact approaches like GAP Safe, which might be more conservative in screening.
2. DFR outperforms GAP Safe.
- Issue: The paper does not compare GAP Safe in certain synthetic scenarios (e.g., high dimensionality, uneven group sizes, logistic models) and omits real-data comparisons. Therefore, the practical advantage of DFR over GAP Safe remains uncertain.
3. DFR’s two-layer screening does not significantly increase false exclusions.
- Issue: There were some KKT violations, especially in the adaptive SGL setting. The frequency and consequences of these violations are not thoroughly discussed, leaving open questions about screening accuracy.
Methods And Evaluation Criteria: **Strengths**
1. DFR directly addresses SGL’s computational bottleneck, leveraging dual norms and KKT-based screening for feature reduction.
2. Comprehensive evaluation includes synthetic data (controlled sparsity, correlation, dimensionality) and real datasets (genetics, machine learning).
3. Baseline comparisons with GAP Safe (exact screening) and sparsegl (heuristic screening) contextualize DFR’s performance.
**Potential Issues**
1. GAP Safe is omitted from several synthetic data and real data experiments, leaving uncertainty about its practical efficiency.
2. Computational breakdown is missing, making it unclear how much speedup comes from screening vs. solving the reduced problem. Please consider providing absolute runtime comparisons and a screening vs. KKT vs. optimization time breakdown.
3. Need to clarify solver settings across all methods. Potential solver bias if different methods are not optimized consistently.
Theoretical Claims: Generally checked and no major issues found in the submission.
Experimental Designs Or Analyses: **Strengths**
1. Broad Experimental Scope: The paper evaluates DFR on synthetic datasets (varying sparsity, correlation, dimensionality) and real datasets (genetics, classification), capturing both controlled and practical scenarios.
2. Comparison with Established Baselines: In parts of the synthetic experiments, GAP Safe and sparsegl are used as benchmarks, offering a relevant performance context for DFR.
3. Robustness Checks: The analyses investigate factors like signal strength, data correlation, and logistic vs. linear models, highlighting how DFR fares under diverse conditions.
**Potential Issues**
1. Incomplete Details for Increasing Dimensionality: While DFR is tested in high dimensions, the paper lacks a clear description of how the synthetic data are generated in the increasing-dimensionality scenarios. This hampers reproducibility and leaves open questions about the underlying correlation structures and signal placement.
2. Omission of GAP Safe in Later Synthetic Tests: In the high dimensionality, robustness and logistic experiments, GAP Safe is not included despite being considered earlier. This limits insight into how GAP Safe compares to DFR under these more diverse conditions.
3. Reliance on Improvement Factor: Though informative, the improvement factor might overlook the runtime overhead for KKT checks and solver differences. Absolute runtime or time-breakdown analyses could provide a fuller picture.
4. Solver Consistency: The paper does not clearly state if all methods (DFR, GAP Safe, sparsegl) use identical solver settings, raising potential fairness concerns in runtime comparisons.
Supplementary Material: Majorly reviewed, except the proof parts.
Relation To Broader Scientific Literature: This paper extends strong screening frameworks (Tibshirani et al., 2010) to the Sparse-Group Lasso (SGL) and adaptive SGL, building on prior safe/strong rules for the group lasso (e.g., Ndiaye et al., 2016). By employing dual norms and KKT-based subgradients, the method follows the dual polytope projection logic found in other screening approaches (Wang et al., 2013). However, unlike GAP Safe (Ndiaye et al., 2016) which iterates for exact screening, DFR uses single-pass strong rules coupled with KKT checks. The bi-level screening structure aligns with earlier group-sparse penalties but adds a second screening stage to reduce dimensionality within each group—an innovation that refines prior one-layer methods like sparsegl (Liang et al., 2022). By extending the framework to adaptive SGL, the paper addresses the oracle property aspect (Poignard, 2020), contributing to literature on sparsity-inducing regularization with adaptive weights.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: - Reorganize Experimental Section: The paper’s experiment write-up often references both the main text and multiple Appendix sections, causing fragmented reading. Consider consolidating key experimental details or providing clearer cross-references so readers don’t have to jump back and forth.
- Clarify Improvement Factor Usage: Although the paper uses the improvement factor metric, it might overlook runtime overhead (e.g., KKT checks). Presenting absolute runtimes or a time breakdown alongside the improvement factor would help address fairness concerns.
Questions For Authors: 1. The paper does not compare GAP Safe in certain synthetic scenarios (e.g., increased dimensionality, uneven group sizes, logistic models) and omits it for real data. Could you provide results—or at least partial findings—on those settings to clarify whether DFR consistently outperforms GAP Safe across diverse conditions?
2. Would it be possible to offer a more detailed runtime breakdown—including screening, KKT checks, and reduced optimization—alongside the improvement factor?
3. Could you elaborate on how the synthetic datasets were generated for the increasing-dimensionality experiments (e.g., correlation structures, signal placement, group definitions)?
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We want to thank the reviewer for taking the time to review our work and for their helpful comments. In the camera-ready version, the experimental section will be restructured to improve readability, reducing the need for frequent cross-referencing. In response to specific points raised:
>Claims Requiring More Evidence: The “Improvement Factor” (IF) is a fair and reliable efficiency metric.
We use IF as it provides **direct insight into how screening impacts the user**. For instance, an IF of $2$ means DFR halves fitting time. The metric does not inherently favour any method. We also report Input Proportion as an alternative screening impact measure, independent of computational considerations. Both metrics show DFR is effective. Additionally, raw runtimes are available in the appendix, enabling direct method comparison.
>If the no-screening solver is suboptimal, IF might overstate DFR’s efficiency gain.
We considered this when using ATOS with DFR. To ensure IF does not unfairly benefit DFR over GAP safe due to the solver, we also implemented DFR with BCD (the solver used for GAP) and found similar results (Figure 1).
>Claims Requiring More Evidence: DFR’s two-layer screening does not significantly increase false exclusions.
We detailed our findings on KKT violations in the 'KKT violations' section (Lines 363-377). The higher violations in aSGL likely stem from the Lipschitz assumptions' dependence on additional hyperparameters. However, across all results (synthetic and real), KKT violations for DFR-SGL and DFR-aSGL were minimal. Specifically, DFR-SGL had only one violation overall, and DFR-aSGL had a violation every $108$ fits (Figures 1-3, Table A4).
>Would it be possible to offer a more detailed runtime breakdown—including screening, KKT checks, and reduced optimization—alongside the improvement factor?
We did not provide a detailed timing breakdown as our focus was on end-to-end efficiency for model users, which is most relevant in practice. We agree that a detailed runtime breakdown would be insightful. We have added an analysis below, and we will add a more comprehensive analysis with figures in the camera-ready version.
*Breakdown comparison*: We ran a breakdown analysis for Figure 1 and found that on average across all values of $\alpha$ the following breakdown occurred for fitting a full path (as a percentage of the total runtime):
* **Fitting algorithm.** DFR-SGL: 88% ($133$s), DFR-aSGL: 86% ($134$s).
* **$\epsilon$-norm evaluation.** DFR-SGL: 3.9% ($3$s), DFR-aSGL: 3.6% ($3$s).
* **Group screening.** DFR-SGL: 3.9% ($3$s), DFR-aSGL: 3.6% ($3$s).
* **Variable screening.** DFR-SGL: 0.01% ($0.01$s), DFR-aSGL: 0.01% ($0.01$s).
* **KKT checks.** DFR-SGL: 0.6% ($0.4$s), DFR-aSGL: 0.6% ($0.4$s).
Additionally, we ran a breakdown analysis on the real dataset *scheetz* for fitting a full path:
* **Fitting algorithm.** DFR-SGL: 77% ($275$s), DFR-aSGL: 65% ($244$s).
* **$\epsilon$-norm evaluation.** DFR-SGL: 0.46% ($1.64$s), DFR-aSGL: 0.57% ($2$s).
* **Group screening.** DFR-SGL: 0.46% ($1.66$s), DFR-aSGL: 0.58% ($2.18$s).
* **Variable screening.** DFR-SGL: 0.01% ($0.01$s), DFR-aSGL: 0.02% ($0.08$s).
* **KKT checks.** DFR-SGL: 0.2% ($0.68$s), DFR-aSGL: 0.3% ($1.14$s).
The analysis shows that screening adds minimal overhead while significantly improving fitting efficiency. In Figure 1, without screening, the average fitting time was $544$s for SGL and $603$s for aSGL.
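For reference, the improvement factor implied by these numbers can be checked with one line of arithmetic, assuming the percentages above are shares of total runtime:

```python
def improvement_factor(t_without, t_with):
    """IF as reported in the paper: path-fitting time without
    screening divided by fitting time with screening."""
    return t_without / t_with

# Figure 1 breakdown above: the fitting algorithm takes 133 s and is
# 88% of total DFR-SGL runtime, so the total is roughly 133 / 0.88 s.
if_sgl = improvement_factor(544, 133 / 0.88)    # about 3.6x
if_asgl = improvement_factor(603, 134 / 0.86)   # about 3.9x
```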
>Need to clarify solver settings across all methods. Potential solver bias if different methods are not optimized consistently.
See lines 260-266 and the 'Comparison to BCD' section, where we state that DFR uses ATOS while GAP safe uses BCD. To check the solver does not bias results, we did implement DFR with BCD and found similar outcomes (Figure 1).
>The paper does not compare GAP Safe in certain synthetic scenarios (e.g., increased dimensionality, uneven group sizes, logistic models) and omits it for real data.
The GAP safe rules failed in most simulations we tested, with issues in convergence and solution optimality. Using the authors' implementation (see Link 1), we are confident the problem lies with the approach, not the code. We appreciate that a comparison is important, so we included it where feasible (Figures 1-3). The poor performance suggested further investigation would not be fruitful.
Link 1: https://github.com/EugeneNdiaye/Gap_Safe_Rules
>Could you elaborate on how the synthetic datasets were generated for the increasing-dimensionality experiments (e.g., correlation structures, signal placement, group definitions)?
For the increasing dimension case, the setting is the same as for the other cases (described in Section 3.1), aside from the grouping structure. As explained in the 'Increasing dimensionality' section, the variables were grouped into groups of size $20$, so that there were $p/20$ groups for each value of $p$.
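A minimal sketch of this setup (the grouping matches the description above; the correlation structure and signal placement here are illustrative assumptions, not the exact Section 3.1 settings):

```python
import numpy as np

def make_grouped_data(n=100, p=200, group_size=20, active_groups=2, seed=0):
    """Illustrative grouped regression data: p / group_size equal-sized
    groups, with the signal confined to the first few groups."""
    rng = np.random.default_rng(seed)
    groups = np.repeat(np.arange(p // group_size), group_size)
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    k = active_groups * group_size
    beta[:k] = rng.standard_normal(k)   # active groups carry the signal
    y = X @ beta + rng.standard_normal(n)
    return X, y, groups, beta
```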
**Please don't hesitate to ask us any additional questions about our work.** | Summary: This paper introduces a new feature reduction method in order to improve the computational complexity in solving Sparse-Group Lasso (SGL) problems.
The Dual Feature Reduction (DFR) method that is presented relies on two screening stages (one for inactive groups and another for inactive variables within a group) and the authors provide both theoretical groundings and experimental results to support their claims.
### Update after rebuttal.
I have acknowledged the authors' response and will maintain my recommendation. For the completeness of the experimental protocol, however, I would still recommend including GAP Safe in all experiments.
Claims And Evidence: The authors provide evidence through coherent theoretical bases building upon well-established literature, and convincing empirical results.
Refer to §Methods and evaluation criteria for more details.
Methods And Evaluation Criteria: The problem at hand is concerned with reducing the computational costs in solving SGL problems.
It is well established in the screening literature that reducing the input feature size by identifying those that will be inactive at the optimal solution induces significant computational gains to the subsequent optimization scheme.
The proposed DFR method does therefore make sense for the problem at hand.
In their experimental evaluations, the authors clearly distinguish the gain in performance through what they call the _improvement factor_ (the ratio of computation time in methods without and with screening approach) and the _input proportion_ (quantifying how much the feature space was reduced). Their proposed DFR method is compared against the sparsegl and GAP Safe approaches, which are two other competing rules of screening for SGL problems.
Theoretical Claims: I have not checked the proofs of the theoretical claims.
Experimental Designs Or Analyses: I have checked the soundness of the experimental design (both synthetic and real data). Please refer to §Questions.
Supplementary Material: I have not reviewed the supplementary material.
Relation To Broader Scientific Literature: The key contributions of this paper are regarding screening rules for Sparse-Group Lasso problems, which are commonly found in various machine learning fields where relevant groups of variables need to be identified while ensuring limited spurious variables within a given group.
Essential References Not Discussed: N.A.
Other Strengths And Weaknesses: To the best of my knowledge, the DFR method proposed by the authors is novel.
The presentation of their approach is clear and gradual, building upon well-established existing literature.
Their experimental evaluation is thorough (though please also refer to §Questions).
I believe this is a valid contribution to the broader scientific community.
Other Comments Or Suggestions: For improved readability, I would suggest increasing the font sizes of axis labels in all the figures.
Questions For Authors: Why is the GAP Safe approach not included in the evaluations of the impact of dimensionality, sparsity proportion or data correlation (Figs 4 to 6), or in the real data analysis (Figs 8 and 9)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We want to thank the reviewer for their time in reviewing our work and for their helpful and positive feedback. As suggested, we will increase the font sizes in the figures in the camera-ready version. With regards to your question:
>Why is the GAP Safe approach not included in the evaluations of the impact of dimensionality, sparsity proportion or data correlation (Figs 4 to 6), or in the real data analysis (Figs 8 and 9)?
We found that the GAP safe rules did not work for most simulation settings we tried. We encountered issues with convergence and solution optimality. We used an implementation provided by the authors of the GAP safe rules (see Link 1), so that we are confident the issue was not with the implementation but with the approach itself. We understand that a comparison to the safe rules is important, so we included the comparison when it was possible (Figures 1-3). The poor performance of GAP safe in these settings convinced us that further investigation of the safe rules would not be fruitful.
Link 1: https://github.com/EugeneNdiaye/Gap_Safe_Rules
**Please don't hesitate to ask us any additional questions about our work.** | null | null | null | null | null | null | null | null |
RocketKV: Accelerating Long-Context LLM Inference via Two-Stage KV Cache Compression | Accept (poster) | Summary: RocketKV is a training-free KV cache compression strategy designed to optimize the inference efficiency of long-context LLMs during the decode phase. The main challenge it addresses is the exponential memory overhead due to KV cache storage, which scales with sequence length. The method is empirically validated on Mistral-7B, LLaMA-3.1-8B, and LongChat-7B, showing up to 3× speedup and 31% peak memory reduction on NVIDIA H100 GPUs, while preserving accuracy across long-context benchmarks (LongBench, Needle-in-a-Haystack, RULER). RocketKV outperforms SnapKV, Quest, SparQ, and DuoAttention, achieving near full-KV accuracy at significantly lower memory footprints.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: No theory.
Experimental Designs Or Analyses: The experiments are sound but could benefit from:
- Latency breakdown: token retrieval and sparse attention computation.
- Ablation studies on kernel size impact.
Supplementary Material: Yes. All.
Relation To Broader Scientific Literature: - Advancing Sparse Attention Methods for LLMs
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths
- Hardware-Friendly Optimization.
- Significant Performance Gains in Speed and Memory Efficiency.
Weaknesses:
- Novelty is not enough. The two main stages come from SnapKV + QUEST.
- Potential Sensitivity to Kernel Size Selection in SnapKV++. The adaptive pooling kernel size in SnapKV++ is empirically determined, raising concerns about its generalizability to unseen domains.
- Evaluation on Only One Hardware Setup (NVIDIA H100). Test on cheaper GPUs such as A100.
- Lack of Direct Latency Comparison with Alternative Sparse Attention Methods.
Other Comments Or Suggestions: Regarding the needle-in-the-haystack benchmark, since RocketKV evicts previous tokens, it is unclear whether it might also evict the needle—despite still producing the correct output. Could the authors provide an analysis of which tokens RocketKV evicts and whether the needle is among them?
Questions For Authors: See Strengths And Weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and finding RocketKV results promising.
**Novelty: RocketKV is SnapKV + QUEST:**
As discussed in the paper, existing methods for KV cache compression typically fall into two categories: permanent KV token eviction and dynamic KV token selection. We would like to clarify that the primary novelty of our work is that we introduce a two-stage approach that effectively combines the strengths of both paradigms into a single framework. While we propose SnapKV++ for the first stage and hybrid attention for the second stage, they can be directly replaced with various other methods of the same category.
In terms of SnapKV++, we admit that our improvement over SnapKV is not significant which is reflected in the name (this is also recognized by Reviewer utBU). However, it is still much more effective than the original SnapKV method as demonstrated in our ablation study (Section 4.4 and Appendix B.1).
We would like to point out that our hybrid attention method is NOT the same as QUEST. QUEST relies on K tensor reduction along the sequence dimension to conduct approximate attention, while our hybrid attention method relies on K tensor reduction along both sequence and head dimensions. This two-dimensional reduction scheme can achieve much higher accuracy than one-dimensional reduction at a given compression ratio. For example, with a compression ratio of 16, hybrid attention can evenly split it into a compression ratio of 4 at each dimension, introducing much lower accuracy loss compared to directly compressing the sequence dimension by 16x in QUEST. This is further confirmed in the ablation study (Section 4.4.3 and Appendix B.1) where we demonstrate the standalone hybrid attention method consistently outperforms QUEST and SparQ.
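To make the two-dimensional reduction concrete, here is a schematic numpy sketch of the idea (our own illustration, not the paper's implementation): the channel dimension of K is first reduced to the components where |q| is largest, in the spirit of SparQ, and the approximate scores over the reduced K then pick which sequence positions are kept for exact attention.

```python
import numpy as np

def hybrid_topk_indices(q, K, seq_keep, dim_keep):
    """Pick sequence positions for exact sparse attention.
    q: (d,) query; K: (T, d) keys."""
    # Reduce the head dimension: keep channels where |q| is largest.
    ch = np.argsort(-np.abs(q))[:dim_keep]
    # Approximate attention scores using only the kept channels.
    approx = K[:, ch] @ q[ch]               # (T,)
    # Reduce the sequence dimension: keep the top-scoring positions.
    top = np.argsort(-approx)[:seq_keep]
    return np.sort(top)
```

With a compression ratio of 16 as in the example above, `seq_keep = T // 4` and `dim_keep = d // 4` splits the ratio 4x4 across the two dimensions.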
Overall, we believe our work introduces sufficient novelty and is qualified as a top-tier conference publication.
**Potential Sensitivity to Kernel Size in SnapKV++:**
We thoroughly examined the impact of pooling kernel size in the ablation study (Section 4.4.2 and Appendix B.1). While the adaptive pooling size is indeed empirically determined based on accuracy results on the RULER benchmark as shown in Figure 7, we found this simple method is quite effective and generalizes well on other benchmarks as shown in Figure 10 in Appendix B.1. The insight we observe from this study is that tasks with longer sequence lengths usually perform better with larger kernel sizes, which can provide practical guidelines for better pooling kernel size selection in future work.
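The heuristic amounts to a simple length-to-kernel lookup; the thresholds and candidate sizes below are hypothetical placeholders, not the empirically chosen values from Figure 7:

```python
def adaptive_kernel_size(seq_len):
    """Hypothetical mapping illustrating the observed trend that
    longer sequences perform better with larger pooling kernels."""
    if seq_len < 8_192:
        return 7
    if seq_len < 32_768:
        return 15
    return 31
```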
**Test on Cheaper GPUs:**
Below are the end-to-end speedup numbers of RocketKV over Full KV cache running Llama3.1-8B on A100 with 256 token budget.
| Sequence length | 16K | 32K | 64K | 96K |
|-----------------|------|------|------|------|
| Speedup | 1.3x | 1.5x | 2.7x | 3.6x |
Compared to efficiency data on H100 shown in Figure 5(a), we can see that the maximum speedup on A100 is 20% higher (3.6x versus 3x). This is because A100 has a lower memory bandwidth to compute ratio compared to H100. As a result, LLM inference execution is more memory-bound on A100 and can benefit more from memory traffic savings of KV cache offered by RocketKV. We believe the speedup of RocketKV will be even higher on cheaper GPUs such as RTX 4090/5090 since they are not equipped with High Bandwidth Memory (HBM). Unfortunately, these GPUs also have much smaller memory capacity which prevents us from conducting long-context experiments on them. Notice that the memory savings are the same between H100 and A100 so we didn’t show them here.
**Lack of Direct Latency Comparison:**
Since different sparse attention methods are usually implemented under different frameworks with different levels of code optimization, it is difficult to provide an apples-to-apples comparison between them. In this work, we use the token budget to estimate memory traffic, including attention score approximation. For example, with RocketKV at a token budget of 256, half is used for attention approximation (Steps 1 and 2 in Section 3.4) and half for sparse attention (Step 3). Thus, the token budget itself can directly reflect attention latency in highly optimized implementations, since attention operations are mostly memory-bound.
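Under this accounting, an idealized attention-only speedup estimate reduces to a ratio (our own sketch; it ignores non-attention compute and fixed overheads, so end-to-end speedups are much smaller):

```python
def attention_speedup_estimate(seq_len, token_budget):
    """Idealized per-step attention speedup in the memory-bound regime:
    full attention reads seq_len KV tokens per head, while a budgeted
    method reads roughly token_budget tokens in total
    (approximation plus sparse attention)."""
    return seq_len / token_budget
```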
**Is Needle Among Evicted Tokens:**
RocketKV evicts KV cache tokens independently across attention heads and layers, making it unclear if the needle's tokens are consistently retained. Additionally, later layers can spread the needle's information to other positions through attention, reducing the effectiveness of using needle positions to track information loss. | Summary: This paper combines the advantages of permanent KV token eviction and dynamic KV token selection. It uses a two-staged kv cache compression method to give strong results and shows that it reduces GPU memory usage.
Claims And Evidence: I mostly agree with it. However, given that "permanent KV token eviction" could lose important information in the evicted tokens, I do not quite understand why combining two kinds of KV cache compression could be lossless.
In this sense, I think two things can be done.
1. add more benchmarks to prove the losslessness.
2. use other model architectures, like DeepSeek MLA models? This may be too expensive, but I am curious about the performance of RocketKV on reasoning models, given that they have extremely long reasoning token sequences.
Methods And Evaluation Criteria: As above, I think we should add more model types and widely used benchmarks.
Theoretical Claims: No proof.
Experimental Designs Or Analyses: See above coments, thanks.
Supplementary Material: I did not see the important code.
Relation To Broader Scientific Literature: Lossless KV cache compression is what a lot of people need. Have the authors tried to implement it in widely used inference engines, such as SGLang or vLLM?
Essential References Not Discussed: No.
Other Strengths And Weaknesses: See above; I wonder about the real inference cost, such as the time overhead introduced by RocketKV, and whether that overhead can be resolved. If it works quickly and losslessly, it would be perfect.
Other Comments Or Suggestions: Nice work! Hope to see further improvements.
Questions For Authors: No
Ethical Review Concerns: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for finding our work interesting and providing valuable suggestions.
**Add More Benchmarks to Prove the Losslessness:**
We would like to clarify that RocketKV is not a lossless approach, as evident from the provided accuracy results. None of the other methods we compared against is 100% lossless, including the Exact-TopK method, because they all replace dense attention operations with sparse attention. The primary goal is to achieve comparable (though not necessarily identical) accuracy to the Full KV cache with much smaller KV token budgets. We have evaluated RocketKV across three models and three datasets under various token budgets to demonstrate the superiority of our method over existing work in achieving minimal accuracy loss at an extremely high KV cache compression ratio (up to ~500x in our evaluation). Compared to other existing works, we believe we have provided a sufficiently comprehensive evaluation. Moreover, we have added additional evaluation on the recent SCBench in multi-turn scenarios and demonstrated that RocketKV outperforms other methods by a significant margin. Please refer to our response to reviewer utBU for more details.
**Other Architectural Models Like Deepseek MLA or the Reasoning Models:**
Thank you for your suggestion. We are also interested to see how RocketKV and other KV cache compression methods perform on top of DeepSeek MLA models. Note that none of the other works on KV cache compression has done this before. Since RocketKV is fully compatible with GQA, we believe it can be applied to MLA with simple modifications (MLA can be considered a variant of GQA where all attention heads in a layer share the same KV cache tensors). Considering RocketKV outperforms current methods over various models and datasets, we expect to see similar trends when evaluating various methods on MLA. Unfortunately, due to time and resource constraints during the rebuttal period, we could not add either the DeepSeek V3 or R1 model to our evaluation and have decided to leave it for future work.
**The Real Inference Cost:**
We have provided end-to-end inference results of RocketKV on an NVIDIA H100 GPU showing significant latency speed-up and memory saving against Full KV cache that already includes the time overhead of RocketKV (Section 4.3). | Summary: This paper introduces RocketKV, a two-stage KV cache compression approach. The first stage applies permanent KV eviction through adaptive pooling and GQA-compatible SnapKV methodology, while the second stage efficiently retrieves necessary KV components dynamically based on queries via a hybrid attention mechanism. This approach effectively retrieves essential key-value pairs during each decoding step, maintaining high accuracy. The proposed method demonstrates superior accuracy compared to other KV cache compression techniques (SnapKV, Quest, SparQ, etc.) while also achieving memory savings and end-to-end acceleration.
## Update After Rebuttal
After reading other reviewers' opinions and the authors' rebuttal, I have gained a better understanding of RocketKV's contribution and methods. However, I still have remaining concerns regarding the weaknesses I initially pointed out. First, regarding presentation, while the authors' additional explanations resolved many of my questions, considering the state of the initial submission, I believe this paper still requires significant improvements in structure and writing.
Second, regarding contribution, as reviewer eFmj mentioned, I am not fully convinced about the novelty of RocketKV compared to SnapKV and QUEST. This concern is related to the aforementioned presentation issues. For example, looking back at the paper, it is difficult to consider the heuristic search for adaptive pooling size (a differentiating point from SnapKV++) as a fundamental contribution. Additionally, it is challenging to understand why GQA compatibility improves performance in Figure 6 (it is difficult to find even minimal insight on this). Furthermore, rather than briefly mentioning the concept of QUEST in section 3.1, the paper should provide more detailed explanations and insights about the foundational methodology. Such comprehensive understanding would help readers better appreciate the distinguishing points of RocketKV compared to QUEST.
Overall, while I now better understand the core of RocketKV—ensuring accuracy and achieving better efficiency through two-stage KV cache control, as the authors claim—I find it difficult to consider this paper suitable for ICML publication in its current state, even taking into account the additional rebuttal results. This is mainly due to insufficient presentation and structure. Considering all of my points, I will update my score from 1 to 2.
Claims And Evidence: The paper lacks sufficient explanation of the proposed methodology, making it difficult to thoroughly examine the evidence supporting its claims. For instance, the Stage 2 Hybrid Attention appears to be written with the assumption that readers are already familiar with Quest and SparQ, as detailed explanations are notably absent. It's challenging to understand from the main text alone why a sign function is applied to the sum of q, or how approximate attention scores are calculated by retrieving from K_max and K_min based on these values.
Regarding the proposed SnapKV++, the GQA-compatible aggregation method has already been widely used in recent approaches (e.g., AdaKV - https://github.com/FFY0/AdaKV). Similarly, the adaptive pooling method relies on rule-based, heuristic pooling sizes determined by sequence length without clear justification. Despite potentially contributing to higher accuracy, these two contributions are difficult to recognize as novel and central to the paper's contribution.
Methods And Evaluation Criteria: The paper presents performance comparisons across various long context benchmarks at different compression ratios.
Theoretical Claims: There appear to be no theoretical claims in the paper. The adaptive pooling size is presented heuristically based on empirical experimental results, offering both size candidates and determination methods.
Experimental Designs Or Analyses: The experimental design appears sound, though the paper lacks detailed analysis.
Supplementary Material: No
Relation To Broader Scientific Literature: The proposed KV cache compression methodology seems closely related to H2O and SnapKV, while the dynamic query-based KV retrieval approach appears closely connected to Quest.
Essential References Not Discussed: No issue
Other Strengths And Weaknesses: A strength of the paper is that the proposed method demonstrates high performance across various compression ratios. However, there are significant weaknesses in presentation that cannot be overlooked. Without detailed background on closely related works like Quest and SparQ, readers would struggle to properly understand Stages 1 and 2. Furthermore, there appears to be no analysis in the main text explaining why this method is necessary.
While the paper makes valid contributions regarding performance improvements, considering this is a submission to a top-tier conference, insights and detailed explanations supporting the proposed methodology are essential for readers. Despite the performance results, the current state of the paper makes it difficult to recommend for acceptance. Below are aspects that should be addressed in more detail:
- How exactly is Exact-TopK measured? Does it involve computing Full KV during prefill and retaining only the highest attention scores? Then what would be the difference between SnapKV and Exact-TopK?
- Does Figure 2's illustration of the second stage indicate that different KV tokens are retrieved at each decoding step?
- The process of calculating approximate attention scores using element-wise Max, Min, and the sign result of sum of q is difficult to understand from this paper alone.
- Is the Page always divided into 4 in the dimension? Is there a specific rationale for this approach?
Other Comments Or Suggestions: Experimental results (especially ablation studies) seem to occupy excessive space. This could be adjusted using tables or other methods to allow for more comprehensive background explanations. If my overall perspective is somewhat misguided or differs from other reviewers' opinions, I am open to reconsidering my assessment.
Questions For Authors: See weakness
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your comprehensive review and providing constructive feedback.
**Lack of Methodology Explanation:**
Due to space constraints, we decided to prioritize RocketKV’s performance results, resulting in a briefer method explanation. In the final version, we will move certain ablation studies to the appendix to provide a more comprehensive, self-contained description of our approach, as you suggested.
**Why RocketKV Is Necessary:**
Figure 1 illustrates that existing KV cache compression methods struggle to match the accuracy of Exact-TopK on Qasper under low token budgets.
For further motivation, we analyzed a random attention head (layer 31 head 0 from Llama 3.1-8B) and calculated the maximum number of unique KV tokens used by Exact-TopK attention across all decoding steps in the Qasper benchmark. As shown in the table below, to keep all important KV tokens (used by at least one TopK attention across all decode steps), a permanent KV cache eviction method needs to keep 2197 out of 6110 tokens (where 6110 is the total seq. length) when K=256. Moreover, a dynamic token selection algorithm needs to accurately select a small set of TopK tokens out of a large set (e.g. 256 out of 6110 when K=256), which is a challenging task. An ideal solution is to keep only important KV tokens and then conduct dynamic token selection among them. Thus, we propose a two-stage approach that first retains only important tokens and then performs dynamic selection on this smaller set. This fusion evicts unimportant tokens and makes the dynamic selection more accurate, motivating our RocketKV design.
| TopK value | 256 | 512 | 1024 | 2048 | 4096 |
|------------------------------|------|------|------|------|-------|
| Max num of unique TopK tokens | 2197 | 3223 | 5229 | 8272 | 11993 |
| Total num of tokens | 6110 | 6110 | 17789| 17789| 17789 |
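The two-stage design motivated above can be sketched as a toy selection routine (an illustration only, not the actual RocketKV implementation; `scores_prefill` and `scores_step` are hypothetical stand-ins for SnapKV++ importance scores and per-step approximate attention scores):

```python
import numpy as np

def two_stage_select(scores_prefill, scores_step, mid_budget, k):
    """Stage 1: permanently keep `mid_budget` tokens ranked by prefill
    importance. Stage 2: at each decoding step, dynamically pick the
    top-k tokens among the stage-1 survivors only."""
    kept = np.argsort(-scores_prefill)[:mid_budget]   # permanent eviction
    topk = kept[np.argsort(-scores_step[kept])[:k]]   # per-step selection
    return kept, topk
```

Dynamic selection in stage 2 thus ranks `k` tokens out of `mid_budget` candidates rather than out of the full sequence, which is the accuracy benefit the analysis above points to.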
**Explanation of Calculating Approximate Attention Scores:**
RocketKV’s hybrid attention focuses on accurately pinpointing the KV tokens with the TopK attention scores. We split the K tensor into pages of consecutive tokens. Within each page, we record the element-wise min *(Kmin)* and max *(Kmax)* to estimate an upper bound on the attention scores for a query *q*. Specifically, we compute *max(q x Kmin, q x Kmax)* for each page to approximate the highest possible attention score within that page, then select the pages with higher scores. To further reduce the approximation overhead, we only compute on the positions along the head dimension where the magnitude of *q* is large and ignore the other positions, fetching from either *Kmin* or *Kmax* at a given position based on the corresponding sign of *q*, as shown in Figure 3. Hence, we approximate attention scores via reductions along both the sequence and head dimensions. To be fully compatible with GQA, we select based on the sum of *q* or *|q|* along the group dimension as needed, guaranteeing that all attention heads within a group make the same selection at each step.
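A minimal numpy sketch of the page-level bound described above (illustrative only; `approx_page_scores`, the page layout, and the top-`r` dimension choice are our simplifications, and GQA grouping is omitted):

```python
import numpy as np

def approx_page_scores(q, K, page_size, r):
    """Upper-bound the score q . k for every key k in each page.
    Per page we keep element-wise min (Kmin) and max (Kmax); at each
    retained dimension we fetch Kmax where q >= 0 and Kmin where
    q < 0, which bounds q_i * k_i from above. Only the r dimensions
    with largest |q| are used (head-dimension reduction)."""
    n, d = K.shape
    dims = np.argsort(-np.abs(q))[:r]        # keep top-r |q| positions
    scores = []
    for start in range(0, n, page_size):
        page = K[start:start + page_size]
        kmin, kmax = page.min(axis=0), page.max(axis=0)
        # sign-based fetch: positive q pairs with kmax, negative with kmin
        bound = np.where(q[dims] >= 0, kmax[dims], kmin[dims])
        scores.append(float(q[dims] @ bound))
    return np.array(scores)
```

With `r = d` the returned value is a true per-page upper bound on the exact scores; smaller `r` trades tightness for cost, which is the head-dimension reduction above.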
**Measurement of Exact-TopK:**
As explained in Appendix A-3, Exact-TopK is an oracle-based method that assumes prior knowledge of token importance and dynamically chooses TopK KV tokens for each attention head and decoding step. By contrast, SnapKV permanently prunes tokens of the input prompt via one-time TopK filtering.
**Are Different KV Tokens Retrieved at Each Decoding Step?**
Yes. Our second-stage hybrid attention dynamically selects KV tokens at each decoding step, aiming to pick those most relevant to the current query vector.
**Is Page Always Divided Into 4?**
No. Page size depends on the overall compression ratio. For a token budget of 512 and a total length of 128K, the compression ratio is 256x. We then split this ratio evenly between the first and second stages (16x each), and again between sequence and head dimensions (4x each) in hybrid attention as mentioned in Section 3.5. Consequently, the page size is four tokens in that scenario, but it can vary for other compression ratios.
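The budget arithmetic above can be written as a small helper (`split_compression` is a hypothetical name for illustration, not part of the released code):

```python
import math

def split_compression(total_len, token_budget):
    """Even split of the overall compression ratio: first across the
    two stages, then across sequence/head dimensions within stage-2
    hybrid attention. The per-dimension sequence factor equals the
    page size in this scenario."""
    overall = total_len / token_budget       # e.g. 128K / 512 = 256x
    per_stage = math.sqrt(overall)           # stage 1 and stage 2 each
    per_dim = math.sqrt(per_stage)           # sequence and head dims each
    return overall, per_stage, per_dim
```

For a 128K context and a 512-token budget this yields 256x overall, 16x per stage, and 4x per dimension, i.e. a page size of four tokens.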
**Novelty of SnapKV++:**
Thank you for pointing out that a similar GQA-compatible enhancement for SnapKV has already been proposed in AdaKV. This feature was added to GitHub in Nov.'24 and the arXiv paper in Jan.'25. We proposed GQA enhancement for SnapKV independently and concurrently. We are happy to give AdaKV credit for this concurrent contribution in the final version. However, per reviewer instructions outlined in ICML website, authors should not be held responsible for papers that were made public within 4 months of the submission deadline. We are discussing with AC to see whether we should give up GQA enhancement as a claimed contribution. Importantly, this does not diminish our core novelty: a two-stage KV cache compression framework uniquely combining permanent KV eviction with dynamic token selection, which is broadly generalizable to various compression methods at each stage.
Regarding adaptive pooling size, please see our response to reviewer eFmJ. | Summary: This paper presents RocketKV, a method that leverages observation made upon existing permanent and dynamic token eviction. Specifically, RocketKV aims to conduct a permanent eviction with a large budget first and refine it to target a budget with fine-grained dynamic evictions.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: The needle dataset seems to be basic if it strictly follows the GKamradt setup (with the background filler being the repetition of texts). This needle setup is known to be weak per findings like [1] and [2], and it is recommended that the authors opt for a more comprehensive needle setup. A common practice is to adopt PGraham Essay as background and a passkey-like needle, as done in [2]. This is possibly not much of a concern due to the adaptation of RULER, which is a much more standardized needle task, but it should still be considered.
Further, while LongBench and RULER are popular datasets for long context evaluation, these alone might be a bit outdated by today's standard. As the authors are familiar with SnapKV and Razor/DuoAttention, one major limitation of SnapKV is that it is query-position sensitive, as showcased in literature like SCBench. I am interested in seeing if the proposed method can perform well on such multi-round datasets.
Last, per A.2, the LongBench input is truncated even for long context-capable models like Llama 3.1. It is recommended to have non-truncated input for such models or adopt a longer dataset like $\infty$Bench.
[1] InfLLM: Training-Free Long-Context Extrapolation for LLMs with an Efficient Context Memory
[2] KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches
[3] SCBench: A KV Cache-Centric Analysis of Long-Context Methods
Theoretical Claims: -
Experimental Designs Or Analyses: Yes. All three of them.
Supplementary Material: Mainly for setup information.
Relation To Broader Scientific Literature: -
Essential References Not Discussed: -
Other Strengths And Weaknesses: See other sections for weaknesses. Generally I am very fond of this work as it is:
- Nicely written
- Makes a clear and correct taxonomy of existing works, and pays proper tribute when the improvement is incremental in nature (e.g., SnapKV++).
- Goes for a simple but effective approach when such a design works.
- Solid evaluation on executed ones — I might have some reservations regarding its dataset comprehensiveness, but for the conducted experiments, the model and setting coverage seem nicely done.
I am open to improve the rating upon a satisfactory rebuttal.
Other Comments Or Suggestions: Nothing major but it might be worth noting the proper quotation marks for LaTeX are `` and ''. It seems like sometimes only the right quotation marks are used.
---
**Post Rebuttal Update**
I thank the authors for the added results and clarifications. Without memory efficiency gains, RocketKV-MT remains a rather pilot study akin to methods like DeepSeek NSA (though one may argue NSA actually generates *more* KV cache), and while I agree there are channels for optimization in a disaggregated service scenario, significant consideration would incur under that setting. I hope to see a faithful discussion of RocketKV-MT in this regard in the updated manuscript. Further, on the presentation side, the current writing of SnapKV++ can also use a bit more background for unfamiliar readers. I understand it is a small module of the presented method, but Quest/SparQ/(and even AdaKV)'s contribution should also be explicitly stated.
Back to the evaluation note: I encourage authors to construct more complete tests on newer, more challenging benchmarks like SCBench, LongBench v2, and HELMET with more compression settings and baseline features. KV cache studies have been long limited under simple needle tasks and LongBench, which is no longer a proper standard for modern KV cache evaluation. While that might reveal gaps between full precision and the compressed model, making many authors reluctant to feature such evaluations, I firmly believe such gaps would guide future development and should be highlighted more than perfect results. **SnapKV should also be fully featured, by aligning with SCBench's report setting it should be doable.**
Furthermore, I wonder why DuoAttention underperforms. It relies on the mechanistic properties of attention heads, and it seems unintuitive for it to perform badly under a multi-round scenario if the setting is right. Some investigation in this regard would be appreciated.
Last, regarding the LongBench truncation, apologies for misreading A.2; the current setting sounds proper. The needle test is also proper if it is following [2] (and as a kind reminder, this paper is wrongly cited).
I now improved the rating to 4 as promised.
Questions For Authors: -
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s insightful feedback and recognition of our work in several aspects.
**Needle Dataset:**
Our needle dataset already follows reference [2] mentioned by the reviewer which adopts PGraham Essay as background and a passkey-like needle as explained in Appendix A.2.
**SCBench Results:**
This is a great suggestion. We have conducted additional experiments with SCBench under multi-turn mode and below are the preliminary results. Due to time and resource constraints, we only present results with 4K token budget on Llama3.1-8B-Instruct, but do plan to show more comprehensive SCBench results in the final version of the paper. Notice that some results for SnapKV are missing because it only compresses the KV cache of the input prompt but not the generated tokens, therefore it cannot meet the 4K token budget requirement for tasks with more than 4K generated tokens.
Our results below demonstrate that RocketKV still greatly outperforms other baseline methods in multi-turn scenarios, especially for string retrieval tasks. However, there is still a noticeable gap between RocketKV and Exact-TopK, showing room for further improvement. As you already pointed out, we believe this is due to the limitation of SnapKV, which relies on the query at the end of the input prompt to filter out unimportant KV tokens. In multi-turn scenarios, the KV tokens evicted as unimportant in earlier turns might be essential for queries in later turns, which can cause a significant accuracy drop in those turns. To address this challenge, we propose a variant of RocketKV called RocketKV-MT, where SnapKV++ does not permanently evict KV tokens but keeps them all for later turns. However, the decode phase is still restricted to dynamic selection among the KV tokens filtered by SnapKV++. For instance, if SnapKV++ identifies 4K important tokens out of 16K input tokens in the prefill phase, all 16K KV tokens are kept in memory but the hybrid attention method only selects among the 4K important tokens in the decode phase. In the next turn, all 16K input tokens are added as prefixes in the prefill phase, and SnapKV++ performs another round of filtering based on the unfiltered history. By doing this, RocketKV-MT still achieves the same performance benefit as RocketKV but does not provide memory storage savings. Our results below demonstrate that RocketKV-MT performs much better than RocketKV on SCBench and achieves almost the same accuracy as Exact-TopK. We would also like to point out that RocketKV-MT would be a great fit for disaggregated serving, where different GPUs are used for prefill and decode (see NVIDIA’s Dynamo [4] and Moonshot AI’s Mooncake [5]), as it can keep the full KV cache on the prefill node and send only the filtered KV cache to the decode node in each turn.
This would lead not only to memory storage savings on the decode node but also to significant communication traffic savings between the prefill and decode nodes.
| Method | Retr.String | Retr.Semantic | Global | Multi-task | AVG. |
|------------------|-------------|----------------|--------|-------------|--------|
| Full-KV | 49.3 | 40.9 | 36.3 | 64.7 | 50.0 |
| Exact-TopK | 43.4 | 40.0 | 36.4 | 63.8 | 47.9 |
| DuoAttention | 0.1 | 25.2 | 34.0 | 12.7 | 20.8 |
| SnapKV | 2.9 | N/A | 35.4 | N/A | N/A |
| Quest | 8.6 | 26.0 | 28.0 | 23.2 | 23.7 |
| SparQ | 3.0 | 27.9 | 29.1 | 28.7 | 24.1 |
| RocketKV | 36.9 | 26.2 | 35.9 | 31.8 | 35.2 |
| RocketKV-MT | **47.8** | **37.3** | **37.0** | **61.2** | **47.8** |
[4] NVIDIA Dynamo, https://github.com/ai-dynamo/dynamo
[5] Mooncake: A KVCache-centric Disaggregated Architecture for LLM Serving
**LongBench Input Truncation:**
We follow the original setting from LongBench to truncate the input if the sequence length is longer than the maximum sequence length enabled by the model. However, the maximum sequence length across all tasks in LongBench is within 30K tokens (as shown in Table 3 of Appendix D in the SnapKV paper), while all three models we evaluated have a maximum sequence length of at least 32K tokens. Therefore, no truncation occurs in practice during our evaluation.
**Proper Quotation Marks:**
Thank you for pointing this out, we will correct those quotation marks in the final version of the paper. | null | null | null | null | null | null |
Chaos Meets Attention: Transformers for Large-Scale Dynamical Prediction | Accept (poster) | Summary: The paper addresses the challenging task of accurately forecasting high-dimensional chaotic systems using a transformer-based approach. By leveraging ergodicity and modifying attention mechanisms, the proposed framework effectively handles high-dimensional chaotic dynamics while preserving long-term statistical properties. The model introduces “A3M” attention blocks to capture extreme values and uses a novel loss function inspired by the Von Neumann ergodic theorem to maintain long-term consistency by enforcing unitarity of the dynamics forward map. Experiments on turbulent flows demonstrate superior performance compared to existing methods both in terms of short-term predictions as well as measures of long-term statistics.
Claims And Evidence: Most of the claims are backed by experimental evidence, however, the manuscript lacks some clear information on consistency of the results as well as how fair the comparison is in terms of model expressivity (e.g. by reporting parameter count).
Methods And Evaluation Criteria: The proposed method is only benchmarked on PDE-type data, but lacks evaluation on simpler baselines, such as small chaotic dynamical systems based on ODEs (e.g. Lorenz63 or Rössler system). In these cases, established measures for long-term statistics for *very long* autoregressive rollouts exist, e.g. by comparing Lyapunov spectra. It would be interesting how the model fares in these simpler settings.
Theoretical Claims: I did not check the correctness of any proofs or theoretical claims.
Experimental Designs Or Analyses: The authors perform experiments for section 4 for three different random seeds and report the mean in e.g. Table 1 and 2. However, to get a measure of consistency of the results, the manuscript should also report some form of error bars via either standard deviation or standard error of the mean. The same goes for Autocorrelation plots in Fig. 2.
Supplementary Material: I skimmed through the Appendix but did not review the material in detail.
Relation To Broader Scientific Literature: The paper presents a novel transformer-based approach specifically designed to improve prediction of high-dimensional and complex chaotic systems with a focus on capturing long-term statistics. The work extends existing approaches by enhancing the traditional transformer architecture with axial mean-max-min (A3M) attention blocks and a unitary operator constraint framework. These innovations enable the model to effectively capture both local extreme values and statistical invariants, thereby improving prediction accuracy over both short-term and long-term horizons. The method is novel in that it introduces a new attention block tailored to chaotic systems prediction as well as combining existing approaches in the operator learning field to enable new state-of-the-art forecasting capabilities.
Essential References Not Discussed: The paper misses literature for a specific class of methods to mitigate chaos-induced problems when modeling chaotic DS. Specifically, “Generalized Teacher Forcing” as introduced in Hess et al. (ICML, 2023) does not need to explicitly match distributions of invariant measures to successfully model chaotic dynamics and can be applied to various sequence models, including operator based methods.
Other Strengths And Weaknesses: **Strengths**: The paper clearly motivates each and every compartment of the architecture, combines and advances existing methods in the field and validates everything through comparison methods and ablation studies.
Other Comments Or Suggestions: - “These approaches have been applied to classic 1-D examples, such as the Lorenz 63, Lorenz 96, and Kuramoto-Sivashinsky equations.” (p. 1): This may cause confusion as e.g. the Lorenz63 system is a 3D system (three dynamical variables). The manuscript should clearly define what is meant with 1D in this context.
**Typos and similar**:
- “to identifies” (p. 4, l. 177)
- “ergodic measure-preserving transformation (MPT)” (p. 3. ll 161-162): MPT has already been previously defined.
- “The number of sample k” (p. 6, l. 289)
- Kolmogorov is written as “Kolmogrov” on several occasions in the text (e.g. p. 6 l. 321, heading of section 4.1)
Questions For Authors: 1. How exactly is $\mathcal{G}$ implemented? Which form does it take in practice? How expensive is the inversion/complex conjugation of the unitary operator $\mathcal{G}$?
2. how is the Autocorrelation (Fig. 2) computed? Is it pooled over the entire grid?
3. Fig. 3: How long are “1000 steps”? What is the Lyapunov time of the dataset at hand (if such a quantity is easily accessible for this type of PDE data)?
4. Can the architecture be benchmarked on simpler, low-d dynamical systems in forms of ODEs? This should facilitate longer rollouts ($>> 1000$ steps) to check whether the model also captures long-term statistics in these simple cases.
5. Did the authors account for equal expressivity in terms of number of parameters for comparison in e.g. Tables 1 and 2? Can those numbers be reported somewhere?
*I am happy to increase my score if the authors address my concerns*.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We genuinely appreciate the reviewer’s time and effort in enhancing our paper.
[Link for results and references](https://anonymous.4open.science/r/ChaosMeetsAttention/README.md)
[Backup link](https://filebin.net/37p4dxup0t320143)
* Concern 1 & Question 5:
`The fair comparison in terms of model expressivity (e.g. by reporting parameter count).`
We appreciate the reviewer raising concerns similar to those of reviewers vgH9 and ARTA. We report the number of parameters in the provided link. Furthermore, we also compare the computational cost in terms of grid scalability and Performer attention, which may also be of interest to the reviewer.
* Concern 2 & Question 4:
`Lacks evaluation on small chaotic dynamical systems based on ODEs.`
Embedding ergodicity into learning methods for chaotic ODEs has been a subject of research and validation in recent literature [1-3], which may be of interest to the reviewer. While that literature does not specifically focus on *large-scale* chaotic dynamical systems, such systems are one of the primary objectives of our work.
`Establish for very long autoregressive rollouts, e.g. by comparing Lyapunov spectra, on simpler cases.`
As per the reviewer’s request, we implemented MNO [1] and our method on the Lorenz 63 system (L63). We collected 200k rollout steps, along with 300k true velocity data, on the invariant measure of the L63 system. We calculated the Lyapunov exponent for both our predictions and the actual data (1 timestep = 0.05s). Based on this foundation, we present the performance in terms of `Lyapunov exponent` and `Lyapunov time` in the table, and provide additional visualisation results demonstrating the generalizability of our approach on L63 via the provided link.
* Concern 3:
`Clarification of Lorenz 63 dimension`
‘1D examples’ refers to systems with a one-dimensional spatial domain or systems that are low-dimensional in terms of their structure. We have revised the text to better distinguish between spatial and phase space dimensionality.
`typos`
Thanks for bringing the typos to our attention. We have corrected them in the revised manuscript.
`Reference`
We appreciate the reviewer’s suggestion of the reference, which includes efficient reconstruction approaches through dimension reduction. We have incorporated it into the revised manuscript.
`results consistency & error bar for table 1&2`
The updated tables 1 and 2 are included in the provided link.
Long-term statistics are robust across multiple runs, with negligible standard deviations ($\leq$ 1e-4); hence, we report only the mean values for the (percentage) advantages. For short-term prediction accuracy, we report the standard deviation in the updated Tables 1 and 2. The results are consistent with the experiment section of the manuscript. We will add the standard deviations to the revised manuscript as suggested.
* Question 1:
`How is $G$ implemented? How expensive is the conjugation of the unitary operator?`
In practice, $G$ is a square matrix in real space; the conjugacy relation is therefore equivalent to the transpose $G^T$. The relation between $G$ and $G^T$ preserves unitarity through our proposed unitary loss in Equation (9) during training. With the Frobenius-norm regularization term, the computational complexity is $O(d^3 + 2d^2)$, which is challenging for large-scale chaotic systems. We therefore use the Hutchinson trace estimation technique, which accelerates the computation via random projections. This reduces the complexity to $O(kd^2)$, where $k$ is the number of random samples drawn from the $d$-dimensional unit sphere. For further details, please refer to Section 3.3 and Algorithm 1 of the manuscript.
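A minimal numpy sketch of such an estimator (using Rademacher probes instead of unit-sphere samples for simplicity; `unitary_penalty` is an illustrative name, not the paper's Algorithm 1):

```python
import numpy as np

def unitary_penalty(G, k, rng):
    """Hutchinson estimate of ||G^T G - I||_F^2 = tr(M^T M) with
    M = G^T G - I, using E[v^T M^T M v] = tr(M^T M) for E[v v^T] = I.
    M v is formed as G^T (G v) - v, so each probe costs O(d^2) and
    the full estimate O(k d^2), never materializing G^T G."""
    d = G.shape[0]
    est = 0.0
    for _ in range(k):
        v = rng.choice([-1.0, 1.0], size=d)   # Rademacher probe
        Mv = G.T @ (G @ v) - v
        est += float(Mv @ Mv)
    return est / k
```

The penalty vanishes exactly when $G$ is unitary and is an unbiased estimate of the Frobenius-norm regularizer otherwise.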
* Question 2:
`Autocorrelation pooled?`
Yes, the autocorrelation is calculated using the equation provided in Appendix E.2, specifically lines 1101-1117. This equation is closely related to the mixing rate. The autocorrelation is then averaged over all spatial points. We appreciate the reviewer’s suggestion to delve deeper into the subject matter.
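The pooling step can be illustrated as follows (a stand-in sketch only; the exact definition used in the paper is the equation in Appendix E.2):

```python
import numpy as np

def pooled_autocorr(u, max_lag):
    """Temporal autocorrelation computed at each spatial point, then
    averaged over the grid. u has shape (T, *spatial_dims)."""
    T = u.shape[0]
    flat = u.reshape(T, -1)
    flat = flat - flat.mean(axis=0)           # remove temporal mean
    var = (flat ** 2).mean(axis=0)
    acf = np.empty(max_lag + 1)
    for lag in range(max_lag + 1):
        cov = (flat[:T - lag] * flat[lag:]).mean(axis=0)
        acf[lag] = np.mean(cov / var)         # pool over spatial points
    return acf
```

By construction the pooled curve starts at 1 at lag 0, and its decay rate relates to the mixing rate discussed above.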
* Question 3:
`Fig. 3: How long are “1000 steps”?`
A 1000-step rollout corresponds to 20 seconds in Figure 3. We have provided the dataset details in Appendix G.
`What is the Lyapunov time of the dataset at hand (if such a quantity is easily accessible for this type of PDE data)?`
The Lyapunov time is not directly accessible for the datasets we have. Given the high dimensionality of the data and the time constraints, we will provide a rigorous report of this statistic in the revised manuscript. We would like to know if the reviewer would prefer us to present the Lyapunov times as [mean, maximum] values.
Please let us know if we have addressed the concerns and increased your confidence in our work.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the clarifications and additional experiments.
**Lyap. Exponent** The results on the Lorenz63 dataset are confusing. In which units are the values reported in the respective table? Can the authors report the maximum Lyapunov exponent in the system's units, i.e. removing discretization? For standard settings ($\rho = 28, \sigma = 10, \beta = 8/3$) we have $\lambda_{max} \approx 0.906$. If one considers the time discretization, one would have $0.05\,\mathrm{s} \cdot 0.906\,\mathrm{s}^{-1} \approx 0.045$ per step. Neither of these values is anywhere close to the reported values in the table.
**Conclusion** Since most of my concerns were addressed and due to the general positive assessment by the other reviewers, I will increase my score. However, I expect the authors to sort the issue with $\lambda_{max}$ out, as in this form the results are not comparable to existing literature.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer BWmX for acknowledging the additional experiments and clarifications, recognizing our improvement, increasing the score and providing insightful feedback on calculating Lyapunov Exponent. These all make the discussion greatly valuable and polish our work to the highest standard.
We value the insightful comments regarding the computation of the Maximal Lyapunov Exponent (MLE). In response, we carefully re-examined our implementation and identified an error in the normalization during the computation process [Strogatz, 1994]. After correcting this issue, the MLE computed on the ground truth dataset now aligns with values reported in the literature [Viswanath, 1998]. We would like to report the updated results in the table and the visualizations [via the link](https://anonymous.4open.science/r/ChaosMeetsAttention/README.md).
Following re-evaluation, our model achieves an LE of 0.825, which is 19.1% closer to the true value than that of MNO. This improvement is further supported by the evolution of the separation between nearby trajectories in the Lorenz-63 system, initialized at $s_0=[1,1,1]$ with a perturbation vector $\delta_0$ of norm $\|\delta_0\|=5\times10^{-5}$, as shown in the revised figure.
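For context, the corrected ground-truth value can be cross-checked with a textbook Benettin-style estimator (an illustrative sketch with hypothetical helper names, not the implementation used in the paper):

```python
import numpy as np

def lorenz_step(s, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One RK4 step of the Lorenz-63 system."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def max_lyapunov(s0, dt=0.01, transient=1000, steps=30000, d0=1e-8):
    """Benettin-style MLE: evolve a perturbed twin trajectory,
    renormalize the separation back to d0 after every step, and
    average the log growth rate over time."""
    a = np.asarray(s0, dtype=float)
    for _ in range(transient):                # settle onto the attractor
        a = lorenz_step(a, dt)
    b = a + np.array([d0, 0.0, 0.0])
    log_sum = 0.0
    for _ in range(steps):
        a, b = lorenz_step(a, dt), lorenz_step(b, dt)
        d = np.linalg.norm(b - a)
        log_sum += np.log(d / d0)
        b = a + (b - a) * (d0 / d)            # rescale separation to d0
    return log_sum / (steps * dt)
```

The per-step renormalization is the normalization step whose omission caused the original discrepancy; with it, the estimate lands near the literature value of $\approx 0.906$.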
References
Strogatz, Steven H. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. 1994.
Viswanath, Divakar. Lyapunov exponents from random Fibonacci sequences to the Lorenz equations. Cornell University, 1998. | Summary: The paper introduces a transformer-based model for predicting long-term trajectories in high-dimensional chaotic systems. It modifies standard attention mechanisms using Axial Mean-Max-Min (A3M) attention with random Fourier features to capture spatial correlations. It uses a unitary-constrained loss (based on the Von Neumann ergodic theorem) to preserve long-term statistical properties. The approach is scaled efficiently with tensor factorization and shows good performance on chaotic systems.
Claims And Evidence: The paper’s claims are supported by experiments on Kolmogorov flow and turbulent channel flow. Performance improvements, scalability via factorized attention, and ergodicity preservation through a unitary loss are supported by quantitative results. The lack of generalization to non-ergodic systems is acknowledged.
Methods And Evaluation Criteria: Yes. The methods (modified transformer architecture with A3M attention and a unitary loss to preserve ergodicity) make sense for modeling chaotic systems. The evaluation criteria and datasets (Kolmogorov flow and turbulent channel flow) are appropriate for capturing short-term accuracy and long-term statistical behavior.
Theoretical Claims: I read the derivation using the Von Neumann ergodic theorem, which the paper appears to apply correctly to argue that the operator G should be unitary in L2. However, the paper only enforces this unitarity approximately via a soft loss term (using Hutchinson’s trace), rather than proving exact unitarity.
Experimental Designs Or Analyses: The experimental design seems sound. The paper evaluates its model on two datasets (Kolmogorov flow and turbulent channel flow), using several metrics and three random seeds (though it would be nice to see the standard deviation in addition to the mean). Ablation studies validate the usefulness of key components. More benchmarks would strengthen the paper.
Supplementary Material: I briefly reviewed the supplementary material extending the technical discussions on the theoretical derivations.
Relation To Broader Scientific Literature: The paper extends transformer models to chaotic systems by combining neural operator learning (e.g., Fourier Neural Operators) with ergodic theory (Von Neumann’s theorem). It builds on prior work by addressing scalability in operator-based methods and introduces efficient tensor factorization with the A3M attention mechanism.
Essential References Not Discussed: One work that might be worth mentioning for context is the Performer (Choromanski et al., 2020), which appears to use random features to achieve linear-time attention.
Other Strengths And Weaknesses: Strengths:
- Combines Transformer architectures with ergodic theory, using A3M attention and a unitary loss to address long-term chaotic dynamics.
- Demonstrates improvements on benchmarks (Kolmogorov flow, turbulent channel flow) and introduces a new turbulent channel flow dataset.
- Includes evaluations and ablation studies that support the key contributions.
Weaknesses:
- The method does not generalize to non-ergodic settings.
- Unitarity is only enforced approximately with soft regularization rather than exactly.
- Performance seems sensitive to the kernel bandwidth.
Other Comments Or Suggestions: The paper is generally well-written, but a few suggestions could improve it further:
- Consider including more details on hyperparameter tuning and reporting variability (e.g., standard deviations).
- Expanding the discussion on applicability to non-ergodic systems would be valuable.
- A clearer explanation of scalability limits and potential computational trade-offs might help.
Questions For Authors: 1. Could you elaborate on how to tune the kernel bandwidth in the RFF positional encoding and how its optimal value varies across different datasets?
2. Can you provide more insight into the impact of enforcing unitarity only approximately via soft regularization?
3. Could you clarify the computational trade-offs of your tensor factorization approach compared to other linear-time attention methods (e.g., Performer)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank reviewer ARTA for carefully reviewing our manuscript, providing valuable feedback, and recognizing the strengths of our work. We'd like to address your concerns from the initial review and answer your questions as follows:
[Essential Link for results and references to this Rebuttal](https://anonymous.4open.science/r/ChaosMeetsAttention/README.md)
[Backup link](https://filebin.net/37p4dxup0t320143)
* Concern 1 & Question 2 (also mentioned by reviewer vgH9):
`However, the paper only enforces this unitarity approximately via a soft loss term (using Hutchinson’s trace), rather than proving exact unitarity.`
Yes, we use Hutchinson’s trace estimation to approximate unitarity. This approach is motivated by two key reasons:
1. Directly applying the unitary loss involves eigen decomposition and computing the Frobenius norm, which is computationally infeasible for high-dimensional latent states.
2. Hutchinson’s trace estimation is flexible to implement as a randomized projection and is optimized adaptively during training.
Empirically, we demonstrate that the unitarity of the learned operators is well approximated. This is visualized in the **eigenvalues plot** of the learned operators from our ablation models (Base model and +Unitary) in the provided link.
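For concreteness, a matrix-free Hutchinson estimate of the unitarity deviation can be sketched as follows (an illustrative NumPy sketch, not our actual implementation; `unitarity_penalty`, `matvec`, and `rmatvec` are assumed names). It estimates $\|G^\top G - I\|_F^2 = \mathrm{tr}\big((G^\top G - I)^2\big)$ using only products with $G$ and $G^\top$, avoiding any eigendecomposition:

```python
import numpy as np

def unitarity_penalty(matvec, rmatvec, d, k=200, seed=0):
    """Hutchinson estimate of tr(A^2) with A = G^T G - I, using only
    products with G (matvec) and G^T (rmatvec). Since A is symmetric,
    tr(A^2) = ||G^T G - I||_F^2, which is 0 iff G is an isometry."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for _ in range(k):
        z = rng.choice([-1.0, 1.0], size=d)   # Rademacher probe
        az = rmatvec(matvec(z)) - z           # A z, computed matrix-free
        est += az @ az                        # z^T A^2 z
    return est / k

# Sanity check: a 2-D rotation is unitary, so the penalty is ~0.
theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(unitarity_penalty(Q.dot, Q.T.dot, d=2))  # ~0.0
```

The cost per probe is two operator applications, so the estimate scales with the number of probes rather than the latent dimension's cube.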
* Concern 2 & Question 3:
`References and computational trade-offs to other linear-time attention methods like Performer.`
We thank the reviewer for the suggested references and have properly cited them in the final version. We also conducted experiments with Performer attention in our setup (Unitary+Performer) on the TCF benchmark during the rebuttal period. Visualization results are available in the link. The following table compares the **runtime**, **memory consumption**, and **FLOPs per forward pass** of the baselines, Performer attention, and our method. The results are evaluated on a cuda device with `_CudaDeviceProperties(name='NVIDIA A100-SXM4-40GB', major=8, minor=0, multi_processor_count=108, L2_cache_size=40MB)`. Although attention mechanisms are overall more computationally heavy than neural operator methods, tensor factorization and axial attention make the computation tractable for large-scale chaos states. With A3M attention and the unitary constraint, our model effectively achieves better performance on the benchmarks. Additionally, we observe empirically that Performer attention trades longer runtime for lower memory usage.
| Models | Parameter count | Runtime | Memory usage | FLOPs per forward pass |
| ---------------------- | --------------- | ---------- | ------------ | ---------------------- |
| MNO | 6467425 | 31ms | 377MB | 3.45GB |
| UNO | 17438305 | 12ms | 769MB | 6.88GB |
| MWT | 5089153 | 50ms | 313MB | 9.52GB |
| FactFormer | 6083009 | 53ms | 6889MB | 239GB |
| Ours | 7325665 | 58ms | 7684MB | 268GB |
| **Unitary+Performer** | **7717931** | **111ms** | **2938MB** | **271GB** |
* Concern 3 & Question 1:
`Sensitive to the kernel bandwidth? How to tune it on different datasets.`
We would like to thank the reviewer for raising the concern regarding kernel bandwidth. Through ablation experiments in Section 4.3, we found that selecting the bandwidth within a reasonable interval consistently yields stable results, indicating that tuning the bandwidth is straightforward and efficient, only requiring a coarse cross-validation.
The strategy for tuning the kernel bandwidth depends on the mesh size and viscosity (i.e., Reynolds number) of the system, which help identify a suitable bandwidth interval. If the data characteristics are unfamiliar, we recommend starting with a wider interval and then performing a grid search within that range. We appreciate the reviewer’s suggestion and will include the tuning strategy in the final version of the paper.
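As a concrete illustration of what the bandwidth controls, here is a minimal NumPy sketch (illustrative only; `rff` is an assumed name, and the paper's exact parameterization of $\sigma$ may differ) of random Fourier features approximating a Gaussian kernel $k(x,y)=\exp(-\|x-y\|^2/(2\sigma^2))$: inner products of the features recover the kernel, and the bandwidth sets how fast spatial correlation decays with distance:

```python
import numpy as np

def rff(X, n_features=4000, sigma=1.0, seed=0):
    # Random Fourier features: rff(x) @ rff(y) ≈ exp(-||x-y||^2 / (2 sigma^2)).
    # The same seed must be used for all inputs so they share W and b.
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

x = np.array([[0.0, 0.0]])
y = np.array([[1.0, 0.0]])   # two points at distance 1 on a 2-D grid
for sigma in (0.5, 2.0, 32.0):
    approx = float(rff(x, sigma=sigma) @ rff(y, sigma=sigma).T)
    exact = float(np.exp(-1.0 / (2.0 * sigma ** 2)))
    print(f"sigma={sigma}: RFF={approx:.3f}, exact kernel={exact:.3f}")
```

Under this particular convention, a very large $\sigma$ makes the kernel nearly constant across the grid (no distance discrimination); with an inverse parameterization such as $\exp(-\sigma^2\|x-y\|^2/2)$, a large $\sigma$ instead gives rapid decay.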
* Concern 4:
`Expanding the discussion on applicability to non-ergodic systems would be valuable.`
We thank the reviewer for this suggestion. Extending our methods to non-ergodic systems is a significant open problem. While our current work focuses on ergodic chaotic systems, we agree that exploring non-ergodic systems is an interesting direction for future research.
Finally, we thank reviewer ARTA once again for the time and effort invested in reviewing our paper. We believe the changes made in response to the reviewer’s comments have significantly improved our manuscript. We look forward to your further feedback. | Summary: The paper investigates the problem of predicting the evolution of ergodic chaotic systems with transformers. To that end, the paper introduces a set of modifications to the traditional transformer architecture that overcome crucial bottlenecks in terms of scalability. Moreover, the paper introduces a novel regularization term that is grounded in physical perspective and helps the model preserve long-term statistics. The paper evaluates its proposed method on two turbulent fluid dynamics systems (Kolmogorov Flow and Channel Flow) and compares the performance to a multitude of baselines, showing superior performance across all metrics. Finally, the paper releases its data as a novel chaotic system benchmark.
Claims And Evidence: The paper is exceptionally well written. All claims are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The paper evaluates its proposed method on two turbulent fluid dynamics, Kolmogorov Flow and Channel Flow, both commonly used to learn states from chaotic systems. Moreover, the paper considers a variety of competitive baselines, both operator- and attention-based. As a result, the experimental evaluation is convincing and shows strong results.
Theoretical Claims: I am unfamiliar with the mathematical machinery required to evaluate the correctness of the theoretical claims.
Experimental Designs Or Analyses: The paper presents the first evaluation of transformer-based methods on (ergodic) chaotic systems and justifies this choice with strong empirical results. The paper proposes a set of modifications to the standard transformer architecture: (i) axial mean-max-min (A3M) attention, (ii) 2D positional encodings based on Random Fourier Features (RFF), and (iii) a loss regularization term based on a unitary constraint. Most of these design choices are either empirically validated (for i) or theoretically motivated (for iii).
However, it would be nice to also show an ablation for the distance-based Gaussian kernel, i.e., A3M without RFF (Table 3 only ablates the full A3M). Moreover, reporting the running times in Table 3 would further strengthen the paper's claims.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: The paper adequately discussed the related work.
Essential References Not Discussed: While not directly related in terms of the type of chaotic system evaluated, the paper would benefit from citing Gilping (2021, 2023), who evaluates transformers on more than 130 chaotic dynamical systems, and Zhang et al. (2025), who evaluate transformers on elementary cellular automata.
**References**
Chaos as an interpretable benchmark for forecasting and data-driven modelling.
William Gilpin.
NeurIPS 2021.
Model scale versus domain knowledge in statistical forecasting of chaotic systems.
William Gilpin.
arXiv:2303.08011
Intelligence at the Edge of Chaos.
Shiyang Zhang, Aakash Patel, Syed Rizvi, Nianchen Liu, Sizhuang He, Amin Karbasi, Emanuele Zappala, David van Dijk.
ICLR 2025
Other Strengths And Weaknesses: The benchmark dataset will perhaps be the paper's biggest contribution in the long run and should, therefore, be featured more prominently (i.e., in the main paper).
Other Comments Or Suggestions: List of typos and mistakes:
* L177 (left): "an attention mechanism to identif**ies**" -- replace with "identify"
* L178 (left): "based on **random Fourier using** Random Fourier features" — replace duplicates
* L445 (left) "none **of** which"
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate reviewer FGZz for taking the time to thoroughly review our manuscript, offering valuable feedback, and acknowledging the strengths of our work, particularly in the areas of efficient transformers for large-scale chaos systems, physics-inspired regularisation terms, and the contribution of our dataset to the community. Below, we respond to each of the reviewer's comments and suggestions.
- **Ablation on distance-based Gaussian kernel**
We thank the reviewer for this valuable suggestion. It should be clarified that we actually included an ablation study of Gaussian kernels in Section 4.3 (Table 4) and Appendix D (Figure 4). In particular, we observed that extremely large sigma values like 32 lead to a rapid decay of spatial correlations, which is effectively equivalent to the case without a kernel. We will clarify this in the revised version.
Regarding computational efficiency, we evaluated the **runtime** per batch for the ablation models presented in Table 3. The results are evaluated on a cuda device with `_CudaDeviceProperties(name='NVIDIA GeForce RTX 4090', major=8, minor=9, total_memory=24GB, multi_processor_count=128, L2_cache_size=72MB)` with a batch size of 3 over 100 runs.
| Method | Runtime (per batch, size=3) |
| ---------------- | --------------------------------|
| Base | 0.0261 ± 0.00016 |
| + A3M Att. | 0.0244 ± 0.0008 |
| + Unitary Op. | 0.0995 ± 0.0077 |
- **Suggested References**
We appreciate the reviewer’s recommendations. After careful consideration, we’ve integrated the relevant references into the introduction section of our revised manuscript.
- **Benchmark Dataset**
We appreciate the reviewer’s acknowledgement of the significance of our proposed benchmark dataset. We will highlight this contribution even more in the final version of our manuscript.
- **Typos**
We appreciate the reviewer's meticulous review and for bringing the typos to our attention. All the identified typos have been meticulously corrected in the revised manuscript.
We sincerely thank the reviewer once again for their insightful comments and suggestions. We believe that the revisions made in response have significantly improved the quality of our manuscript. We eagerly anticipate any further feedback. In the meantime, we have prepared new visualisations and the code related to trying performer attention via the link [here](https://anonymous.4open.science/r/ChaosMeetsAttention/README.md), which may be of interest to the reviewer.
---
Rebuttal Comment 1.1:
Comment: **We observed that extremely large sigma values like 32 lead to a rapid decay of spatial correlations, which is effectively equivalent to the case without a kernel. We will clarify this in the revised version.**
Indeed, I hadn't considered that. Thanks for clarifying!
I still think that it would be interesting to see the case without the distance-based Gaussian kernel, but I guess Table 4 is already a good start.
**Running times**
Thanks, that's great to see!
---
Reply to Comment 1.1.1:
Comment: Many thanks for recognizing our reply and your further guidance on the ablation study to us. We dropped the distance-based Gaussian kernel and re-trained the model using the same settings as in Table 4 of the ablation study. The updated results are summarized below:
| τ = 5 | τ = 25 | ME-APE | ME-LRw | Δλ |
| ------ | ------- | ------ | ------ | ---- |
| 0.93 | 1.31 | 0.19 | 0.21 | 0.11|
We are diligently working to refine the manuscript and address your comments thoroughly. Please let us know if you have any further feedback or updates. | Summary: The paper introduces a transformer-based framework for predicting large-scale chaotic systems. The authors tackle a key challenge in dynamical system forecasting -- the amplification of prediction errors due to positive Lyapunov exponents -- by using ergodicity. Their approach includes:
- A modified attention mechanism (A3M Attention) that captures statistical moments and extreme values in chaotic systems.
- A loss function inspired by the von Neumann mean ergodic theorem, aimed at enhancing long-term statistical stability.
- A large-scale dataset featuring 140k snapshots of turbulent channel flow to benchmark prediction accuracy.
Claims And Evidence: The paper presents compelling claims regarding the effectiveness of its transformer model in maintaining long-term statistical properties while enhancing short-term prediction accuracy. While the claims are well-supported by empirical results and the von Neumann ergodic theorem is leveraged to justify the approach, it remains unclear whether the proposed loss function strictly enforces ergodicity in practice; thus, additional theoretical validation, such as a formal proof or convergence analysis, would further strengthen the argument.
Methods And Evaluation Criteria: While the evaluation criteria used in the paper (relative $L^2$ norm for short-term accuracy, ME-APE, ME-LRw, and $\Delta \lambda$ for long-term statistical consistency) are reasonable, additional and more rigorous evaluation metrics could provide a more comprehensive assessment of the model's performance. Could the authors consider integrating some of these additional metrics to provide a more holistic evaluation of their method? For instance:
- Kullback-Leibler (KL) Divergence or Wasserstein Distance, or/and the Hellinger distance: Please see Sect. 4.2 of this paper for more details https://proceedings.mlr.press/v202/hess23a/hess23a.pdf
- Autocorrelation Decay Rate: Measuring how the model preserves the autocorrelation structure over time can indicate its ability to maintain memory of system dynamics.
- Computational Cost-to-Accuracy Ratio: A more explicit comparison of runtime, memory usage, and FLOPs per training iteration would provide a clearer picture of the model’s scalability.
And maybe this one:
- (Max) Lyapunov Exponent Deviation: Evaluating how well the predicted trajectories preserve the Lyapunov exponents (or max LE) of the true system would provide a more direct measure of long-term dynamical consistency.
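To make this last metric concrete: for a low-dimensional map, the maximal Lyapunov exponent is the time average of the local stretching rate $\log|f'(x_t)|$. A minimal sketch on the logistic map (illustrative only, and far simpler than the paper's high-dimensional PDE setting; `max_lyapunov_logistic` is an assumed name):

```python
import numpy as np

def max_lyapunov_logistic(r=4.0, x0=0.3, n=200_000, burn=1_000):
    # Largest Lyapunov exponent of x_{t+1} = r x (1 - x), computed as the
    # time average of log|f'(x)| with f'(x) = r (1 - 2x).
    x = x0
    for _ in range(burn):                     # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        x = min(max(x, 1e-12), 1.0 - 1e-12)   # guard against numerical
                                              # collapse onto the fixed point
        acc += np.log(max(abs(r * (1.0 - 2.0 * x)), 1e-12))
        x = r * x * (1.0 - x)
    return acc / n

print(max_lyapunov_logistic())  # close to the analytic value ln 2 for r = 4
```

A model's predicted trajectory could then be scored by the deviation of its estimated exponent from the true system's.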
Theoretical Claims: The paper provides a well-motivated theoretical foundation using the von Neumann mean ergodic theorem, but key aspects require further clarification. The claim that the operator $G$ is unitary and preserves norms in $L^2$ space is critical, and verifying whether these properties hold under the proposed modifications is essential for the method's validity. The regularization term in the loss function aims to enforce unitarity, but assessing whether it sufficiently guarantees ergodicity in long-term predictions is crucial. Also, while the authors argue that their approach captures invariant statistical behaviors, a more rigorous justification -- particularly for high-dimensional chaotic systems -- would strengthen confidence in their claims regarding stability and accuracy.
Experimental Designs Or Analyses: The experimental design is robust, utilizing diverse datasets that represent various chaotic systems and comparing against multiple state-of-the-art baselines. However, a more detailed computational complexity analysis comparing memory consumption and training time across baselines would strengthen the overall evaluation of the model's efficiency.
Supplementary Material: I reviewed most parts of sections A to E.
Relation To Broader Scientific Literature: The paper is well-situated within the literature on chaotic system prediction, referencing key works on operator-based learning, transformers for PDEs, and ergodicity-based learning approaches. It builds upon advancements in Fourier Neural Operators (FNOs) and Markov Neural Operators (MNOs) while introducing a novel transformer-based framework specifically designed to enhance prediction accuracy while preserving ergodicity, a crucial property highlighted by Eckmann & Ruelle (1985) and Young (2002). Moreover, the introduction of a new chaotic system benchmark dataset enriches the field by providing a standardized resource for evaluating machine learning methods in chaotic dynamics. These contributions collectively advance the intersection of machine learning and dynamical systems theory, offering both methodological improvements and practical benchmarks for future research.
Essential References Not Discussed: I also recommend considering the following papers on learning chaotic dynamics with RNNs and combining deep learning techniques with Koopman operator theory:
1. https://proceedings.mlr.press/v202/hess23a/hess23a.pdf
2. https://arxiv.org/pdf/2410.23467
Also, the paper does not sufficiently discuss PINNs and other hybrid methods that integrate physics-based constraints into ML models. Works such as:
1. Karniadakis et al., "Physics-Informed Neural Networks for PDEs" (Journal of Computational Physics, 2021)
2. Raissi et al., "Physics-Informed Machine Learning for Dynamical Systems" (PNAS, 2019).
Other Strengths And Weaknesses: **Strengths**
Originality: The introduction of A3M Attention and ergodicity-based loss function presents a novel approach to chaotic system modeling.
Significance: The paper tackles an important problem in ML-driven dynamical system forecasting with well-motivated solutions.
**Weaknesses**
- The paper mainly focuses on comparing transformer-based and operator-based models but does not discuss how physics-informed neural networks (PINNs) or hybrid methods (e.g., physics-constrained transformers) might perform on chaotic systems.
- Choosing the kernel bandwidth ($\sigma$) in RFF positional encoding may significantly affect performance. It would be helpful to have a more in-depth discussion on strategies for tuning hyperparameters.
- Even though the paper mentions improved efficiency, a deeper look into complexity - like memory usage and training time comparisons - would provide more clarity.
Other Comments Or Suggestions: It would be helpful to:
- Elaborate on the strategies used for hyperparameter tuning of random Fourier features.
- Provide a comprehensive comparison of runtime and memory usage with baseline methods.
I am happy to increase my score if the authors can address my main concerns.
Questions For Authors: 1. How does the von Neumann ergodic loss stack up against traditional loss functions when it comes to stability and long-term statistical preservation? Also, how does the ergodicity-based loss function compare to traditional distribution-matching methods in terms of computational efficiency?
2. How does the A3M attention mechanism perform compared to other modifications of attention (e.g., Linformer, Performer, or Reformer) in chaotic dynamics?
3. Regarding the computational efficiency of your approach, how does training time scale with increasing grid resolution?
4. Given that $\sigma$ in RFF positional encoding may significantly affect performance, how sensitive is the model to variations in $\sigma$ across different datasets? How sensitive is the model to hyperparameter tuning in general, particularly for the kernel bandwidth $\sigma$ in the positional encoding?
5. What if we swapped out the A3M pooling approach for a different aggregation method? Would using just max-pooling or mean-pooling give us competitive results?
6. Can you provide scalability benchmarks comparing the runtime and memory usage of your method against baselines?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's constructive feedback and recognition of our work. We address the concerns and questions as follows:
[Link to visualizations and references](https://anonymous.4open.science/r/ChaosMeetsAttention/README.md)
[Backup](https://filebin.net/37p4dxup0t320143)
* Concern 1 & part of Question 1:
`Whether the proposed loss function strictly enforces ergodicity in practice, any proof, and preserves in long-term predictions.`
We thank the reviewer for this insightful question. The ergodicity guarantee through the unitarity constraint is supported by Von Neumann’s theorem (cited in Lines 156-160). Imposing hard unitarity constraints requires complex parameterisation and intensive matrix decompositions during training, which are computationally expensive. Instead, we use Hutchinson trace estimation to efficiently approximate the constraint through random projections, enabling its application to large-scale chaotic dynamics. Empirical results show that the learned operators remain approximately unitary, as visualized in the eigenvalues plot (see link). To verify long-term stability, we use the metrics suggested by the reviewer (next section).
* Concern 2:
`KL Divergence and other metrics`
We thank the reviewer for the reference. We evaluated the **KL divergence (KLD)** of model predictions on both datasets (KF256 and TCF). Updated Tables 1 and 2 are presented in *Section B: Updates to Reviewer BWmX* (see link). Our method achieves the lowest KLD, demonstrating its effectiveness in preserving long-term statistics and capturing the underlying invariant distribution.
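A histogram-based KLD between long-run state distributions can be computed along these lines (a minimal sketch with shared bins and additive smoothing; not our exact evaluation code, and `histogram_kl` is an assumed name):

```python
import numpy as np

def histogram_kl(samples_p, samples_q, bins=50, eps=1e-10):
    # KL(P || Q) between two empirical 1-D distributions, e.g. flattened
    # long-rollout model states vs. ground-truth states, over shared bins.
    lo = min(samples_p.min(), samples_q.min())
    hi = max(samples_p.max(), samples_q.max())
    p, _ = np.histogram(samples_p, bins=bins, range=(lo, hi))
    q, _ = np.histogram(samples_q, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps   # smooth to avoid log(0) in empty bins
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
truth = rng.normal(size=10_000)
good = histogram_kl(rng.normal(size=10_000), truth)            # matching stats
bad = histogram_kl(rng.normal(2.0, 1.0, size=10_000), truth)   # shifted mean
print(good, bad)  # the shifted distribution yields a much larger KLD
```

A low KLD indicates that the predicted rollout reproduces the invariant state distribution of the reference data.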
`Autocorrelation Decay Rate & (Max) Lyapunov Exponent Deviation`
We appreciate the suggestion of these metrics. The autocorrelation decay rate, which relates to the mixing rate, is presented in Appendix E, with visualizations in Figures 3 and 6. Estimating the Lyapunov exponent for high-dimensional PDE systems is challenging. Instead, we provide the estimated Lyapunov exponent deviation for a lower-dimensional case using our method (see link).
* Concern 3 & Questions 2, 3, 6:
`Computational cost (runtime, memory, FLOPs) and scaling with grid size`
We collected relevant results and organized them into two informative tables (see link). Analytical insights on attention mechanisms are shared in *Response to Reviewer ARTA: Concern 2*. Our method demonstrates a linear increase in computational cost with grid size, indicating the strong scalability and computational efficiency of A3M attention.
* Concern 4:
`Relevance to PINN and physics hybrid transformer`
We have added a discussion on this aspect in the manuscript. PINN methods require explicit knowledge of differential equations, which are assumed to be unknown in our setting. In contrast, our approach only assumes ergodicity, without relying on such prior knowledge, and focuses on embedding physical properties within transformer architectures. Our method provides a distinct perspective from PINN in incorporating physical knowledge.
* Concern 5 & Question 4:
`Tuning strategy for kernel bandwidth`
Due to space limits, we kindly refer the reviewer to *Response to Reviewer ARTA: Concern 3*. The ablation study on kernel bandwidth (Table 4, Appendix C) shows that selecting a reasonable interval yields stable results.
* Question 1:
`Ergodic loss vs traditional methods`
We appreciate the reviewer’s insightful questions regarding the ergodic loss.
(1) The ergodic loss is designed to enforce unitarity in the forward operator, thereby maintaining energy throughout forecasting. This is crucial for stability and long-term statistical consistency in chaotic systems. Traditional loss functions like MSE focus on short-term accuracy, which can lead to model drift and instability in long-rollout scenarios.
(2) Compared to traditional distribution-matching methods [5,6], our ergodic loss demonstrates superior computational efficiency and scalability. Distribution-matching methods like MMD [1,3] involve kernel operations that scale poorly with sample size, typically requiring large batches for high-resolution data and leading to significant memory consumption. Moreover, MMD with kernels (e.g., rational quadratic, RBF) requires careful bandwidth tuning, as results are sensitive to bandwidth choices [2].
Our ergodic loss enforces unitary dynamics using matrix operations without such hyperparameter tuning. Using stochastic Hutchinson trace estimation, we reduce the computational complexity to $\mathcal{O}(kd^2)$, where $d$ is the latent dimension and $k$ is the number of random draws from the unit sphere [4]. Typically, $d \ll D$ (the system dimension), and the convergence rate scales as $\mathcal{O}(1/\sqrt{k})$. In practice, we found $k \approx 1000$ sufficient for $d = 256$, balancing accuracy and computational cost.
* Question 5:
`Alternative pooling methods`
We implemented the mean-pooling method for comparison with our full A3M method. Results are reported in the link. | null | null | null | null | null | null |
DSBRouter: End-to-end Global Routing via Diffusion Schr\"{o}dinger Bridge | Accept (poster) | Summary: DSBRouter is an end-to-end neural global routing solver based on the Diffusion Schrödinger Bridge (DSB) model, which learns the forward and backward mapping between initial pins and routing results. It achieves state-of-the-art performance in overflow reduction on ISPD98 and parts of ISPD07, with some cases achieving zero overflow without requiring post-processing.
Claims And Evidence: - This is the first work to introduce the DSB technique to the global routing problem to the best of my knowledge.
- This paper introduces instance-wise objective scores, enabling optimization while ensuring general feasibility.
- This paper claims it does not need post-processing. However, from Figure 2, there are refining steps in the inference process.
Methods And Evaluation Criteria: - The method is built on the Diffusion Schrödinger Bridge, which can incorporate constraints into the diffusion process.
- The evaluation criteria (Wirelength, overflow, running time) is a general evaluation metric for routing tasks.
Theoretical Claims: NA
Experimental Designs Or Analyses: - The evaluation is reasonable. It would be better if the authors could report the mean and standard deviation.
Supplementary Material: I reviewed the supplementary material.
Relation To Broader Scientific Literature: This work builds on prior global routing methods and diffusion-based models by introducing DSBRouter, which leverages the Diffusion Schrödinger Bridge to ensure connectivity without post-processing. Unlike traditional approaches, it achieves state-of-the-art overflow reduction and incorporates constraints directly into the routing process.
Essential References Not Discussed: [1] DGR: Differentiable Global Router, DAC 2024.
Other Strengths And Weaknesses: - This work needs to collect a large batch of routing results as training data, whereas traditional global routers do not need this process. According to Tables 2 and 4, the method is not very efficient.
Other Comments Or Suggestions: - In table 1, what does the label "△∗" mean?
- According to Fig. 5, the DSB-generated routes should also be modified by regulations, which means it is not a pure end-to-end method.
- Typo: Section 4.1, Metircs->Metrics.
- Figure 1, one left-bottom pin is not easy to read.
Questions For Authors: - In Table 3, why not report the mean and variance?
- Why is the size of figures in the training set fixed to 64 × 64, while the testing set sizes are different?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > **Weakness 1: This work should collect a large batch of routing results as training data, where traditional global routers do not need this process. According to Table 2 and 4, the method is not very efficient.**
Thanks for your valuable comment. Though traditional global routers indeed do not rely on large datasets for training, we argue that they still exhibit the following issues:
1) **Escalating Complexity and Longer Runtime**
As process nodes continue to shrink and chip sizes grow, traditional methods often demand higher computational resources and longer execution time. Furthermore, these methods rely on continuous updates and refinements from human experts.
2) **Heavy Dependence on Manually Crafted Heuristics**
Traditional routing methods heavily depend on heuristics that are manually specified or tuned, so their applicability is limited when the design environment changes.
3) **Prone to Local Optima and Complex Multi-Objective Trade-Offs**
When dealing with highly complex objectives—such as congestion, timing, power, wirelength, IR drop, and other factors—conventional methods struggle to balance all of these goals simultaneously.
By contrast, ML-based methods like DSBRouter yield routing solutions in a feasible runtime, matching or even surpassing traditional routers. They also adapt across diverse routing scenarios and need no expert updates to routing rules.
We admit that DSBRouter currently faces challenges from its relatively long running time. Several improvements are being considered to address the efficiency of DSBRouter:
1) adopt the techniques of the [Consistency Model](https://openreview.net/forum?id=FmqFfMTNnv) to reduce the sampling steps.
2) recompute the optimization objective only every few steps during the inference process, which reduces the computation of evaluation-based guidance.
Although we are still in the process of implementing the aforementioned two improvements, we estimate that after these enhancements, the runtime of DSBRouter could be reduced by 60%-70%. Once the experiments are finished, the results will be added in our later rebuttal and final version.
> **Suggestion 1: According to Fig. 5, the DSB-generated routes should also modified by regulations. which means it is not a pure end-to-end method.**
We believe the misunderstanding arises from unclear explanation of Figure 5. The first row shows randomly selected initial pins from our supervised dataset. The second row presents the corresponding results generated by DSBRouter, based on the first-row inputs. The third row shows the ground-truth routing results from the dataset, not DSBRouter's outputs.
We also emphasize that DSBRouter is an end-to-end global router. In ML-based methods, if the entire process from input to output is learned and optimized by a single model without losing information, it is considered end-to-end. DSBRouter uses a single model, incorporating evaluation-based guidance in the inference process with no intermediate outputs, making it an end-to-end global router.
> **Suggestion 2, 3 and 4: Typo: Section 4.1, Metircs->Metrics; Figure 1, one left-bottom pin is not easy to read; In table 1, what does the label "△∗" mean?**
$\triangle*$ means that PRNet is only scalable for one-shot generation, but not for post-processing. We have added the footnote in our revised paper. The modified sections will be presented in our final version.
> **Question 1: DGR: Differentiable Global Router, should be discussed.**
Thanks for your valuable comment. We noticed this work during our survey, but since their [GitHub repository](https://github.com/search?q=DGR%3A%20Differentiable%20Global%20Router&type=repositories) seems to be private and lacks code to reproduce the results, we did not use it as a baseline. If they release their implementation, we will conduct comparative experiments.
> **Question 2: In Table 3, Why not report the mean and variance?**
For the routing results of GeoSteiner, FLUTE, ES, and NeuralSteiner, we used the experimental results reported in [NeuralSteiner](https://proceedings.neurips.cc/paper_files/paper/2024/hash/e6617714485265b9380a5315bf3ba98f-Abstract-Conference.html). For DSBRouter, we repeated the experiments three times with no observed variance, due to the neural network's determinism and the lack of post-processing in DSBRouter. The variance in HubRouter's results is due to the inherent uncertainty in its RL-based post-processing.
> **Question 3: Why the size of figures in the training set is fixed to 64 × 64, where the testing set sizes are different.**
We believe the misunderstanding is due to the "Size" in Table 7. The "Size" in Table 7 refers to the original grid size of the benchmark, not the size of the images. We applied the same clipping operation to the benchmarks in the test set as we did with the training set (mentioned in Appendix A.2.1), ensuring that each initial pin image and its ground-truth routing image are 64x64. | Summary: This paper introduces DSBRouter, a novel global routing (GR) solver leveraging the Diffusion Schrödinger Bridge (DSB) model. The authors aim to address the challenge of ensuring routing connectivity in network prediction results, a persistent issue in learning-based GR methods. DSBRouter learns both forward and backward mappings between initial pins and routing results, and incorporates an evaluation-based sampling scheme to enhance routing predictions. The results demonstrate state-of-the-art performance in overflow reduction on public benchmarks.
Claims And Evidence: The primary claim that DSBRouter achieves state-of-the-art (SOTA) performance in overflow reduction is generally supported by the evidence provided.
Methods And Evaluation Criteria: The proposed DSBRouter method is well-motivated. The use of the Diffusion Schrödinger Bridge model is novel in the context of global routing. The evaluation criteria are comprehensive.
Theoretical Claims: No.
Experimental Designs Or Analyses: - Ablation Study: The ablation study is designed to assess the contribution of the EG and NN modules. The experimental setup is clear, and the results provide some insight into the importance of these components. However, as mentioned earlier, a more detailed analysis would be beneficial.
- Influence of Inferencing Steps: The experiments on the influence of inferencing steps are well-designed. They systematically vary the number of steps and analyze the impact on performance metrics.
- Comparison with Baselines: The comparison with baseline methods is generally sound. The authors include both classical and ML-based routers in their comparison, providing a comprehensive evaluation of DSBRouter's performance.
Supplementary Material: No.
Relation To Broader Scientific Literature: - Diffusion Models and Schrödinger Bridges: The paper builds upon the literature on diffusion models and Schrödinger Bridge models. It extends these techniques to the problem of global routing. The authors also highlight the connection between SGMs and DSB.
- Global Routing Methods: The paper thoroughly reviews traditional and learning-based approaches to global routing. It identifies the limitations of existing methods, such as the lack of connectivity guarantees and the reliance on post-processing. DSBRouter is presented as a solution that addresses these limitations by providing an end-to-end approach that ensures connectivity.
- Objective Guidance: The use of objective guidance in the inference phase is inspired by techniques from score-based generative models. The authors adapt these techniques to the DSB framework to improve the quality of the generated routes.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: No.
Other Comments Or Suggestions: No.
Questions For Authors: The paper acknowledges that DSBRouter has a longer generation time. What specific strategies are the authors considering for future work to address this limitation, and what is the potential for these strategies to significantly improve the runtime?
The training data is generated using NthuRoute. How might the choice of training data generation method affect the generalizability of DSBRouter to different routing scenarios or design styles?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude for thoroughly evaluating our paper and providing insightful and valuable feedback. We are genuinely committed to addressing your concerns and respond to your specific comments below.
> **Question 1: The paper acknowledges that DSBRouter has a longer generation time. What specific strategies are the authors considering for future work to address this limitation, and what is the potential for these strategies to significantly improve the runtime ?**
Thanks for your valuable comment. The DSB in our proposed DSBRouter, like the standard [DDPM](https://proceedings.neurips.cc/paper/2020/hash/4c5bcfec8584af0d967f1ab10179ca4b-Abstract.html) and [SGM](https://proceedings.neurips.cc/paper/2019/hash/3001ef257407d5a371a96dcd947c7d93-Abstract.html?ref=https://githubhelp.com), is based on the diffusion and denoising processes, and therefore, efficiency concerns are also present. To address the efficiency issues, we consider the following two approaches:
1) Adopting the techniques of the [Consistency Model](https://openreview.net/forum?id=FmqFfMTNnv) to reduce the number of sampling steps.
2) Recomputing the optimization objective every few steps during the inference process, instead of at each step, which reduces the computational cost of evaluation-based guidance.
While we are still implementing the two improvements, we estimate that they could reduce DSBRouter’s runtime by 60%-70%. Based on the 'Ablation Study' in Table 5, we believe runtime is closely tied to the number of inference steps. With a well-trained consistency model, we expect to compute the final route using only 1/5 of the original inference steps. Additionally, Strategy 2) reduces the cost of computing the optimization target, further cutting the runtime by ~60%. Final results will be included in our later rebuttal and final version.
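As a minimal sketch of Strategy 2) (function names here are hypothetical, and the actual DSBRouter sampler and guidance computation differ), the evaluation-based guidance can be invoked only every $k$ steps:

```python
import numpy as np

def sample_with_sparse_guidance(x, num_steps, denoise_step, guidance_grad, k=5, scale=0.1):
    """Denoising loop that applies the (expensive) evaluation-based
    guidance gradient only every k steps instead of at every step."""
    for t in range(num_steps):
        x = denoise_step(x, t)                # cheap network update
        if t % k == 0:                        # sparse guidance: ~1/k of the cost
            x = x - scale * guidance_grad(x)  # nudge toward a lower objective
    return x

# Toy check: over 20 steps with k=5, guidance is evaluated only 4 times.
calls = []
def toy_guidance(x):
    calls.append(1)  # count how often the guidance is evaluated
    return np.zeros_like(x)

out = sample_with_sparse_guidance(np.zeros(3), 20, lambda x, t: x, toy_guidance, k=5)
print(len(calls))  # -> 4
```

Since the per-step evaluation dominates the guidance cost, running it on only a $1/k$ fraction of the steps is where the estimated runtime reduction would come from.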
> **Question 2: The training data is generated using NthuRoute. How might the choice of training data generation method affect the generalizability of DSBRouter to different routing scenarios or design styles?**
Thanks for your insightful comment. Regarding your question, we have the following conclusions:
**Conclusion 1**: *The bias of the supervised training set towards specific metrics (OF and WL) does indeed affect the routing generalization performance of DSBRouter.*
In the paper, the optimization objective of DSBRouter focuses on optimizing the OF metric. Therefore, we use the [Nthurouter](https://ieeexplore.ieee.org/document/4681595/) results on the datasets of [ISPD07](https://www.ispd.cc/contests/07/contest.html) including bigblue4, newblue4, newblue5, newblue6, and newblue7 as the supervised training set, as Nthurouter achieves SOTA performance on OF for these datasets compared to other methods proposed in the same year. The dataset newblue3, which has the highest number of pins, was excluded because its routing results performed poorly in terms of OF metrics. On the other hand, [Hubrouter](https://proceedings.neurips.cc/paper_files/paper/2023/hash/f7f98663c516fceb582354ee2d9d274d-Abstract-Conference.html) focuses on optimizing the WL metric and uses routing results from [NCTU-GR](https://ieeexplore.ieee.org/abstract/document/5703167), which are more tailored to optimizing the WL metric, as its supervised training set. The experimental results show that Hubrouter achieves SOTA performance on WL but lags behind on OF. To verify Conclusion 1, we replicated Hubrouter's training set and trained DSBRouter-NCTU using this set. We report the routing results of DSBRouter trained with two different training sets (other settings remain the same) on the ibm01 and ibm05 dataset in Table 1:
Table 1. Comparison of DSBRouter-NTHU and DSBRouter-NCTU on two different routing scenarios.
| |ibm01-wirelength|ibm01-overflow|ibm05-wirelength|ibm05-overflow|
| :-: | :-: | :-: | :-: | :-: |
|DSBRouter-NTHU|61435|1430|420464|0|
|DSBRouter-NCTU|86776|10256|1312353|115|
To address the current issue of DSBRouter's insufficient generalization, we have recently attempted to improve the optimization objective proposed in Section 3.3 of the paper with a hyperparameter $\tau$, where $\mathbb{S}(\mathcal{P}(\bar{\mathbf{x}_{k}}))$ becomes $\tau \cdot E^o + (1-\tau) \cdot E^w$. The goal is to enable DSBRouter to optimize a balance between OF and WL during inference. Due to space constraints, we only report the performance of DSBRouter on the ISPD98 cases in Table 2 for $\tau = 0.3$ and $\tau = 0$. (Experiments show that different datasets exhibit varying sensitivities to the hyperparameter $\tau$.) We will report the full results in future work.
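As a sketch of this modified guidance target (the evaluators below are stand-ins; the real $E^o$ and $E^w$ score predicted route maps for overflow and wirelength):

```python
def combined_objective(route, overflow_eval, wirelength_eval, tau=0.3):
    """Weighted guidance target tau * E^o + (1 - tau) * E^w, letting the
    sampler trade off overflow against wirelength during inference."""
    return tau * overflow_eval(route) + (1 - tau) * wirelength_eval(route)

# Stand-in evaluators returning fixed scores, for illustration only.
E_o = lambda route: 10.0  # hypothetical overflow score
E_w = lambda route: 2.0   # hypothetical wirelength score

print(combined_objective(None, E_o, E_w, tau=1.0))  # -> 10.0 (pure overflow term)
print(combined_objective(None, E_o, E_w, tau=0.0))  # -> 2.0 (pure wirelength term)
```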
Table 2. Routing Results of DSBRouter on ISPD98 with modified objective.
| |ibm01-wl/of |ibm02-wl/of|ibm03-wl/of|ibm04-wl/of|
| :-: | :-: | :-: | :-: | :-: |
|$\tau=0.3$|65665/1479|213588/0 | 165930/0|176945/0|
|$\tau=0$|65632/1390|208405/0|165168/0|175869/0| | Summary: This paper introduces DSBRouter, an end-to-end neural global routing solver based on the Diffusion Schrödinger Bridge (DSB) model. Traditional learning-based approaches to global routing (GR) often require post-processing heuristics or reinforcement learning to enforce connectivity, leading to inefficiencies. In contrast, DSBRouter directly learns a bi-directional mapping between initial pins and final routing solutions, ensuring connectivity without the need for a second-stage correction. DSBRouter leverages a novel evaluation-based guidance mechanism to optimize routing outputs based on overflow minimization and connectivity constraints. Extensive experiments show that DSBRouter achieves SOTA.
Claims And Evidence: Claim: The proposed DSBRouter achieves SOTA performance.
Evidence: Table 2 & 3
Methods And Evaluation Criteria: Methodology is okay. Benchmarks are canonical ones (ISPD07/98).
Theoretical Claims: I briefly checked the proofs, but I could not ensure that each and every detail is 100% correct.
Experimental Designs Or Analyses: I think the experimental designs are not problematic.
Supplementary Material: I read the supplementary materials, which include further theoretical implications, details on the network, baselines, and pseudo codes. There are also additional experiment data.
Relation To Broader Scientific Literature: Unlike previous routing works, this paper is the first to introduce Diffusion Schrodinger Bridge to routing.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
Novelty. This is the first to introduce DSB to routing.
Weaknesses:
Efficiency is a big concern for the proposed method. (I would like to thank the authors' honesty for reporting this problem.)
Other Comments Or Suggestions: The formulae are way too crowded on pages 5 and 6. I think it might be better to consolidate them into simpler forms.
Questions For Authors: While I understand the constraint of page limits, the solution to faster inference in "limitations" is somehow vague. Could you go into details on how to improve the efficiency of the proposed method.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude for thoroughly evaluating our paper and providing valuable and constructive feedback. We are genuinely committed to addressing your concerns and respond to your specific comments below.
> **Weakness: Efficiency is a big concern for the proposed method. (I would like to thank the authors' honesty for reporting this problem.) Could you go into details on how to improve the efficiency of the proposed method.**
Thanks for your valuable comment. The DSB in our proposed DSBRouter, like the standard [DDPM](https://proceedings.neurips.cc/paper/2020/hash/4c5bcfec8584af0d967f1ab10179ca4b-Abstract.html) and [SGM](https://proceedings.neurips.cc/paper/2019/hash/3001ef257407d5a371a96dcd947c7d93-Abstract.html?ref=https://githubhelp.com), is based on the diffusion and denoising processes, and therefore, efficiency concerns are also present. To address the efficiency issues, we consider the following two approaches:
1) Adopt the techniques of the [Consistency Model](https://openreview.net/forum?id=FmqFfMTNnv) to reduce the sampling steps.
2) Recomputing the optimization objective every few steps during the inference process, instead of at each step, which reduces the computational cost of evaluation-based guidance.
We are working hard on implementing the above two strategies and running the experiments, which take days to finish. We will add the results in our later rebuttal and final version.
> **Suggestion: The formulae are way-too crowded in page 5 and 6. I think it might be better to consolidate them into simpler forms.**
Thanks for your insightful comment. We have revised the paper. The modified sections will be presented in our final version.
> **Question: While I understand the constraint of page limits, the solution to faster inference in "limitations" is somehow vague.**
Thanks for your comment. We have revised the statement in the 'limitations' section regarding the difficulty of achieving accelerated inference as follows: "Due to the non-monotonic noise injection strategy and the meaningful $p_r, p_s$ in the DSB-based sampling process, accelerated sampling techniques such as DDIM in DPMs cannot be directly applied."
The reason why acceleration techniques similar to [DDIM](https://arxiv.org/abs/2010.02502) are difficult to implement in DSB is the following. In the diffusion process of DDIM, the noise $\epsilon_t = \sqrt{1-\bar{a}_t}\bar{z}_t$ added at each time step is assumed to increase monotonically in intensity:
$$
\begin{equation*}
x_t=\sqrt{\bar{a}_t}x_0 + \sqrt{1-\bar{a}_t}\bar{z}_t \quad \bar{z}_t \sim \mathcal{N}(0,\mathbf{I}), \tag{1}
\end{equation*}
$$
where $\bar{a}_t$ is a parameter that decreases as time $t$ increases; after $T$ steps of noise addition, $x_T$ follows the Gaussian distribution $x_T \sim \mathcal{N}(\sqrt{\bar{a}_T}x_0,(1-\bar{a}_T)\mathbf{I})$. During the denoising (sampling) process of DDIM, the neural network predicts the noise $\epsilon_t$, and DDIM makes the following hypothesis:
$$
\begin{equation*}
p(x_{t-1}|x_t,x_0) = \mathcal{N}(x_{t-1};\, kx_0+mx_t,\sigma^2\mathbf{I}), \tag{2}
\end{equation*}
$$
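As a numerical aside (illustrative schedules only, not the exact DDIM or DSBRouter parameters), the monotone DDPM-style noise level can be contrasted with a symmetric, DSB-style schedule:

```python
import numpy as np

T = 100
# DDPM-style: the cumulative noise level sqrt(1 - abar_t) grows monotonically.
alpha_bar = np.cumprod(np.linspace(0.999, 0.95, T))
ddpm_noise = np.sqrt(1 - alpha_bar)

# DSB-style: a symmetric schedule that rises and then falls, so that both
# endpoints p_s and p_r remain meaningful data distributions.
t = np.arange(T)
gamma = 0.05 * np.minimum(t + 1, T - t)

assert np.all(np.diff(ddpm_noise) > 0)  # monotone: DDIM's assumption holds
assert not np.all(np.diff(gamma) > 0)   # non-monotone: the assumption fails
```

The monotonicity check is exactly the structural property that DDIM's derivation relies on and that a symmetric schedule lacks.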
However, in DSB, since both $p_s$ ($x_0$) and $p_r$ ($x_T$) represent meaningful data distributions, the noise intensity cannot change monotonically during either the forward or the backward process. Thus, a symmetric noise scheduling scheme $\gamma_t$, as shown in Appendix A.2.1, is used. Additionally, suppose the backbone of DSBRouter were also to predict the noise $\gamma_t$: predicting this non-monotonically changing target is intuitively more difficult, and since we employ IPF iteration to optimize DSB, sampling with this predicted noise would make the model's training harder to converge. Moreover, in the transition $p_s \mapsto p_r$ (or $p_r \mapsto p_s$) of DSB, there is no counterpart of the hypothesis in equation (2). Therefore, acceleration techniques like DDIM cannot be directly applied to the sampling process of DSB. Finally, the [DSB method](https://arxiv.org/abs/2403.14623) we follow did not use accelerated sampling techniques, indicating that the use of accelerated sampling in DSB still requires theoretical and practical verification. | null | null | null | null | null | null | null | null
Adapting to Linear Separable Subsets with Large-Margin in Differentially Private Learning | Accept (poster) | Summary: In this paper, the authors propose a $(\epsilon,\delta)$ differentially private algorithm for binary linear classification. The risk bound depends linearly on the arbitrary subset of data points $S_{out}$ , which if removed makes the data linearly separable with margin $\gamma$. The algorithm is adaptive as the knowledge of $\gamma$ or $S_{out}$ is not required by the algorithm.
### update after rebuttal
The authors have answered my questions and I would like to maintain my score.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: N/A.
Theoretical Claims: Yes. I checked supplementary material section A through F.
Experimental Designs Or Analyses: N/A
Supplementary Material: Yes. I checked supplementary material section A through F.
Relation To Broader Scientific Literature: It improves upon the results of Bassily et al. 2022 and Nguyen et al. 2020.
Essential References Not Discussed: To the best of my knowledge essential references are mentioned.
Other Strengths And Weaknesses: The paper is very well written and makes important contributions.
1. When $|S_{out}|$ is small, the proposed work improves the risk bound of previous work by a factor of $\sqrt{n}$.
2. Knowledge of $\gamma$ or $S_{out}$ is not required.
3. Analysis of utility bound of advanced private hyperparameter tuning algorithm.
Other Comments Or Suggestions: 1. On line 572 (proof of Lemma A.4), $(1-t)^t$ should be $(1-r)^t$.
2. On line 602, $beta$ should be $\beta$.
3. In Algorithm 5, step 6, $\tilde{w}_t$ should be $w\_{t+1}$.
Questions For Authors: Is the 3rd typo listed above indeed a typo? Otherwise, it is not clear how $w_t$ is being updated in Algorithm 5.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your detailed review and thoughtful question. We appreciate your recognition of our work. We have corrected all the typographical errors you identified.
Regarding your question, you are correct: $\tilde{w}_t$ in Algorithm 5 should indeed be $w_{t+1}$. We thank the reviewer once again for their careful reading and valuable feedback.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I will maintain my score.
---
Reply to Comment 1.1.1:
Comment: Thank you again for your careful review and feedback. | Summary: The paper addresses the problem of DPERM for binary linear classification. The authors propose an efficient algorithm that achieves an empirical zero-one risk bound of $\widetilde{O}\left(\frac{1}{\gamma^2 \varepsilon n}+\frac{\left|S_{\text {out }}\right|}{\gamma n}\right)$. The algorithm is highly adaptive, requiring no prior knowledge of the margin parameter $\gamma$ or the outlier subset $S_{o u t}$. The paper also derives a utility bound for advanced private hyperparameter tuning. The main contributions include an efficient algorithm that adapts to largemargin subsets, an inlier-outlier analysis, and improved results in the agnostic case when the number of outliers is small.
## update after rebuttal
I am generally satisfied with the rebuttal and have raised my score.
Claims And Evidence: The main claims are supported by proofs.
Methods And Evaluation Criteria: The proposed methodology is well-explained and aligns with intuitive expectations. However, I found the explanation of the experiments somewhat confusing. Please refer to my comments for further details.
Theoretical Claims: The proofs are reviewed but not rigorously verified. Nevertheless, the results are consistent with intuition and appear to be sound.
Experimental Designs Or Analyses: Experiment details are only briefly explained, and the experiments mainly serve as motivation for the study.
Supplementary Material: The proofs in the supplementary material are reviewed but not thoroughly checked.
Relation To Broader Scientific Literature: The paper builds on prior work in differentially private half-space learning with large margins (e.g., Nguyên et al., 2020, and Bassily et al., 2022), with significant theoretical improvements. It also connects to the broader literature on neural collapse theory, which suggests that the last layer of a deep neural network trained on a classification task converges to distinct points.
Essential References Not Discussed: To the best of my knowledge, all necessary references are adequately discussed in the paper.
Other Strengths And Weaknesses: The writing of the paper is clear and fluent.
Other Comments Or Suggestions: - The experiments in the introduction appear confusing to me. I understand that the authors are trying to validate the idea that "a larger margin is believed to be the reason why pre-trained features help private learners to work better." However, the logic connecting better features, large margins, and the quick increase in performance when removing outliers seems somewhat incoherent. Additional explanation would be helpful.
- Please ensure the correct usage of \cite, \citep, and \citet. Note that the formatting of citations may vary depending on the LaTeX template used.
- Line 75, where is equation (2)? The position where the hyper-link jumps to does not have (2).
- Line 402, is $5m$ a typo?
Questions For Authors: - I am still a bit confused by Definition 6.1. As defined, $\mathcal{S_{in}} ( \gamma) $ is the collection of subsets that have $\gamma$ seperation. Can the author further explain why this exhaustive collection is necessary? Could something like $S_{out} = {\arg \min_{|S'|}} \gamma(S')\geq \gamma$ be defined instead? Also, as I understand, the choice for $S_{out}$ in the final bound can be arbitrary as long as it pertains $\gamma$ seperation. So I think some infimum of $|S_{out}|$ can be taken right? This is a minor point, but it seems to me that the bound depends on the selected subset, which makes it less conclusive.
- In Lemma 6.2, is the order of $k$ a worst case choice to guarantee that every $S_{in}$ in $\mathcal{S}_{in}(\gamma)$ has preserved margin?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: # Regarding your comments
> "The experiments in the introduction....Additional explanation would be helpful."
Thanks for asking this insightful question. Before presenting our explanations, we want to clarify that the “normalized margin”, labeled on the y-axis of Figure 2, measures the distance between decision boundaries. We believe you're essentially asking two related questions:
**(1) Why do better feature representations imply a larger margin?**
This analysis relates to neural collapse theory. Figure 2 in [WZSW 24] shows that improved feature representations yield smaller feature shift vectors—measuring deviation from ideal neural collapse features—indicating that empirical features align more closely with the equiangular tight frame structure and exhibit larger margins. Accordingly, ViT-pretrained features display smaller shift vectors than those from ResNet-50, reflecting a larger margin.
**(2) How does it relate to the increase of margin as outliers got removed?**
The intuition is that removing outliers makes classes linearly separable, yielding a positive margin. Continued removal of boundary-near points increases the normalized margin (y-axis, Figure 2) until it stabilizes near the data margin—a trend clearly seen in Figure 3 (top-left). Models with better feature representations show larger margins, explaining the ordering in Figure 2: ViT > ResNet-50 > Raw.
[WZSW 24] Wang, C., Zhu, Y., Su, W. J., & Wang, Y. X. (2024). Neural collapse meets differential privacy: curious behaviors of NoisyGD with near-perfect representation learning. arXiv preprint arXiv:2405.08920.
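The trend in (2) can be mimicked by a one-dimensional toy example (hypothetical scores, not data from Figures 2 or 3): dropping the points with the lowest signed scores monotonically increases the margin of the remainder, turning it positive once the outliers are gone.

```python
import numpy as np

# Signed scores y_i * <w, x_i>; negative entries are misclassified outliers.
scores = np.array([-0.4, -0.1, 0.05, 0.2, 0.5, 0.9, 1.3])

def margin_after_removing(scores, k):
    """Margin of the dataset after dropping the k lowest-score points."""
    return float(np.sort(scores)[k:].min())

print([margin_after_removing(scores, k) for k in range(4)])
# -> [-0.4, -0.1, 0.05, 0.2]
```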
> Line 402, is 5m a typo?
Thank you for bringing up this question. Our intended message is as follows: suppose we have $K$ hyperparameters, each with $m$ possible choices. Using $A_{\text{iter}}$ incurs an overhead that scales as $m^{\mathcal{O}(K)}$, whereas employing the advanced tuning methods leads to an overhead that scales as $\mathcal{O}(Km)$.
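In toy numbers (illustrative only): with $K = 3$ hyperparameters and $m = 10$ choices each, the full grid has $m^K$ candidate configurations, whereas an overhead scaling like $Km$ stays linear:

```python
K, m = 3, 10
grid_candidates = m ** K  # full grid: 1000 candidate configurations
linear_overhead = K * m   # O(Km)-style overhead: 30
print(grid_candidates, linear_overhead)  # -> 1000 30
```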
> Other comments
Thank you for pointing these out. We corrected the hyperlink and resolved the LaTeX \cite issues. Eq. 2 is defined as the geometric margin in Line 148.
# Regarding your questions
## Question 1
> “Why this exhaustive collection is necessary? Could something like … be defined instead?”
Thank you for asking this thoughtful question. These definitions serve as tools for proving and stating the main theorem. For instance, they are directly used in the proofs of the margin preservation lemmas (Appendix B) and Lemma 6.4. (for proving adaptivity using a doubling grid).
We found that the definition you referenced actually depends on Definition 6.1. We assume you are referring to $\arg\min_{|S'|} \{ \gamma(S \setminus S') \geq \gamma \}$, which is equivalent to $\arg\min_{S \in S_{\text{out}}(\gamma)} |S|$, where $S_{out}(\gamma)$ is defined in Definition 6.1.
> “Also, as I understand, the choice for S_{out} … So I think some infimum can be taken right?”
Yes, the infimum is indeed taken over all $S_{out}$ such that $\gamma(S \setminus S_{out})>0$, as demonstrated in Theorems 6.5 and 6.7, specifically on the left-hand side of the inequality in each case.
> “This is a minor point, but it seems to me that the bound depends..less conclusive”
We appreciate your thoughtful question. Our bound is data-adaptive. We explain our result using Theorem 6.5 as an example.
Since the dataset is finite, there must exist at least one optimal outlier subset $S_{out}^*$ that minimizes the upper bound:
$\frac{1}{n\varepsilon\,\gamma(S \setminus S_{out}^*)^2} + \frac{|S_{out}^*|}{n\,\gamma(S \setminus S_{out}^*)}$
While the minimizer of the upper bound depends on the optimal subset, this dependency reinforces the fact that the bound is data-adaptive. This stands in contrast to previous results, which are data-independent and yield a fixed rate of $n^{-1/2}$, as reported in Table 1.
Since this optimal subset is unknown—and identifying it via brute-force search is NP-hard—our algorithm, which runs in polynomial time, can effectively adapt to it through a hyperparameter search over a logarithmic grid on the margin, as demonstrated in Theorem 6.5 and also pointed out by another reviewer.
## Question 2
That's a good question. The order of k is determined by the JL lemma. The k is chosen to ensure every $\gamma$-level margin inlier set has preserved margin after projection with high probability.
Specifically, from the statement of Lemma 6.2, the probability is taken over the randomness of the Johnson–Lindenstrauss matrix $\Phi$. It can be interpreted as the following conditional probability: $P_{\Phi} ( \gamma ( \Phi S_{in} ) \geq \gamma/3 \mid S_{in} ) \geq 1 - \beta$ rather than $P_{\Phi} ( \forall S_{in} \in \mathcal{S_{in}} (\gamma), \gamma ( \Phi S_{in}) \geq \gamma/3 ) \geq 1-\beta$. We note that the second inequality is a stronger condition, which is what you mentioned in the question. However, throughout this paper, the first one is sufficient for our proof. | Summary: This paper studies empirical risk minimization of large (geometric) margin half spaces, in the agnostic setting. They have the following major contributions:
a) They give an algorithm for this problem that works even without knowledge of the margin. Prior work by Nguyen et al. (2019) required knowledge of the margin $\gamma$. Their approach closely follows Nguyen's in this setting- they perform a JL transform to project the data down into lower dimension, and apply noisy SGD to learn in the lower dimension (arguing that the margin is preserved under the JL transformation, which significantly improves the dependence on the dimension). The main technical difference is that instead of requiring knowledge of the margin, they create a logarithmic discretized grid of the margin, and run the above base algorithm on all $\log n$ possible margin discretizations. They then noisily compare the average empirical risk in order to select the best margin (they also consider a version that uses private hyperparameter tuning developed in prior work by Talwar-Liu and Papernot-Steinke). They show that the bound of $O(1/\gamma^2 n \epsilon)$ with known margin can be matched even without knowing it.
b) To extend their result to the agnostic setting, they consider a notion of inliers (subsets of the datasets that are linearly separable with some margin), and outliers, and argue that a bound of $O(1/\gamma^2 \epsilon n + |S_{out}|/\gamma n)$ applies where $S_{out}$ is the set of outliers and $\gamma$ is the margin of the remaining points (note that this bound automatically applies for the best such subset $S_{out}$). They argue in their introduction that this is a practical way to think about margin, because for some learning problems (and some pretrained features), removing a few 'troublemaker' data-points results in much larger margin.
In addition to DP ERM, they also give similar bounds for the population risk version of the problem.
Claims And Evidence: Yes, the proofs all seemed convincing and correct to me.
Methods And Evaluation Criteria: This is primarily a theoretical work, and so this is not a relevant section.
Theoretical Claims: I read through the important details of all of the proofs related to DP-ERM for this problem and am convinced that they are correct.
Experimental Designs Or Analyses: Not relevant.
Supplementary Material: I reviewed the relevant proofs on DP-ERM in the supplementary material. I did not review the parts on advanced hyperparameter tuning in detail (as well as the beyond unit norm section).
Relation To Broader Scientific Literature: This paper fits into the larger literature on DP learning theory- it considers the natural class of large margin half spaces studied in previous work (Nguyen et al. [2019], Bassily et al. [2022]) and gives utility bounds via a new notion of margin inliers and outliers. Prior work studied different notions of margin and/or assumed that the geometric margin was known.
Essential References Not Discussed: All essential references that I could think of were discussed.
Other Strengths And Weaknesses: Strength:
1. The definition of margin inliers and outliers, and developing a utility bound based on them seems like it could be useful, especially since the intro demonstrates that this phenomenon arises in natural learning settings. Additionally, unlike some prior work, this analysis technique also applies when the domain is bounded.
Weakness:
1. The main weakness of the paper is that the algorithm itself does not seem technically novel (the JL transform + GD approach has been used in prior work by Nguyen et al. and Bassily et al.), and the idea of choosing the margin by private selection over a grid was used in a different capacity (under a different margin definition) in Bassily et al. While the technical details vary slightly, I am not convinced that this algorithm is a significant contribution of this paper.
2. One issue is that for the bounds to be better than Bassily et al., the number of outliers needs to be relatively small (smaller than $O(\sqrt{n})$ where $n$ is the size of the dataset) whereas the intro discusses that in practical examples, a large portion (0.1% of the data) needs to be removed to achieve linear separability, so for large datasets, it's not clear that the utility bounds via this method would be better than existing bounds.
Other Comments Or Suggestions: No other comments/suggestions
Questions For Authors: I did not fully understand the relevance of the private hyperparameter tuning results: since the discretized hyperparameter set is sufficiently small (as the authors point out), is there really a need for this? Additionally, what parts of this analysis are novel? It seems like things should follow from the prior work on private hyperparameter tuning in a black-box way.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Regarding your comment on weaknesses
> Weakness 1
We thank the reviewer for their valuable feedback. We agree that our work builds upon the JL transform and gradient descent (GD) techniques, which have been explored in prior works such as Nguyen et al. and Bassily et al. However, our algorithm introduces several key innovations that go beyond a straightforward combination of these tools:
**(1) Novel Margin Definition and Analysis:**
Our analysis is based on a new definition of margin inliers/outliers, which, to the best of our knowledge, has not appeared in prior work. This definition is data-dependent (as discussed in Remark 3.1), in contrast to the confidence margin used in Bassily et al., which is data-independent. This distinction is not only conceptual but also technical: our data-dependent margin enables a new analysis technique that is not limited to the JL + GD setting (please refer to comments from Line 291 to Line 295). Most importantly, it leads to a data-adaptive generalization bound that avoids the hard $1/\sqrt{n}$ rate in the agnostic case that appears in Bassily et al.
**(2) Practical and Efficient Private Margin Selection:**
While Bassily et al. employs a grid search combined with the exponential mechanism, their approach involves a more complex score function. Additionally, the exponential mechanism appears hard to implement, as evaluating the score function requires solving non-convex optimization problems (Lemma F.3). In comparison, our method is computationally efficient and straightforward to implement.
> Weakness 2
Thank you for the insightful question. To address it more generally, we note that the problem of tolerating a constant fraction of outliers—under certain additional assumptions on the noise model—remains open, as discussed in Section 8. We have also included a comparison between our bound and that of Bassily et al. in Lines 165 to 168.
# Regarding your question
Thank you for this thoughtful question. First, we clarify that this section extends our main result by analyzing utility in the context of general hyperparameter sets, which may not be small. For completeness, we provide a utility analysis of hyperparameter tuning. Specifically, advanced tuning methods yield a utility bound with a $\log(|\Theta|)$ dependence, whereas naively evaluating all base mechanisms and selecting the best incurs a $|\Theta|$ dependence. However, in the small hyperparameter setting considered in our paper, the utility bound is not dominated by the $\log\log(n)$ term, as it is obscured by other $\log(n)$ factors (see Section 7.2). In addition, to the best of our knowledge, the explicit form of the utility bound has not been previously derived.
Lê Nguyễn, Huy, Jonathan Ullman, and Lydia Zakynthinou. "Efficient private algorithms for learning large-margin halfspaces." Algorithmic Learning Theory. PMLR, 2020.
Bassily, Raef, Mehryar Mohri, and Ananda Theertha Suresh. "Differentially private learning with margin guarantees." Advances in Neural Information Processing Systems 35 (2022): 32127-32141. | null | null | null | null | null | null | null | null |
MASS: Mathematical Data Selection via Skill Graphs for Pretraining Large Language Models | Accept (poster) | Summary: The paper proposed MASS, a novel mathematical skill graph construction method for selecting data for pretraining LLMs in the math domain. MASS prompts a strong LLM to generate nodes of skills from a reference dataset, and then construct an adjacency matrix (as a graph) using the dataset statistics. Then the graph can be leveraged to calculate quality scores for the pretraining data by their representational similarity with the skills. Empirically, MASS outperforms other pretraining data selection baselines and full-dataset training. More broadly, MASS demonstrates the great potential of using graphs for data selection in general.
## update after rebuttal
I have increased my score as my concerns are well addressed by the authors.
Claims And Evidence: Most of the claims are well-supported.
Methods And Evaluation Criteria: The method is brilliant and the evaluation criteria are properly chosen.
Theoretical Claims: N/A.
Experimental Designs Or Analyses: The experimental designs for evaluation and ablation studies are comprehensive. However, more baselines (e.g., DSIR) could be included for pretraining data selection on Mistral-7B.
Supplementary Material: I have checked the prompts and qualitative examples.
Relation To Broader Scientific Literature: Selecting pretraining data using a carefully constructed graph that contains knowledge and statistics is a novel and brilliant idea, which is under-explored by prior works according to my knowledge. Although this paper only demonstrated its practical usability in the domain of mathematics, I believe it has a great potential for data selection for other domains in general.
Essential References Not Discussed: N/A.
Other Strengths And Weaknesses: **Strengths**
1. The paper is well-written and well-structured.
2. The methodology is novel and interesting. Using the adjacency matrix of the skill graph to select pretraining data not only accounts for their direct usefulness to learn individual mathematical skills, but also accounts for their potential usefulness to other skills when multiple skills are required to solve a problem.
3. The ablation studies are well executed to demonstrate the performance gain from the graph construction.
**Weaknesses**
1. Although the design of the method, the empirical performance, and the ablation studies are commendable, there is a lack of theoretical understanding on why the proposed approach works better than other baselines.
2. The computational complexities of constructing the skill graph and selecting from the skill graph remain unclear.
3. The comparison with other baselines is only performed on TinyLlama-1.1B. It will be great to show at least a comparison with DSIR on the larger Mistral-7B.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. In line 323 (right column), the author said “for Jiuzhang 3.0, a higher selection ratio leads to better performance”, but when the ratio is too high (>70%), the performance decreases. Could you please clarify the correctness of the sentence? In addition, could you please explain what could be the reasons that caused the spike?
2. Could you provide a bit more statistics about the graph to understand how dense is the graph, e.g., clustering coefficient?
3. In equations (1) and (2), the entries are normalized with different normalizing factors so that they can sum up to 1 respectively. Why is the normalization important?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer 2U9L,
Thank you for your thoughtful feedback and positive recognition. Below, we respond to each of your comments in detail.
1. **Weakness 1:** There is a lack of **theoretical understanding** on why the proposed approach works better than other baselines.
**A:** First, we emphasize that this work prioritizes model architecture and performance improvements over rigorous theoretical analysis, which we defer to future research.
Second, we present some initial thoughts and general insights. In Section 2.4, we analyze why MASS works through the lens of skills. Here, we highlight two features of the MASS-selected data:
- It encompasses a wider range of important mathematical skills.
- It captures a richer, more nuanced compositional understanding of these skills.
In contrast, other baselines adopt fundamentally different approaches:
- Rule-based method focuses more on pre-processing, such as language filtering and deduplication.
- Rho-1 improves the data quality within each sample at token-level.
- ProX, following Rho-1, uses programming to refine data.
- DSIR calculates the weight of each sample using a bag-of-n-grams estimator.
- AutoDS prompts an LLM to select data simply based on its general math intelligence.
We believe what separates MASS from other baselines is that we select data at a finer semantic granularity: skills. Given the reference data, we determine which skills models truly need and lack, so we can purposely select a subset that contains these skills to teach LLMs. Other baselines, meanwhile, prioritize linguistic-level quality (e.g., grammar, noise, or coherence) without considering semantics and skill-level efficacy.
2. **Weakness 2:** The **computational complexities** of constructing the skill graph and selecting from the skill graph remain unclear.
**A:** Here we break down the computational complexities of the whole approach of MASS.
| Operations | A100 GPU hours |
|--------|------|
|Extracting skills from reference dataset|~24|
|Embedding reference dataset and target dataset|~0.5|
|Constructing skill graph|~2 CPU hours|
|Selecting high-quality subset from target dataset|~4|
|Training Mistral-7B on high-quality subset (~10B tokens)|~960|
As shown, the computational cost of pre-processing steps is relatively low compared to model training (<3%).
3. **Weakness 3:** Show **a comparison with DSIR on the larger Mistral-7B**.
**A:** Thank you for your suggestion. We began training Mistral-7B using DSIR, and the model is still in training. We will post the results in the discussion phase as soon as complete.
4. **Question 1:** Clarify the **correctness** of line 323 / Explain what could **cause the spike**.
**A:** We confirm this statement is correct. 'Higher' refers to the comparison between Jiuzhang3.0 and OWM-pro, not to varying the ratio within Jiuzhang3.0 alone. In Figure 5, it is clear that the best ratio for Jiuzhang3.0 is higher than that for OWM-pro.
As for the spikes, we offer the following explanation:
- At very low selection ratios, while the selected data maintains high quality, the reduced diversity negatively impacts model performance. Prior research [1] has demonstrated the importance of data diversity in selection methods.
- When the selection ratio is too high, too much noisy, low-quality data remains, which harms performance.
Thus, the observed spike-shaped relationship between selection ratio and model performance is well justified as a trade-off between quality and diversity.
5. **Question 2:** More **statistics about the graph**.
| Property | Value|
|--------|------|
| Number of nodes | 46,490 |
| Number of edges | 1,230,497 |
| Density | 0.001 |
| Clustering Coefficient | 0.776 |
| Modularity| 0.587 |
| Average degree | 52.94 |
| Maximum degree | 11691 |
| Minimum degree | 4 |
| Degree standard deviation | 199.69|
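As a quick consistency check (assuming an undirected simple graph; this is our sketch, not part of the pipeline), the reported density and average degree follow directly from the node and edge counts in the table above:

```python
# Consistency check on the reported graph statistics (numbers from the table above),
# assuming an undirected simple graph.
n_nodes = 46_490
n_edges = 1_230_497

density = 2 * n_edges / (n_nodes * (n_nodes - 1))  # fraction of possible edges present
avg_degree = 2 * n_edges / n_nodes                 # each edge contributes to two degrees

assert round(density, 3) == 0.001
assert round(avg_degree, 2) == 52.94
```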
6. **Question 3:** Why is the **normalization in Equ. 1 and 2** important?
**A:** We normalize the entries in equations (1) and (2) with different temperature coefficients because these original entries (raw counts of skills and co-occurrences) follow different statistical distributions. This normalization ensures both measures are on comparable scales before being combined in Equation (4) to compute similarity. Without such scaling, one type of entry could disproportionately influence the final similarity measure due to its inherently larger magnitude. The temperature-based normalization prevents either component from dominating the results while preserving their relative importance.
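For illustration only, a softmax-style temperature normalization of the two kinds of raw counts could look like the sketch below (the function name, counts, and temperature values are all hypothetical; the exact normalizers in Eqs. (1)-(2) may differ):

```python
import math

def temp_normalize(counts, temperature):
    # Softmax-style normalization with a temperature so the entries sum to 1.
    # Illustrative only; the exact normalizers in Eqs. (1)-(2) may differ.
    scaled = [c / temperature for c in counts]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

skill_counts = [120, 45, 3]    # hypothetical raw skill frequencies (diagonal entries)
cooccur_counts = [30, 10, 1]   # hypothetical raw co-occurrence counts (off-diagonal entries)

p_skill = temp_normalize(skill_counts, temperature=50.0)
p_co = temp_normalize(cooccur_counts, temperature=10.0)

# Both vectors now sum to 1, so the two entry types live on a comparable scale
# before being combined into the aggregated similarity.
assert abs(sum(p_skill) - 1.0) < 1e-9 and abs(sum(p_co) - 1.0) < 1e-9
```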
Thank you again for your thoughtful advice. We will include relevant details and analysis in the next version of our manuscript.
Sincerely,
MASS authors
[1] Harnessing Diversity for Important Data Selection in Pretraining Large Language Models, ICLR 2025.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed comments. The authors have addressed most of my concerns well. However, I still don't understand why line 323 is correct (question 1):
> We confirm this statement is correct. 'Higher' is used when we compare Jiuzhang3.0 and OMW-pro, not with Jiuzhang3.0. In figure 5, it is clear that the best ratio for Jiuzhang is higher than that for OWM-pro.
Line 323 says "a higher selection ratio leads to better performance". While the best-performing ratio of Jiuzhang3.0 is higher than that for OWM-pro, it is not clear that this justifies "higher selection ratio leads to better performance". What performance is being compared, and whose performance is being evaluated? Please clarify.
---
I have read comments from other reviewers. I do not have other concerns about the paper. However, I find the questions from Reviewer CEvW quite interesting, especially Q1 regarding the understanding of the compositionality knowledge from equation 10, which I don't think the authors have addressed in response to Weakness 1 in my review. It would be great to hear more thoughts from the authors.
---
Reply to Comment 1.1.1:
Comment: ## **Update:** The experimental results for additional baselines are now available. We have included DSIR and BM25 for pretraining data selection on Mistral-7B.
For BM25 method, we implement it using the repo [3]. We have randomly selected 100 samples from the reference dataset and ranked the target dataset based on the representation to select a high-quality subset. Also, we include DSIR baseline by using their official repo as reviewer 2U9L suggested. We continued pretraining Mistral-7B using variants of OpenWebMath-pro for ~5B tokens:
|Data|asdiv|gsm8k|mathqa|mawps|minerva_math|mmlu_stem|svamp|tabmwp|Avg.|
|-|-|-|-|-|-|-|-|-|-|
|Orig.| 73.7|47.1|42.6|89.5|21.8|52.2|63.2|58.2|56.1|
|BM25|73|44.7|49.8|86.1|24|52.6|63.1|49.1|55.3|
|DSIR|73.4|42.1|55.3|86.8|21.6|51.9|63.6|50.4|55.6|
|MASS|76.8|53.2|51.8|90.4|25.6|54.5|67|57.6|59.6|
MASS still outperforms other baselines by at least 3%. Surprisingly, BM25 and DSIR perform worse than using the original data. We hypothesize that this is due to their limited diversity, as BM25 and DSIR score data points based on linguistic-level features (e.g., BM25 and n-gram similarity between the reference and target data).
---
Thank you for your quick reply. As for line 323, in your reply, you mentioned that:
> While the best-performing ratio of Jiuzhang3.0 is higher than that for OWM-pro.
This is exactly what we meant. Or we could rephrase it as
> For Jiuzhang3.0, a relatively high selection ratio leads to high performance. For OpenWebMath-pro, a relatively low selection ratio leads to high performance.
We wrote this because we wanted to demonstrate that a dataset with higher quality probably matches a higher selection ratio, like Jiuzhang3.0.
Now we have realized the original sentence might seem a bit confusing and we will refine it in the next revision.
---
Thank you for reading other reviews as well. We planned to reply to reviewer CEvW's question in the discussion phase because of space constraints. Here we share our thoughts:
**A:** To analyze Equ. 10, we first need to address Equ. 4, which calculates the aggregated similarity of a data point $x_i$ and a skill $v_j$. The first term is straightforward: it yields a high score when both the skill frequency ($A_{j,j}$) and the original similarity ($\mathrm{sim}(x_i, v_j)$) are high. The second term captures co-occurrence patterns: when skills $v_j$ and $v_k$ frequently co-occur ($A_{j,k}$ is high) and the point $x_i$ is also similar to $v_k$ ($\mathrm{sim}(x_i, v_k)$ is high), this contributes to the aggregated similarity.
The fact that the relationship between skill $v_j$ and its related skills $v_k$ affects the importance score of a sample $x_i$ is why we claim MASS contains compositional knowledge. Although it may feel as though we are scoring the sample based on a coarser-grained skill/"skill family", as the reviewer commented, we specifically characterize this as a compositional feature rather than a robustness mechanism. Finally, by summing over all skills in Equ. 4, we obtain Equ. 10, which gives the final importance score for $x_i$.
As for your version of the second term in Equ. 10, it can be rewritten as $\mathrm{sim}(x, v_j)\sum_{v_k \in \mathcal{N}(v_j)}\mathbf{A}_{j,k}\mathrm{sim}(x, v_k)$, where the extra factor $\mathrm{sim}(x, v_j)$ can be seen as a coefficient that modulates the second term; however, we have already taken $\mathrm{sim}(x, v_j)$ into account in the first term of Equ. 10. As a result, we believe that both our original version and the reviewer's version share the same underlying principle and reflect the compositional knowledge we claim, while they may take slightly different forms and numerical values. We would be glad to explore the reviewer's implementation in future work.
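To make the comparison concrete, here is a toy computation of the two second-term variants (the graph and similarity values are purely hypothetical, chosen only to show the functional difference):

```python
# Toy comparison of the two second-term variants; A and s are entirely hypothetical.
A = [[3.0, 1.0, 0.0],
     [1.0, 2.0, 0.5],
     [0.0, 0.5, 1.0]]
s = [0.9, 0.2, 0.1]  # sim(x, v_k) for each skill
n = len(s)

def neighborhood_term(j):
    # Sum over neighbors k of v_j, i.e. the nonzero off-diagonal entries of row j.
    return sum(A[j][k] * s[k] for k in range(n) if k != j and A[j][k] != 0.0)

# Our second term: sum_j sum_{k in N(j)} A_jk * sim(x, v_k)
ours = sum(neighborhood_term(j) for j in range(n))

# Reviewer's variant: sum_j sim(x, v_j) * sum_{k in N(j)} A_jk * sim(x, v_k)
reviewers = sum(s[j] * neighborhood_term(j) for j in range(n))

# Both reward co-occurring, similar skills; the variant additionally gates each
# neighborhood sum by sim(x, v_j), changing magnitudes but not the basic principle.
print(ours, reviewers)
```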
As for enforcing a looser skill taxonomy and only using the first term, we have already done this in our ablation study (the "impact of skill graph, w/o non-diagonal entries" variant). This variant is equivalent to using only the first term in Equ. 10, and we found that it degrades performance by at least 2%. This empirical result substantiates our claim that MASS effectively captures compositional knowledge through its complete formulation. | Summary: This paper proposes an approach called MASS for selecting mathematical training data. The paper takes a high-quality reference math dataset, obtains each problem's skills (by prompting a LM), and constructs a skills graph, from which we can read off how frequent each skill is in the reference dataset and which skills commonly occur together. MASS uses the skills graph to select samples from a target dataset, prioritizing samples that are associated with the most commonly occurring skills in the reference dataset, and samples that have compositional knowledge. Experiments show that continually pretraining with MASS outperforms other common data selection methods.
Claims And Evidence: I have no concerns about the claims made regarding whether MASS works; however, I do have some questions about the claims regarding *why* MASS works, and would appreciate any clarification. The paper claims that MASS encourages selecting samples that cover compositional information, but I am not fully convinced that this is the entire explanation of what is actually happening.
When looking at the first term in equation 10, my interpretation of MASS is that it uses the frequency of skills, $[A_{11}, A_{22}, \ldots]$, as a representation of a reference dataset. With just the first term, the score function encourages selecting points whose associated skills (as captured by $sim(x, v_j)$) are those that frequently occur in the reference dataset; i.e., we select target dataset samples that are similar to the reference dataset samples in terms of the skills they exhibit. The second term in the scoring function appears to cover a neighborhood of $v_j$, which feels like it's a robustness mechanism, rather than something that explicitly encourages compositionality. That is, using $\mathcal{N}(v_j)$ in addition to $v_j$ feels like we are scoring the sample based on a coarser-grained skill/"skill family" in the skills graph, which provides robustness against individual skills being too narrow or poorly defined. Mathematically, the point I am trying to get across becomes clearer if we rewrite equation 10 slightly: $score(x) = \sum_{j=1}^{|V|} \sum_{v_k \in \{v_j\} \cup \mathcal{N}(v_j)} A_{jk} sim(x, v_k)$. For each skill $v_j$, you are summing over the skills graph entries corresponding to its neighborhood (including itself).
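To make the rewrite concrete, here is a tiny numeric sketch (the skill graph and similarity values are made up) checking that the two forms of equation 10 agree whenever $A_{jk} = 0$ outside each skill's neighborhood:

```python
# Hypothetical 3-skill graph: diagonal entries = skill frequency, off-diagonal =
# co-occurrence, zeros outside each skill's neighborhood. All numbers are made up.
A = [[3.0, 1.0, 0.0],
     [1.0, 2.0, 0.5],
     [0.0, 0.5, 1.0]]
s = [0.9, 0.2, 0.1]  # sim(x, v_k) for each skill
n = len(s)

# Equation 10 as written: per-skill frequency term plus neighborhood term.
score_eq10 = sum(
    A[j][j] * s[j]
    + sum(A[j][k] * s[k] for k in range(n) if k != j and A[j][k] != 0.0)
    for j in range(n)
)

# Rewritten form: one double sum over each skill's neighborhood including itself,
# which is simply the full double sum when A is zero outside neighborhoods.
score_rewritten = sum(A[j][k] * s[k] for j in range(n) for k in range(n))

assert abs(score_eq10 - score_rewritten) < 1e-12  # the two forms agree
```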
To further understand why MASS works, I have two questions/thoughts:
- For equation 10, what if the second term was $A_{jk} sim(x, v_k) sim(x, v_j)$? That is, we'd increase the score of $x$ a lot if it was similar to both $v_k$ and $v_j$, and $v_k$ and $v_j$ tend to occur together a lot in the reference dataset. This feels to me like it would capture more of this compositional knowledge you claim.
- It would be interesting to investigate if enforcing a looser skill taxonomy and only using the first term would essentially result in the same sort of data being selected.
Methods And Evaluation Criteria: See "Claims and Evidence" above. The evaluation criteria makes sense, and the proposed method makes sense overall, besides the following two points (rephrased from above):
- For equation 10, what if the second term was $A_{jk} sim(x, v_k) sim(x, v_j)$? That is, we'd increase the score of $x$ a lot if it was similar to both $v_k$ and $v_j$, and $v_k$ and $v_j$ tend to occur together a lot in the reference dataset.
- I see the appeal of making the LM identify skills in an unsupervised manner---you don't require much domain knowledge. However, I wonder if making the LM list 10 skills per sample results in hallucinations/poorly defined skills. That is, is there some benefit to defining a set of skills/level of granularity (for example, skill = the name and level of the math class you would learn about this problem in), and getting the LM to adhere to this taxonomy?
Theoretical Claims: Not applicable, no theorems in paper.
Experimental Designs Or Analyses: The experimental setup appears well-structured, although I had a few concerns:
- Compute efficiency analysis: how do these data selection methods perform if we control for compute in the data selection process? It would be interesting to compare the performance of MASS to the performance of allocating the compute used for MASS's data selection to just train on more random data from the target dataset.
- AutoDS baseline: do you prompt using the same LM for AutoDS and MASS? If I remember correctly, AutoDS uses Qwen-72B base language model while MASS uses Qwen2.5-72B-Instruct-GPTQ-Int4. More details on implementation of baselines would be appreciated.
- Additional baselines: additional baselines that select data based on matching some representation of a reference dataset, like LESS https://arxiv.org/abs/2402.04333 or BM25, could be helpful.
Supplementary Material: Yes, all parts.
Relation To Broader Scientific Literature: The contributions are highly relevant to broader scientific literature, since data selection is a critical part of the LM development pipeline. The idea of skills as a way to capture different characteristics of data and matching along skill distributions is interesting and builds on important lines of work that examine how models learn from data. It provides an alternative to existing, less interpretable data selection algorithms, which involve embedding the reference dataset, training on it, computing gradients on it, and so on.
I do think the paper would be more interesting/stronger if the authors investigated if skills graphs could be applied to other domains, such as code, science, or general natural language.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths:
- Doing data selection by extracting the skills from a reference data and selecting the target samples that are most aligned with these skills is a very interesting and novel idea. It provides evidence towards this skills-based view of how models learn from data, which is both scientifically and practically important.
- The method performs very well and outperforms other data selection approaches.
- The paper is generally well-written and easy to read.
Weaknesses:
- It is not completely clear when this approach works; the paper is lacking recommendations for how to use this approach in practical settings.
- Does this approach work for other domains?
- Does this approach work when a practitioner wants to define their own set of skills? What sort of skills work the best for this method? (i.e., topic-based, reasoning-based, style-based).
- Does the choice of LM matter for extracting skills?
- It is also not completely clear why this approach works; see my comments above regarding my interpretation of the score as encouraging fuzzy neighborhood-based skill alignment, rather than compositional knowledge.
- There should be more information on how baselines were implemented and why these baselines were chosen. Moreover, I don't think there is a clear intuitive reason stated regarding why MASS should outperform these other approaches; why are skills a better axis for data selection than, i.e., directly prompting an LM to score things or a token-level data selection approach?
Other Comments Or Suggestions: None.
Questions For Authors: 1. Understanding the claim of encouraging compositional information: For equation 10, what if the second term was $A_{jk} sim(x, v_k) sim(x, v_j)$?
2. What is the impact of prompting for different skills (i.e., a pre-defined skills taxonomy, or having the model output 5 or 20 skills rather than 10)?
3. How does MASS compare against other approaches when we match the amount of end-to-end compute used to train the model?
4. Clarification: do you prompt using the same LM for AutoDS and MASS?
5. Can MASS be applied to other domains, like code, science, or natural language?
6. What types of skills work well with MASS, and what skills do not?
7. Why were these baselines chosen and why should one expect MASS to intuitively outperform them?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer CEvW,
Thank you for your thoughtful feedback and positive recognition. Below, we respond to each of your comments in detail.
1. **Weakness 1 Q1 / Question 5:** Does this approach work for **other domains**?
**A:** Yes, it does work for other domains. Please see our reply to reviewer NxxW on Question 1.
2. **Weakness 1 Q2 / Question 6:** Does this approach work when a practitioner wants to define their **own set of skills?** What sort of **skills work the best** for this method? (i.e., topic-based, reasoning-based, style-based).
**A:** For the first question, yes, it does work with minor adjustments. We propose two potential solutions:
- We can directly prompt LLMs to identify relevant skills from a pre-defined set for each math sample. However, this faces challenges when the skill set is large (e.g., our 46,490 distinct skills) due to (1) prompt length constraints and (2) the model's instruction-following ability.
- We can opt for a BERT-like classifier model instead of autoregressive models. This requires collecting a sufficiently large training set of (math data, skills) pairs to train the BERT model, which can then be used at scale on the pre-training corpus. The data collection can be done either by utilizing LLMs or by manual annotation by experts.
For the second question, while our current version doesn't specify skill types during extraction, this might be a direction worth trying. We believe that hierarchical topic-based skills may work well. Specifically, we would first classify math data into algebra, geometry, statistics, and so on. Next, we would further classify algebra data into abstract algebra, Lie algebra, and so on at a finer level. This approach incorporates both flexibility and hierarchical knowledge.
However, this remains speculative; determining what works best requires careful design and experiments, which we leave for future work.
3. **Weakness 1 Q3:** Does the **choice of LM** matter for extracting skills?
**A:** During our initial implementation, we compared the SOTA proprietary model GPT-4o and the open-sourced Qwen2.5-72B-Instruct-GPTQ-Int4 and found that the extracted skills are similar. Using the same example as in our manuscript, we show the outputs here:
*GPT-4o:* ["rational expressions","factoring polynomials","root identification","solving rational equations","quadratic equations","expanding algebraic expressions","solving equation","excluding restricted values","considering inequality"]
*Qwen2.5:* ["Equation solving", "Factoring polynomials", "Fraction manipulation", "Quadratic equations", "Root identification", "Expression simplification", "Algebraic transformation", "Polynomial division", "Inequality consideration", "Solution verification"]
Actually what matters more is the prompt template and the output parsing process. After careful evaluation, we selected Qwen2.5-72B-Instruct-GPTQ-Int4 for its optimal balance between cost and performance. This choice allowed us to focus resources on extensive prompt engineering, which ultimately contributed to MASS's significant performance improvements.
While the current results are satisfactory, we acknowledge that skill extraction could potentially benefit from stronger LLMs and prompt template.
4. **Weakness 3 Q2 / Weakness 3 Q3 / Question 7:** A clear intuitive reason regarding **why MASS should outperform these other approaches**; **why are skills a better axis** for data selection than, i.e., directly prompting an LM to score things or a token-level data selection approach?
**A:** Please see our reply to reviewer 2U9L on Weakness 1.
5. **Question 2:** What is the **impact of prompting for different skills** (i.e., a pre-defined skills taxonomy, or having the model output 5 or 20 skills rather than 10)?
**A:** For a pre-defined skills taxonomy method, please see our reply to Weakness 1 Q2.
For the number of skills extracted, we need to emphasize that our prompt asks the LLM to output 1-10 skills (not a fixed 10), so the output actually varies based on the sample. Moreover, the core contribution of our approach resides in its skill-based framework rather than in the quantitative aspects of skill extraction, so we chose not to focus extensively on determining the exact number of skills.
## Due to space constraints, we are unable to address all questions in this section. We will provide other responses during the discussion phase. Thanks for your understanding.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I also looked through your response to reviewer 2U9L regarding the form and interpretation of equation 10; to me it still intuitively reads more as handling neighborhoods of skills, but I think both interpretations are fine. I will keep my score for now, but look forward to the results from currently ongoing experiments.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. Below, we address the remaining questions and present the experimental results.
1. **Weakness 3 Q1 / Concern 2 / Question 7:** How baselines were implemented and why these baselines were chosen.
**A:** We chose these baselines after surveying the LLM data selection literature, aiming to include at least one baseline from each category: RULE (rule-based), Rho-1 and ProX (token-level), DSIR (n-gram-feature-based, document-level), and AutoDS (LLM-based, math-specific, document-level).
As for how they were implemented, we use the results of RULE and Rho-1 from paper [1]. We implement ProX by using their refined dataset from [2]. We implement DSIR from their official repo with the default settings to select data. We implement AutoDS from their official repo with Qwen2.5 to select data.
For all methods, we trained models using identical hyperparameters (as specified in Table 1 of our manuscript) on their respective selected datasets to ensure fair comparison.
2. **Question 3 / Concern 1:** How does MASS compare against other approaches when we **match the amount of end-to-end compute** used?
**A:** For the compute analysis, please see our reply to reviewer 2U9L on Weakness 2.
As shown, the cost of pre-processing steps is relatively low compared to model training (<3%). For other baselines, either we do not know their pre-processing compute or they only require CPU hours so it is hard to precisely match the end-to-end compute.
In Figure 3 of our paper, it is clear that MASS achieves ≥40% greater efficiency than all baselines, which means even if we do not consider the pre-processing steps of MASS, it remains the most efficient and effective approach.
3. **Question 4 / Concern 2:** Do you prompt using the same LM for AutoDS and MASS?
**A:** Yes, we use the same model, Qwen2.5-72B-Instruct-GPTQ-Int4 to ensure a fair comparison. Additionally, the Qwen-72B base model is somewhat outdated, and its large size is impractical given the scale of the corpus. In AutoDS, the filtering procedure took ~3,000 A100 GPU hours for 11.26M docs, so we chose a 4-bit quantized yet better-performing model.
---
4. **Concern 3:** **Additional baselines** that select data based on matching some representation of a reference dataset, such as LESS and BM25.
**A:** We initially considered LESS as a baseline but later found it unsuitable for our setting. As shown in Table 4 of the LESS paper, their method requires 54 GPU hours for the entire data selection process. However, LESS operates at the instruction-tuning scale (with a maximum dataset size of 18,721 samples) due to the computational cost of gradient-based feature extraction. In contrast, our work focuses on pre-training-scale data (OpenWebMath contains 6.32 million documents), making LESS impractical for comparison. Thus, we excluded it from our analysis.
For the BM25 method, we implemented it using the repo [3]: we randomly selected 100 samples from the reference dataset and ranked the target dataset by this representation to select a high-quality subset. We also include the DSIR baseline using their official repo, as reviewer 2U9L suggested. We continued pretraining Mistral-7B on variants of OpenWebMath-pro for ~5B tokens:
|Data|asdiv|gsm8k|mathqa|mawps|minerva_math|mmlu_stem|svamp|tabmwp|Avg.|
|-|-|-|-|-|-|-|-|-|-|
|Orig.| 73.7|47.1|42.6|89.5|21.8|52.2|63.2|58.2|56.1|
|BM25|73|44.7|49.8|86.1|24|52.6|63.1|49.1|55.3|
|DSIR|73.4|42.1|55.3|86.8|21.6|51.9|63.6|50.4|55.6|
|MASS|76.8|53.2|51.8|90.4|25.6|54.5|67|57.6|59.6|
MASS still outperforms the other baselines by at least 3%. Surprisingly, BM25 and DSIR perform worse than the original data. We hypothesize that this is due to the limited diversity of their selections, as both score data points based on surface linguistic features (BM25 similarity and n-gram overlap between the reference and target data).
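To make the comparison concrete, the BM25-based selection above can be sketched as follows (a minimal, self-contained Okapi BM25 implementation; the function names and toy data are ours for illustration, not the actual pipeline built on the repo [3]):

```python
import math
from collections import Counter

def bm25_scores(query, corpus, k1=1.5, b=0.75):
    """Okapi BM25 score of each tokenized document in `corpus` for `query`."""
    n_docs = len(corpus)
    avgdl = sum(len(doc) for doc in corpus) / n_docs
    # document frequency of each query term
    df = {t: sum(1 for doc in corpus if t in doc) for t in set(query)}
    scores = []
    for doc in corpus:
        tf = Counter(doc)
        s = 0.0
        for t in query:
            if df[t] == 0:
                continue  # query term absent from the whole corpus
            idf = math.log(1 + (n_docs - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(s)
    return scores

def select_top_k(reference, target, k):
    """Rank `target` docs by their total BM25 score against `reference` samples."""
    totals = [0.0] * len(target)
    for ref in reference:
        for i, s in enumerate(bm25_scores(ref, target)):
            totals[i] += s
    return sorted(range(len(target)), key=lambda i: totals[i], reverse=True)[:k]

reference = [["solve", "the", "equation"]]
target = [["cooking", "recipes"], ["solve", "this", "equation", "for", "x"]]
print(select_top_k(reference, target, k=1))  # → [1]
```

With 100 reference samples instead of one, this is the scheme of the baseline: documents most lexically similar to the reference set are kept, which illustrates the diversity limitation discussed here.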
5. **Stronger base models:**
**A:** As reviewer NxxW suggested, we trained a stronger base model, Qwen2.5-7B, using variants of OpenWebMath for ~9B tokens:
|Data|asdiv|gsm8k|mathqa|mawps|minerva_math|mmlu_stem|svamp|tabmwp|Avg.|
|-|-|-|-|-|-|-|-|-|-|
|Base|93.1|85.8|80.6|97.9|53.4|67.5|90.9|82|81.4|
|Orig.|85.6|67.1|71.3|94|38.2|65.8|80.2|61.9|70.5|
|MASS|85.9|71|74.7|95.7|42|69.6|83.6|71.6|74.3|
As we expected and explained in our rebuttal to reviewer NxxW, performance declines after CPT because even MASS-filtered data likely has lower quality than the industry-standard data used in Qwen2.5. Nevertheless, MASS still outperforms the original data, demonstrating its effectiveness.
[1] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale, ArXiv 2024.
[2] https://huggingface.co/datasets/gair-prox/open-web-math-pro
[3] https://github.com/dorianbrown/rank_bm25 | Summary: This paper introduces a method for math data selection in pre-training. It begins by extracting a skill graph from a high-quality reference dataset, then utilizes this graph to score a larger dataset and filter out high-quality samples.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes, they evaluated the models before and after continued pre-training on various math benchmarks. However, it would be beneficial to also include results on:
1. Corresponding results after instruction tuning for each model.
2. Performance on additional tasks, such as general instruction following and coding, to provide a more comprehensive assessment.
Theoretical Claims: There are no proofs.
Experimental Designs Or Analyses: Yes, the experiments on continued pre-training are well designed. However, in addition to the evaluation concerns mentioned earlier, it might be better to use stronger base models, such as DeepSeek-Coder, which serves as the starting point for DeepSeek-Math's continued pre-training. Other 7B models stronger than Mistral would also be preferable.
Supplementary Material: Yes. Every part.
Relation To Broader Scientific Literature: This paper presents a practical and straightforward pipeline for pre-training data selection, particularly for math. It would be interesting to see this approach applied to stronger base models and larger-scale training.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strength:
1. The writing is clear and concise, without unnecessary information.
2. They conduct extensive ablation studies.
3. The method seems to be simple and practical.
Weakness:
1. It would be better to use a stronger base model with larger-scale training and more comprehensive evaluation.
Other Comments Or Suggestions: No.
Questions For Authors: 1. How can this method be adapted for other data domains?
2. How would higher-quality math data impact the overall model capabilities?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer NxxW,
Thank you for your thoughtful feedback and positive recognition. Below, we respond to each of your comments in detail.
1. **Weakness 1.1:** It would be better to use **stronger base models.**
**A:** Thank you for your suggestion. We chose Qwen2.5-7B as a stronger base model, and are currently pretraining it on both original OpenWebMath and MASS-selected OpenWebMath. The training is still in progress, and we will share the results in the discussion phase once the training is complete.
Additionally, we would like to explain why we did not opt for these SOTA models in the first place. The datasets we use are all open-sourced and have probably already been used in the pre-training stage of these strong models. For instance, Qwen2.5 was trained on 18 trillion tokens [1]. So, even if we selected high-quality data from OpenWebMath and continued pretraining Qwen2.5 on it, we would likely observe minimal to no performance improvements. Conversely, a relatively smaller and less capable model is more suitable for testing data selection methods effectively.
2. **Weakness 1.2:** More comprehensive evaluation: It would be beneficial to include corresponding results after **instruction tuning**.
**A:** Thank you for your suggestion, but we would like to emphasize three reasons why MASS prioritizes data selection for pre-training over instruction tuning, and why we did not provide results after instruction tuning:
- First, pre-training datasets are typically much larger than instruction-tuning datasets (e.g., OpenWebMath with 14.7 billion tokens vs. MetaMathQA with 103 million tokens). Due to their scale, pre-training datasets often contain a significant amount of noisy, repetitive, and irrelevant low-quality data, whereas fine-tuning datasets are generally more compact and high-quality, gaining little benefit from selection.
- Second, most of the knowledge LLMs acquire comes from the pre-training stage, while the instruction-tuning stage primarily focuses on aligning with human preferences and formatting. Thus, refining the vast pre-training datasets through the lens of skills is more effective and directly impacts model performance.
- Third, in our experiments, we used the Jiuzhang dataset, which follows a QA format similar to an instruction tuning dataset but at a much larger scale. This successfully demonstrates MASS's effectiveness even in an instruction-tuning-like setting to some extent.
3. **Question 1:** It would be beneficial to include results on performance on **additional tasks, such as general instruction following and coding**. / How can this method be adapted for **other data domains?**
**A:** Thank you for your suggestion. While we cannot fully implement the method across other domains during the rebuttal phase, our future direction is to adapt MASS to other tasks and domains.
Since the core of MASS is a skill graph, it is naturally suited to domains where distinct and clear 'skills' exist. For example, in the coding area, we may extract skills such as *['CSV file handling','data visualization','SQL query construction',...]* (generated from the *iamtarun/python_code_instructions_18k_alpaca* dataset by DeepSeek). In the biomedical area, skills may include *['alcohol-related disorders','neurotoxic substance effects','thiamine deficiency',...]* (generated from the *FreedomIntelligence/medical-o1-reasoning-SFT* dataset by DeepSeek).
After skill extraction, we can construct the corresponding skill graph and apply MASS’s data selection approach. Some minor adjustments (e.g., prompts, graph construction) may be needed for domain-specific adaptations. However, for domains like creative writing or role playing, where skills are harder to define, the current MASS framework may not be suitable.
4. **Question 2:** How would higher-quality math data impact the **overall model capabilities**?
**A:** We believe higher-quality math data can impact the model capability in two ways:
- Improving math ability: High-quality mathematical data can directly enhance a model's mathematical reasoning skills, just as shown in our manuscript.
- Improving general reasoning ability: mathematical data is inherently logical and structured, so it can also help models better grasp logical relationships in complex tasks such as scientific document processing and code generation. For example, pre-training on datasets containing mathematical proofs and formulas can improve a model's ability to handle tasks requiring logical reasoning.
We thank you again for your thoughtful advice, which has strengthened our work. We will include relevant details and analysis in the next version of our manuscript. Should you have any further questions or require additional information, we are happy to address them.
Sincerely,
MASS authors
[1] https://qwenlm.github.io/blog/qwen2.5/ | null | null | null | null | null | null | null | null |
Fast Inference with Kronecker-Sparse Matrices | Accept (poster) | Summary: This paper presents the first energy and time benchmarks for the multiplication of Kronecker-sparse matrices. These benchmarks reveal that specialized sparse matrix multiplication implementations spend up to 50% of run time on memory rewrite operations. As a remedy, the authors propose a new tiling strategy for Kronecker-sparse matrix multiplication achieving a median speed-up of 1.4x while also cutting energy consumption by 15%.
Claims And Evidence: The authors appear to be making a few main claims.
1. The first claim is that previous approaches to handling Kronecker-sparse matrix multiplications spend up to 50% of their total runtime on memory rewriting operations. This is supported empirically by a set of results reported by the authors.
2. The second claim is that the time spent on memory rewriting operations can be attributed to the structure adopted by most such algorithms, and that it can be cut by a factor of three through tiling, which essentially coalesces the read/write operations. The authors have supported this claim analytically.
3. The third claim concerns empirical improvements, where the authors claim that their new Kronecker-sparse multiplication algorithm leads to speed-ups of up to 1.4x and energy savings of 15%, and improves the efficiency of neural network inference. The authors have supported all of these claims empirically.
Methods And Evaluation Criteria: Yes, the methods and evaluation criteria make sense. My only reservation is that I expected to see comparisons to more sparse matrix multiplication baselines, but I am not very familiar with the area, so perhaps such specialized algorithms do not exist or are not used in practice?
Theoretical Claims: Yes, I went over proofs and theoretical claims and they appear to make sense to me
Experimental Designs Or Analyses: I found the experimental design and analysis to be sound.
The authors start with the hypothesis that current matrix multiplication algorithms used with Kronecker-sparse matrices spend a significant portion of their run time on memory rewriting, a hypothesis which they empirically validate on 600 different sparsity patterns. They then devised a theoretically-equivalent algorithm with a provably lower number of memory rewriting operations. Finally, they validate that their new algorithm is indeed faster than the currently used algorithms, as well as more energy efficient, since it spends less energy on memory rewritings.
Supplementary Material: I have only checked the related works sections in the supplementary material.
Relation To Broader Scientific Literature: The contribution put forth in this paper is very timely and applicable not only to the machine learning community but, I suspect, also to the algorithms and scientific computing communities. This work is essentially about how to speed up matrix multiplication when we know of, or impose, a specific sparsity structure on one of the input matrices.
Essential References Not Discussed: Not aware of any.
Other Strengths And Weaknesses: - I found the paper to be very well written despite the very technical subject matter. I greatly appreciated the top-down manner through which the problem was decomposed, the hypothesis validated and the method proposed to tackle the shortcoming of the current approaches.
- I like how the authors did not stop at comparing their algorithm to previously proposed algorithms in a synthetic setting, but rather also showed that practical settings, such as inference in ViT architectures can benefit greatly in terms of speedup.
- To my knowledge, this contribution is novel, and I foresee it having a sizable impact in the community's effort to improve the efficiency of inference in deep models.
- I expected to see quite a bit more in terms of related work, given that sparsity is currently a highly-researched area.
Other Comments Or Suggestions: Typos: Section 5, lines 326-363: line/row -> column?
Questions For Authors: The only question that I have is: what is the point of diminishing returns? My understanding is that there must be a threshold whereby, if the matrix does not exhibit enough sparsity, the baselines might perform better. Am I correct in this assumption? I believe this might also be suggested by the third column of Table 3, which shows that the kernel is slower than all other baselines on 12% of the tested patterns. Any idea why that might be the case?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your review. We address your points below.
1. >Are there any other sparse related works relevant for this benchmark?
The benchmark includes all the relevant baselines we are aware of. The revision will include an additional discussion clarifying how our work relates to a few other areas of sparse matrix research (such as sparse 3d convolutions or sparse tensor compilers, as suggested by other reviewers), even though these related works do not offer relevant implementations to be included in the benchmark.
2. > On the potential existence of a sparsity threshold under which previous baselines are better
You are absolutely right, thank you for mentioning that. The 12% of cases where the baselines are still better correspond to patterns that have a high density of nonzeros or a small value of the proposed heuristic (the ratio (b+c)/bc). The revision will contain a plot that shows how the speedup increases with the sparsity level (percentage of zeros).
We also thank the reviewer for spotting typos. | Summary: This paper proposes a novel CUDA kernel designed to accelerate neural network inference using Kronecker-sparse matrices. These matrices, characterized by sparsity patterns derived from the Kronecker product, offer a structured alternative to traditional dense matrices in neural networks. By optimizing memory access and reducing redundant operations, the proposed kernel achieves a 1.4× speedup and a 15% reduction in energy consumption compared to existing approaches, demonstrating its effectiveness in enhancing computational efficiency.
## update after rebuttal
I have read the authors' response, but I still believe that the conversion cost could be high when the proposed method operates on activations, such as in self-attention operations (e.g., Q @ Kᵀ, Attention-Score @ V), which was my main concern. Since my concerns regarding this point have not been fully resolved, I will keep my score.
Claims And Evidence: - The study evaluates existing GPU implementations, identifies inefficiencies in memory access, and proposes a new CUDA kernel optimized for Kronecker-sparse matrix multiplications.
- The paper is well-supported by empirical benchmarks and theoretical analysis, with the 1.4× speedup and 15% energy reduction demonstrated through extensive experiments across various sparsity patterns.
- However, the study could be further strengthened by evaluating its performance on different hardware platforms (e.g., AMD GPUs, CPUs, FPGAs) and offering deeper insights into automated sparsity pattern selection.
Methods And Evaluation Criteria: - The paper proposes a new tiling strategy for matrix multiplication with Kronecker-sparse matrices, which reduces memory transfer overhead in GPU computations.
- The study carefully selects a range of Kronecker-sparsity patterns and tests them across different conditions, ensuring that results are not limited to specific cases.
- The evaluation extends to practical scenarios like transformer inference acceleration, demonstrating the broader utility of the proposed method.
Theoretical Claims: - The paper demonstrates that existing methods incur significant GPU memory access costs.
- It quantifies the memory rearrangement cost as (b + c) / (bc) , arguing that a higher value of this ratio indicates the inefficiency of existing approaches.
- However, the theoretical analysis presented in the paper does not clearly establish whether the proposed method guarantees consistent performance improvements across all Kronecker-sparse patterns.
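As a toy illustration of that ratio (the function name and example block sizes are ours, chosen only to show how the heuristic behaves at the two extremes):

```python
def memory_rewrite_ratio(b: int, c: int) -> float:
    """Heuristic (b + c) / (b * c) from the paper's analysis.

    Intuitively, each dense b x c sub-block requires reading c inputs and
    writing b outputs (b + c memory operations) for b * c multiplications,
    so higher values mean the memory rewritings of existing approaches
    dominate the useful arithmetic.
    """
    return (b + c) / (b * c)

# Tiny sub-blocks: rewriting cost dominates, so existing approaches are inefficient.
assert memory_rewrite_ratio(2, 2) == 1.0
# Large sub-blocks: arithmetic dominates, so the rewriting overhead is relatively minor.
assert memory_rewrite_ratio(64, 64) == 0.03125
```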
Experimental Designs Or Analyses: - The benchmarking of execution time and energy consumption on various GPU implementations is relevant and well-justified.
- The comparison with existing PyTorch implementations (bmm, bsr, einsum) and generic dense/sparse approaches provides a meaningful baseline.
- However, there is no direct comparison with other existing Kronecker-sparse matrix calculation techniques.
Supplementary Material: - The authors provided PyTorch code to reproduce the Kronecker-sparse matrix evaluations presented in the paper.
- However, the CUDA kernel code proposed in the paper was not included, which may limit the reproducibility of certain experiments.
Relation To Broader Scientific Literature: - None
Essential References Not Discussed: - None
Other Strengths And Weaknesses: Strengths
- The paper introduces a new tiling strategy for GPU matrix multiplication, reducing memory access overhead.
- The proposed CUDA kernel achieves up to 1.4× speedup and 15% energy reduction compared to existing methods.
- Demonstrates that the optimized kernel can accelerate vision-transformer model inference.
Weaknesses
- The paper does not compare its CUDA kernel with other structured matrix optimization kernels (e.g., Monarch matrices, Butterfly Transform).
- The study assumes W (the weight matrix) is already in Kronecker-sparse format, but does not discuss the cost of converting a dense matrix to this structure. If the transformation cost is high, the practical benefit of the proposed optimization may be reduced in self-attention operations (e.g., Q @ K^T, Attention-Score @ V).
- Only per-operation speed differences are shown; end-to-end model (ViT) latency results are not provided.
Other Comments Or Suggestions: - None
Questions For Authors: - Can you provide end-to-end latency comparison results measured in ViT?
- Can you provide a latency comparison with other structured matrix optimization kernels (e.g., Monarch matrices, Butterfly Transform) rather than a comparison with the code implemented in pytorch?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your review.
# Regarding your questions
1. > End-to-end latency results in ViT
Table 4 already provides an end-to-end latency result, showing a 22% relative time gain on a vision transformer when using the kernel. If you actually meant to ask about the *absolute* measurements in seconds rather than the *relative* comparison, we would be happy to add them upon request.
2. > Comparison to other “existing Kronecker-sparse matrix calculation techniques” / “structured matrix optimization kernels” such as Monarch Transform or Butterfly Transform
The implementation associated with the Monarch Transform [1,2] corresponds to the “bmm” baseline in the benchmark. The official code associated with the Butterfly Transform [3,4] is no longer maintained and we were unable to make it work. However, we tested a faithful reproduction internally but found it significantly slower than the other baselines, so we chose not to include it in the benchmark. The revision will make this explicit.
# Regarding the other points you mentioned
3. > Further strengthen the study by extending it to other hardwares
We agree that exploring the opportunities and challenges related to benchmarking and optimising the kernel on other hardwares is an exciting open avenue that is now raised by this work. To support further exploration, we will release an OpenCL version of the kernel, enabling users with specific requirements to test it on platforms such as AMD GPUs or CPUs.
4. > CUDA kernel code not included
The final version will include a template of the kernel code and a link to the open-source (non-anonymous) repository.
5. > Computational cost of converting a dense matrix to the butterfly structure
Although this issue falls outside the claimed scope of the paper—which is to study Kronecker-sparse matrix multiplication on GPUs and to showcase its potential to accelerate the inference of models having Kronecker-sparse matrices (e.g., models trained from scratch with such matrices, or dense models replaced by Kronecker-sparse ones after training, with potential subsequent fine-tuning)—it is worth mentioning that approximating a given target matrix of size $m \times n$ by a product of Kronecker-sparse factors can be done efficiently in roughly $\mathcal{O}(mn)$ time [5].
# References
[1] Monarch: Expressive Structured Matrices for Efficient and Accurate Training, Dao et al, PMLR 2022.
[2] Monarch mixer: A simple sub-quadratic GEMM-based architecture. Fu et al. NeurIPS, 2023.
[3] Butterfly transform:An efficient FFT based neural architecture design. Vahid et al. CVPR, 2020.
[4] Learning fast algorithms for linear transforms using butterfly factorizations. Dao et al. ICML, 2019
[5] Butterfly factorization with error guarantees. Le et al., preprint, 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for your kind reply. I have read your response, but I still believe that the conversion cost could be high when the proposed method operates on activations—such as in self-attention operations (e.g., Q @ Kᵀ, Attention-Score @ V)—which was my main concern. Since my doubts regarding this point have not been fully resolved, I would like to keep my score as it is. | Summary: This paper aims to speedup DNN inference with kronecker-sparse matrices by optimizing GPU memory accesses via customizing the CUDA kernels. The paper has made three key contributions: (1) analyzing the time and energy efficiency of existing implementations for multiplying kronecker-sparse matrices; (2) proposing a new tiling strategy in a new CUDA kernel implementation which reduces expensive GPU memory accesses; and (3) introducing a heuristic model that describes the time and energy efficiency. The experimental results have shown the proposed methods achieve 1.4x median speedup and 15% energy reduction.
Claims And Evidence: While the paper has shown evidence that the proposed methods outperform existing frameworks, its evaluation results are not enough to fully support the claims. Specifically, the paper has never conducted ablation studies on how the percentage of non-zero elements affects the performance of the proposed kernels. This is important because when sparse matrices are used in practice, different levels of sparsity might be chosen for better accuracy.
Methods And Evaluation Criteria: The paper claims to support general inference of transformers. However, it only evaluates vision transformers of small sizes, which could have very different trade-offs compared to transformer-based language models. Thus, it is unclear how the proposed method would perform in practice.
Theoretical Claims: I do not find any problems in the theoretical analysis in the paper but my expertise could be limited.
Experimental Designs Or Analyses: Please refer to the "Methods and Evaluation Criteria" section of the review.
Supplementary Material: I have reviewed all parts of the supplementary material.
Relation To Broader Scientific Literature: While the paper has pointed out an interesting way to tackle the memory inefficiency in existing implementations of Kronecker-sparse matrix multiplication, it should have discussed its relationship to other types of sparse matrix multiplication problems. In particular, similar issues have been seen in problems such as 3D sparse convolutions [1], where permutations also need to be done on inputs and outputs. The authors should have discussed the trade-offs of different design choices (e.g., input/output/weight stationary), and the reason why they picked a specific one.
[1] Tang, Haotian, et al. "Torchsparse++: Efficient point cloud engine." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
Essential References Not Discussed: The paper should also discuss how their methods related to (sparse) tensor compilers, since these compilers can also be easily used to speedup sparse-kernel inference. Specifically, the paper should discuss why kronecker-sparse matrix multiplication is considered as a challenging problem and cannot just be considered as a special case for sparse tensor compilers:
* Ye, Zihao, et al. "Sparsetir: Composable abstractions for sparse compilation in deep learning." Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3. 2023.
* Guan, Yue, et al. "Fractal: Joint multi-level sparse pattern tuning of accuracy and performance for DNN pruning." Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3. 2024.
Other Strengths And Weaknesses: The motivation of the paper is also lacking. Specifically, the paper never discusses how Kronecker-sparse matrices are used in practice. This is important because different sparsity levels could result in different trade-offs and design choices.
Other Comments Or Suggestions: I feel the paper is in general well written. However, the section captions could be simplified, as that would make it clearer for readers to follow the topic of each section.
Questions For Authors: * How do you measure the memory access time in general (see line 209, "We find that the memory rewritings can take up to 45% of the total runtime")
* How does the proposed method speed up the inference of other types of networks, such as GPT or Llama models?
* What is the cache hit ratio of reading/writing global memory of the proposed CUDA kernels?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your review.
# Regarding your questions
1. >How did we measure the time spent on memory rewritings
We compared with the execution time where we removed the permutations/memory rewritings part, i.e. lines 1 and 3 in algorithm 1 (details in appendix B.2).
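For intuition, here is a minimal NumPy sketch of the permute-multiply-permute structure that Algorithm 1 follows, assuming the standard Kronecker-sparsity support I_a ⊗ 1_{b×c} ⊗ I_d for a pattern (a, b, c, d); the variable names and einsum layout are ours, not the paper's CUDA code. The two transposes play the role of the memory rewritings in lines 1 and 3:

```python
import numpy as np

# Pattern (a, b, c, d): K has shape (a*b*d, a*c*d) and its nonzeros
# decompose into a*d independent dense b x c subproblems.
a, b, c, d, n = 2, 3, 4, 5, 7
rng = np.random.default_rng(0)
W = rng.standard_normal((a, d, b, c))    # the a*d dense b x c blocks
X = rng.standard_normal((n, a * c * d))  # batch of n input rows

# Line 1 (memory rewriting): permute inputs so each block sees contiguous data.
Xp = X.reshape(n, a, c, d).transpose(1, 3, 0, 2)   # (a, d, n, c)
# Line 2: batched dense multiplication.
Yp = np.einsum('adnc,adbc->adnb', Xp, W)           # (a, d, n, b)
# Line 3 (memory rewriting): permute outputs back to row-major layout.
Y = Yp.transpose(2, 0, 3, 1).reshape(n, a * b * d)

# Sanity check against the explicit dense Kronecker-sparse matrix.
K = np.zeros((a * b * d, a * c * d))
for i in range(a):
    for j in range(d):
        K[i*b*d + j : (i+1)*b*d : d, i*c*d + j : (i+1)*c*d : d] = W[i, j]
assert np.allclose(Y, X @ K.T)
```

Timing the two transposes separately from the einsum gives exactly the kind of permutation-versus-multiplication breakdown described above.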
2. >Implications on other transformer sizes, e.g. larger transformers like GPT/Llama
While we benchmark end-to-end latency on a transformer involving matrices of size 768 x 768, note that the benchmark covers sizes between 102 x 102 and 131072 x 131072, largely covering the wide range used in practice, including those in large transformers (e.g., 53248 x 53248 in Llama3-405B). Since the speedup of the kernel increases with the matrix size (Section 4), our resource-limited evaluation on a small transformer actually corresponds to one of the most challenging setups in which to observe a speedup. The revision will make this explicit.
3. > Cache hit ratio
We measured the cache hit as suggested by the reviewer and will add a discussion about it.
# Regarding the other points you mentioned
4. >How Kronecker-sparse matrices are used in practice (motivation)
Kronecker-sparse matrices are the building block of butterfly matrices. The literature has proposed using them in a variety of ways in neural networks (Table 1), and the benchmark considers patterns aligned with these different use cases. Therefore, the potential misalignment problem, where the studied patterns would differ from the ones used in practice, does not arise here. The revision will make this clear.
5. >Relation to similar issues in sparse 3d convolutions
A relevant analogy with this literature is that the new Algorithm 2 enables the implementation of a tiling strategy that optimises the dataflow in a way analogous to how recent works [2,3,4] built on top of TorchSparse [1]. TorchSparse has three kernel calls: gather, multiplication, scatter. The subsequent works optimised this by overlapping memory and compute operations. For the sparse problem we consider, the dataflow optimisation enabled by Algorithm 2 has the same flavour: the kernel can now coalesce the input/output permutations with the multiplication part (Figure 3).
6. >Specify whether the kernel is input/output or weight-stationary
The kernel is output-stationary, similarly to the dense cutlass kernels [5] and sparse 3d convolutional kernels [2,3,4] that have a similar dataflow. The revision will include a code template and explain this design choice as follows.
* *Weight-stationary is not attractive*. In our Kronecker pattern $(a, b, c, d)$, when the parameter $a$ is large, the matrix is partitioned into $a$ submatrices that act on disjoint regions (Figure 2). Thus, weights are not reused across multiple input/output regions, limiting the benefits of keeping them stationary.
* *Input-stationary is not attractive*. Due to the non-consecutive memory accesses involved (Figure 5), read and write operations on inputs/outputs come at a higher cost, so it is preferable to keep one of them stationary to reduce these costs. Both costs are largely driven by the parameter $d$ of the pattern $(a,b,c,d)$, which determines the distance between consecutive data elements that should be loaded together when considering one of the dense subproblems (Figure 5). Since their reuse costs are similar, we had to consider other factors. Input-stationarity poses parallelization challenges as different thread blocks cannot accumulate into the same output coefficient (no possible synchronization) [5]. In contrast, output-stationarity avoids this issue, hence our choice.
7. >Relation to sparse tensor compilers
These compilers efficiently handle unstructured or simple block/tile sparsity, but Kronecker-sparse matrices feature a block-diagonal pattern with $b\times c$ sub-blocks further refined by $d\times d$ identity matrices. Capturing such nested sparsity would require combining an outer block-sparse format with an inner identity layout—something not supported off-the-shelf. Moreover, our Algorithm 2 leads to a tiling strategy with *non-contiguous* tiles along certain axes to reduce memory operations, a design choice that these compilers cannot accommodate.
8. >On adding results to show how the sparsity level affects speedups
Thanks for the suggestion; the revision will include a graph showing how the speedup increases with the sparsity level (percentage of zeros, ranging from 0 to 99% in the benchmark). The revision will also explain that the 12% of the 600+ patterns where the baselines still outperform the kernel correspond to cases with a high density of nonzeros or a small value of the proposed heuristic, i.e., a small value of (b+c)/bc.
# References
[1] TorchSparse: Efficient Point Cloud Inference Engine. Tang et al., 2022.
[2] TorchSparse++: Efficient Point Cloud Engine. Tang et al., CVPR 2023.
[3] SpConv. Yan, 2022.
[4] SECOND: Sparsely Embedded Convolutional Detection. Yan et al., 2018.
[5] github.com/NVIDIA/cutlass/blob/main/media/docs/efficient_gemm.md | Summary: The paper "Fast inference with Kronecker-sparse matrices" focuses on optimizing matrix multiplication algorithms for Kronecker-sparse matrices, which are used in neural networks to reduce parameters while maintaining accuracy. The main contributions include:
- Benchmarking and Optimization: The authors benchmark existing GPU algorithms for multiplying Kronecker-sparse matrices and identify that up to 50% of runtime is spent on memory rewriting operations. They propose a new tiling strategy implemented in a CUDA kernel to reduce these memory transfers.
- New CUDA Kernel: The kernel achieves a median speedup of ×1.4 and reduces energy consumption by 15% compared to baseline implementations.
- Broader Impact: The new kernel accelerates the inference of neural networks, such as Vision Transformers (ViTs), by replacing dense layers with Kronecker-sparse matrices. This results in significant speedups in fully-connected layers.
Update after rebuttal:
I would like to sincerely thank the authors for taking the time to address all the issues I raised. However, as I pointed out in the initial review, the design and evaluation are limited to a single platform and software stack. Although the authors stated that they would provide an OpenCL version, it is hard for reviewers to evaluate it without seeing the related implementation and experiments. So, in its current version, the paper may not meet the acceptance threshold for ICML.
Claims And Evidence: The submission provides extensive evidence to support its claims, primarily through benchmarks and theoretical analyses. However, some aspects could be scrutinized for clarity and robustness:
Benchmarking Methodology: The paper presents a comprehensive benchmarking framework that compares various implementations of Kronecker-sparse matrix multiplication. The evidence is convincing, as it includes multiple scenarios and configurations, such as different memory layouts and precision levels (float and half-precision). However, the choice of specific hardware (NVIDIA GPUs) might limit the generalizability to other architectures.
Hardware Dependency: The performance benefits might not generalize equally across different hardware platforms.
Applicability to Other Architectures: The focus on ViTs leaves room for further research on how well the approach works with other neural network architectures.
Robustness Across Different Input Sizes: While the benchmark covers a range of sparsity patterns, the performance with very large or very small matrices might require additional validation.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper on "Fast inference with Kronecker-sparse matrices" appear well-suited for the problem at hand, which is optimizing matrix multiplication algorithms for Kronecker-sparse matrices in neural networks.
The benchmarks are conducted on NVIDIA GPUs. Extending the evaluation to other hardware platforms, such as AMD GPUs or CPUs, could provide a more comprehensive understanding of the methods' applicability.
Theoretical Claims: Formal Proof for Algorithm Equivalence: While the paper explains the equivalence between Algorithm 1 and Algorithm 2, a formal proof might be beneficial for readers seeking rigorous mathematical validation.
Robustness of the Heuristic: The heuristic for efficient patterns is empirically validated but might benefit from further theoretical analysis to ensure its applicability across different scenarios or hardware platforms.
Experimental Designs Or Analyses: Hardware Dependency: While the paper focuses on NVIDIA GPUs, extending the benchmark to other hardware platforms could provide broader insights into the applicability of the proposed methods.
Generalizability Across Architectures: The paper primarily focuses on Vision Transformers. Investigating how well the new kernel performs with other neural network architectures (e.g., CNNs or RNNs) could enhance its utility.
Statistical Analysis: The benchmark results are presented using medians and interquartile ranges. While this provides a good overview, additional statistical analysis (e.g., hypothesis testing) might further validate the significance of the observed improvements.
Energy Consumption Measurements: The energy measurements are conducted on a different GPU (V100) than the time benchmarks (A100). While this is noted, ensuring consistency across all measurements could strengthen the conclusions regarding energy efficiency.
Supplementary Material: Yes. I have reviewed the supplementary materials that were included with the main text of the paper.
Relation To Broader Scientific Literature: The paper builds on the concept of butterfly matrices, which are structured matrices that can be expressed as products of sparse factors with specific sparsity patterns, often described by Kronecker products. Butterfly matrices have been used to accelerate linear transforms like the Discrete Fourier Transform (DFT) and the Hadamard Transform.
Prior work has shown that replacing dense matrices with sparse or structured matrices can improve neural network efficiency. For example, Dao et al. demonstrated that using butterfly matrices can speed up neural network training.
The paper introduces a new tiling strategy for matrix multiplication with Kronecker-sparse matrices, implemented in a CUDA kernel. This approach reduces memory transfers between different levels of GPU memory, leading to improved time and energy efficiency.
Similar heuristics have been used in other contexts to optimize sparse matrix operations, but the specific application to Kronecker-sparse matrices is novel and contributes to the broader literature on efficient neural network design.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Limited Hardware Scope: The paper focuses primarily on NVIDIA GPUs, which might limit the generalizability of the results to other hardware platforms like AMD GPUs or CPUs.
Originality of Heuristic: While the heuristic for efficient Kronecker-sparsity patterns is useful, it is based on a relatively straightforward analysis of memory operations. Further theoretical justification or exploration of its applicability beyond the current context could enhance its originality.
Clarity in Some Technical Details: Some sections, such as the explanation of perfect shuffle permutations, might be challenging for readers without a strong background in linear algebra. Additional explanations or references could improve clarity for a broader audience.
Broader Applicability: The paper primarily focuses on Vision Transformers. Exploring how the new kernel performs with other neural network architectures could further demonstrate its significance and versatility.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. How do you envision extending the new CUDA kernel to work efficiently on other hardware platforms, such as AMD GPUs or CPUs? Are there any specific challenges or opportunities you foresee in this process?
2. The heuristic based on the ratio (b+c)/bc is useful for identifying efficient Kronecker-sparsity patterns. Could you elaborate on how this heuristic might be refined or extended to accommodate different types of sparse matrices or computational contexts?
3. The paper highlights the impact of memory layout (batch-size-first vs. batch-size-last) on performance. How do you think this might influence the design of other neural network operations, and are there opportunities for further optimization in this area?
4. While the paper demonstrates significant speedups in Vision Transformers, what potential exists for applying these techniques to other neural network architectures, such as CNNs or RNNs? Are there specific challenges or opportunities in these contexts?
5. The new kernel not only improves time efficiency but also reduces energy consumption. Could you discuss how these energy savings might be further optimized or generalized across different hardware platforms?
6. How do you see Kronecker-sparse matrices comparing to other sparsity techniques, such as unstructured sparsity or other forms of structured sparsity? Are there scenarios where one might be preferred over the others?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your review.
# Regarding your questions
1. > Extending the kernel to other hardware / Limited Hardware Scope
We translated the CUDA kernel to OpenCL, so the kernel can now be used on other hardware such as AMD GPUs or CPUs. The CUDA and OpenCL code has also been integrated into an open-source Python package available online. The revision will mention this and provide links to the (non-anonymous) repositories.
2. > Extending the heuristic beyond Kronecker-sparsity
The heuristic is specifically tailored to Kronecker-sparse matrices with patterns (a, b, c, d), in order to predict the improvement in terms of memory transfer brought by the proposed kernel, compared to PyTorch baselines such as bmm. So there is no general and straightforward way to adapt it to other forms of sparsity and sparse multiplication algorithms. Nevertheless, counting the number of memory versus algebraic operations in a given sparse matrix multiplication for a certain sparsity pattern, and putting that in relation to its efficiency on a given hardware, is a general principle that might help in broader scenarios beyond Kronecker-sparse factors.
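To illustrate this counting principle, a toy sketch of the paper's $(b+c)/bc$ heuristic (the rationale in the comment is our reading, not a claim from the paper):

```python
def heuristic(b, c):
    """The paper's efficiency heuristic for a Kronecker-sparse pattern
    (a, b, c, d): smaller (b+c)/(b*c) predicts larger kernel speedups."""
    return (b + c) / (b * c)

# Plausible rationale (our reading): per batch row, input/output traffic
# scales like a*(b+c)*d coefficients versus a*b*c*d multiply-adds, so
# (b+c)/(b*c) approximates the memory-to-compute ratio of the product.
for b, c in [(2, 2), (4, 4), (48, 48)]:  # illustrative values, not the benchmark's
    print(f"b={b:2d}, c={c:2d} -> (b+c)/bc = {heuristic(b, c):.3f}")
```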
3. > Influence of batch-size-last versus batch-size-first on other neural network operations
We agree that this is an interesting open question arising from the paper's results. For pointwise operations (e.g., activation functions like ReLU), changing the batch position is not expected to have any impact (and we indeed observed that internally). Investigating the impact of the batch position on other operations would require carefully rewriting highly optimised kernels to convert them to batch-size-last, which demands time and expertise. Such a study falls outside the scope of what we claim to study (Kronecker linear layers) and is left to future work.
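To make the batch-position question concrete, a small NumPy illustration (ours, not the authors' kernels): a pointwise op like ReLU is layout-invariant, while a linear layer computes the same values with different memory strides depending on where the batch axis sits:

```python
import numpy as np

batch, features = 4, 6
w = np.arange(features * features, dtype=float).reshape(features, features)
x_first = np.arange(batch * features, dtype=float).reshape(batch, features)  # batch-size-first
x_last = np.ascontiguousarray(x_first.T)                                     # batch-size-last

# Pointwise op: identical result; the layout only changes memory strides.
assert np.array_equal(np.maximum(x_first, 0).T, np.maximum(x_last, 0))

# Linear layer: same values either way, but the contiguous axis differs,
# which is what makes kernel memory-access patterns layout-dependent.
y_first = x_first @ w.T  # (batch, features)
y_last = w @ x_last      # (features, batch)
assert np.allclose(y_first.T, y_last)
print(x_first.strides, x_last.strides)  # strides over the batch axis differ
```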
4. > Measuring speedups in other architectures, beyond Vision Transformers
There is indeed an opportunity to extend the observations made in this paper to other architectures such as LLMs or RNNs, as they contain large linear layers where introducing Kronecker-sparsity is expected to speedup inference. The case of CNNs is more challenging. There are at least two ways to inject Kronecker sparsity in convolutional layers, none of which we found convincing enough to be considered in this paper. The first option—replacing the convolutional kernel K with a Kronecker-sparse matrix—is limited because of the small size of typical kernels (e.g., 7×7). The second option—recasting convolutions as matrix products with a butterfly-structured weight matrix as in [1]—requires in practice costly folding/unfolding operations on the inputs and outputs, making it impractical in our view.
5. > Generalisation of the speedups to other hardware
The claimed scope of the paper is to focus on NVIDIA GPUs, as they are the most common hardware used in AI clusters. Exploring the opportunities and challenges of benchmarking and optimising the kernel on other hardware is left open. Since the paper comes with an OpenCL version of the kernel, people with specific hardware needs can now easily include it in their benchmarks.
6. > Kronecker-sparsity versus other forms of sparsity
Structured sparsity is clearly to be favored over unstructured sparsity for time and energy performance. Indeed, knowing the structure of the support in advance helps a lot in designing efficient algorithms. Structured sparsity also has better-proven theoretical properties (e.g., the functional space is closed and there always exists a minimizer of the loss for structured sparse networks, while this is not the case for unstructured ones, which might cause the solution to diverge; see Theorem 4.2 in [2]). More specific comparisons would depend on the task, hardware, model and sparsity structure at hand.
# Regarding the other points you mentioned
7. > Formal proof of the algorithm equivalence, background on perfect shuffle permutation matrices
Thank you for the suggestions, the revision will include both.
8. > Energy measurements on V100 while time measurement on A100
The pyJoules package used in the paper to measure energy consumption is unfortunately not yet compatible with A100 GPUs.
9. > On the robustness of the benchmark for different matrix sizes
We believe that the problem of robustness that you mention does not arise here, because the benchmark already covers all relevant matrix sizes encountered in practice. Indeed, it covers matrix sizes from 102×102 to 131072×131072 (6 orders of magnitude), while sizes typically used in transformers range from 500×500 to 15000×15000, and go up to 53248 in the largest models like Llama 3-405B [3].
# References
[1] Deformable Butterfly: A Highly Structured and Sparse Linear Transform. Lin et al., NeurIPS 2022.
[2] Does a Sparse ReLU Network Training Problem Always Admit an Optimum? Le et al., NeurIPS 2023.
[3] The Llama 3 Herd of Models. 2024.
---
Rebuttal Comment 1.1:
Comment: I would like to sincerely thank the authors for taking the time to address all the issues I raised. However, as I pointed out in the initial review, the design and evaluation are limited to a single platform and software stack. Although the authors stated that they would provide an OpenCL version, it is hard for reviewers to evaluate it without seeing the related implementation and experiments. So, in its current version, the paper may not meet the acceptance threshold for ICML.
Improving Diversity in Language Models: When Temperature Fails, Change the Loss | Accept (poster) | Summary: The paper investigates the impact of temperature scaling on the precision–recall (P&R) trade-off in language models. The authors provide a theoretical analysis showing that while lowering the temperature enhances precision, increasing it does not necessarily improve recall. They propose new loss functions (e.g., TruncR, c-Div, and λ-PR) to train models that emphasize recall, thereby allowing for a more balanced diversity–quality trade-off when temperature scaling is applied. Experimental evaluations on tasks such as code generation, integer multiplication, and writing prompts are used to validate the theoretical insights.
Claims And Evidence: - The authors provide a theoretical investigation into how temperature affects P&R, with analyses that explain the observed limitations of temperature scaling. However, the artificial-case analysis is limited, as only a small set of cases is considered; it cannot give general insight into the trends of P&R when varying temperature in general settings.
Methods And Evaluation Criteria: Strength
- Viewing the quality-diversity trade-off using precision and recall is interesting (however, it’s not new). The alternative loss functions to train models for higher recall seem to be effective. By shifting the focus from decoding adjustments to training objectives, the authors provide a new angle on tackling the quality–diversity trade-off.
Weakness
- While the claim in the main paper is general (change the loss function when the temperature sampling is based), there is a lack of comparison with existing decoding-based works that promote diversity along with quality. This is essential to understand the effectiveness compared to simpler methods (no need to train with a different loss function). One example is Chang et al. KL-Divergence Guided Temperature Sampling.
- The effects of changing the loss function on other tasks are not investigated. For example, does changing the loss function affect the generalization/in-context learning ability of LMs? This affects the utility of the proposed methods for general use.
Theoretical Claims: I have not closely verified the proofs.
Experimental Designs Or Analyses: - The experiment setup and evaluation criteria have some concerns (see comments).
Supplementary Material: No
Relation To Broader Scientific Literature: Most existing work on promoting diversity or balancing the diversity-quality trade-off in LLMs operates in the decoding phase. This paper brings new insight, claiming that we should adjust the loss function to make temperature scaling effective at decoding time.
Essential References Not Discussed: - Chang et al. KL-Divergence Guided Temperature Sampling.
- Lu et al. Diver: Large Language Model Decoding with Span-Level Mutual Information Verification
- Zhang et al. Trading Off Diversity and Quality in Natural Language Generation
- Chung et al. Increasing Diversity While Maintaining Accuracy: Text Data Generation with Large Language Models and Human Interventions
Other Strengths And Weaknesses: See comments and questions
Other Comments Or Suggestions: - For the completeness of the paper, I suggest discussing the detailed computation of the Precision and Recall metrics for the WritingPrompts experiment in the appendix.
- MAUVE and Average Pairwise Cosine Similarity are more common metrics to evaluate the quality and diversity of LLM responses. It would be better if the authors can evaluate the proposed methods on those metrics to give diverse insights.
Questions For Authors: - Sentences 243-244 are vague to me and do not give enough context for the drawback claim that follows. What are the specific trade-offs (about P&R only, or about more general measures)? Minimizing alternative f-divergences between what? A citation is needed for this sentence as well.
- Why does pass@100 - pass@1 measure the diversity of generated samples? If all generated samples are almost the same, differing only in a few tokens, pass@100 could be high while diversity is low.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the detailed review and your feedback.
## Decoding-based methods
In our paper, we deliberately focused on the temperature parameter (commonly referred to as the "diversity" parameter) because its theoretical analysis already presented some complexity. For this reason, we chose to limit the scope of this study to ensure a focused exploration of the temperature parameter.
However, we agree that comparing empirically with other decoding-based methods could bring other insights.
As suggested, we investigated the effect of the following methods on the WritingPrompts dataset:
- top-p (or nucleus) sampling
- KL-Divergence Guided Temperature Sampling
We re-implemented the KL-Guided sampling from scratch, since the original implementation was not compatible with our PyTorch models. We used the same range for $\sigma$ as in the original paper.
| | P | R |
| --- | ----- | ----- |
| NLL | 0.848 | 0.086 |
#### Top-p
| p | Precision | Recall |
| --- | --------- | ------ |
| 0.1 | 0.997 | 0.001 |
| 0.5 | 0.996 | 0.001 |
| 0.8 | 0.886 | 0.033 |
| 0.9 | 0.805 | 0.058 |
#### KL-Guided
| $\sigma$ | P | R |
| -------- | ----- | ----- |
| 1.0 | 0.757 | 0.061 |
| 3.0 | 0.800 | 0.068 |
| 5.0 | 0.831 | 0.069 |
| 10.0 | 0.844 | 0.086 |
(As $\sigma$ increases, KL-Guided approaches standard decoding.)
We can conclude that Top-p increases Precision at the cost of Recall. Since Top-p removes the least probable tokens, a higher Precision is intuitively expected.
KL-Guided seems to globally decrease both P&R. We believe that an explanation is that this method was mainly designed for conditional text generation, such as summarization and question answering, which have their own specificities compared to our generation task.
Overall, we believe that these results provide more grounding on the necessity of using Recall-oriented losses as opposed to simple decoding-based methods.
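For readers less familiar with these two decoding knobs, a minimal NumPy sketch (ours, not the implementation benchmarked above) of temperature scaling and top-p filtering of a next-token distribution:

```python
import numpy as np

def temperature(logits, t):
    """Tempered distribution: p_t(i) proportional to exp(logits[i] / t)."""
    z = logits / t
    z -= z.max()  # numerical stability
    p = np.exp(z)
    return p / p.sum()

def top_p(probs, p):
    """Nucleus sampling: keep the smallest set of tokens whose cumulative
    probability reaches p, zero out the rest, then renormalize."""
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cum, p) + 1]
    out = np.zeros_like(probs)
    out[keep] = probs[keep]
    return out / out.sum()

logits = np.array([2.0, 1.0, 0.1, -1.0])
print(temperature(logits, 0.5))              # sharper than t=1: precision-oriented
print(top_p(temperature(logits, 1.0), 0.8))  # tail tokens removed entirely
```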
## General models
To answer your question, we trained a general-purpose instruction model on the Alpaca dataset using our proposed losses, $\lambda$-PR and $c$-Div, to demonstrate their effectiveness on broader instruction tuning tasks. We then evaluated this model on both the WritingPrompts and MathQA-Python datasets.
For MathQA-Python, which requires generating Python code from natural language questions, we used 3 in-context examples to prompt the model and applied the same evaluation as for CodeContest. We chose MathQA over CodeContest because it offers more training data and is better suited to the capabilities of our models.
### Alpaca MathQA-Python
| Method | P | R |
| -- | ----- | ---- |
| NLL | 0.088 | 0.39 |
| c-Div ($c = 1.4$) | 0.084 | 0.43 |
| $\lambda$-PR ($\lambda = 0.75, \gamma = 10^{-5}$) | 0.083 | 0.46 |
On these results, we see that overall, our losses do not significantly impact Precision, meaning that generalization/in-context ability is not affected. However, we observe a significant increase in Recall, which is consistent with our previous findings.
### Alpaca WritingPrompts
| Method | P | R |
| -- | --- | ----- |
| NLL | 0.83 | 0.040 |
| c-Div ($c = 1.4$) | 0.82 | 0.12 |
| $\lambda$-PR ($\lambda = 0.75, \gamma = 10^{-5}$) | 0.51 | 0.21 |
On the WritingPrompts dataset, we observe that general models trained with our losses on the Alpaca dataset exhibit a similar pattern to the specialized models. We see a significant increase in Recall, sometimes at the expense of Precision.
This suggests that our losses can consistently improve Recall even on more general models and tasks. We hope these additional experiments provide a more comprehensive view of the effectiveness of our proposed losses.
### Suggestions
- We will add more details about P&R in the appendix, this should help clarify the evaluation process.
- We computed MAUVE metrics for the WritingPrompts dataset. We will add it to Table 1.
| Method | MAUVE |
| -- | ----- |
| Trunc | 0.074 |
| GOLD | 0.005 |
| Tailr | 0.087 |
| c-Div | 0.068 |
| Trunc-R | 0.073 |
| λ-PR | 0.096 |
| NLL | 0.104 |
- For 243–244, we mean that image generation can be optimized for specific PR tradeoffs using f-divergences, as shown by Verine et al. (2023). However, these methods rely on assumptions that don’t hold for text due to its causal nature. We’ll clarify this in the final version.
- In code generation, pass@100–pass@1 reflects the gain from sampling multiple candidates over one. In your example, the metric is relevant: if samples differ only slightly (e.g., variable names), structure diversity is low and pass@1 is likely high. We’ll clarify this with more details on the evaluation process.
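For completeness, the gain-from-sampling quantity above builds on the standard unbiased pass@k estimator of Chen et al. (2021); a generic sketch (not the authors' evaluation code):

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k: probability that at least one of k samples drawn
    without replacement from n generations (c of them correct) passes."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Toy problem with n=100 generations, c=5 of them correct:
p1, p100 = pass_at_k(100, 5, 1), pass_at_k(100, 5, 100)
print(p1, p100, p100 - p1)  # the gap is the "gain from sampling" used as Recall
```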
---
Rebuttal Comment 1.1:
Comment: Thanks the authors for their very detailed responses and additional experiments. Since most of my questions were answered in the rebuttal, I updated the score accordingly. | Summary: This paper studies how recall and precision can be effectively traded off in language models. First, they study formal definitions of precision and recall in simplified settings and show cases where decreasing the temperature improves precision at the cost of recall, but increasing the temperature hurts both precision and recall. Motivated by the fact that it seems easier to improve precision than recall via temperature adjustment, they then propose recall-oriented loss functions. Empirically, (1) they confirm that it is difficult to improve recall by increasing temperature, and (2) they show that fine-tuning with recall-oriented loss functions and then decreasing temperature leads to a better precision-recall tradeoff than starting with a normally trained model and increasing temperature.
Claims And Evidence: I really like the direction that the paper is going, as the claims are interesting and the problem is important. However, the experiments presented in the paper do not sufficiently support the claims, as discussed below. One other minor limitation of the paper is that I find the writing somewhat confusing (will provide concrete suggestions in a later section).
- The first main claim is that lowering temperature improves precision at the cost of recall, but increasing temperature typically harms both after a certain point. This claim is supported by experiments.
- The second main claim is that fine-tuning with recall-oriented loss and adjusting temperature attains a better precision-recall curve than doing normal NLL training and adjusting temperature. This claim is weakly supported by a single experiment on WritingPrompts. I find this experiment insufficient because:
- (a) Because there is no "downstream" measure of precision and recall, they instead rely on an automatic measure based on a previously proposed method involving embedding the texts and measuring precision and recall in the embedded space. This evaluation setup provides useful evidence, but automatic metrics have been found to be flawed (see, e.g., [Gehrmann, Clark, and Sellam 2023](https://arxiv.org/abs/2202.06935)). Therefore, I think this claim needs to be evaluated with more metrics. One good one would be looking at pass@k on CodeContests, which the authors use as a dataset for other parts of the paper.
- (b) I also think this claim needs to be evaluated on more than one model and more than one dataset.
- Minor point: section 6.2 claims that it "empirically confirms the theoretical insights from Theorem 4.2," but as far as I can tell it only measures support sparsity at various cutoffs, whereas Theorem 4.2 makes specific predictions about upper bounds for precision and recall measures. So the authors' claim that their experiments confirm their theory does not seem well-supported by evidence.
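As background on the embedding-based metric discussed in (a): such precision/recall scores are commonly estimated with k-nearest-neighbor manifolds in the style of Kynkäänniemi et al. (2019). A self-contained sketch under that assumption (the paper's exact metric follows Le Bronnec et al., 2024, and may differ in details):

```python
import numpy as np

def knn_radii(x, k):
    """Distance from each point to its k-th nearest neighbor (excluding itself)."""
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    return np.sort(d, axis=1)[:, k]  # column 0 is the zero self-distance

def coverage(queries, support, radii):
    """Fraction of queries falling inside at least one support kNN-ball."""
    d = np.linalg.norm(queries[:, None] - support[None, :], axis=-1)
    return float((d <= radii).any(axis=1).mean())

rng = np.random.default_rng(0)
real = rng.normal(size=(200, 8))
fake = rng.normal(size=(200, 8)) * 0.5  # toy under-dispersed "generator"
precision = coverage(fake, real, knn_radii(real, k=3))
recall = coverage(real, fake, knn_radii(fake, k=3))
print(precision, recall)  # under-dispersion tends to trade recall for precision
```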
Methods And Evaluation Criteria: The methods and datasets make sense for the problem.
Theoretical Claims: I read the theoretical claims in the main text and they seem reasonable. I did not check the proofs.
Experimental Designs Or Analyses: - The paper provides some description of the experiments but does not describe them in enough detail for me to judge their soundness with confidence. Nonetheless, the experiments seem sound at first glance. Some details that I could not find:
- How hyperparameters were chosen
- Which model is used to produce embeddings for the automatic P&R metrics
- Whether there was a train-test-val split
Supplementary Material: I reviewed the experimental details.
Relation To Broader Scientific Literature: - The first contribution, studying how temperature affects precision and recall, seems well-studied. Nonetheless, the paper provides useful further experiments for this question.
- The second contribution, training LMs for recall to attain a better precision-recall tradeoff, seems new and valuable.
- Theory in section 4 (characterizing P&R tradeoffs in simplified settings): I found this analysis interesting but would characterize it more as a supplement and motivation for the other claims in the paper, rather than a standalone theoretical contribution.
- Theory in section 5 (characterizing the proposed recall-oriented losses): I see this theory as supplementing the main contribution of proposing recall-oriented training.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: Some writing suggestions:
- Overall, I think the paper could be streamlined in terms of making it clearer how the theory motivates the proposed solution, as well as omitting tangential results or making it clearer that they are tangential. For example, I did not find theorem 4.2 to be particularly compelling because as far as I understand, the bound is only useful for large $t$, but in practice people do not take temperature to be very far from 1.
- The second part of Section 4 is framed as using an artificial setting to make the prediction that very high temperature decreases both P&R. However, it's already well-known empirically that setting temperature very high is not effective. Nonetheless, I found the setting interesting because it could provide some intuition on the factors at play. I would suggest spending more time talking about the intuition behind the model, like how different factors affect the effect of temperature, rather than centering the section around the specific claim that high temperatures are harmful.
- I found it confusing to refer to the previous methods as "baselines" because they are not solving the same problem as this paper. I think it would be clearer to just refer to these methods as the precision-focused versions of the recall-focused losses in this work.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank Reviewer 2owJ for the detailed review and suggestions. We are glad you appreciate the paper.
Following your suggestion, as well as those from other reviewers, we conducted additional experiments:
**New dataset: MathQA-Python.**
We trained a model on MathQA-Python and used the same evaluation as for CodeContest: Precision as pass@1, and Recall as the gain obtained from sampling (pass@100 - pass@1). We chose MathQA-Python over CodeContest because it offers more training data and is better suited to the capabilities of our models.
**New model: Llama3.2-3b WritingPrompts.**
We trained an additional model on the WritingPrompts dataset.
**General-purpose instruction model.**
We trained a general-purpose instruction model on the Alpaca dataset using our proposed losses, $\lambda$-PR and $c$-Div, to demonstrate their effectiveness on broader instruction tuning tasks and more expressive models. We then evaluated this model on both the WritingPrompts and MathQA-Python datasets.
We report the results below and will update Table 1 in the final version. While time constraints limited the scope, we believe these additional experiments provide a strong indication of the effectiveness of our losses on more tasks, models and metrics.
### Olmo1b MathQA-Python
| Method | P | R |
| - | - | - |
| NLL | 0.42 | 0.36 |
| c-Div (c=1.4) | 0.30 | 0.46 |
| $\lambda$-PR ($\lambda = 0.1, \gamma = 10^{-7}$) | 0.06 | 0.48 |
| TruncR ($\Delta = 0.1$) | 0.29 | 0.43 |
An interesting observation is that $\lambda$-PR impacts Precision more severely than the other losses, but ultimately achieves a high Recall. This suggests that the resulting model offers better coverage of the target distribution.
### Llama3.2-3b WritingPrompts
| Method | P | R |
| - | - | - |
| NLL (MLE) | 0.77 | 0.08 |
| c-Div ($c = 1.3$) | 0.72 | 0.17 |
| $\lambda$-PR ($\lambda = 0.9, \gamma = 10^{-5}$) | 0.59 | 0.19 |
Note that for this larger model (compared to Olmo1b), we did not benchmark the TruncR loss due to its incompatibility with distributed training in the current implementation.
We also observed that the model tuned with $\lambda$-PR was much more sensitive to the sampling temperature. We used $t=0.5$, as higher values led to some degeneracies. However, we could not investigate this further.
### Alpaca WritingPrompts
| Method | P | R |
| - | ---- | ---- |
| NLL | 0.83 | 0.04 |
| c-Div ($c = 1.4$) | 0.82 | 0.12 |
| $\lambda$-PR ($\lambda = 0.5, \gamma = 10^{-7}$) | 0.57 | 0.26 |
### Alpaca MathQA-Python
| Method | P | R |
| -- | ---- | ---- |
| NLL | 0.09 | 0.39 |
| c-Div ($c = 1.4$) | 0.08 | 0.43 |
| $\lambda$-PR ($\lambda = 0.1, \gamma = 10^{-5}$) | 0.08 | 0.42 |
All Alpaca experiments used a temperature of $t=0.5$ to avoid degeneracies.
### Analysis
We observe the same pattern as in the initial experiments. This confirms that our losses can achieve higher Recall than NLL, which we believe strengthens the claims and findings presented in the paper.
### Section 6.2
- For Section 6.2, we will reformulate the text to clarify that the experiments are designed to verify the assumptions behind the theoretical analysis, not its results.
### Experiments
- We used hyperparameters very similar to those described in the original Olmo paper (we only used a smaller learning rate of 1e-6 instead of 2e-6). To ensure comparability, we used the same optimization parameters for all losses.
- For the PR metrics, we used the exact same setup as in the original paper (Le Bronnec et al., 2024), i.e., GPT2-large as the embedding model.
- We trained all models under the same conditions (same number of epochs, same batch size, same optimizer, etc.) on the training set and reported the metrics on the validation set.
We will incorporate these details in the appendix of the paper.
### Suggestions
- We will gladly incorporate the suggested reformulations; they will indeed improve the overall flow of the paper. We could indeed move some parts to the appendix and spend more time discussing the idea behind the model.
As a side note, in Theorem 4.2, the bounds rely on *almost* no assumptions on the target distribution or the model, and the bound is not necessarily useful for large $t$, especially if $Z$ is low. Conversely, when the model is uncertain (i.e., $Z$ is low), the bound may still be useful even for $t < 1$. But we agree that this is more a general theoretical result than a practical one.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal!
The additional experiments look promising but only validate one half of the claim, which is that recall-oriented training improves recall at the cost of precision. But the more important claim to me is that lowering the temperature then leads to a better precision-recall tradeoff compared to adjusting the temperature of NLL. Do you have experiments showing this claim on the additional settings?
---
Reply to Comment 1.1.1:
Comment: Thank you for your answer! We acknowledge that our earlier reference to a "better tradeoff" should be clarified in light of these new experiments. Specifically, our new experiments illustrate two distinct improvements achieved by our proposed losses compared to standard NLL:
- At the highest Recall achievable by NLL (optimized via temperature tuning), our method consistently attains superior Recall at the same Precision level. This improvement is demonstrated in our experiments on MathQA with Olmo1B in the table below, and is also supported by the initial experiments in Figure 5 comparing the 70B model with the 70B-RLEF variant (although the RLEF variant itself was not trained using our proposed losses, RLEF is a Precision-oriented model). This supports our assertion that for use cases prioritizing Recall, our losses provide an improved Precision-Recall tradeoff.
- We identify scenarios in which our approach consistently achieves higher Recall across the entire spectrum of Precision levels attainable by NLL. This is demonstrated in the new experimental results with Llama 8B trained on Alpaca, summarized in the table below (and supported in the paper by Figs. 7 and 8).
We will add these new results in the next revision.
## Olmo MathQA-Python
**Highest Recall of NLL, R=0.47:**
| Method | P | R |
| ------------------------------------------------- | ---- | ----- |
| NLL (temperature=1.6) | 0.20 | 0.47 |
| c-Div ($c = 1.4$, temperature=1.0) | 0.21 | 0.50 |
## Alpaca MathQA-Python
**Highest Recall of NLL, R=0.49:**
| Method | P | R |
| ------------------------------------------------- | ---- | ----- |
| NLL (temperature=1.0) | 0.067 | 0.49 |
| c-Div ($c = 1.4$, temperature=0.8) | 0.087 | 0.49 |
**Highest Precision of NLL, P=0.10:**
| Method | P | R |
| ------------------------------------------------- | ---- | ----- |
| NLL (temperature=0.1) | 0.10 | 0.10 |
| c-Div ($c = 1.4$, temperature=0.1) | 0.10 | 0.14 |
---
Summary: The paper provides a detailed analysis of the relationship between temperature, precision, and recall, offering insights into why lowering the temperature improves quality (precision), while increasing the temperature usually does not enhance coverage (recall). The paper primarily addresses two key questions: the impact of temperature adjustment on the precision-recall trade-off in language models, and how to train models to improve recall. By proposing recall-oriented loss functions, it presents a method to achieve a better P&R trade-off through temperature scaling, and validates this approach experimentally. The main contributions of the paper are as follows:
1. Analysis of the impact of temperature scaling on the P&R trade-off
2. Proposal of recall-oriented loss functions
3. Experimental validation of theoretical findings
Claims And Evidence: The author’s experiments and theoretical demonstrations extensively validate their claims:
1. The analysis of the impact of temperature on P&R is thorough and detailed, and the improvements to the loss function for the P&R trade-off have achieved the stated effects.
2. The experimental section sufficiently validates the relevant theoretical findings through three different scenarios.
Methods And Evaluation Criteria: 1. The authors' method analyzes the impact of temperature on P&R, and the design is reasonable.
2. The author compares a sufficient number of baselines.
Theoretical Claims: The theoretical proof results in the paper are clear and well-presented, with complete and detailed proofs provided, although I did not check all the details.
Experimental Designs Or Analyses: The experimental design by the authors is adequate, but some additional experiments might be necessary:
1. The authors provide three scenarios to answer the three proposed questions, yet in the subsequent analysis, not all of these questions seem to be fully addressed by the three given scenarios. Therefore, additional experiments may be needed to ensure that the issues claimed in the three scenarios are adequately answered (e.g., in Section 6.4, only the WritingPrompts dataset task is discussed).
2. The analysis of the experiments in Section 6 is somewhat confusing, and the appearance of some figures is not clearly related to the problems they aim to address.
Supplementary Material: I checked the appendix, which includes proofs and more experimental details.
Relation To Broader Scientific Literature: I haven't found it yet.
Essential References Not Discussed: I haven't found it yet.
Other Strengths And Weaknesses: I haven't found it yet.
Other Comments Or Suggestions: Some figures appear in the main text but are never referenced in the text.
Questions For Authors: Please refer to the above parts.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We thank Reviewer DUE2 for the review and constructive feedback. We have added further experiments with more datasets and models; please refer to our response to Reviewer 2owJ for details. While the first question has only one relevant scenario, we believe the second and third questions require multiple scenarios to answer meaningfully.
**The authors provide three scenarios to answer the three proposed questions, yet in the subsequent analysis, not all of these questions seem to be fully addressed by the three given scenarios. Therefore, additional experiments may be needed to ensure that the issues claimed in the three scenarios are adequately answered (e.g., in Section 6.4, only the WritingPrompts dataset task is discussed).**
Following your suggestion, as well as those from other reviewers, we conducted additional experiments within the limited available time to further investigate the effect of our Recall-oriented losses. (Similar response as to Reviewer 2owJ)
The new experiments include:
- **New dataset: MathQA-Python.**
We trained a model on the MathQA-Python dataset, using the same proxies for evaluating Precision and Recall as in CodeContest.
- **New model: Llama3.2-3b WritingPrompts.**
We trained an additional model on the WritingPrompts dataset.
- **General-purpose instruction model.**
We trained a general-purpose instruction model on the Alpaca dataset using our proposed losses, $\lambda$-PR and $c$-Div, to demonstrate their effectiveness on broader instruction tuning tasks. We then evaluated this model on both the WritingPrompts and MathQA-Python datasets.
We report the results in the tables below and will extend Table 1 accordingly in the final version of the paper.
### Olmo1b MathQA-Python
| Method | P | R |
| ------------------------------------------------ | ---- | ---- |
| NLL | 0.42 | 0.36 |
| c-Div (c=1.4) | 0.30 | 0.46 |
| $\lambda$-PR ($\lambda = 0.1, \gamma = 10^{-7}$) | 0.06 | 0.48 |
| TruncR ($\Delta = 0.1$) | 0.29 | 0.43 |
An interesting observation is that $\lambda$-PR impacts Precision more severely than the other losses, but ultimately achieves a high Recall. This suggests that the resulting model offers better coverage of the target distribution.
### Llama3.2-3b WritingPrompts
| Method | P | R |
| ------------------------------------------------ | ---- | ---- |
| NLL (MLE) | 0.77 | 0.08 |
| c-Div ($c = 1.3$) | 0.72 | 0.17 |
| $\lambda$-PR ($\lambda = 0.9, \gamma = 10^{-5}$) | 0.59 | 0.19 |
Note that for this model, which is larger than Olmo1b, we did not benchmark the TruncR loss, as the current implementation is not compatible with distributed training.
We also observed that the model tuned with $\lambda$-PR was much more sensitive to the sampling temperature. We used $t=0.5$, as higher values led to some degeneracies. However, due to limited time, we could not investigate this further.
### Alpaca WritingPrompts
| Method | P | R |
| ------------------------------------------------ | ---- | ---- |
| NLL | 0.83 | 0.04 |
| c-Div ($c = 1.4$) | 0.82 | 0.12 |
| $\lambda$-PR ($\lambda = 0.5, \gamma = 10^{-7}$) | 0.57 | 0.26 |
### Alpaca MathQA-Python
| Method | P | R |
| ------------------------------------------------ | ---- | ---- |
| NLL | 0.09 | 0.39 |
| c-Div ($c = 1.4$) | 0.08 | 0.43 |
| $\lambda$-PR ($\lambda = 0.1, \gamma = 10^{-5}$) | 0.08 | 0.42 |
We started from a pre-trained Llama3.1-8B model and used the same training setup as described in the paper. This yields a basic instruction-tuned model, capable of generalization (but still with limited capacity compared to SOTA models).
Note that for all experiments conducted on Alpaca, we used a temperature of $t=0.5$ to avoid degeneracies. As with previous experiments, we could not benchmark the TruncR loss due to its incompatibility with distributed training.
### Analysis
We observe the same pattern as in the initial experiments. This confirms that our losses can achieve higher Recall than NLL, which we believe strengthens the claims and findings presented in the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks to the author for the detailed response. I will maintain my rating.
---
Summary: Increasing diversity in language models requires careful tuning of decoding temperature. This paper shows that lowering temperature improves precision, but raising it often fails to enhance recall, and that effective tunability demands training models that focus on coverage. The paper analyzes two settings in which precision provably fails to improve and recall can even decrease. It then proposes a series of loss functions for fine-tuning LLMs that aim to improve recall, and reports results on three tasks for all considered models and proposed loss functions.
## update after rebuttal
I've updated my ratings reflecting my satisfaction with the rebuttal.
Claims And Evidence: The mentioned proofs and assumptions are correct. The evidence is clear and convincing and the claims aren't problematic. I do have certain questions on how certain quantities are computed, which I've deferred for the later section.
Methods And Evaluation Criteria: Need more datasets, such as MATH, AIME2024, MathQA-Python.
Theoretical Claims: The theoretical claims are correct. I've read through the appendix and examined the two artificial cases closely. I think assumptions are reasonable, and the two cases span the most practical settings. Overall I like the math aspect quite a lot.
Experimental Designs Or Analyses: The experimental aspect seems correct. However, the presentation for some of the graphics can be improved to make it widely appreciable by people with visual impairment.
I think more datasets can be used where it is often desirable to generate multiple solutions and then used in conjunction with a verifier (see -- Generative Verifiers: Reward Modeling as Next-Token Prediction). Such datasets may be -- MATH, AIME 2024.
Supplementary Material: Proofs in Appendix A
Relation To Broader Scientific Literature: Improving on PR is an important task, and increasing temperature is something people think can lead to nice and diverse solutions. Therefore, I think this paper is a good contribution to a variety of sub-community in LLM, including synthetic data generation to verification.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Following are my questions --
1. I am not fully able to understand how the support is computed in practice, or how M can be computed. Why can one upper-bound the cardinality of the support set by the geometric mean of $\max_{\ell}\mathcal{S}(x_{\ell})$? Shouldn't the cardinality be monotonic in $\ell$, and in that case, how does the max operation over $\ell$ avoid a trivial solution?
2. How is the top-p region estimated? The writing is not very clear, and I am not sure what is being done algorithmically. I'd appreciate an algorithmic block for every computation done in the paper.
3. For the toy example, it would be good to remind the readers that the actual PR is computed over the entire distribution till length L and not on the conditionals. I got very confused initially since I was thinking about conditional distribution PR, and it was clear only when I read the proof.
4. Do all mentioned conditions to have a strictly increasing recall have to hold simultaneously? Moreover, what happens when $\rho \approx 1$ , that is, Q is very close to P?
5. I am quite a bit confused about the TruncR loss function and its practical implementation. For starters, what is $\bar{Q}$? During training, I assume the model should initially have a lower $\delta$, but over time $\delta$ should increase, right?
6. Can the authors add a proof of Proposition 5.2? Below the same proposition, why is sampling from Q difficult? Moreover, if someone swaps P and Q, isn't it somewhat similar to PPO, assuming the log-likelihood under the true distribution is the reward model (which we can replace with a teacher model's likelihood function)? This makes me feel that there should be a natural baseline like this in the work (formulating and using RLHF to improve recall).
7. Equation 18 seems non-differentiable as it has a parameter inside an indicator function. This needs clarification and again an algorithmic block.
8. Why is $\lambda$-PR separately plotted (increasing along the y=x line) as another method to improve PR in Fig. 4? Isn't it the case that all the PR plots obtained by varying temperature are at a fixed $\lambda$? What is even the use case of a plot between $\alpha_{\lambda}$ and $\beta_{\lambda}$ if in practice one always varies $\lambda$ over the entire range $[0, \infty)$?
9. For Fig 6, what is the $\lambda$, otherwise if it is plotted for all lambda then it kind of conflates the trend with different hyperparameters or loss functions.
10. Which of the considered tasks have the sparsest distribution?
Other Comments Or Suggestions: N/A
Questions For Authors: See weakness section.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: We thank Reviewer pQ4a for the thoughtful feedback and careful review. We are grateful for the time and effort.
First, regarding the algorithmic questions raised by several reviewers: we will include detailed algorithm blocks for all computations in the appendix.
We address explicit questions below and will incorporate all clarity improvements in the next revision.
**Additional experiments**
Following your suggestion, we added experiments on the MathQA-Python dataset (details in response to reviewer DUE2). We also trained a general-purpose instruction model on the Alpaca dataset, and observed a similar trend as in our initial experiments. We hope these results further support our method's generalizability.
**Computing the support**
- In practice, the support size of $P$ is indeed intractable. However, we can still obtain a rough characterization of the sparsity of $P$ (introduced in Thm 4.2) by leveraging the sparsity of the token-level conditional distributions $P(\cdot \mid x_{<l})$. To this end, we approximate the reference $P$ using a strong model $Q_{\theta}$ (Llama3.1-8B in our experiments). For each sample and each token position $i$ in the solution, we compute the conditional distribution $Q_{\theta}(\cdot \mid x_{<i}) \in \mathbb{R}^V$, where $V$ is the vocabulary size. At each position, we then determine the smallest number of tokens $n_{\mathrm{top_p}}$ that account for a fraction $p \in \{0.9, 0.95, 0.99\}$ of the total probability mass. For each sample, we take the maximum of these $n_{\mathrm{top_p}}$ values across all positions, and finally, we compute the geometric mean of these maxima across the dataset to estimate the sparsity of $P$.
- We refer to this estimate as an "upper bound" since the true sparsity of $P$ is likely lower than what we obtain from the conditional distributions alone.
- We will make sure to include this explanation in the final version of the paper.
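A minimal sketch of this estimation procedure, with small hypothetical probability arrays standing in for the conditionals $Q_{\theta}(\cdot \mid x_{<i})$ (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def top_p_count(probs, p):
    """Smallest number of tokens whose largest probabilities sum to >= p."""
    sorted_p = np.sort(probs)[::-1]
    return int(np.searchsorted(np.cumsum(sorted_p), p) + 1)

def estimate_sparsity(samples, p=0.9):
    """samples: one list of per-position probability vectors per sample.
    Take the max top-p count over positions within each sample, then
    aggregate across samples with the geometric mean."""
    per_sample_max = [max(top_p_count(pos, p) for pos in sample)
                      for sample in samples]
    return float(np.exp(np.mean(np.log(per_sample_max))))
```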
**Confusing explanations**
- Thanks for the feedback, we will better distinguish between PR over the full distribution and conditional PR.
**Conditions for an increasing Recall**
- Yes, all conditions mentioned in Prop. 4.3 should hold simultaneously. In the case $\rho \approx 1$, the temperature does not help, as the model is already close to the target distribution.
**Eq.18 and TruncR loss.**
- We indeed differentiate only the terms outside the indicator function. The notation $\bar{Q}$ indicates that we use the value of $Q$ in the implementation, but no gradient flows through it (achieved via `detach()` in PyTorch). In Eq. (18), this implies that the gradient of the sum is zero when the condition is not satisfied. We will add an algorithm block to clarify this in the final version.
- Regarding $\delta$, it characterizes the proportion of samples to keep. So, yes your interpretation is correct. During training, samples below the $\delta$ threshold will have their likelihood increased. To maintain a fixed proportion of selected samples, $\delta$ will gradually increase.
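A simplified, sample-level sketch of how we read this selection mechanism (a hypothetical illustration, not the actual TruncR implementation; in training, the differentiable log-likelihoods would replace the detached copies inside the mean, while the mask itself stays gradient-free, as with `detach()`):

```python
import numpy as np

def truncr_style_selection(log_q_detached, delta=0.1):
    """Indicator built from detached log-likelihoods (the Q-bar values):
    keep the delta-fraction of samples with the LOWEST model likelihood,
    whose likelihood the loss then pushes up. Returns the NLL averaged
    over the selected samples and the selection mask."""
    threshold = np.quantile(log_q_detached, delta)
    mask = log_q_detached <= threshold  # no gradient flows through this choice
    return -np.mean(log_q_detached[mask]), mask
```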
**Prop 5.2 + teacher model**
- Proposition 5.2 is proved in Appendix B.1, but this is not clearly referenced in the main text. We will fix this. By "difficult," we mean that the objective requires both sampling from $Q$ and computing $P(\cdot \mid x_{<l})$ on those samples, and $P$ is unknown.
- In a different setup, you're right that $P$ could be approximated by a teacher model. But in that case, the objective would be to match the Recall of the target model to that of the teacher, not to the true distribution. This, along with the idea of using RLHF to improve Recall, are very interesting directions for future work, but we believe they deviate somewhat from the main focus of our paper and theoretical analysis. We will nonetheless add a discussion on this in the final version of the paper.
**Fig 4 PR**
- Unlike Figures 5, 6, and 7, which display empirical values of Precision and Recall (computed using the metrics from Le Bronnec et al., 2024), Figure 4 is a theoretical illustration of the effect of each loss on the PR-curve, as in Verine et al., 2023. It is parametrized by $\lambda$, and showing both $\alpha_\lambda$ and $\beta_\lambda$ is standard in that context. Our goal was to illustrate that optimizing the $\lambda$-PR objective increases the corresponding tradeoff value for a given $\lambda_0$. In particular, for $\lambda_0 < 1$ (i.e., below the line $y = x$), Recall increases more than Precision.
**Fig 6 $\lambda$**
- The three points correspond to models trained with $\lambda$-PR for $\lambda=1$ (matching the TailR loss) and for some $\lambda<1$. We observe that for $\lambda < 1$, Precision decreases while Recall increases, as expected.
**Tasks sparsity**
- In addition to CodeContest, we evaluated the sparsity of WritingPrompts (plotted in the table below), a creative task. The sparsity is lower than for CodeContest, which could be expected as this is a less constrained task.
| Top-p | 0.9 | 0.95 | 0.99 |
| - | - | - | - |
| \|Supp(P)\|/V | 3.86% | 7.80% | 24.9% |
---
Rebuttal Comment 1.1:
Comment: Thanks for the added clarifications. Can you also clarify why is geometric mean is a good strategy?
I am also raising scores.
---
Reply to Comment 1.1.1:
Comment: We thank Reviewer pQ4a for the constructive feedback and for the updated score.
Regarding the use of the geometric mean to estimate sparsity: we rely on it because it is well suited to multiplicative or exponential behaviors, which naturally arise in token-level conditional probabilities in language models.
In our context, at each position in a sequence, we compute the number of tokens needed to cover a fixed portion (e.g., 90%) of the total probability mass. Taking the maximum per sample, and then aggregating across the dataset using the geometric mean, allows us to capture a robust notion of sparsity that:
- Penalizes extremely large values less than the arithmetic mean would (which is desirable since these are rare),
- Preserves scale-invariance (if all values are scaled by a constant factor, the geometric mean scales accordingly),
- Reflects the multiplicative nature of uncertainty across positions in sequence models.
This is consistent with best practices in probabilistic modeling and has also been discussed in sources such as Murphy (2012) and other works on log-loss and information content. We will make sure to add this clarification in the paper. | null | null | null | null | null | null |
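A toy numeric illustration of the first point (invented counts; one rare, very large top-p count among otherwise small ones):

```python
import math

counts = [4, 4, 4, 4, 400]  # hypothetical per-sample top-p counts
arith = sum(counts) / len(counts)
geo = math.exp(sum(math.log(c) for c in counts) / len(counts))
# arith = 83.2 is dominated by the single outlier,
# while geo is about 10 and penalizes the rare large value far less.
```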
---
Title: Temperature-Annealed Boltzmann Generators
Paper Decision: Accept (poster)
Summary: In this paper, the authors present a temperature-annealing strategy to train normalizing flows to match unnormalized probability densities (in this case focusing on the equilibrium Boltzmann distribution of high-dimensional molecular systems). The training is done with the reverse KL divergence, assuming no access to any samples from the true density, and only the true energy function. To avoid the mode collapse issue associated with training with the reverse KL divergence objective, the authors propose to first train at a high temperature at which the sample density is closer to a uniform distribution and energy barriers are less difficult to overcome, followed by iterative reweighting and retraining at slightly lower temperatures, until the target temperature (e.g., room temperature) is reached. The proposed method is demonstrated on three small protein systems, increasing in complexity from alanine dipeptide to alanine hexapeptide. The method avoids mode collapse in larger systems and successfully reproduces the ground truth distributions obtained from MD simulations.
Claims And Evidence: The authors claim that their proposed annealing approach prevents mode collapse by starting at a higher temperature at which the sample density is closer to uniform, followed by gradual reweighting. This claim is generally supported by the results in Figures 2, 3 and Table 1, which shows that training with reverse KL without temperature annealing often results in mode collapse.
The results of the proposed TA-BG approach are only marginally better than the strongest baseline, FAB, with most of the improvement appearing in the number of potential energy evaluations (typically much lower than for FAB). Can the authors discuss in more depth what aspect of their method enables it to learn more efficiently in this way? This wasn't particularly intuitive to me.
Methods And Evaluation Criteria: I am concerned about the scalability of the approach to higher-dimensional systems. Specifically, I noted that the effective sample size (ESS) resulting from reweighting quickly becomes smaller (from 95% to 15%) as the system complexity increases from alanine dipeptide to alanine hexapeptide. This makes intuitive sense, as the overlaps between distributions become smaller in higher dimensions. As the authors correctly note, importance sampling between these distributions can become ineffective. I would like the authors to demonstrate, either empirically or theoretically, some scaling properties of their proposed method, perhaps as a function of system size and/or number/spacing of reweighting temperatures. I can suggest the fast-folding proteins from D.E. Shaw simulations [1] as a natural testbed for some slightly larger proteins with interesting conformational dynamics and metastable states.
[1] Lindorff-Larsen, Kresten, et al. "How fast-folding proteins fold." Science 334.6055 (2011): 517-520.
Theoretical Claims: N/A
Experimental Designs Or Analyses: See above.
Supplementary Material: I reviewed the full Supplementary Material section.
Relation To Broader Scientific Literature: As acknowledged by the authors, the idea of temperature annealing is similar to replica-exchange molecular dynamics, in which higher temperatures are used to circumvent high energy barriers and speed up sampling.
In relation to other variational sampling methods, the paper compares most closely with Flow Annealed Importance Sampling Bootstrap (FAB). The authors demonstrate that their method matches the ground truth distribution more closely than FAB for the systems considered, and requires fewer potential energy evaluations.
Regarding other generative modeling methods for molecular systems, the main other line of work is training deep, generative models given large-scale data from, e.g. equilibrium MD simulations [1, 2]. The main benefit of the presented method is that no ground truth data, which can be expensive to collect, is needed. However, I would like a more detailed justification from the authors as to why they believe their approach can remain relevant compared to this line of work, particularly as the availability of structural and simulation databases like the PDB and ATLAS [3] steadily grow. Relatedly, can the authors’ method make use of this growing data availability to improve the efficiency of learning with their method on new/unseen systems? I think some discussion regarding this would be useful to better situate the contribution in the current landscape of protein generative modeling.
[1] Lewis, Sarah, et al. "Scalable emulation of protein equilibrium ensembles with generative deep learning." bioRxiv (2024): 2024-12.
[2] Zheng, Shuxin, et al. "Predicting equilibrium distributions for molecular systems with deep learning." Nature Machine Intelligence 6.5 (2024): 558-567
[3] Vander Meersche, Yann, et al. "ATLAS: protein flexibility description from atomistic molecular dynamics simulations." Nucleic acids research 52.D1 (2024): D384-D392.
Essential References Not Discussed: While not directly related since it uses data to learn generative models, the authors should consider citing this recent paper: Lewis, Sarah, et al. "Scalable emulation of protein equilibrium ensembles with generative deep learning." bioRxiv (2024): 2024-12.
Other Strengths And Weaknesses: Strength: The paper is very well-written and easy to follow.
Other Comments Or Suggestions: See above.
Questions For Authors: Is the reported ESS metric the result of reweighting from the original temperature (1200K) to the final temperature (300K), or from some combination of all of the intermediate steps?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: Thank you very much for your detailed and helpful feedback. We will address your points in the following:
* Comparison to FAB in accuracy and metrics: While Table 1 suggests TA-BG mainly improves in terms of the number of target evaluations, it’s important to note that the high-energy metastable regions constitute only a small part of the state space. Small NLL differences can thus have significant implications. For example, in the hexapeptide system, TA-BG and FAB show similar NLLs, but Ramachandran plots (Fig. 3) reveal that FAB fails to resolve the high-energy metastable region, while TA-BG succeeds. Therefore, quantitative metrics should be complemented with system-specific analyses, such as Ramachandran plots.
* Comparison to FAB in computational cost: We want to briefly discuss why our method is substantially more efficient than FAB. FAB uses annealed importance sampling (AIS) with intermediate Hamiltonian Monte Carlo (HMC) transitions to evaluate the α=2 divergence loss. While this yields accurate results (except for the hexapeptide), the AIS costs a significant amount of potential energy evaluations. Our method simply uses the reverse KLD at high temperature, which is significantly less costly to evaluate. While the annealing adds some additional cost, it is still very efficient since large buffers are used, where training samples are reused several times. Therefore, in total, our method is significantly cheaper than FAB.
* Scaling to larger systems: First, we would like to emphasize that scaling to the hexapeptide is already a big achievement in itself, as the previous SOTA method FAB was the only method that worked even on the smallest of our three systems studied. Despite this, we are confident that our method scales to larger systems. While it is true that the ESS drops for the larger systems, this can be counteracted with a more expressive architecture, e.g., the one recently proposed in [3]. At the same time, even with the current flow architecture, one can counteract the drop in ESS with other measures:
* Increase in the number of temperature steps or the number of drawn samples per annealing step, see our ablations in the answer to reviewer SzWG.
* Additional fine-tuning steps, see our ablations in the answer to reviewer 1DKq.
* One can use AIS instead of IS to estimate the loss in the annealing steps. While this comes at an additional cost, it can keep the variance of the loss low when increasing the intermediate AIS steps linearly with the dimensionality (see [4] for a theoretical analysis).
* Your question about data-driven vs data-free sampling of Boltzmann distributions is very interesting. First, we want to point out that the task of sampling the equilibrium Boltzmann distribution of proteins is significantly harder than just predicting folded structures, as is done in AlphaFold. While the references [1] and [2] that you cited tackle the task of equilibrium sampling, the results are only rough approximations of the true equilibrium distribution. Even though large MD datasets have become available recently, we believe that much more data is necessary to achieve good transferability that can replace direct sampling through MD or variational methods, such as the one proposed in our work.
Another recent publication [5] trained transferable Boltzmann generators on a large dataset of dipeptide MD simulations, which is a very narrow chemical space. While the transferability was successfully shown, also here the transfer to new systems only resulted in a rough estimate of the true equilibrium distribution.
We thus believe that a hybrid data / variational approach could be pursued in the future. In our method, MD data can be used to supplement the high-temperature pre-training, which will make finding new modes easier and may lead to quicker convergence. We agree that this discussion is very important and we will include a discussion of these aspects in our revised manuscript.
* The reported ESS in Table 1 is the ESS when sampling from the flow distribution (which has been annealed through multiple iterations to 300K) in the very end, reweighting to the 300K target.
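For reference, the reweighting-to-target step and the ESS can be sketched generically as follows (a self-normalized importance sampling sketch, not the authors' code; `energies` are hypothetical potential energies of flow samples, `log_q` their model log-densities, and kcal/mol units are assumed):

```python
import numpy as np

K_B = 0.0019872041  # Boltzmann constant in kcal/(mol K); units are an assumption

def ess_fraction(energies, log_q, temperature=300.0):
    """Self-normalized importance weights for reweighting flow samples
    x ~ q to the Boltzmann target p_T(x) proportional to exp(-U(x)/(k_B T)),
    and the resulting effective sample size as a fraction of the sample
    count: ESS/N = (sum w)^2 / (N * sum w^2)."""
    log_w = -energies / (K_B * temperature) - log_q
    log_w -= log_w.max()  # stabilize before exponentiating
    w = np.exp(log_w)
    return float(w.sum() ** 2 / (len(w) * (w ** 2).sum()))
```

When q matches the target exactly (up to normalization), all weights are equal and the fraction is 1; mismatch between flow and target drives it toward 1/N.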
We further note that we have now performed several new ablation studies, as detailed in our responses to reviewers SzWG and 1DKq. Lastly, we now additionally benchmarked against the 2D GMM system (see our rebuttal to reviewer CidM), where TA-BG outperforms FAB and all diffusion-based baselines.
With this, we hope you agree with us that our work is a significant achievement that the sampling community can build upon in the future. We hope to see more methods applied to the complex benchmark systems proposed in our work, while we are eager to scale our method to even larger systems next.
[3] Zhai et al. 2024, "Normalizing Flows are Capable Generative Models"
[4] Midgley et al. 2023, “Flow Annealed Importance Sampling Bootstrap”
[5] Klein et al. 2024, “Transferable Boltzmann Generators”
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their reply. Overall I think this is good work and would be of interest at ICML even in its current form. I would raise my score to a 4 if I saw more convincing empirical evidence of the scaling potential of this method, perhaps demonstrating AIS on the considered peptide systems. See specific comments/questions below.
1. The manner in which samples are reused from one annealing iteration to the next is still unclear to me, and I didn't find an explanation in the paper or rebuttal. Is there a pre-specified temperature difference cutoff beyond which samples from the flow at one temperature are no longer reweighted to another temperature? On average, what fraction of samples is reused compared to re-sampling?
2. Regarding the claims of scaling, I see from the ablations that choosing more steps for annealing or performing finetuning after each step helps improve the ESS for larger systems. However, I'm not sure this wouldn't run into some of the same issues as, e.g., free energy calculations, which also must perform many intermediate finetuning steps to maintain reasonable ESS at significant computational cost [1]. The rebuttal mentions that AIS can yield linear scaling of intermediate steps w.r.t. system size. That is very interesting, and I wonder if the authors could demonstrate a small example/teaser of AIS in this context and show its better scaling properties?
3. I find the claim of using more expressive architectures to solve this problem slightly dubious, as fundamentally the reweighting problem is a property of the underlying Boltzmann distributions, and not the learnable function class used to approximate them. Please let me know if I'm missing something.
[1] "Free energies at QM accuracy from force fields via multimap targeted estimation", PNAS 2023
[EDIT]: In light of the AIS analysis provided by the authors in the response below, I am willing to increase my score from 3 -> 4. I appreciate this analysis, and I would like to see it included in the final paper. [Minor detail] I would also like to see how the ESS of IS scales as you increase the number of intermediate annealing steps. The authors showed that you can achieve linear scaling with AIS, but what is the relationship for IS (is it exponential)?
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging that our work is of interest to the ICML community. We address your remaining concerns in the following:
1. To simplify our workflow, we are not reusing samples from previous annealing steps. When transitioning from $T_i$ to $T_{i+1}$, we train solely on $\mathcal{W}_{i+1}$ as defined in Section 4.2. We will make this more explicit in the manuscript and discuss reusing older samples in the appendix, as it may improve efficiency. We suggest monitoring the ESS of past buffers to guide such reuse.
2.+3.:
We want to go into detail regarding your concerns with importance sampling (IS), which we agree is important when scaling our approach.
The problem with IS is that a small local bias in the proposal, e.g., the possibility of two atoms clashing locally, gets amplified by the dimensionality of the problem. Consider a molecular system with $N$ independent neighborhoods, each with a chance $\eta$ of a clash under the proposal distribution. A sample's importance weight is 1 if no clash occurred and 0 otherwise. Since the weights are binary, $w_i^2 = w_i$, so $\text{ESS}=\frac{\left(\sum_{i=1}^M w_i\right)^2}{\sum_{i=1}^M w_i^2}=\sum_{i=1}^M w_i$, and therefore $\mathbb{E}(\text{ESS}) = M \cdot (1-\eta)^N$. Thus, the ESS drops exponentially in $N$, which captures why IS does not scale well with dimensionality.
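This toy model is easy to simulate. Below is a minimal sketch for illustration (the values of $M$, $\eta$, and $N$ in any usage are illustrative, not taken from our experiments):

```python
import numpy as np

def empirical_ess(M, eta, N, seed=0):
    """ESS of M importance samples in the toy clash model: N independent
    neighborhoods, each clashing with probability eta under the proposal."""
    rng = np.random.default_rng(seed)
    clash = rng.random((M, N)) < eta
    w = (~clash.any(axis=1)).astype(float)  # weight 1 iff sample is clash-free
    s2 = np.sum(w**2)
    # (sum w)^2 / sum w^2, which reduces to sum w for binary weights
    return w.sum()**2 / s2 if s2 > 0 else 0.0

def expected_ess(M, eta, N):
    # analytic expectation: exponential decay in the number of neighborhoods
    return M * (1 - eta)**N
```

Running `empirical_ess` for growing `N` tracks `expected_ess` closely and makes the exponential decay directly visible.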
We now clarify our statement that more expressive architectures can mitigate this. A better architecture with less bias will not remove the scaling problem of IS, unless the proposal exactly matches the target, which is unrealistic. But a better model can reduce $\eta$, thus improving the constants in the exponential scaling law. This allows scaling to larger (though not arbitrarily large) systems.
To demonstrate how AIS addresses this, we move to a continuous model: The proposal distribution is an $N$-dimensional Gaussian with $\sigma=1.1$ and the target distribution is an $N$-dimensional Gaussian with $\sigma=1.0$. This is a simplified model of a single annealing step.
We first look at vanilla IS in this scenario. We plot the empirical ESS as a function of dimensionality in the following figure: https://ibb.co/xd6jxL6
As expected, the ESS drops exponentially, which can also be shown analytically [1].
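For completeness, the Gaussian toy experiment behind this figure can be reproduced in a few lines of NumPy (the sample count below is illustrative; only the $\sigma$ values match those stated above):

```python
import numpy as np

def is_ess_fraction(dim, n_samples=5000, sigma_q=1.1, seed=0):
    """Normalized ESS (ESS / M) of plain importance sampling with proposal
    q = N(0, sigma_q^2 I) and target p = N(0, I)."""
    rng = np.random.default_rng(seed)
    x = sigma_q * rng.standard_normal((n_samples, dim))
    # log p(x) - log q(x), dropping x-independent constants
    # (the ESS is invariant to rescaling all weights)
    log_w = -0.5 * (1.0 - 1.0 / sigma_q**2) * np.sum(x**2, axis=1)
    w = np.exp(log_w - log_w.max())
    return w.sum()**2 / (n_samples * np.sum(w**2))
```

Plotting `is_ess_fraction` over `dim` reproduces the exponential decay; for this proposal/target pair one can also check it analytically, since the per-dimension factor $\sqrt{2\sigma^2-1}/\sigma^2 \approx 0.985$ is raised to the power of the dimensionality.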
Now we turn to AIS. Per intermediate distribution, we use a single step of HMC and scale the number of intermediate distributions $T$ linearly with the dimensionality $N$, i.e., $T=c \cdot N$. In our demonstration here, we use $c=5$. We visualize the ESS as a function of dimensionality and compare with vanilla IS: https://ibb.co/bDD53J4
The ESS of AIS when scaling the number of intermediate distributions linearly can be kept approximately constant. We further refer to the AIS paper [3], where this is shown generally for any factorizable distribution under the assumption of perfect HMC transitions. Again, we acknowledge the simplifications present in this analysis. However, it still captures the essence of why IS fails for higher dimensions and how AIS can help. While for molecular systems, exact factorizability of the distribution is of course not given, AIS can still remove atom clashes, etc., which are often local effects.
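A self-contained sketch of this comparison is given below. For brevity it uses a single random-walk Metropolis step per intermediate distribution instead of HMC (a simplification made here; the step size and sample count are likewise illustrative). Setting `n_steps=1` recovers plain IS, so both curves come from the same routine:

```python
import numpy as np

def log_pi(x, beta, sigma0=1.1):
    # Unnormalized log-density of the geometric path between
    # q = N(0, sigma0^2 I) at beta=0 and p = N(0, I) at beta=1.
    prec = (1.0 - beta) / sigma0**2 + beta
    return -0.5 * prec * np.sum(x**2, axis=-1)

def ais_ess_fraction(dim, n_steps, n_samples=1000, sigma0=1.1, seed=0):
    """Normalized ESS of AIS with one random-walk Metropolis step per
    intermediate distribution; n_steps=1 reduces to plain IS."""
    rng = np.random.default_rng(seed)
    x = sigma0 * rng.standard_normal((n_samples, dim))  # draws from q
    log_w = np.zeros(n_samples)
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    step = 2.4 / np.sqrt(dim)  # standard random-walk scaling with dimension
    for b_prev, b in zip(betas[:-1], betas[1:]):
        log_w += log_pi(x, b) - log_pi(x, b_prev)  # incremental AIS weight
        # one Metropolis step targeting the current intermediate distribution
        prop = x + step * rng.standard_normal(x.shape)
        accept = np.log(rng.random(n_samples)) < log_pi(prop, b) - log_pi(x, b)
        x[accept] = prop[accept]
    w = np.exp(log_w - log_w.max())
    return w.sum()**2 / (n_samples * np.sum(w**2))
```

Scaling `n_steps` linearly with `dim` (e.g., `5 * dim`, as in our demonstration) keeps the normalized ESS high, whereas plain IS (`n_steps=1`) decays exponentially with dimension.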
We also point to a recent publication [2], which coincidentally worked on very similar molecular systems as in our work. While they train on data from MD, we cover the data-free case, so the task itself is different. However, they use Sequential Monte Carlo (SMC) for sampling from the flow for evaluation. SMC is a variant of AIS where one resamples during the annealing if the ESS drops too low. As you can see from Tables 2 and 3 in [2], the ESS can be kept very high also for the hexapeptide, though we acknowledge that the involved resampling during the SMC in [2] makes judging and comparing these ESS values somewhat difficult.
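To make the distinction concrete, the SMC-style resampling step described above (resample when the ESS drops too low) can be sketched as follows; the ESS threshold of 0.5 is an illustrative choice, not a value from [2]:

```python
import numpy as np

def maybe_resample(x, log_w, rng, ess_threshold=0.5):
    """SMC-style step: if the normalized ESS of the particle weights drops
    below ess_threshold, resample particles in proportion to their weights
    and reset the (log-)weights to uniform."""
    w = np.exp(log_w - log_w.max())
    ess = w.sum()**2 / (len(w) * np.sum(w**2))  # normalized ESS in (0, 1]
    if ess < ess_threshold:
        idx = rng.choice(len(w), size=len(w), p=w / w.sum())
        return x[idx], np.zeros(len(w)), True
    return x, log_w, False
```

Interleaving such a step with the annealed transitions prevents weight degeneracy, at the cost of making raw ESS values harder to compare across methods, as noted above.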
Finally, we evaluated the ESS with AIS for the molecular systems in our work. We report the change from using IS (Table 1 in our paper) to using AIS in the following:
- Dipeptide: 95.6% $\rightarrow$ 98.8%
- Tetrapeptide: 62.5% $\rightarrow$ 84.6%
- Hexapeptide: 14.8% $\rightarrow$ 40.0%
We note that this is for a fixed number of intermediate distributions (8, the same as used in FAB) and the ESS can thus be further increased.
Overall, we are confident our method can scale, based on approaches as the one we illustrated above. We want to reiterate that scaling to the hexapeptide in this domain is already a big achievement, as previously only a single method (FAB) succeeded even on alanine dipeptide. We are actively aiming to push the field of variational inference in the domain of molecular sampling forward, which we support by releasing the two additional benchmark datasets. We hope our extended AIS analysis motivates a reevaluation of your score.
[1] Chatterjee & Diaconis 2017, “The Sample Size Required in Importance Sampling”
[2] Tan et al. 2025, “Scalable Equilibrium Sampling with Sequential Boltzmann Generators”
[3] Neal 1998, “Annealed Importance Sampling” | Summary: This paper considers the problem of off-policy sampling from unnormalized densities and proposes a novel method (TA-BG) based on a normalizing flow architecture (like FAB) that is less prone to mode collapse. In fact, the authors present a way of training a normalizing flow in this setting with the reverse KLD without mode collapse. They start by training a normalizing flow at a high temperature and then anneal the learned distribution to lower temperatures, which prevents collapse. Moreover, the paper proposes two novel benchmark problems - similar to alanine dipeptide modeling, but harder.
Claims And Evidence: The paper demonstrates that training a normalizing flow with the reverse KLD does not necessarily end in mode collapse when higher temperatures are considered. Moreover, the authors present a novel annealing scheme from higher to lower temperatures. The general statements regarding mode-collapse problems in previous models are reasonable.
The main problem is with the evidence supporting the claims of TA-BG's superiority. Firstly, the experimental setup is limited to three datasets that are similar to each other. Moreover, the only baselines are FAB, forward KLD, and reverse KLD, even though FAB is not the only sampler that has been tried in the alanine dipeptide setting. Finally, the presented results for NLL and ESS for TA-BG and FAB are similar, without any significant difference. More suggestions on improving the evaluation can be found in the following sections.
Methods And Evaluation Criteria: The evaluation setting is limited in terms of the number of considered energies, their variability, and the considered baselines. However, the proposed novel benchmarks might be beneficial for the community.
Theoretical Claims: I haven’t found any obvious flaws in the theoretical properties of TA-BG.
Experimental Designs Or Analyses: I think that the experimental design is very limited and should be significantly improved:
- Firstly, FAB is not the only sampling method successfully used for alanine dipeptide, please see, e.g., the following papers: [1], [2], [3], and [4]. At least part of them and other samplers should be considered as baselines also (e.g., DDS, DIS, PIS, GFlowNets, or iDEM).
- The evaluation setup is limited to three energies, which are similar in some sense. I would also suggest considering typical sampling problems, e.g., log-Cox, DW-4, and LJ potentials.
- A comparison of training costs for the considered baselines would be beneficial, given the presented (not significantly better) results of TA-BG.
- Finally, the presented results in Tab. 1 (especially NLL and ESS) don’t support the claims in a convincing way.
**References:**
[1] Holdijk, Lars, Yuanqi Du, Ferry Hooft, Priyank Jaini, Berend Ensing, and Max Welling. "Stochastic optimal control for collective variable free sampling of molecular transition paths." Advances in Neural Information Processing Systems 36 (2023): 79540-79556.
[2] Seong, Kiyoung, Seonghyun Park, Seonghwan Kim, Woo Youn Kim, and Sungsoo Ahn. "Transition Path Sampling with Improved Off-Policy Training of Diffusion Path Samplers." arXiv preprint arXiv:2405.19961 (2024).
[3] Petersen, Magnus, Gemma Roig, and Roberto Covino. "Dynamicsdiffusion: Generating and rare event sampling of molecular dynamic trajectories using diffusion models." (2023).
[4] Phillips, Dominic, and Flaviu Cipcigan. "MetaGFN: Exploring distant modes with adapted metadynamics for continuous GFlowNets." arXiv preprint arXiv:2408.15905 (2024).
Supplementary Material: I’ve briefly checked the whole supplementary materials.
Relation To Broader Scientific Literature: This paper is positioned within the sampling community, focused on sampling from molecular energies like alanine dipeptide. While the proposed method seems to be novel, the paper doesn't compare (theoretically and empirically) against related baselines.
Essential References Not Discussed: **Missing references for standard sampling methods:**
[1] Duane, Simon, Anthony D. Kennedy, Brian J. Pendleton, and Duncan Roweth. "Hybrid monte carlo." Physics letters B 195, no. 2 (1987): 216-222.
[2] Hoffman, Matthew D., and Andrew Gelman. "The No-U-Turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo." J. Mach. Learn. Res. 15, no. 1 (2014): 1593-1623.
[3] Halton, J. H. Sequential Monte Carlo. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 58, pp. 57–78. Cambridge University Press, 1962
[4] Chopin, N. A sequential particle filter method for static models. Biometrika, 89(3):539–552, 2002.
**And novel neural diffusion-based samplers like:**
[1] Vargas, Francisco, Shreyas Padhy, Denis Blessing, and Nikolas Nüsken. "Transport meets Variational Inference: Controlled Monte Carlo Diffusions." In The Twelfth International Conference on Learning Representations.
[2] Sendera, Marcin, Minsu Kim, Sarthak Mittal, Pablo Lemos, Luca Scimeca, Jarrid Rector-Brooks, Alexandre Adam, Yoshua Bengio, and Nikolay Malkin. "Improved off-policy training of diffusion samplers." Advances in Neural Information Processing Systems 37 (2024): 81016-81045.
[3] Zhang, Qinsheng, and Yongxin Chen. "Path Integral Sampler: A Stochastic Control Approach For Sampling." In International Conference on Learning Representations.
Other Strengths And Weaknesses: **Strengths:**
[1] Proposing a way of annealing the target distribution to lower temperatures.
[2] Showing that training a NF with reverse KLD at high temperatures is possible without mode collapse.
[3] Introducing two novel benchmarks.
**Weaknesses:**
[1] Missing related references and methods (see above).
[2] Limited experimental setup, not supporting the claims.
[3] Lack of comparison with other than FAB baselines, especially worth to consider are diffusion-based samplers.
Other Comments Or Suggestions: Overall, I think that the current state of the experimental setting is not sufficient for general work in the sampling community.
For other comments, please refer to the previous sections.
Questions For Authors: **Questions:**
[1] Since the proposed method needs an annealing procedure from high to low temperatures, it seems to be computationally heavy. Could you present the computational cost of TA-BG and compare against baselines? I think that considering both training and inference time might be beneficial.
[2] I think that comparing against diffusion-based samplers would also be beneficial from the perspective of the mode-collapse issue. If I remember correctly, it is not as much of an issue there as for normalizing-flow-based samplers?
For additional questions, please refer to the previous sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed and helpful feedback. We address your questions and concerns below:
* While Table 1 suggests TA-BG mainly improves in terms of the number of target evaluations, it’s important to note that the high-energy metastable regions constitute a small part of the state space. Small NLL differences can thus have significant implications. For example, in the hexapeptide system, TA-BG and FAB show similar NLLs, but Ramachandran plots (Fig. 3) reveal FAB fails to resolve the high-energy metastable region, while TA-BG succeeds. Therefore, quantitative metrics should be complemented with system-specific analyses like Ramachandran plots.
In summary, TA-BG achieves better NLL across all systems, needs significantly fewer target evaluations, and is the only method to capture the complex interactions in the hexapeptide.
* FAB is, indeed, the only method successfully applied to learning the Boltzmann distribution of alanine dipeptide without prior knowledge (as also confirmed by Reviewer SzWG in their review). Of the works you cited:
* [1] and [2] discuss transition path sampling between points A and B, which is a different task than sampling the full Boltzmann distribution.
* [3] relies on data from the target distribution; we cover the data-free case.
* [4] depends on a low-dimensional collective variable, typically non-trivial to obtain.
We will cite these papers and discuss that they are related but ultimately solve a different task.
* While diffusion samplers have gained popularity, they have not yet been successfully applied to alanine dipeptide. Most diffusion sampling studies use synthetic benchmarks with less correlated transitions between minima than the molecular systems we examine. In our reply to reviewer 1DKq we further discuss why the necessity of using Cartesian coordinates might make applying diffusion models to sample molecular systems harder, but we can mostly speculate here. As reviewer 1DKq points out, diffusion models are "not quite there yet", but we hope to see a successful publication on this topic soon. FAB and our method remain the only approaches that scale to system complexities like alanine dipeptide, which we now significantly extended to the tetrapeptide and hexapeptide.
* We now include the 2D GMM system (as introduced by FAB). Since diffusion methods have been successfully applied here, this allows a direct comparison. We pre-trained the flow at T=30.0 with reverse KLD, annealed to T=1.0 in 7 steps, and performed one fine-tuning step. Results are shown in this figure (neural splines: https://ibb.co/7dkKCr76) and table (https://ibb.co/bgrDkVyD), using values reported by iDEM [5]. We note that we used two different flow architectures, REAL NVP (as originally used in FAB) and an improved neural splines architecture. We did not have enough time in the rebuttals to repeat the FAB experiments with neural splines, but we will prepare this for the revised manuscript (and also include error bars for our method). As shown, TA-BG outperforms FAB and all diffusion baselines on the 2D GMM system. We will further explore other sampling tasks, such as DW-4, and include them in the appendix.
We emphasize our focus on molecular systems, which present more challenging and realistic benchmarks than commonly used synthetic systems. Our goal is to motivate the application of sampling methods, including diffusion, to these more difficult and application-relevant domains, supported by our release of the two new benchmark systems.
* We approximated training time (excluding evaluation) for FAB and TA-BG on the alanine dipeptide system. On our hardware, FAB takes ~18.2h, TA-BG requires ~17.1h. Thus, runtimes are comparable. Furthermore, inference time is identical, since the same model is used for all experiments.
Force field evaluations in our current setup are relatively inexpensive. With more accurate and computationally costly forces, such as ML-based foundation models or DFT, the evaluation cost becomes dominant. In such cases, the higher sampling efficiency of our method will translate into significantly reduced computational cost.
* Thank you for pointing out the missing standard sampling and diffusion references; we will revise the manuscript accordingly.
* We have performed several new ablation studies (see responses to reviewers SzWG and 1DKq).
With this, we hope that we were able to convince you that our method and results are significant achievements and interesting to the sampling community. We hope to see more sampling approaches, including diffusion models, being applied and benchmarked on the complex and more application-relevant molecular systems proposed in our work.
[5] Akhound-Sadegh et al. 2024, “Iterated Denoising Energy Matching for Sampling from Boltzmann Densities”
[6] Durkan et al. 2019, “Neural Spline Flows”
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their time spent on their rebuttal and answering my concerns. I'm still a little bit concern about the scale of experimental verification. However, I believe that the additional experiments added during the rebuttal should be included in the final version, and significantly improve this submission.
I will raise my score accordingly (2 -> 3).
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our improvements made during the rebuttal and the clarifications we provided regarding your concerns. We will include all additional ablation experiments and the results on the GMM system in the appendix. In particular, the experiments on the GMM system and the comparison with diffusion samplers will make our paper more accessible to readers not familiar with molecular systems. Thank you for this suggestion.
We appreciate the raised score and thank you for your review of our work, including all suggestions and comments. | Summary: The authors propose an iterative training approach for learning normalizing flows to approximate unnormalized densities, such as Boltzmann distributions for physical systems. The method proceeds to first train a normalizing flow using the mode-seeking KL divergence match a high-temperature target density. Using samples drawn at the previous temperature reweighted according to the next (lower) temperature, the method proceeds to train on these examples using the mass-covering KL divergence. The procedure is iterated until reaching the target temperature.
The authors demonstrate that this approach can mitigate the mode-seeking behavior of directly training a normalizing flow at the target temperature. The proposed method is competitive with or outperforms the previous gold-standard method (FAB) on tetrapeptide and hexapeptide systems, which are more difficult than standard baselines in the literature.
Claims And Evidence: 1)
The motivation for training on successively lower temperatures is natural and well presented.
- the proposed method conveniently uses mass-covering / maximum-likelihood KL divergence for training after the first iteration
2)
This benefit of the algorithm is not properly emphasized throughout the paper
- Below eq. 3, the transition to `we focus on the case' leads the reader to question the introduction of Eq. 2-3 and suggests it may not play a key role
- The mass-covering nature of the KL used in training steps after the first likely plays a key role in the lack of mode dropping.
- This change in loss, facilitated by iterative sampling, should be emphasized in "Training by Energy" and the introduction to Sec. 4.2
3)
The presentation of the method could be improved:
- clarify that the same parameters are used to retrain at each step (i.e., $\theta_{T_i}$ is used to initialize $\theta_{T_{i+1}}$ and then discarded)
- an inline equation specifying the temperature schedule would be appreciated
- it is not 100% obvious what is the meaning of a 'fine-tuning step' in Lines 254-260 R. It would be beneficial to highlight the "off-by-one" nature of the proposal samples in the stated algorithms, how the "fine-tuning" corrects this, and that hexapeptide uses e.g. 2x the number of steps
Most additional findings are empirical rather than theoretical.
Methods And Evaluation Criteria: The method and evaluation measures make sense for the problem at hand.
I might be interested to see how the ESS of the sampled datasets changes over iterations (with and without 'fine-tuning').
The authors appear to use a large number of training steps for the initial mode-seeking KL training run. This makes sense, since later sampling results depend crucially on good initial learning, but it could be emphasized.
4)
I am slightly confused by the philosophy behind setting training parameters. In lines 334-336 L, "our approach achieves better results... while requiring $3.08 \times 10^8$ target evaluations," it is somewhat unclear what is meant by "requires". How is the number of gradient steps decided? Eventually, the authors approximately match the number of function evaluations for forward KL (MD steps), FAB (AIS steps), and the proposed method (resampling), but it might be good to emphasize that the proposed training framework allows for many more gradient steps for the same # PE Evals than comparison methods.
Theoretical Claims: None given.
Experimental Designs Or Analyses: see above
Supplementary Material: Appendix was reviewed for details of experiments, metrics, and additional plots.
Relation To Broader Scientific Literature: The paper is well-positioned in relation to the literature.
In Related Work (Lines 342-348 R), the points about diffusion models should be moved to a new paragraph.
"Our results *suggest* (the possibility) that... *might* benefit" would be better wording.
"computing exact likelihoods for these models is prohibitively expensive"
- this claim should be expanded upon in Lines 191-192 L and/or Lines 345-365 R. (PF ODE, need for divergences, possible approximations)
Essential References Not Discussed: *Not essential*, but I noticed concurrent work also proposing to train a sampler from a higher temperature target distribution [1]. Their approach is not iterative and involves Sequential Monte Carlo resampling to reweight back to the target temperature along a diffusion trajectory.
The authors might also consider advances in normalizing flow architectures demonstrated in [2].
[1] Skreta et al. 2025, "Feynman-Kac Correctors"
[2] Zhai et al. 2024, "Normalizing Flows are Capable Generative Models"
Other Strengths And Weaknesses: The experimental results are promising, but the presentation can be greatly improved.
Other Comments Or Suggestions: I personally dislike the "forward" and "reverse" KL nomenclature and don't intuitively follow it when reading papers. This further emphasizes the need for the authors to clearly state the difference in training objectives in (i) their first step and existing methods, versus (ii) training steps at subsequent temperatures.
Questions For Authors: The FAB method applies just as easily for α=1 (mass-covering KL) as it would for α=2 (perhaps requiring fewer PE Evals for shorter annealing). Since the proposed method also optimizes this loss, it may be an interesting further benchmark.
5)
Do the authors have any comment on the use of internal coordinates versus Cartesian coordinates? I see that Midgley et al. 2023a train via maximum likelihood on replica-exchange MD simulation data. It seems that methods in the field are just "not quite there yet".
6)
How sensitive is the method to clipping the highest importance weights? The authors throw out the "forward ESS" values due to sensitivity to clipping, but this begs the question of its impact on training and/or Ramachandran plots (although for the latter, the authors helpfully provide reweighting-free evaluations)
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive, detailed, and constructive feedback.
* We address your points following the same numbering as in your review:
2. We will more clearly outline the advantages of our method. For example, explicitly stating the mass-covering nature of the forward KLD and why this is helpful in our case is a great suggestion. We will generally work on making our introduction of forward / reverse KLD more intuitive.
3. Thank you for these suggestions to improve the presentation of our method. We will work on these points to revise our manuscript. We will include the equation $T_i = T_\text{start} \left( \frac{T_\text{target}}{T_\text{start}} \right)^{\frac{i-1}{K-1}}$ to clarify the “geometric temperature progression”. Further, we will improve the explanation of the annealing procedure, including the fine-tuning and its motivation.
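The geometric temperature progression can be written out in one line (the endpoint temperatures in the usage note below are placeholders, not the settings used in the paper):

```python
def geometric_schedule(T_start, T_target, K):
    """Temperatures T_1..T_K following the geometric progression
    T_i = T_start * (T_target / T_start) ** ((i - 1) / (K - 1))."""
    return [T_start * (T_target / T_start) ** ((i - 1) / (K - 1))
            for i in range(1, K + 1)]
```

For example, `geometric_schedule(600.0, 300.0, 4)` yields temperatures with a constant ratio between consecutive steps, starting exactly at `T_start` and ending exactly at `T_target`.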
4. Choosing hyperparameters for a fair comparison of the different methods is not straightforward, as the final metrics and the total number of target evaluations form a tradeoff. Thus, you are right that saying our method “requires” a certain number of evaluations is imprecise; we will correct this.
To better make the tradeoff visible, we included FAB hyperparameter variations in Appendix H of our original submission. Furthermore, we now performed ablation studies to make this tradeoff better visible for our method, see our answer to reviewer SzWG, and the fine-tuning ablation study discussed below.
5. One of the reasons why using Cartesian coordinates might make the problem harder is that with internal coordinates, we can constrain the angles and bond lengths to reasonable bounds. This is not possible with Cartesian coordinates, where even completely different chemical graphs can be formed. We suspect this is partly why diffusion-based sampling methods have not yet been successful in modeling molecular systems, since handling periodic torsions is less straightforward with diffusion than with flows, and thus Cartesian coordinates are often used.
6. Not clipping the importance weights results in numerical outliers visible in the reweighted Ramachandran plots. Therefore, we clipped the weights when determining the KLD of the Ramachandran plots, analogous to FAB.
We further ran ablation experiments for alanine dipeptide to determine the impact of clipping the importance weights when resampling the buffer datasets during the annealing. The final NLL and ESS stayed within our error bounds, so clipping has little impact here. We will mention this in the revised manuscript.
* Further responses to your additional suggestions and questions:
7. We performed an ablation study to show the impact of fine-tuning. First, we look at the impact of the intermediate fine-tuning for the hexapeptide system. In the following figure, we show the ESS of the training buffer dataset in each iteration for the case with and the case without intermediate fine-tuning (both experiments have a final fine-tuning step): https://ibb.co/XxZJvr0r
The buffer ESS of the fine-tuning steps that follow each annealing step is much higher, which is reasonable since here we reweight to the same temperature the flow was previously annealed to ($T_{i+1}=T_i$). One can further see that the buffer ESS generally drops when not using intermediate fine-tuning. We summarize the impact of intermediate fine-tuning on the final metrics for all systems in this table (https://ibb.co/hjXk96b). As you can see, the impact is largest for the hexapeptide, which is why we included intermediate fine-tuning only in this system for our main experiments.
We further summarize the impact of the final fine-tuning step for all systems in the following table: https://ibb.co/XfcbxGb3
8. Testing other objectives (such as the α=2 divergence) for the annealing is a great idea. However, we believe this goes beyond the main claims we want to cover. We will add a short discussion of other possible objectives to our manuscript.
9. Thank you for pointing out the concurrent work [1]. We will mention it in our revised manuscript.
10. We will mention more advanced flow architectures as a path for the future [2,3].
11. Thank you for your additional suggestions to improve our manuscript. We will include them in the revised version.
In addition to the molecular systems studied in our work, we now additionally benchmarked against the 2D GMM system (see our rebuttal to reviewer CidM), where TA-BG outperforms FAB and all diffusion-based baselines.
Overall, your comments were very helpful in improving our paper. Our approach outperforms the current SOTA (FAB) in this area and demonstrates scalability to more complex systems. We believe that this will be very helpful for the community to advance in this task, and we would appreciate it if you could revise your score based on our revision.
[3] Tan et al. 2025, “Scalable Equilibrium Sampling with Sequential Boltzmann Generators”
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed reply and additional ablation experiments. I agree that these will notably improve the paper and raise my score to 4.
That said, reading other reviews and returning to the paper spurred several other thoughts which the authors may optionally consider.
> Testing other objectives (such as the α=2 divergence)... goes beyond the main claims we want to cover.
I agree (for the proposed method), but I meant for FAB and α=1. This is also not essential.
> While Table 1 suggests TA-BG mainly improves in terms of the number of target evaluations, it’s important to note that the high-energy metastable regions constitute a small part of the state space. Small NLL differences can thus have significant implications.
The above is quoted from the authors' response to Reviewer CidM. On returning to the paper, I also had the question of why NLL performance differences were seemingly so minor, and I encourage prominent discussion of this in the final version.
> We approximated training time (excluding evaluation) for FAB and TA-BG on the alanine dipeptide system
The above is quoted from the authors' response to Reviewer CidM. I would also be curious to see "total gradient steps" and "wall-clock time" in Table 1. For gradients, I find myself returning to the appendix, hopping around tables, and doing multiplications to compare. It is promising that wall-clock time is also favorable or similar compared to FAB, even with retraining. These are important practical considerations which highlight benefits of the proposed approach.
Finally, the schedule ablations reminded me of online adaptive scheduling techniques which use sample-based estimation to approximately construct annealing schedules with constant-rate progress in the Fisher-Rao metric. See e.g. [1] and discussion of related work within their Sec. 5 (not 100% sure these are applicable).
[1] Chopin, Crucinio, Korba, "A connection between Tempering and Entropic Mirror Descent"
[2] Syed et. al "Optimized Annealed Sequential Monte Carlo Samplers" (Sec 4.3, 5.1)
---
Reply to Comment 1.1.1:
Comment: Thank you for raising your score and for the additional suggestions; we will use them in our revised manuscript.
Since the NLL is not a divergence, the missing scale means that seemingly small differences can be decisive, which also makes the metric hard to interpret. However, we found that the NLL is still an important indicator, also of mode collapse, and can be complemented with the RAM KLD metric and Ramachandran plots to obtain a more intuitive understanding. We will include an extended discussion on this in our manuscript.
We agree that a better comparison of the computational cost is called for; we will prepare a table directly comparing compute time, number of flow evaluations, and target evaluations for the revised manuscript.
Furthermore, thank you for the additional references; the approaches are interesting. We will take a closer look and discuss ways of better annealing schedules in the appendix.
Again, thank you for your detailed feedback, suggestions, and positive evaluation of our work. | Summary: This paper proposes temperature annealed Boltzmann generators. The proposed idea is train a normalizing flow with reverse KL at some high temperature, then train a series of models down to the target temperature using forward KL using generated samples reweighted with importance sampling from a higher temperature model.
It is shown that this method scales up to alanine hexapeptide in an internal coordinate system, comparing favourably to flow annealed importance sampling bootstrap (FAB) and to reverse KL training directly at the target temperature.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: No theoretical claims
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes. All.
Relation To Broader Scientific Literature: This paper proposes using reverse KLD at a higher temperature and then annealing, which is a complementary approach to previous works. While many works in this field use temperature and many use normalizing flows, I have not seen work that combines these two pieces in this way.
I agree with the authors' assertion that, to the best of my knowledge too, the only ML approach that has been successfully applied to ALDP is FAB.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
* I think this is a very strong work providing a useful method and observations. I also found a number of tricks in the appendix that I personally had not thought about in parametrization. I think this will be a useful work for the community to build off of.
* I found the exposition and figures very clear, although I do think that it could be made slightly clearer to the broader ML community why these are the interesting figures to look at.
* The main results appear strong with adequate baselines.
Weaknesses:
* This work could be made much stronger with additional ablations around hyperparameter choices. I think this paper proves that there exists a setting which scales to at least AL6 using this method, but it does not provide a systematic understanding of what it takes to get there or further. In particular, a number of hyperparameters appear without justification and seem like they might be extremely important to the overall performance of the method. Specifically:
* Initial target temperature (set to 1200K here). It's unclear where this number came from or how it was decided. It seems like this would have a significant effect on results, but it is unclear how much of an effect it has. It would be very reassuring to see that this method also works within a reasonable range of initial target temperatures, or that there is a reasonable method to selecting the initial target temperature.
* Number of intermediate temperature annealing steps. As far as I can tell all that is mentioned is "For all experiments, we chose 9 temperature annealing steps."
* Intermediate fine tuning. Very little detail is given on how this is done, what parameters are used, and what effect it has. What is the effect of this empirically?
* Number of samples drawn for each intermediate distribution.
* Number of steps trained for / stopping criterion at each temperature.
* Temperature annealing schedule
Other Comments Or Suggestions: None
Questions For Authors: 1. I think the missing baseline is training with reverse KL at the high (1200K) temperature then directly importance sampling down to 300K (and either training a flow or not). I think this would emphasize the need for annealing and the benefit of training many flow models.
2. Would it be possible to include additional ablations? I think the most interesting would be on starting temperature and the number of temperature scales. I think this would greatly improve the practical applicability of this work.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback and the many helpful suggestions. We address them in the following:
We performed ablation studies to investigate the impact of hyperparameters, which we will include in our revised manuscript:
* We ran ablation experiments to determine suitable starting temperatures for each of the systems. At a given temperature, we performed 4 experiments to determine how many of these experiments show mode collapse in the Ramachandran plots when pre-training with the reverse KLD:
https://ibb.co/5XgHtZYM
There is a large range of temperatures where reverse KLD training is possible without mode collapse. This ablation was performed with a batch size of 1024 and 250k gradient descent steps. Increasing the temperature beyond the “critical temperature” above which the experiments no longer collapse allows us to pre-train more cheaply, with a smaller batch size and fewer gradient descent steps, without mode collapse. A tradeoff exists: when increasing the temperature beyond the “critical temperature”, one can pre-train more cheaply, but then the annealing becomes more expensive.
This ablation nicely extends our mode collapse discussion in the paper. The fact that alanine dipeptide can be sampled without mode collapse with the reverse KLD at only slightly increased temperature (375 K) is likely interesting and surprising to the sampling community in itself.
* We ran experiments to justify the geometric temperature schedule. We compare the geometric schedule with a linearly spaced temperature schedule. In the following figures, we show the ESS of each buffer dataset we train on during the annealing:
https://ibb.co/TDfMwLbs (alanine dipeptide, one final fine-tuning step)
https://ibb.co/gbBHb66W (hexapeptide, with intermediate fine-tuning steps)
The buffer ESS “spikes” visible for alanine dipeptide at the end and for the hexapeptide in between come from the fine-tuning steps, where $T_{i+1}=T_i$, yielding better overlap. One can see that the geometric temperature schedule yields an approximately constant buffer ESS, whereas for the linear schedule the ESS drops significantly toward the end (since its temperature steps are too large).
* Regarding details on the fine-tuning steps, these are performed with the same parameters as the annealing steps. The only difference is that we do not decrease the temperature, but rather reweight to the same temperature the flow was annealed to in the previous step, so $T_{i+1}=T_i$. We will make this clearer in the revised version of our manuscript.
We further created an ablation to show the impact of fine-tuning. You can find details on this in our reply to reviewer 1DKq. To summarize, the final fine-tuning step is important for all systems, while the intermediate fine-tuning helps keep the ESS up for the hexapeptide.
* For alanine dipeptide, we ran ablation experiments to show the impact of the number of temperature steps on the final metrics. We summarize the results in the following table: https://ibb.co/NgPHGG9J
As you can see, a tradeoff between increasing the number of intermediate temperatures (which yields more target evaluations) and improving the final metrics exists.
In the following figure, we further show the buffer ESS over the course of training: https://ibb.co/Z6VGsLFj
One can see that with smaller temperature steps, the buffer ESS during training increases.
* For alanine dipeptide, we ran ablations to show the impact of the number of samples drawn in each annealing iteration (https://ibb.co/jxwVY4M). Again, a tradeoff is visible between the number of target evaluations and the final NLL / ESS.
Furthermore, thank you for suggesting the additional baseline of reweighting directly from 1200 K to 300 K. You can see what this looks like (using $10^6$ samples) in the following figure: https://ibb.co/7JHtBb2v
We will include this additional baseline in our main text.
In general, the ablations show that a drop in ESS can be counteracted by using more intermediate temperatures, drawing more samples, or with additional fine-tuning steps. We are thus confident that we can scale our method to even larger and more complex systems, but we believe that already scaling to the hexapeptide is a significant milestone that we want to share with the community.
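For readers outside the sampling community, the following is a minimal sketch (our own illustration under a hypothetical 1-D double-well energy, not the authors' code) of the two ingredients discussed above: the geometric temperature schedule and the importance-reweighting step whose overlap quality the buffer ESS measures:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    # hypothetical 1-D double-well potential standing in for a molecular system
    return (x**2 - 1.0) ** 2

def boltzmann_weights(x, T_hi, T_lo):
    """Self-normalized importance weights for reweighting samples drawn at
    temperature T_hi to the Boltzmann target at T_lo (reduced units, kB = 1)."""
    logw = -energy(x) * (1.0 / T_lo - 1.0 / T_hi)  # log p_lo - log p_hi up to a constant
    logw -= logw.max()                              # stabilize before exponentiating
    w = np.exp(logw)
    return w / w.sum()

def ess(w):
    # effective sample size: N for uniform weights, 1 for a degenerate distribution
    return 1.0 / np.sum(w**2)

# geometric schedule from the pre-training temperature down to the target
T0, T_target, n_steps = 1200.0, 300.0, 9
schedule = T0 * (T_target / T0) ** (np.arange(n_steps + 1) / n_steps)

x = rng.normal(0.0, 1.2, size=10_000)               # stand-in for flow samples at T0
w = boltzmann_weights(x, schedule[0], schedule[1])  # one annealing step
print(f"step {schedule[0]:.0f} -> {schedule[1]:.0f}: ESS = {ess(w):.0f} of {len(x)}")
# a fine-tuning step sets T_lo = T_hi, giving uniform weights and maximal overlap
```

In the actual method each annealing step retrains the flow on the reweighted buffer before the next reweighting, so the per-step overlap is refreshed; the snippet only illustrates a single reweighting step.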
In addition to the molecular systems studied in our work, we now additionally benchmarked against the 2D GMM system (see our rebuttal to reviewer CidM), where TA-BG outperforms FAB and all diffusion-based baselines.
Again, thank you for your helpful comments; we believe we have addressed all suggestions and concerns of your review. The suggested ablations shed light on interesting dependencies present in our method. We are confident that our results, including the suggested ablations, are interesting for the sampling community and provide a foundation for future work. We hope that you can raise your score accordingly.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their hard work during the rebuttal period which has led to significant improvements in understanding around this work.
> This ablation nicely extends our mode collapse discussion in the paper. The fact that alanine dipeptide can be sampled without mode collapse with the reverse KLD at only slightly increased temperature (375 K) is likely interesting and surprising to the sampling community in itself.
Agreed. This is quite surprising to me, and quite useful.
I'm somewhat less confident that this will scale to much larger systems, but I believe the results thus far are already a very useful step.
The authors have satisfied my concerns with the current work. For this reason, I raise my score 3->4.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising your score and acknowledging the importance of our work, including the added ablations.
Furthermore, thank you for your detailed review of our work, which included many good suggestions. They not only improved the paper but also deepened our own understanding of the method. | null | null | null | null | null | null |
Customizing the Inductive Biases of Softmax Attention using Structured Matrices | Accept (poster) | Summary: --- score update to 3 (weak accept) from 2 (weak reject) after the rebuttal ---
The paper considers the score function in attention computation, one of the core components of modern LLMs. The common setting for multi-head attention is that each head has a "low-rank" bottleneck, as the product of the key and query projection matrices ($W_Q W_K^\top$) is a low-rank matrix. Prior work has shown that this precludes learning some functions when doing in-context regression.
Furthermore, regular attention uses the full context which is computationally expensive, relative positions are encoded via positional encodings and are not explicitly treated in the attention itself anymore. An existing proposal to make the attention more efficient is sliding-window attention.
The paper proposes families of structured matrices to parametrize $W_Q W_K^\top$ in ways that are still computationally efficient but lead to full rank matrices.
Claims And Evidence: The paper claims to a/ solve the problem encountered with low-rank approximations for problems that intrinsically require high-rank. b/ improve language modelling tasks by including a locality bias.
The claims are not made very precise throughout the paper (no formal statements) and the empirical evidence seems a bit weak.
Also, the claim "attention uses the same scoring function for all input pairs without imposing a locality bias" in the abstract seems misleading to me. Early models used positional encodings at the embedding level, while Llama models and many others use RoPE in each attention layer. Hence, it is certainly the case that the attention uses some relative information about the tokens.
Overall it is a bit unclear whether they aim to be better (quality) or more efficient. Or pareto-better.
Methods And Evaluation Criteria: The benchmark with in-context regression seems a bit limited.
For the language modelling task, I would expect some discussion of the positional encoding.
Theoretical Claims: The only theoretical claim is that MLBTC (Definition 1) subsumes MLR and BTT matrices. This seems correct.
Experimental Designs Or Analyses: see above.
Supplementary Material: I just looked at some more experiments.
Relation To Broader Scientific Literature: As mentioned above I am missing a discussion of positional encodings. Furthermore, modern LLMs often use GroupedQueryAttention. I think for larger impact those should be discussed.
Essential References Not Discussed: see above
Other Strengths And Weaknesses: --
Other Comments Or Suggestions: Overall, I am somewhat insecure about the impact of the paper. I don't see a large theoretical contribution and the empirical results are a bit limited.
Questions For Authors: ### 1. I am struggling a bit to follow the Locality Bias discussion in section 3.4.
Could you give me a bit more explanation of how Equation 9 gets transformed into something of the form of Equation 4?
In particular (9) requires some explicit handling of the relative indices, whereas 4 does not.
Also in (9) my understanding is that there is a maximum "reach" defined by the maximum level. This would mean that it cannot consume the full context and is actually more like a sliding window.
---
### Figure 3:
Am I understanding correctly that attention with 1 head uses the full-rank $D \times D$ matrix? And yours is always using a single head as well, right?
From an expressivity perspective, 1-head attention should be strictly better than BTT. Are you hence making a claim about training dynamics? Also, in Appendix Figure 9, the ordering seems inconsistent. How did you choose to report D=128 in the main document?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback. We respond to your questions below.
**Relationship with Positional Encoding and RoPE.**
It is certainly true that many modern transformers use positional encoding schemes that capture relative position. For instance, RoPE encodes positional information by rotating query and key vectors by an angle proportional to the token’s index in the sequence. This can help the scoring function treat neighboring tokens alike. So in a way, RoPE does have a “locality bias”, or at least a "locality awareness".
But the goal of our MLR attention is different. We are not trying to simply encode relative positional information or boost the attention score between neighboring tokens. Instead, we change the computational cost of the scoring function based on the tokens’ positions. Put differently, standard attention (with or without RoPE) uses the same query/key dimension for all pairs of tokens. We effectively use a smaller query/key dimension for tokens that are far apart. This saves FLOPs compared to standard attention, while still allowing high quality attention scores for neighboring tokens. Thus our MLR attention is compatible and complementary with existing positional embedding schemes, including RoPE.
We acknowledge that our choice of the term “locality bias” in the context of MLR attention may have obscured our meaning. In our revised manuscript, we instead call it "distance-dependent compute allocation".
**Do we "aim to be better (quality) or more efficient"?**
1) Bilinear BTT improves expressive power by increasing the effective query/key dimension of each head (“the rank”). However, it costs more parameters and compute than standard attention (see Table 1). That is, it increases “quality” at the possible expense of efficiency. Figure 3b shows that this trade is worth it.
2) MLR attention improves efficiency by decreasing the effective query/key dimension for pairs of tokens that are far apart in the sequence. That is, it saves FLOPs at the expense of expressiveness. Because real-world sequence data has local structure, this reduction in expressive power probably does not matter.
**Grouped Query Attention (GQA).**
In GQA, the same KV transformation is shared across several heads. Our methods can be thought of in terms of query and key transformations too; see Appendix B.2 and the discussion under Eq 9. Thus, our approach is fully compatible with GQA. We include a detailed discussion of GQA in our revision.
**Explanation of how Equation 9 gets transformed into S of the form of equation 4.**
Claim: The $j, j’$ entry of S equals the right hand side of Eq 9.
Proof. For simplicity, assume there are just two levels, so Eq 9 reduces to the formula preceding it on line 246.
Divide $X = \begin{bmatrix}X_1 \\\\ X_2\end{bmatrix}$ into blocks. $X_1$ is the first half of the sequence and $X_2$ corresponds to the second half. Divide $W_Q = \begin{bmatrix}L_1 & L_2\end{bmatrix}$ (line 270). Thus $$Q = X W_Q = \begin{bmatrix}X_1 L_1 & X_1 L_2 \\\\ X_2 L_1 & X_2 L_2\end{bmatrix}$$
Now divide $Q$ into 3 named blocks according to Figure 1b:
- $Q_{11} = XL_1$
- $Q_{21} = X_1 L_2$
- $Q_{22} = X_2 L_2$
Analogously
- $K_{11} = XR_1$
- $K_{21} = X_1 R_2$
- $K_{22} = X_2 R_2$
Finally, plug the above into the definition of S on line 268:
$$S = Q_{11}K_{11}^\top + \begin{bmatrix}Q_{21}K_{21}^\top & 0 \\\\ 0 & Q_{22} K_{22}^\top\end{bmatrix} = XL_1R_1^\top X^\top + \begin{bmatrix}X_1 L_2 R_2 ^\top X_1^\top & 0 \\\\ 0 &X_2 L_2 R_2^\top X_2^\top\end{bmatrix}$$
Consider the $j, j'$ entry of $S$. If $j$ and $j'$ are in different blocks, it is $x_j^\top L_1 R_1^\top x_{j'}$. If they are in the same block, it is $x_j^\top L_1 R_1^\top x_{j'} + x_j^\top L_2 R_2^\top x_{j'}$.
We added this derivation to the appendix.
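To make the derivation concrete, here is a small NumPy check (our own sketch, not the paper's code; the dimensions and the two-level split into halves are illustrative) that the blockwise construction of $S$ matches the entrywise formula:

```python
import numpy as np

rng = np.random.default_rng(0)
n, D, r = 8, 6, 2  # sequence length, model dim, per-level rank (illustrative)
X = rng.normal(size=(n, D))
L1, R1 = rng.normal(size=(D, r)), rng.normal(size=(D, r))  # global (level-1) factors
L2, R2 = rng.normal(size=(D, r)), rng.normal(size=(D, r))  # local (level-2) factors

X1, X2 = X[: n // 2], X[n // 2:]

# blockwise construction: global low-rank term plus a block-diagonal local term
S_blocks = X @ L1 @ R1.T @ X.T
S_blocks[: n // 2, : n // 2] += X1 @ L2 @ R2.T @ X1.T
S_blocks[n // 2:, n // 2:] += X2 @ L2 @ R2.T @ X2.T

# entrywise formula: the level-2 term contributes only when j, j' share a block
S_entry = np.empty((n, n))
for j in range(n):
    for jp in range(n):
        s = X[j] @ L1 @ R1.T @ X[jp]
        if (j < n // 2) == (jp < n // 2):
            s += X[j] @ L2 @ R2.T @ X[jp]
        S_entry[j, jp] = s

assert np.allclose(S_blocks, S_entry)
print("blockwise and entrywise constructions agree")
```

The block-diagonal term is what lets far-apart token pairs use a smaller effective query/key dimension than neighboring pairs.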
**Is there a maximum "reach" defined by the maximum level?**
The first level is global, so all tokens interact. In the notation of Eq 9, this is because *all* pairs of tokens $j, j’$ have $d(j, j’) \geq 1$.
**Does Attention with 1 Head use the full rank D×D matrix?**
Yes.
**Does our attention always use a single head as well?**
We use multiple heads. E.g., in Fig 3, our bilinear models use 8 heads.
**How did we choose to report D=128?**
In all four subplots of Figure 9, standard 1-head attention and BilinearBTT (both full rank) converge well before standard H=8 attention (which is low rank). This is the only claim we make with this figure or with Fig 3a. We picked d=128 because it was the largest and most realistic. In Fig 3b, we further show that Bilinear BTT is better than 1-head attention because it trains faster in terms of FLOPs.
Thank you again for your detailed feedback and your questions. We made a significant effort to address your questions, and we would appreciate it if you would consider raising your score in light of our response. Do you have any additional questions we can address? | Summary: --- increased score from 3 to 4 after comment from authors ---
This work proposes a new way to parameterize the query-key operation in attention matrices. When the key and query matrices are low-rank, as is the case in the vanilla transformer architecture, high-dimensional input data might suffer from a low-rank bottleneck, leading to low performance. While introducing full-rank matrices would solve this problem, it increases the number of FLOPs needed to run the model. Therefore the authors propose a new way of parameterizing this operation via highly structured high-rank matrices. This not only increases efficiency compared to dense matrices and allows representing high-rank operations, but also allows taking the distance between tokens as a parameter to model locality.
Claims And Evidence: The claims on the parameterizations themselves are valid, i.e. how they relate to one another and their parameter counts.
The evidence that the new parameterizations are more efficient is shown in experiments on language, in-context learning, and regression. However, it is unclear from the manuscript how the training time, hyperparameter search, final performance, and number of heads influence one another. For practitioners, it would be useful to obtain a deeper understanding of these links, to be able to decide in which cases to use this architectural component specifically.
Methods And Evaluation Criteria: see above.
Theoretical Claims: There is no strong theoretical claims except the parameter efficency that are to check, the inclusion of the related matrix families seem ok.
Experimental Designs Or Analyses: It would be useful to include error bars in all experimental plots over several runs to estimate the variance and significance of the differences.
Supplementary Material: The authors provide the code in the supplementary material, thanks!
Relation To Broader Scientific Literature: The work examines the attention mechanism closely, which is itself widely studied due to the large fraction of time and compute it consumes. Next to the other approaches for decreasing computational cost, this is another valid and interesting approach.
Essential References Not Discussed: Two factors that might be discussed more in depth are the following:
(Q1) Since the authors mainly advocate that their approach is more efficient, it would be useful to know how exactly the FLOPs and "non-embedding compute" are calculated. It would also be useful to know how this approach impacts parallelization. Similarly, when the authors mention on page 5 that a "DxD matrix is prohibitively expensive", I would like to understand better why. After all, the rule of thumb is that the hidden layers have size 4D, so the computation there is even more expensive. What is the difference between the two settings that motivates your statement?
(Q2) The statements about locality never consider the fact that most input data is decorated with positional information via positional embedddings. It seems that this already adds a sort of positional bias to the matrix when the positional embeddings are relational and their outer product represents a structured matrix. How does this relate to your approach?
(Q3) You mention on page 4 that the BTT matrix has been used as a replacement for linear layers in neural networks; this is elaborated on in Section 6, first paragraph. Can you make more explicit the special properties of the query-key matrix that warrant a special examination, rather than simply treating it as a linear layer and replacing it with those previous methods?
(Q4) Is there work that investigates the trade-off between the number of heads and the attention rank? This would seem related to the efficiency discussion, as well as your experiments comparing 1- and 8-head models. Some known mechanisms, such as induction heads [1], explicitly use several heads to execute different functions; it is unclear how your proposal would be able to implement them.
[1]Olsson, Catherine, et al. "In-context learning and induction heads." arXiv preprint arXiv:2209.11895 (2022).
Other Strengths And Weaknesses: .
Other Comments Or Suggestions: The figure labels are quite small compared to the text.
Even though at this point in time some elements of the introduction seem like universal truths, it would be correct to cite the appropriate related work when mentioned, i.e.
- L.43: the attention mechanism/transformer
- L.47: transformers are being used as general purpose tools
- L.48: Transformers have specific inductive biases
- L.14: a large ongoing research effort ... for long and big models
- L.34: lacks a bias for prioritising local interactions - how is this related to positional encodings? - why is the locality bias from them not enough?
Questions For Authors: See all questions (QX) before in the relevant sections.
(Q5) On page 5, left first paragraph you choose $s=1$ or $s=2$. I might have misunderstood the previous section, but does this imply that the largest matrix you can fully represent for $s=1$ is a 1x1 matrix? Could you elaborate on this together with the practical values you choose in your experiments? Or is there a typo in the $a=b=c=d=s=\sqrt{d}$ constraint?
(Q6) Why do the 8- and 1-head attention structures in Fig. 3b) have the same points on the x-axis? Should the 8-head one not be more compute-intensive? Maybe it would be helpful to clarify which settings you are using here in the caption.
(Q7) In Figure 3a), what do you mean by "controlling for the number of training steps"?
(Q8) This might be me, but can I use several heads of MLR and BTT, or combine them? Would it make sense to compare the 8-head attention to the 8-head setting of the structured matrices?
The responses to the questions here would improve the presentation and understanding of the paper, specifically to make the efficiency improvements more clear empirically - this would strengthen the main claims of the paper.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful questions and supportive feedback!
**Hyperparameter and Error Bars.**
In all experiments, for both our methods and baselines, we sweep over the learning rate while keeping other hyperparameters fixed. In all figures, including Fig 3 and Fig 4, we plot only the best learning rate for visual clarity. Since our code derives from the NanoGPT codebase, the other hyperparameters (weight decay, Adam betas, etc.) are well-tuned for standard attention already. We did not have the compute budget to sweep multiple random seeds for our larger experiments, though we can add error bars for the smaller-scale experiments to our revision.
The choice of learning rate is further discussed in Appendix E.1. Our implementation adopts muP, a method for stable hyperparameter transfer across model width. Please refer to the “Larger Models and Datasets” section in our response to Reviewer 1ojm for a figure that plots loss against various learning rates. We hope that this helps address your concern that we indeed pick the best learning rate for both the baseline and our method.
**Response to Review’s Questions.**
### Q1
You can refer to the `compute_model_FLOPs` function in `src/utils.py` provided in our codebase. We use FlopCountAnalysis from the fvcore library. This function traces the forward pass of the model and records the total number of FLOPs for all the operations. To compute the non-embedding FLOPs, we subtract the FLOP count of the token embedding and language modeling head (the first and last linear layer)
The MLP layers have weight matrices of size 4D x D, but the attention layers have many heads. It would be too expensive for *each* head to have a D x D weight matrix. For example, Llama 3, 70B has H = 64 heads, D = 8192, and 80 layers. If each head used an unstructured DxD weight matrix instead of the D x (D/H) matrices $W_Q$ and $W_K$, it would have 330 billion more parameters.
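The arithmetic behind that figure can be checked directly. This small calculation (our own, following the rebuttal's simplified multi-head accounting with the cited Llama 3 70B shapes; it ignores that the real model uses grouped-query attention) compares per-head $W_Q, W_K$ of size $D \times D/H$ against a hypothetical unstructured $D \times D$ bilinear form per head:

```python
# Llama 3 70B attention shapes cited above
D, H, layers = 8192, 64, 80

per_head_now = 2 * D * (D // H)   # W_Q and W_K, each of size D x (D/H)
per_head_full = D * D             # one hypothetical unstructured D x D matrix

extra = layers * H * (per_head_full - per_head_now)
print(f"extra parameters: {extra / 1e9:.0f} billion")  # prints: extra parameters: 333 billion
```

This is the ~330 billion extra parameters mentioned above, which is why structured parameterizations of the bilinear form are needed rather than dense per-head matrices.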
### Q2:
We kindly refer you to our response to Reviewer zR4a.
### Q3:
Previous methods were for linear layers of the form $X \mapsto AX$. We use it for $X \mapsto X W_Q W_K^\top X^\top$. That is, previous work on replacing linear layers with structured BTT could not account for the fact that a structured matrix is *already* being used in standard attention, since $W_Q$ and $W_K$ function not as two separate linear transformations, but as a single, low-rank (bi-)linear transformation.
### Q4:
- Like standard attention, our proposed methods use multiple attention heads in each attention layer. Our modifications to standard attention are applied separately to each head. In Fig 3, we compare our method to standard attention with 1 head as a baseline since 1-head attention is full rank like our version, but that isn’t our proposed method. We don’t explicitly test for induction heads, but we think our methods can implement them as well as standard attention, especially since they succeed at language modeling and in-context linear regression.
- There is some work on the tradeoff between number of heads and rank. Theoretical: Amsel (https://iclr.cc/virtual/2025/poster/27747) and Sanford, “Representational Strengths and Limitations of Transformers” (https://openreview.net/pdf?id=36DxONZ9bA). Empirical: Bhojanapalli (https://dl.acm.org/doi/10.5555/3524938.3525019) and Appendices D.4 and E.2 of the muP paper https://arxiv.org/pdf/2203.03466.
### Q5
Let us fix $a=b=c=d=\sqrt{D}$, as in our experiments. We are now free to set s to anything between 1 and $\sqrt{D}$ to trade off efficiency for expressivity. (Because of the cited result, setting $s > \sqrt{D}$ would be pointless.) We chose to set s = 1 or 2 to maximize speed, and we find that the model is still expressive enough to perform well (even better than low rank attention). In fact, when s=1, BTT matrix is exactly a Monarch matrix (https://arxiv.org/abs/2204.00595), which is an expressive matrix class already.
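As an illustration of the $s=1$ case, here is a minimal sketch (our own, not the paper's implementation; $D=16$, $b=\sqrt{D}=4$ chosen for readability) of a Monarch-style matrix: two block-diagonal factors with an interleaving reshape-transpose permutation give a generically full-rank $D \times D$ map using only $2 D^{3/2}$ parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
b = 4
D = b * b  # D = 16, with a = b = c = d = sqrt(D) = 4

# two sets of b dense b-x-b blocks: 2 * b^3 = 2 * D^(3/2) parameters total
R_blocks = rng.normal(size=(b, b, b))
L_blocks = rng.normal(size=(b, b, b))

def monarch_apply(x):
    # block-diagonal multiply, permute (transpose the b x b grid), block-diagonal multiply
    z = np.einsum("kij,kj->ki", R_blocks, x.reshape(b, b))
    z = z.T
    y = np.einsum("kij,kj->ki", L_blocks, z)
    return y.T.reshape(D)

# materialize the dense equivalent by applying the linear map to identity columns
M = np.stack([monarch_apply(e) for e in np.eye(D)], axis=1)
print("rank:", np.linalg.matrix_rank(M), "params:", 2 * b**3, "vs dense:", D * D)
```

With Gaussian blocks the resulting matrix is full rank almost surely (here rank 16 with 128 parameters, versus 256 for a dense matrix), which is the sense in which the $s=1$ parameterization escapes the low-rank bottleneck cheaply.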
### Q6
The 8-head and 1-head models we test have exactly the same compute intensity. This is because the 1-head model has an 8x larger head dimension, due to the Hr=d rule.
### Q7
We mean to say that each point on this graph corresponds to a model that was trained for the same number of steps. We have now clarified this point in the caption.
### Q8
We are currently using multiple heads of BTT or MLR attention. We generally compare our attention to standard attention with the same number of heads. In Fig 3, we additionally compare to standard attention with one head. We explain this more clearly in the revision.
**Writing and Style**
Thank you for your suggestions about enlarging figure labels and including citing some standard claims. We have incorporated them in our revised manuscript.
We made a significant effort to address your questions, including paper edits which we feel improve the clarity of our paper. We would appreciate it if you would consider raising your score in light of our response. Do you have any additional questions we can address?
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thank you for the explanations; they improved my understanding of the paper and will hopefully help future readers as well once you edit them in. In agreement with reviewer zR4a, I think the choice of "distance-dependent compute" is very helpful to the intuitive understanding. I also appreciate that you will add the error bars for the smaller-scale experiments, and I completely understand that with a limited compute budget it is not possible to do so for the larger-scale experiments.
Best!
---
Reply to Comment 1.1.1:
Comment: We are grateful to the reviewer for the constructive feedback! We will add the explanations and error bars for smaller scale experiments in our revision. We will also change the term locality bias to distance-dependent compute. Thanks to the Reviewers' comments, these will undoubtedly improve the manuscript. | Summary: The authors propose to address a common limitation of existing softmax attention layers: the information bottleneck when using small head dimension. To do this, they propose to bake in a locality bias into the structured parameterisation of the attention weights.
The attention mechanism is introduced cleanly as a bilinear form, from which they can introduce structured and parameter-efficient bilinear forms. This work appears to build upon the work of Parshakova et al., but in the context of designing an efficient attention layer.
Claims And Evidence: The main claim is that current attention layers have an information bottleneck, which is more significant when the head dimension is reduced. Naturally, this depends on the data, but the intuition is valid, the results confirm this, and the authors provide significant references to existing works that highlight both the theoretical and practical cases where this can be a problem.
Secondly, they propose that a locality bias is a good way to introduce more parameter efficiency without degrading performance. The authors confirm this with empirical results.
Methods And Evaluation Criteria: The authors evaluate on the OpenWebText dataset and the ETT time-series forecasting dataset, where they observe better performance as the sequence length grows. Although the first dataset is of moderate size, a larger-scale dataset and larger models would give this paper a much bigger impact; understandably, though, there are new difficulties when scaling up these methods to ensure that theoretical FLOP savings translate to real wall-clock performance reductions.
Theoretical Claims: I looked through the derivations and they appear to be correct.
Experimental Designs Or Analyses: The wall clock time is great to see and strengthens the practical impact of this paper. The implementation using batch matrix multiplication is simple and effective for these structured block matrices.
Supplementary Material: I had a look through the code and it seems to be relatively complete.
Relation To Broader Scientific Literature: This work naturally builds upon existing and recent development of multi-level low-rank matrices. These fit very nicely into the attention matrix formulation and are well motivated. This work has a very broad impact, and further practical developments on efficient implementations could enable its widespread adoption among practitioners.
Essential References Not Discussed: Other than [1], the discussions are relatively complete.
[1] Rethinking Attention with Performers. ICLR 2021
Other Strengths And Weaknesses: One of the main limitations is not seeing more quantitative results, larger models etc.
Other Comments Or Suggestions: None
Questions For Authors: None
Ethical Review Flag: Flag this paper for an ethics review.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank you for your supportive feedback. We address your comments below.
**Larger Models and Datasets:**
We are hopeful about the prospect of scaling up these experiments to even larger models and datasets. As a first step in that direction, we now present a new experiment on hyperparameter transfer for our architectures. The maximum update parameterization (muP) has become a crucial tool for scaling models up because it provides a way to transfer the results of hyperparameter tuning from small models to large ones. This is far less expensive than tuning hyperparameters for large models directly. For architectures that use structured matrices, muP does not work out of the box, but in Appendix E of our paper, we provide a recipe for adapting muP to our architectures. Our latest experiment validates this recipe.
In the figure provided in the link https://drive.google.com/file/d/18pI--DmWWqB3PqFd-dEaziOypc5rRIP8/view?usp=sharing, we show the validation loss of an 8 Level MLR attention and standard attention on OpenWebText across a variety of learning rates. In our paper, the maximum model width we used is 768 with a context length of 1024. Here we sweep over model widths 512, 768, and 1024 with a reduced context length of 256 due to compute constraints. As the figure shows, our MLR attention shares the same optimal learning rate across model width, and it’s also consistently better than standard attention when both are properly tuned. This result suggests MLR will continue to perform well on larger models trained with more data.
**Efficient Implementation.**
As the review notes, a thoughtful IO-aware implementation is needed to ensure that savings in FLOPs translate to savings in wall-clock time on a GPU. Like standard attention, our proposed methods rely on a few batch matrix multiplications followed by softmax. The block diagonal structure of MLR matrices is easy to parallelize, and we think the highly structured permutation matrix in BTT can be fused with the surrounding multiplications. Thus, we are optimistic that techniques similar to Flash Attention can be applied to our attention variants (and indeed, to general MLBTC matrices).
**Rethinking Attention with Performers. ICLR 2021**
Thank you for pointing out this paper! We have incorporated a discussion of it in the revised manuscript.
We would like to note that while the Performers paper proposes an algorithm for computing attention more efficiently achieving a linear instead of quadratic dependence on the sequence length, the function they are (approximately) computing is the standard attention function. As a result, they still have a low rank bottleneck and they lack a locality bias in the allocation of compute. In contrast, our approach to improving transformer models is to replace the standard attention function with something else. Our proposed methods have different inductive biases from standard attention, which we show leads to better performance in several cases. However, we retain the quadratic dependence on the sequence length. We leave it to future work to find efficient algorithms for (approximately) implementing our proposed architectures.
Thank you again for your detailed and supportive review. We hope we addressed your questions. Please let us know if you have any additional questions or comments that we can discuss.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' comments addressing the comparison with Performers. The comparison is good to see and should be included in the main manuscript. Furthermore, the additional results scaling to a larger model are great to see! I maintain my original score.
---
Reply to Comment 1.1.1:
Comment: We are grateful to the reviewer for the constructive feedback! We will add the new figures, discussions, and references. Thanks to the Reviewers' comments, these will undoubtedly improve the manuscript. | Summary: The paper address two issues of the attention computation.
The first is the bottleneck caused by the low-rank computation of the key and query matrices in the attention computation. Instead of the standard low-rank decomposition, they propose to use structured matrices to represent the attention score, which are full rank but still more efficient to compute than the full $D \times D$ matrix ($O(D^{2/3})$ vs. $O(D^2)$).
The second is the lack of a locality bias. In most problems, closer tokens are more relevant than distant ones. By introducing structured matrices, the proposed method builds locality into the attention computation.
Claims And Evidence: * Better at the efficiency and performance trade-off.
- Figure 10 in the Appendix shows the loss as a function of wall time.
* Locality.
- The paper includes experiment on regression problems where the standard attention performs poorly and show that the proposed method can improve the performance (Figure 3).
Methods And Evaluation Criteria: Yes. The proposed method introduces structure into the attention computation that was originally represented as a low-rank matrix multiplication. The method is evaluated on in-context regression, which highlights the low-rank bottleneck issue.
Theoretical Claims: N/A.
Experimental Designs Or Analyses: The paper uses experiments measuring squared error on in-context regression tasks and shows that the proposed method outperforms the standard method.
Supplementary Material: Experiments regarding the time and performance tradeoff.
Relation To Broader Scientific Literature: The paper is proposing structured matrices in attention computation. There were other previous works using different design of structured matrices.
Essential References Not Discussed: N/A.
Other Strengths And Weaknesses: The paper is clear in the target problem and proposed methods to solve the problem.
The writing of the paper can be improved.
Other Comments Or Suggestions: Typos:
- line 033. Section 1, paragraph 2, third last line. "express with attention".
- line 304. Section 4, paragraph 3, last line. The equation is missing a "(".
Questions For Authors: I'm wondering if there are textual problems (non-regression problems) where the locality can have an impact.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and positive feedback. We address your questions below.
**Contributions in Comparison to Prior Works.**
Thank you for pointing out the connection of our work to the broader literature of structured matrices. We would like to highlight further connections that we did not mention explicitly in the paper:
- Several prior works [2, 3, 5] replace linear layers in the Transformer model for improved efficiency and a better scaling law. Our work goes beyond them by 1) considering structured matrices in bilinear transformations, like the one that defines the attention score, and 2) using structured matrices to replace data-dependent matrices like QK^T, where the query and key matrices are themselves the outputs of a linear layer.
- The Multi-Level Low Rank (MLR) matrix [1] was originally proposed in the applied math literature as an extension of low rank matrices to fit multilevel factor models. To our best knowledge, our work is the first to apply MLR matrices to deep learning models.
- We propose Multi-Level Block Tensor Contraction (MLBTC), a family of structured matrices that generalizes many common structured matrices like Low Rank, Kronecker, Monarch [2], Block Tensor Train [3], Multi-Level Low Rank [1]. We believe this is a novel perspective. We conduct experiments to explore the various inductive biases that these structured matrices encode when applied to a Transformer architecture.
- The Hydra paper [4] proposed the matrix-mixer framework that unifies common sequence-mixer modules (e.g. CNN <-> Toeplitz, Linear Attention <-> Low Rank, Mamba <-> Semi-Separable, etc.) with their underlying structured matrices. We believe our efforts add to this line of work exploring novel structured matrices for sequence mixing.
**Writing.**
Thank you for your helpful feedback regarding the writing and typos. We have updated our manuscript accordingly, alongside additional clarifications inspired by your questions.
**Locality.**
We believe that problems with long context lengths and a hierarchical structure are most amenable to our approach. The computational savings of MLR attention compared to standard attention increase with the sequence length, and the multi-level nature of MLR matrices makes them well-suited to hierarchically structured data, where tokens become gradually more related as their distance decreases. One application we are excited about is code models. Code repositories tend to be large, pushing the limits of standard transformers' context windows. They are organized hierarchically into folders and files, and each code file is further organized by an abstract syntax tree, with hierarchically nested classes, methods, control structures (loops), lines, and expressions. Our method may help transformers read repositories more efficiently, like humans do, focusing most of their effort on the immediate context while keeping the global structure in mind too.
We are also intrigued by potential applications to non-text data, such as DNA sequences, phylogenetic data, time series data, and graph data. We plan to apply our method to some of these settings in future work.
Thank you again for your detailed review. We hope we were able to address all of your questions and that you would consider raising your score. Please let us know if you have any additional questions or comments that we can address or discuss.
________
**Reference:**
[1] Factor Fitting, Rank Allocation, and Partitioning in Multilevel Low Rank Matrices. https://arxiv.org/abs/2310.19214
[2] Monarch: Expressive Structured Matrices for Efficient and Accurate Training. https://arxiv.org/abs/2204.00595
[3] Compute Better Spent: Replacing Dense Layers with Structured Matrices. https://arxiv.org/abs/2406.06248
[4] Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers. https://arxiv.org/abs/2407.09941
[5] Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices. https://arxiv.org/abs/2410.02117 | null | null | null | null | null | null |
Almost Optimal Fully Dynamic $k$-Center Clustering with Recourse | Accept (poster) | Summary: The paper claims to construct an $O(1)$-approximate solution for the metric $k$-center problem in the dynamic setting, with $O(1)$ amortized recourse and $\widetilde{O}(k)$ amortized update time. By combining a recursively nested MIS (Maximal Independent Set) with a dynamic sparsifier, the paper improves the amortized update time of MIS from $\widetilde{O}(n)$ to $\widetilde{O}(k)$.
Claims And Evidence: The proofs for MIS are clear and convincing. However, the explanation of the dynamic sparsifier lacks clarity; more details or a refined lemma for the properties of the dynamic sparsifier may be useful.
Methods And Evaluation Criteria: The methods are appropriate. The paper adapts the dynamic sparsifier from $k$-means to $k$-center via a conversion from $k$-means to $(k,p)$-clustering and then to $k$-center.
Theoretical Claims: The proofs and analysis for MIS are correct. However, the analysis of amortized recourse and update time in the dynamic sparsifier is unclear and raises questions:
1) In the expression "$\beta \cdot k = \widetilde{O}(k)$," is a $\log n$ factor hidden?
2) On page 8, the paper states that "each $U_j$ has size $O(k)$." However, since $U_1 = O(n)$, could $U_j$ potentially be $O(n)$? If so, should the recourse here include a $\log n$ term?
Experimental Designs Or Analyses: No experiments are included.
Supplementary Material: Appendix A (comparison with prior work) and Appendix B (two proofs) were reviewed; the proofs are correct. No other supplementary material is provided.
Relation To Broader Scientific Literature: The paper claims to achieve optimal approximation ratio, amortized recourse, and update time simultaneously, building on prior work.
Essential References Not Discussed: No
Other Strengths And Weaknesses: The work demonstrates originality by extending the dynamic sparsifier from $k$-means to $k$-center. The notation and lemmas are clearly presented, and the proof ideas are straightforward.
Other Comments Or Suggestions: No
Questions For Authors: Mentioned above
Ethics Expertise Needed: ['Other expertise']
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank all the reviewers for their efforts and insightful comments. Please let us know if you have any questions about our rebuttal.
Thank you for pointing this out; this will help us improve the clarity of the proofs.
Indeed, since $\beta = O(\log (n/k))$, there is a $\log (n/k)$ factor hidden in $\widetilde{O}(k) = O(k \cdot \log (n/k))$, which is consistent with our use of the notation $\widetilde{O}$ (please see footnote 2 on page 2).
We will point this out clearly in the Notation section instead of the footnote.
Note that this does not break the proof of Theorem 1.3: no matter the size of the space (either $n$ before using the sparsifier, or $\beta \cdot k = k \log (n/k)$ after using the sparsifier), the recourse is constant, since we call the algorithm of Theorem 1.2 on this space, which has constant recourse (i.e., $R_A(\cdot) = O(1)$ in Line 312).
You are right about the comment on page 8. There is a typo in Line 435; we should have written "each $S_j$ has size $O(k)$".
The rest of the analysis remains intact and correct.
Due to space constraints, we could not elaborate more on the proofs in the paper.
Below, we explain the proof in more detail (note that the current proof in the paper is self-contained and this explanation does not add any new ideas or change the proof):
Clarification on the proof:
Note that the recourse is defined as the number of changes in the output of the sparsifier $U = S_1 \cup S_2 \cup \cdots \cup S_{\ell-1} \cup U_\ell$ (see line 13 of Algorithm 1), and by reconstructing from $U_i$, the sets undergoing changes in the output are $S_i, S_{i+1}, \ldots, S_{\ell-1}$ and $U_\ell$.
This concludes the total change in the output $U$ is at most $O(k(\ell-i+1))$ since each $S_j$ has size $O(k)$, as well as $U_\ell$.
Although $|U_1| = O(n)$, the sizes of the $U_i$s decrease **exponentially** according to Lemma 3.11, which is the key fact that we use to bound the recourse.
The recourse caused by reconstructing from $U_i$ is at most $O(k(\ell-i+1))$ as explained above.
Since the sizes of $U_i$s decrease exponentially (according to Lemma 3.11) until it becomes $|U_\ell| = \Theta(k)$, we get $\ell - i = O(\log(|U_i|/k))$.
Since we only start the reconstruction from $U_i$ every $\Omega(|U_i|)$ steps, the amortized recourse incurred by all the reconstructions starting from level $i$ would be bounded by $\frac{k}{|U_i|} \cdot \log \left( \frac{|U_i|}{k} \right)$.
The rest of the analysis follows from the bound on the potential function defined in the paper.
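To see concretely why the exponentially decreasing level sizes give $O(1)$ amortized recourse, here is an illustrative back-of-the-envelope computation (not from our implementation; the function and its parameters are ours for illustration). With $|U_i| \approx k \cdot 2^{\ell - i}$, level $i$ contributes roughly $(\ell - i + 1)/2^{\ell - i}$ to the amortized recourse, and the idealized geometric series $\sum_{j \ge 0} (j+1)/2^j = 4$ is a constant:

```python
# Sum the per-level amortized recourse: reconstructing from level i costs
# O(k * (number of levels above i)) and happens once per Omega(|U_i|) updates.
def amortized_recourse(k: int, n: int) -> float:
    levels = []
    size = n
    while size > k:            # level sizes halve until they reach Theta(k)
        levels.append(size)
        size //= 2
    levels.append(size)
    l = len(levels)
    # level i contributes cost ~ k * (l - i) once per levels[i] updates
    return sum(k * (l - i) / levels[i] for i in range(l))

for n in (10**3, 10**6, 10**9):
    # bounded by a small constant regardless of n (integer halving adds a
    # bit on top of the ideal geometric sum of 4)
    assert amortized_recourse(10, n) < 6
```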
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I have one more question: Given that there is $\mathrm{poly} \log n$ factor in the amortized update time, does your result ultimately improve any result in Table 1, e.g., Bateni et al., 2023, or is the new result parallel to them? Could you provide a more straightforward comparison between your results and those of the previous ones, e.g., listing the advantages and disadvantages of the new result?
---
Reply to Comment 1.1.1:
Comment: Thank you for your response.
First, we note that although the update time of the algorithm is important, the approximation ratio and the recourse are also two crucial parameters to optimize in $k$-clustering problems.
The notion of recourse in $k$-clustering has received significant attention in recent years.
We now elaborate on the advantages of our algorithm:
We did not optimize the analysis of the constants in our paper for the sake of simplicity.
We will provide a more intricate analysis in the updated version of our paper showing that the approximation ratio is at most $20$---the best approximation ratio of any known algorithm with constant recourse.
In particular, the previously smallest approximation ratio was $24$, by Lacki et al. [SODA'24], but their update time is a large $O(\mathrm{poly}(n))$.
Furthermore, by slightly tweaking how we use the sparsifier, we can get a recourse of at most $4 + \epsilon$ for any constant $\epsilon > 0$. We note that the best we can hope to achieve here is a recourse of $2$, since in the worst case we must remove a point and insert a new point.
We note that the result of Bateni et al. [SODA'23] does not have any guarantee on recourse (it might be as large as $\Omega(k)$).
We will provide this analysis in the updated version of the paper.
Finally, the update time of our algorithm is some $\mathrm{poly}\log n$ factor larger than that of Bateni et al. [SODA'23].
To summarize, our approximation ratio is better than that of all previous results with constant recourse, and our update time is at most a $\mathrm{poly}\log n$ factor larger than that of Bateni et al. [SODA'23], which has no guarantee on recourse.
We will make sure to add a more detailed comparison in our updated paper. | Summary: This paper gives almost optimal dynamic $k$-center algorithm in metric space. In dynamic $k$-center, the task is to obtain an approximate solution with as small update time and recourse, where recourse means the number of center points needed to be updated per insertion/deletion.
This paper designs an algorithm to obtain $O(1)$-approximation with $\tilde{O}(k)$ update time and $O(1)$ recourse. So it achieves improved or matching guarantees in all metrics (approximation factor, update time, recourse) upon all existing results. This is impressive.
Technically, authors borrow a few techniques from the literature and manage to combine all the advantages.
Claims And Evidence: The claims have been proved rigorously.
Methods And Evaluation Criteria: Yes
Theoretical Claims: I read the proofs and I think they are correct.
Experimental Designs Or Analyses: Not applicable.
Supplementary Material: I have read the supplementary material.
Relation To Broader Scientific Literature: Dynamic k-center is an important research problem in algorihtmic machine learning. This paper achieves state-of-the-art dynamic k-center algorithm. I believe the result will benefit future research.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strength: 1. The algorithm presented is easy to follow. 2. The paper is mostly written well.
Weakness: No experiments.
Other Comments Or Suggestions: I have not found typos. The writing is very good.
Questions For Authors: 1. Is there any evidence (e.g., a lower bound) that $\tilde{O}(k)$ update time is necessary?
2. Which metric of the algorithm performance suffers failure probability? I have not found the explicit probability claim in Theorem 1.1. Is it the case that with high probability the algorithm succeeds at every update?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank all the reviewers for their efforts and insightful comments. Please let us know if you have any questions about our rebuttal.
It is known that an update time of $\Omega(k)$ is necessary for this problem.
As stated in Line 58 in the introduction, it is known that, in the static setting, a running time of $\Omega(nk)$ is necessary to achieve any non-trivial approximation (Bateni et al. 2023).
Any dynamic algorithm with $o(k)$ update time yields a static algorithm with $o(nk)$ running time, which rules out any non-trivial approximation ratio.
Hence, $\Omega(k)$ update time is necessary in the dynamic setting, which means that $\tilde{O}(k)$ update time is near optimal (up to polylog factors).
Indeed, when we say that our algorithm succeeds with high probability at every update, we mean that the approximation ratio holds with high probability after handling each update (please see Footnote 5 in Theorem 1.3). | Summary: The paper proposes an algorithm for the k-center problem in the fully-dynamic setting in general metric spaces. In particular, the proposed method obtains a constant approximation with constant recourse and Otilde(k) update time (thus the name “almost optimal”). The algorithm is based on a combination of the reduction of the k-center problem to dynamic MIS in threshold graphs by Bateni et al. with the dynamic sparsifier (for k-median) of Bhattacharya et al.
Claims And Evidence: The theoretical claims seem to be correct. However, they are often very imprecise (see W1 and W2).
Methods And Evaluation Criteria: The proposed method, which combines two existing theoretical tools, is simple and very effective.
Theoretical Claims: I checked the proofs, albeit not in depth, and they seem to be correct. However, they often lack clarity.
Experimental Designs Or Analyses: Not applicable.
Supplementary Material: Not applicable.
Relation To Broader Scientific Literature: The paper combines two existing theoretical tools from the literature in a non-trivial way. This paper represents a step forward in closing the gap towards an optimal algorithm for dynamic k-center.
Essential References Not Discussed: Some additional references (albeit not essential) could be discussed, see W3.
Other Strengths And Weaknesses: Strengths:
- S1: the k-center problem is a highly relevant topic, and the fully dynamic framework is of great interest to both theoreticians and practitioners.
- S2: the paper is technically very solid. The contributions, although incremental relative to the works on which the paper is based, are non-trivial and seem to be correct. Moreover, the paper represents a step forward in closing the gap towards an optimal algorithm for dynamic k-center.
---
Weaknesses:
- W1 (Main concern): Hiding the logarithmic factors in the $\widetilde{O}$ notation is fine for the statements in the introduction. However, in the rest of the paper, the correct asymptotic running time should be stated. Indeed, the current notation would make it harder for subsequent works to compare the running times to the ones of the present work (e.g. the comparison for reasonable values of $k$ between a $O(k \sqrt k)$ algorithm and a $O(k (\log k)^{10} )$ algorithm would be misrepresented by the $\widetilde{O}$ notation.)
The same actually holds for the approximation ratio and recourse, they should be clearly stated in the theorem statements in the technical sections. I would say this is especially important in a venue like ICML, which is of interest also to practitioners (unlike STOC or FOCS).
- W2: the fact that the algorithm assumes oblivious adversaries, although fair, should be stated in Theorem 1.1, so as not to overstate the results. It is unclear to me why the authors chose to duplicate the theorem into Thm 1.1 and 1.3.
- W3: The discussion of related works seems insufficient, as it fails to discuss related works such as:
* [1] improves memory requirements over the original fully dynamic k-center paper, at the cost of a worse approx. ratio.
* [2] 2+eps in general metric spaces, efficient in spaces with low doubling dimension.
* [3] 2+eps in general metric spaces, state-of-the-art for update times in spaces with low doubling dimension. Also, k and eps are not needed to be known in advance, which offers an advantage over the present work.
* [4] 2+eps approximation, and offers an alternative approach for spaces with low doubling dimension. Also does not require k and eps to be known.
* [5] proposes an algorithm for dynamic k-center on graphs.
These works and the relationship to the present work should be discussed in the related work section or in Table 1. See for example Table 1 in [4].
- W4: The proofs, albeit correct, are not easy to follow, as they require jumping back and forth in the paper. As an example, the recourse analysis in Thm 1.3 is proved for the sparsifier in Section 3.4.2, but then requires the reader to backtrack to Lemma 3.3 to recall how this is used to obtain the total recourse. A few sentences here and there to guide the reader would go a long way.
- W5: As previously stated, the paper is incremental with respect to the works on which it is based. Indeed, other than combining the two established techniques, it seems that the recourse analysis of the sparsifier is the only substantial technical contribution.
---
- [1] Chan, T-H. Hubert, et al. "Fully Dynamic k-Center Clustering With Improved Memory Efficiency." IEEE Transactions on Knowledge and Data Engineering (2020).
- [2] Goranci, Gramoz, et al. "Fully dynamic k-center clustering in low dimensional metrics." 2021 Proceedings of the Workshop on Algorithm Engineering and Experiments (ALENEX).
- [3] Pellizzoni, Paolo, Andrea Pietracaprina, and Geppino Pucci. "Fully dynamic clustering and diversity maximization in doubling metrics." Algorithms and Data Structures Symposium. 2023.
- [4] Gan, Jinxiang, and Mordecai J. Golin. "Fully dynamic k-center in low dimensions via approximate furthest neighbors." Symposium on Simplicity in Algorithms (SOSA) 2024
- [5] Cruciani, Emilio, et al. "Dynamic algorithms for 𝑘-center on graphs." arXiv preprint arXiv:2307.15557 (2023).
Other Comments Or Suggestions: Minor comments:
- You state that the fully-dynamic framework focuses mainly on the three metrics, approximation ratio, recourse and update time. This is a compelling story for your paper. However, I would argue that there is a fourth metric, the query time. This is arguably the most important one. Saving all the points in a set and re-running Gonzalez on the pointset after each update has very low update time and yields the optimal approximation guarantee, but of course it has terrible query time, and thus does not qualify as a fully dynamic algorithm. This should be mentioned at least briefly.
Remark: Despite the simplicity of the proposed method, I am prone to suggesting acceptance if the authors address (e.g. by stating clearly the modifications they would make to the manuscript) my concerns on (i) the clarity in the statements of the theorems, and also on (ii) the literature review and (iii) the clarity of proofs.
Edit: after the rebuttal, I increased my score to suggest acceptance.
Questions For Authors: -
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank all the reviewers for their efforts and insightful comments. Please let us know if you have any questions about our rebuttal.
Our algorithm can be implemented to have an update time of $O(k \cdot \log^4 n \log \Delta)$ by using standard data structures.
Regarding the approximation ratio and recourse, we did not optimize our analysis of the constants in these bounds for the sake of simplicity.
By carrying out a more intricate analysis of the sparsifier, we can show that our approximation ratio is at most $20$---this is the best approximation ratio of any algorithm with constant recourse, even if we allow exponential update time. In particular, the previous smallest was $24$ by Lacki et al. [SODA'24].
Furthermore, by slightly tweaking how we use the sparsifier, we can get a recourse of at most $4 + \epsilon$, for any constant $\epsilon > 0$ (we note that the best we can hope to achieve here is a recourse of $2$, since in the worst case we must remove a point and insert a new point).
We tried to keep the proofs simple and provide asymptotic bounds.
Although the constants in the approximation ratio and recourse are not specified in the current analysis, they are good in practice.
We implemented our algorithm and compared its approximation ratio with that of the static $2$-approximation offline greedy algorithm by Gonzalez; we observed that the quality of the solution maintained by our algorithm is significantly better than what the current analysis guarantees.
Specifically, we tested our dynamic algorithm on 5 different datasets, on inputs of size $n = 10000$ with $k = 10, 50$ and $100$, and compared the cost of the solution maintained by our algorithm to the cost of the solution produced by the offline greedy algorithm. We observed that the cost of the solution produced by our algorithm is consistently within a factor of 1.25 - 1.75 of the cost of the greedy algorithm.
Regarding recourse, we also observed that the amortized recourse of our algorithm is actually sublinear in practice, at most $1$ in almost all of our test results.
We will improve the accuracy of the statements in the paper and provide precise guarantees.
We will also improve the analysis and provide a clear road map together with a natural flow for the analysis to be easy to follow for the reader.
Thank you for pointing out the results that we missed mentioning in our paper.
We will add them to the introduction and the related work sections of our paper.
Regarding the query time, note that our algorithm maintains the solution explicitly after every update.
Without considering an explicit solution after every update, we cannot define the notion of recourse.
In this framework, the query time is subsumed by the update time since we can assume there is a query after every update as the solution must be maintained explicitly.
We will mention this clearly in the introduction to prevent any confusion.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply.
I think you should include the time complexity explicitly, as it would be interesting to try to close the gap with $O(k \log\Delta)$ or even $O(k)$ (but it's unlikely). As for the approximation ratio, the fact that it is tight with the best possible one (for constant-recourse algorithms) should be highlighted. If the analysis deviates significantly from the current one, you might want to defer the formal proof to an extended version of the paper, and only state it as a high-level remark. For the recourse, I'd advise against modifying the strategy, as the reviewers would have no way of assessing it. An analysis of the current one would be sufficient.
I am not surprised that the algorithm behaves in practice better than the worst case analysis suggests, and that's also a positive point.
I am confident that the revised paper with the more accurate statements, the expanded related work and a clear roadmap would be a significant improvement over the current state. Therefore, I am willing to increase my score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your insightful suggestions. We will follow your recommendations to make our paper stronger. | Summary: There is no better summary of the paper than the one given in the abstract of the paper. So, I'll simply copy it below:
_"We give a simple algorithm for dynamic k-center that maintains an O(1)-approximate solution with O(1) amortized recourse and Õ(k) amortized update time, obtaining near-optimal approximation, recourse and update time simultaneously. We obtain our result by combining a variant of the dynamic k-center algorithm of Bateni et al. [SODA’23] with the dynamic sparsifier of Bhattacharya et al. [NeurIPS’23]."_
Let me expand on some of the terms used to better understand the above statement.
- Dynamic algorithm: The algorithm maintains a solution for the problem at every step of data update, which includes both insert and delete operations on data items.
- The time to update the solution at every step is the update time, and the number of changes to the maintained solution is the recourse. The amortised analysis considers the overall resource usage across all steps instead of considering the worst-case over all the steps.
- The k-center problem is NP-hard to approximate to within a factor better than 2. Given this, an O(1)-approximation is the best one can hope for.
- The minimum number of steps required to obtain any constant approximation for k-center is \Omega(nk). Given this, O(k) update time is the best one can hope for.
So, the result is nearly optimal in approximation factor, update time, and recourse. Previous results did not achieve these almost-optimal bounds on ALL of (approximation, update, recourse).
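To make the recourse terminology above concrete, here is a tiny hypothetical helper that computes amortized recourse from the sequence of center sets a dynamic algorithm maintains (one set per update):

```python
def amortized_recourse(solutions):
    """Total number of solution changes (centers added plus centers
    removed, i.e. the symmetric difference between consecutive maintained
    center sets) divided by the number of updates."""
    total_changes = sum(
        len(prev ^ curr) for prev, curr in zip(solutions, solutions[1:])
    )
    return total_changes / (len(solutions) - 1)
```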
Claims And Evidence: Yes, the claims made in the paper are supported by evidence.
Methods And Evaluation Criteria: Yes, the evaluation criteria makes sense.
Theoretical Claims: I verified the claims at a high level, and the claims are sound. It is possible that I may have missed some specific details.
Experimental Designs Or Analyses: This is a theoretical paper. There are no experiments.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper makes non-trivial progress on a theoretical problem on dynamic k-center problem. This is an important addition to the theory of clustering.
Essential References Not Discussed: I found the references are appropriate.
Other Strengths And Weaknesses: Strengths:
- The paper achieves almost optimal bounds for all the relevant resource bounds in a dynamic algorithm for the k-center problem (an important clustering problem).
- This is important progress from the theoretical viewpoint. The paper brings together several ideas from the dynamic algorithms literature and utilises them to design a state-of-the-art dynamic algorithm.
- The high-level ideas in the paper are well presented.
Weaknesses:
- It would be good to see experimental results that compare the given algorithm with other known dynamic algorithms.
- Instead of stating O(1) approximation, it might be good to explicitly state the constant in the constant factor approximation (unless the constant is bad). If this constant equals 2, then a brief discussion on where the loss happens could help close the gap.
- Briefly describing DynamicMIS used in Lemma 2.1 should help the reader unfamiliar with the previous literature. For example, it seems from the statement of Lemma 2.1 that DynamicMIS is a randomised algorithm (hence the expectation). So, it should be mentioned clearly that the expectation is over the internal randomness of the algorithm. This also makes it clear that the proposed algorithm is a randomised algorithm.
Other Comments Or Suggestions: - Some comments have been mentioned in the strengths and weaknesses section.
## Update after rebuttal:
I have decided to retain my score of (4:accept) after the rebuttal.
Questions For Authors: - Some important questions have been mentioned in the strengths and weaknesses section.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank all the reviewers for their efforts and insightful comments. Please let us know if you have any questions about our rebuttal.
In order to keep the proofs simple, we did not optimize the constant in the approximation ratio in the paper. By carrying out a more intricate analysis of the sparsifier, we can show that our approximation ratio is at most $20$---this is the best approximation ratio of any algorithm with constant recourse, \emph{even if we allow exponential update time}. In particular, the previous smallest was $24$ by Lacki et al. [SODA'24].
We have also done some experiments and compared the approximation ratio of our algorithm with the static $2$-approximation offline greedy algorithm by Gonzalez; we observed that the quality of the solution maintained by our algorithm is significantly better than what is derived by the current analysis.
Specifically, we tested our dynamic algorithm on 5 different datasets, on inputs of size $n = 10000$ with $k = 10, 50$ and $100$, and compared the cost of the solution maintained by our algorithm to the cost of the solution produced by the offline greedy algorithm. We observed that the cost of the solution produced by our algorithm is consistently within a factor of 1.25 - 1.75 of the cost of the greedy algorithm.
Thank you for pointing out the issue with using the DynamicMIS algorithm as a black box.
We will elaborate on this and clarify this point. | null | null | null | null | null | null |
Instruct2See: Learning to Remove Any Obstructions Across Distributions | Accept (poster) | Summary: This paper tackles the problem of obstruction removal in images with transformer-based generative models. The key design lies in the modeling of obstructions and also the alignment between types of obstructions and language descriptions (e.g. "rain drops", "fences"). The learned model achieves comparably better results on several benchmarks.
Claims And Evidence: The authors claimed the following contributions:
- **first unified obstruction formulation**: The authors did provide a formulation, but it is a general and broad formulation that previous works also shared.
- **zero-shot paradigm for obstruction removal, incorporating multi-modal prompts**: The authors did include self-switching modules given both visual and textual prompts. The dynamic soft masking strategy seems new to me.
- **comprehensive experiments, strong zero-shot capability**: The unseen experiments did suggest the model is somewhat superior to previous models.
Methods And Evaluation Criteria: The overall design of the model is intuitive, and the modeling of soft and hard masking seems reasonable for the obstruction types at hand. One minor concern is the limited depth of exploration of the text-aligned masking strategy's design, as it currently also relates to tasks like inpainting and editing.
Theoretical Claims: The theoretical claims are intuitive and easy to follow.
Experimental Designs Or Analyses: Though the authors achieve comparatively better results against existing methods, the overall improvement seems marginal.
Supplementary Material: Went through the whole supplementary material.
Relation To Broader Scientific Literature: The designs could similarly be adapted to tasks like generation, editing, etc.
Essential References Not Discussed: There are no obvious missing references.
Other Strengths And Weaknesses: As mentioned earlier, the current soft masking strategy seems to go beyond obstruction removal alone; the authors might want to dive deeper into this learning framework.
Other Comments Or Suggestions: I guess more representative images in the qualitative visualization could be beneficial for understanding the effectiveness of the proposed method.
Questions For Authors: See the Strength and weakness section.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer zzKc's valuable feedback. Our responses to the weaknesses and questions are listed below.
**R-W1 Minor performance improvements:** We would like to emphasize that the primary goal of our method is not to maximize performance for any specific obstruction type, but to achieve robust zero-shot generalization across a wide range of unseen obstructions. From this perspective, our approach shows a clear advantage over existing methods, including those that claim to be category-agnostic and those we adapted to operate in zero-shot settings.
Instead of tailoring the model to known obstruction types, our distribution-agnostic formulation treats all obstructions using a unified approach. This applies regardless of their appearance, transparency level, or spatial structure. As a result, our method delivers consistent performance across both seen and unseen categories, something existing methods cannot achieve without retraining or manual tuning.
**R-W2 Selection of more representative cases:** Thank you for your suggestion. We will replace more representative experimental samples for visualization. | Summary: This paper introduces a zero-shot framework for image restoration that can handle a wide range of obstructions, including those not seen during training. Overall, the paper contributes a flexible, distribution-agnostic method for obstruction removal that harnesses multi-modal cues and dynamic masking to achieve robust performance across diverse and unpredictable obstacles.
Claims And Evidence: The submission puts forward several key claims, and overall, many of these claims are supported by a comprehensive set of experiments and analyses.
Methods And Evaluation Criteria: The contrast experiment and evaluation criteria make sense.
Theoretical Claims: I’ve checked the correctness of the proofs in the paper.
Experimental Designs Or Analyses: I check the soundness/validity of any experimental designs or analyses.
Supplementary Material: I review all the supplementary material.
Relation To Broader Scientific Literature: The key contributions and ideas include:
• Adaptive Masking with a Tunable Adapter: Depending on whether an obstruction has clear (hard) or ambiguous (soft) boundaries, the model dynamically adjusts the mask. This adapter refines the initial mask estimates, enabling more accurate removal, particularly for semi-transparent obstructions like raindrops.
• Zero-Shot Generalization: Extensive experiments show that Instruct2See not only performs well on in-distribution obstructions (like fences, raindrops, and flares) but also generalizes effectively to unseen types of obstructions (e.g., power cables, yarn, scratches).
• Empirical Results: The paper provides comprehensive quantitative and visual comparisons with state-of-the-art methods. The proposed approach often demonstrates improved PSNR/SSIM scores and superior visual quality, especially in challenging, real-world scenarios where traditional models may fail.
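The soft/hard masking idea summarized in the first bullet can be pictured with a generic degradation model (an illustrative sketch only; the paper's exact formulation may differ): each observed pixel blends the clean background with the obstruction, weighted by the mask and an opacity level.

```python
def composite(background, obstruction, mask, alpha):
    """Blend clean background pixels with obstruction pixels. `mask` marks
    where the obstruction lives; `alpha` is its opacity (alpha = 1 gives a
    hard, opaque obstruction; 0 < alpha < 1 gives a soft, semi-transparent
    one such as a raindrop). All arguments are flat lists of floats."""
    return [
        (1.0 - alpha * m) * b + alpha * m * o
        for b, o, m in zip(background, obstruction, mask)
    ]
```

With alpha = 1 the background is fully hidden under the mask; with alpha < 1 some background signal survives, which is why a soft-mask formulation can exploit partially visible content.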
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strength:
1. The paper proposes a unified obstruction removal method by formulating the task as a soft-hard mask restoration problem, effectively integrating visual and textual information, which demonstrates strong theoretical innovation.
2. It adopts a multi-modal prompting strategy along with an adaptive mask module, enhancing the model’s generalization ability in handling unseen obstructions, with extensive experiments validating its effectiveness.
Weakness:
1. The experiments on unseen obstructions are relatively limited in scope; further expanding the evaluation range may better demonstrate the robustness of its zero-shot learning capability.
2. The technical contribution is somewhat limited; while the multi-modal prompting and mask recovery techniques are effective, they do not substantially deviate from established methodologies, indicating a reliance on existing concepts rather than offering groundbreaking innovations.
Other Comments Or Suggestions: None.
Questions For Authors: In Supplementary Material B, we observe that the authors trained CLIP to better adapt to the dynamic soft masking approach. In fact, is it reasonable to train only the text module within the multimodal framework, and could leaving the vision encoder untrained affect consistency? Furthermore, based on our concerns mentioned above, we would like to ask whether the use of a multimodal model is necessary, or if it is possible to use separate modules for visual and textual input.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer 4Rdn's valuable feedback. Our responses to the weaknesses and questions are listed below.
**R-W1 Limited scope of experimentation:** Thank you for your comment. We believe some of our experimental results may have been inadvertently overlooked. In both the main paper and supplementary materials, we present extensive zero-shot evaluations across a wide range of obstructions, including **rain streaks, snow, strokes, power cables, spots, scratches, yarn, shadows, watermarks**, and **complex multi-obstruction scenarios**. These results collectively validate the effectiveness and generalization capability of our method.
Unlike object removal, where datasets are abundant and well-defined, obstructions are often **irregular in shape, appearance, and distribution**, making comprehensive evaluation more challenging. To address this, we have curated and tested on **all publicly available obstruction data** that aligns with our problem definition. We will revise the manuscript to highlight these results more prominently. Additionally, we welcome suggestions on any other publicly available datasets we may have missed and are happy to include further evaluations as needed.
**R-W2 Limited technical contribution:** Thank you for your feedback. We respectfully believe this concern may arise from an underappreciation of the conceptual novelty behind our approach. Our method introduces a new formulation of obstruction removal as a distribution-agnostic soft masking problem, which fundamentally unifies the treatment of both opaque and semi-transparent obstructions within a single framework.
This perspective shifts away from traditional category-specific pipelines and reframes obstruction removal as a context-aware reconstruction task, where the model learns to reason about partially visible content regardless of obstruction type. Prior methods typically rely on known obstruction categories or retraining per class, whereas our method enables zero-shot generalization to unseen obstruction types—an ability that existing approaches lack.
Moreover, our flexible soft-mask recovery strategy, combined with multi-modal prompt integration, allows the model to adaptively handle obstructions with varying degrees of transparency, shape complexity, and semantic ambiguity. We also show that even with access to ground-truth masks, conventional methods fail to generalize across distributions, underscoring the need for our proposed formulation.
This unified, distribution-agnostic perspective and the demonstrated generalization across diverse and unseen scenarios represent a meaningful advancement in both the theoretical framing and practical capabilities of obstruction removal.
**R-Q1 CLIP fine-tuning strategy:** In our framework, the primary role of the text encoder is not to align text with visual features in the conventional sense, but rather to help the model interpret user instructions, particularly in understanding the transparency attributes of obstructions described in the prompt. To achieve this, fine-tuning the text encoder is essential.
For example, before fine-tuning, the cosine similarity (after softmax) between the user instruction *"There are raindrops in the image, please remove them"* and the two core commands *"remove opaque obstructions"* and *"remove transparent obstructions"* were 0.5374 and 0.4626, respectively. After fine-tuning, these shifted to 0.00004 and 0.99996, respectively, indicating a dramatic improvement in semantic alignment. This refinement is critical for guiding the model’s recovery strategy based on the nature of the obstruction.
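The routing step described here — a softmax over cosine similarities between the instruction embedding and the two core-command embeddings — can be sketched as follows (toy float vectors stand in for CLIP text features, and any temperature scaling is omitted):

```python
import math

def softmax_similarities(instruction_emb, command_embs):
    """Return a probability over candidate commands, obtained by a softmax
    over cosine similarities with the instruction embedding."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)

    sims = [cosine(instruction_emb, c) for c in command_embs]
    exps = [math.exp(s) for s in sims]
    total = sum(exps)
    return [e / total for e in exps]
```

Fine-tuning the text encoder drives these probabilities toward the extremes reported above (0.00004 vs. 0.99996), so the model receives a nearly hard decision about obstruction transparency.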
Additionally, Section 5.3 (Ablation Study) in the main paper provides experimental evidence for the importance of textual and visual conditioning. We further support this with a qualitative case study in Appendix D.2, illustrating the performance gains achieved under different conditioning strategies. | Summary: In this paper, the authors propose Instruct2See, a zero-shot framework for removing both seen and unseen obstructions from images. It formulates obstruction removal as a soft-hard mask restoration problem, integrating multi-modal prompts via cross-attention. A tunable mask adapter refines masks for semi-transparent obstructions. The results demonstrate that it outperforms state-of-the-art methods on PSNR and SSIM while generalizing well to unseen cases.
Claims And Evidence: The paper claims "Remove Any Obstructions." However, I would like to know whether there are any domain restrictions or size limitations.
Other claims I think are clear.
Methods And Evaluation Criteria: I think it makes sense.
However, I would like to understand the computational efficiency. Although Table 6 presents the results, I am still concerned about why the proposed model has the largest number of parameters, yet the FLOPs and runtime are not significantly higher.
Theoretical Claims: The paper lacks formal theoretical proofs but provides a clear mathematical formulation.
Experimental Designs Or Analyses: I think the results are promising, and the comparisons are comprehensive. However, I suggest that the author provide more implementation details.
Supplementary Material: Yes; Most of it.
Relation To Broader Scientific Literature: The work builds on vision-language models (CLIP). It advances the field by unifying multi-modal prompts for zero-shot generalization, addressing limitations of task-specific and all-in-one models.
Essential References Not Discussed: No
Other Strengths And Weaknesses: I am concerned about how the model performs in specific domains, such as the medical field.
Other Comments Or Suggestions: Typo: L802 As shown in...
Questions For Authors: Can the method handle overlapping obstructions (e.g., rain + snow) without interference?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer 91FV's valuable feedback. Our responses to the weaknesses and questions are listed below.
**R-W1 Domain restrictions:** Our model is trained on natural image datasets, and as such, it performs well across a wide range of domains within the natural image space, such as urban scenes, landscapes, wildlife, and daily objects, without restriction. However, for domains outside this distribution, such as medical imaging, the model cannot be expected to produce meaningful restorations, as the training data does not include such content.
However, our framework itself is domain-agnostic. By training on appropriately curated domain-specific data (e.g., medical images), the same architecture and methodology can be applied to obstruction removal in those domains. We will clarify this point in the revised manuscript to better reflect the scope and potential of our approach.
**R-W2 Explanation of running efficiency:** Thank you for your question. The efficiency of our method is largely due to the encoder-decoder architecture of the restoration model. Although the model has a relatively high parameter count, this does not translate into high computational cost. This is because deeper modules operate on low-resolution feature maps, allowing for higher channel dimensions without significantly increasing FLOPs or runtime. This design decouples parameter size from computational load, enabling efficient resource use while maintaining strong performance.
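The decoupling of parameters from compute described above can be checked with a back-of-the-envelope model of a single convolution (the layer sizes below are illustrative, not the actual architecture):

```python
def conv_stats(h, w, c_in, c_out, k=3):
    """Parameters and multiply-accumulates (MACs) of one k x k convolution,
    bias ignored: each of the h * w output positions applies every filter."""
    params = c_in * c_out * k * k
    macs = params * h * w
    return params, macs

# A shallow, high-resolution stage vs. a deep, low-resolution stage:
p_hi, m_hi = conv_stats(256, 256, 64, 64)    # narrow channels, full grid
p_lo, m_lo = conv_stats(32, 32, 512, 512)    # wide channels, 1/8 grid
# The deep layer holds 64x the parameters yet costs the same compute,
# because the 64x smaller spatial grid cancels the 64x larger channel product.
```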
**R-W3 More details of experiment:** Thank you for the suggestion. We will revise the Experiment Settings section to include additional implementation and training details for improved clarity and reproducibility.
**R-W4 Minor spelling error:** Thank you for pointing this out. We will carefully review the manuscript to correct any spelling errors.
**R-Q1 Overlapping obstruction removal:** Thank you for your comment. We would like to clarify that our original submission includes experiments on images with multiple overlapping obstructions, with results presented in Appendix C.2 and Figure 11. These experiments demonstrate that our method can accurately identify and represent overlapping obstructions using multimodal prompts and masks, and that our model effectively removes them, further validating the robustness of our approach. | Summary: The paper provides a novel pipeline for “obstruction removal”: the combined task of identifying unwanted obstructions in an image and filling in the masked regions with plausible pixels. The method proposes a) using a hybrid adaptable masking strategy to identify the occluders and b) learning a text and vision conditioned model to make the inpainting process “context-aware”. These changes lead to competitive quantitative and qualitative results on seen occluders, and better performance with unseen occluders.
Claims And Evidence: In general, this paper backs up the claims it makes with reasonable evidence. The main claim of the paper is that their method performs on par, and sometimes worse or better than baselines on the task of obstruction removal. This seems to be well justified through standard PSNR / SSIM metrics. Evaluation is split between seen and unseen object classes, and many ways of visualizing the difference (eg. tables, qualitative visualization, comparison plot across baselines eg. Fig 5) are used to justify it. For different parts of the model (use of masking instead of end-to-end, cross attention module for conditioning, and adaptiveness in the masking), an ablation study is provided to show quantitative improvements.
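Since PSNR is the headline metric in these comparisons, a minimal reference implementation for intuition (flat pixel lists in [0, 1]; this is the standard definition, not the paper's evaluation code):

```python
import math

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel
    lists; higher is better, infinite for a perfect reconstruction."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, estimate)) / len(reference)
    if mse == 0.0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```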
Methods And Evaluation Criteria: At a high level, the method seems to be well motivated and clearly explained. However, key details which are present in the figures (eg. text / image encoders) are not included in writing. There were a lot more details provided in the appendix (a clear algorithm for inference, what actually goes into the text encoder), but this was not clear in the main paper. I believe the methods section needs to be augmented with more details about how corresponding text is generated for the datasets, and how a user will actually interact with the system using prompts.
A quick glance at a close baseline (PromptIR) seems to suggest that there are many other datasets / benchmarks being used in this family of work. As I am not an expert in obstruction removal, it is unclear to me whether any of these datasets are transferable to the current work. I believe the work should expand its evaluation-criteria discussion to better describe why particular datasets were chosen, and add a section in the supplementary to describe why datasets used in comparable works were left out.
Theoretical Claims: This paper has no theoretical claims.
Experimental Designs Or Analyses: Other than the question of how the dataset was chosen and organized, the paper follows standard experimental design and analysis in comparison to a range of baselines, showing many quantitative and qualitative results and using standard metrics in computer vision for image quality. I reviewed all experimental details, including both tables and all details. Assuming that correct datasets were chosen, experiments are sound and clear.
Supplementary Material: I reviewed most parts of the supplementary material. I found many important experiments including method description for text encoder finetuning, limitations (single vs. multiple obstacles), failure modes (incorrect text) and comparisons with diffusion-based methods shown in the supplementary. I believe many of these things should be moved to the main paper, replacing repeated data in many figures (eg. Figure 5 shows the same information as the tables but with a different visualization, Table 1 has a greyed out section).
Relation To Broader Scientific Literature: In general, I find the work to be similar to other broad works in vision that use a generative model to solve under-constrained problems in a feed forward manner. The paper will benefit significantly from more description comparing the work in a more detailed manner to other naive approaches (simply using the easiest strategy of inpainting after masking with a foundation diffusion model for image generation) and more problem-specific approaches, while highlighting key differences at both levels. The paper has a very sparse description of related works and differentiation from baselines. While many relevant works seem to be cited, the description don’t clearly highlight the differences in the method in terms of technical contribution. Particularly, I believe revisions are needed to related work to highlight how the key components of the method differ from Restormer, PromptIR, or other close performing baselines. Further, it is important to characterize the scope in which ease of these baselines work. To this end, many claims in the “Obstruction Removal” section of related work are very general. Lines 083 (“still face challenges”), 085 (“limiting their effectiveness in real world”), etc do not clearly state the differences in failure modes of the two methods.
The writing would benefit from clearly highlighting how the model choices enable specific improvements on unseen classes of obstructions. For example, one might expect that conditioning additionally on text should reduce the Bayes error of the prediction problem, or that using cross attention (instead of another naive conditioning strategy) might enable more precise conditioning on fine-grained features, etc. These would then specifically expand the cases in which the method works compared to baselines. Presently, the writing does not follow this kind of structure, which would clearly motivate what the method enables.
Essential References Not Discussed: I am generally unfamiliar with research in this sub-field and would not know if key baselines were not listed here.
Other Strengths And Weaknesses: This likely involves a summary of the other sections of the review with a few additional points.
Strengths
For the experiments the paper includes, they are sound and clear to the extent they are described. The paper is full of qualitative visualizations showing test cases in which the method works well, and all baselines are included in these visualizations.
Weaknesses
Writing is missing key details on related work, dataset choice and prompting / text conditioning details of method.
A large part of some critical important parts of the method and results have been put in the appendix / supplementary despite many repeated results being in the main paper. This includes the description of how the CLIP text encoder is finetuned, some failure analysis and comparison to image inpainting methods.
The paper does an insufficient job of limiting the scope of what works with their method, and of providing technical motivation for the precise unseen cases that the technical changes should (or should not) help on. For example, after reading the paper, it is clear to me that the method performs better on three sub-classes of unseen obstacles, and some miscellaneous novel obstacles, by looking at the qualitative and quantitative results. However, it is not clear how these unseen classes precisely differ from the seen classes, how large the distribution shift is, or how robust the method will be to other forms of perturbations. It is also clear from the appendix that one failure mode is providing unintended input text, but there is no analysis presenting the failure modes of the method when it is used as intended. For example, why were the three particular unseen classes chosen for evaluation? And what are some out-of-distribution unseen classes?
Other Comments Or Suggestions: As a result of 1) incomplete descriptions of related work, 2) lack of failure analysis/shortcomings, 3) lack of technical description precisely characterizing what kind of “unseen” objects we’d expect this method to work on, and 4) a belief that the work would be a stronger fit for a vision rather than a machine learning conference due to the lack of ML-directed writing, the present review is only a weak acceptance, and is especially borderline.
Given lack of full understanding of benchmarks of this particular sub-field, I am keen to see the other reviews and will be open to change depending on author / other reviewer’s clarifications.
Questions For Authors: 1) Why is it that post-masking, an image diffusion model with one of many inpainting / imputation strategies cannot be applied for this task? Such models, since they’re trained at scale will naturally handle a wide range of objects - potentially even in a zero shot way? What prevents numbers from such an approach to be reported alongside other baselines?
2) How was the dataset selection for this method done? What are the standard datasets in this field and why was a subset of BSD chosen for evaluation? Why can’t datasets used by prior work (eg. Prompt IR and Urban100) be also evaluated for this method?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer 17U3's valuable feedback. Our responses to the weaknesses and questions are listed below.
**R-W1 Lack of key details:** We emphasize that our method targets a fundamentally different problem than prior approaches such as Restormer and PromptIR. These existing restoration methods assume prior knowledge of obstruction characteristics and are designed to handle specific, predefined categories (e.g., rain, snow, flare). As a result, they lack generalization ability: a model trained for snow removal, for example, fails on rain without retraining, even when the distributions are similar.
In contrast, our work breaks this constraint through two key innovations: (1) a distribution-agnostic obstruction formulation, and (2) a flexible soft-mask recovery strategy integrated with multimodal obstruction representation. Together, these enable zero-shot generalization to diverse and unseen obstructions. This direction is both novel and underexplored, marking a significant advancement in the theory and practice of obstruction removal.
We will further strengthen the distinction between our method and existing work in the Introduction and Related Work sections to ensure clarity.
Additionally, we clarify that our original submission already includes: (1) a detailed rationale for dataset selection (see Section 3), and (2) complete details on prompt/text conditions, including their integration in Algorithm 1 (Appendix A) and illustrative examples in Figure 8 (Appendix B). As most of this information is provided in the appendix and may be easily overlooked, we will revise the main text to reorganize the structure or better guide readers to these supporting materials.
**R-W2 Unreasonable distribution between main text and appendices:** Thank you for your suggestion. We will reorganize the content distribution of the main text and appendices.
**R-W3 & Q2 Application scope and problem definition:** Thank you for your feedback. All obstruction categories, both seen and unseen, are sourced from publicly available datasets to ensure reproducibility.
The seen classes (Fences, Raindrops, Flares) were selected for their diversity in shape and structure, providing a strong foundation for generalizing to irregular obstructions. These include grid-like, point-based, and diffuse patterns, offering broad coverage of common obstruction forms.
The unseen classes (Rain Streaks, Snow, Stroke, Power Cables, Yarn, Scratches) were chosen to introduce appearance-level and geometric shifts from the training distribution. For example, snow and yarn differ in opacity and texture; rain streaks and power cables differ in orientation and continuity. This setup enables a meaningful evaluation of zero-shot generalization across varied obstruction types.
We will expand Section 3 to clarify these distinctions and include additional descriptions of seen/unseen differences.
Regarding limitations: Indeed, the limitation of our method under intended usage has already been discussed in Section 6 of the paper. Our approach is specifically designed for small, spatially sparse obstructions and is not suitable for cases where large regions are occluded, as this exceeds the model’s implicit semantic completion capacity. To make this limitation more explicit, we will include additional visual examples and further elaborate on typical failure modes, even within the intended application scope.
**R-Q1 Comparison with diffusion-based method:** Some important comparisons may have been overlooked. As shown in Figure 12 and Table 4 of Appendix C.3, our method demonstrates superior zero-shot generalization compared to inpainting baselines. This includes Repaint, a representative diffusion-based approach. In addition, in response to Reviewer CuLC’s Question W2, we extended our analysis to include DiffEdit, a diffusion-based image editing method. Together, these results provide comprehensive evidence of the advantages of our method over diffusion-based baselines.
| Method | Rain Streak | Snow | Stroke | Average |
| ------------ | -------------------------------- | -------------------------------- | -------------------------------- | -------------------------------- |
| LaMa | 29.07/0.8858 | 32.32/0.9108 | 28.10/0.8728 | 29.83/0.8898 |
| RePaint | 28.78/0.8865 | 32.20/0.9064 | 23.78/0.8059 | 28.25/0.8662 |
| DiffEdit | 23.88/0.6561 | 24.23/0.6732 | 11.65/0.6072 | 19.92/0.6455 |
| Instruct2See | **29.82**/**0.8907** | **34.85**/**0.9283** | **29.45**/**0.9067** | **31.37**/**0.9086** |
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and for acknowledging 1) improvements in writing, 2) characterizing the distribution shift, and 3) pointing to limitations in scope. To reiterate, I understand the acknowledgement that generalization to larger patches is a limitation - my question was more about limitations along the axes used to choose the categories (for example, opacity, texture, etc.) that the experiments and evaluations reflect. To be precise, the paper argues that "rain streaks" and "power cables" can be repaired, but it is unclear whether a more opaque obstruction or a higher-frequency texture could be reconstructed.
Thank you for the additional experiment and for clarifying that the approach performs better than DiffEdit. Have these methods been trained from scratch on the dataset of choice? My question was more about the importance of pre-trained diffusion models (previous comment: "Such models, since they're trained at scale, will naturally handle a wide range of objects - potentially even in a zero-shot way?") and the importance of the proposed design choices when comparing against an easy-to-use zero-shot pre-trained method + editing objective/method (RePaint, DiffEdit with Stable Diffusion, etc.), rather than just swapping in a diffusion model in place of a reconstruction objective.
Thank you for other clarifications. I will maintain my recommendation for (weak) acceptance for now.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful comment. We would like to clarify a few points, as there may still be some misunderstanding regarding our comparisons and experiments.
Our method is designed to generalize across diverse obstruction types, regardless of transparency or texture. A clear example is the stroke removal task (Figs. 1 and 9), where highly opaque, artificially generated obstructions occlude complex facial features. Our method accurately reconstructs these regions, supporting the strength of our zero-shot formulation.
Regarding comparisons with Stable Diffusion-based methods, we used official, publicly released models without task-specific fine-tuning to ensure a fair evaluation. All testing was conducted under a strict zero-shot setting, with obstructions differing from training distributions. Results show that while diffusion-based models generalize well in broad tasks, they struggle with precise obstruction removal, where our method performs more robustly.
We hope this clarifies the design and strengths of our approach. Thank you again for your feedback.
Summary: This paper studies obstruction removal from 2D images. The proposed model is a zero-shot method that can handle both seen and unseen obstacles in an open-vocabulary setting. The method obtains a mask of the obstructions and inpaints/repaints the image with a transformer. The results are claimed to be state-of-the-art.
Claims And Evidence: - Extensive experiments in both qualitative and quantitative manners effectively support the claim of state-of-the-art results.
Methods And Evaluation Criteria: - The core of this method contains two parts: (1) a mask detector to predict the obstruction mask, and (2) an inpainting model (that leverages cross-attention, prompts, etc.) to inpaint or repaint the image.
- However, it is completely unclear how this mask detector is designed, constructed, or trained.
- The word "mask detector" only appears 5 times in the whole paper, merely used as a black box.
- It is unclear whether the mask detector requires the user to input the type of obstruction or multi-modal clues, or whether it works from the image input alone.
- Given that it is a crucial and integral model component, this becomes a substantial flaw in this paper's technical perspective.
- For the inpainting part, the model itself is straightforward and well-motivated. However, it is also a very typical task for diffusion-based models with the out-of-the-box training-free inpainting method DiffEdit.
- DiffEdit's implementation strictly preserves the unmasked region, so nothing unrelated is changed outside of the mask. The strong capability of diffusion models also has the potential to outperform the proposed method's novel part.
- I think "applying the predicted mask to Stable diffusion (XL or 3/3.5) + VLM-generated obstruction-free prompt + DiffEdit inpainting" should be considered a naive baseline for comparison.
- One issue with the naive baseline is poor support for semi-transparent obstructions, since the partially occluded content is unseen by the model. A typical remedy is to follow Instruct-Pix2Pix and channel-wise concatenate the original image into the input while still inpainting the masked image. If time permits, I would also like to see these results.
- The architecture of the proposed inpainting model is not very clear. It looks like both a VLM that can generate images and a ViT that does dense prediction. I would request a more detailed explanation of this architecture, given that no code is provided.
Theoretical Claims: There are no theoretical claims.
Experimental Designs Or Analyses: - The experiment is extensive, and the results can well support the claims.
- The metrics are mainly PSNR and SSIM. It would be better if there were also LPIPS metrics, along with VQAScore, GPTScore, or user studies.
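For context on the quantitative comparisons discussed in this review, PSNR (the simpler of the two metrics; SSIM and LPIPS are more involved) has a standard definition that can be sketched in a few lines:

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    """Peak signal-to-noise ratio in dB (higher is better).

    Standard definition: 10 * log10(MAX^2 / MSE), where MAX is the
    maximum possible pixel value (1.0 for normalized images, 255 for
    8-bit images).
    """
    ref = np.asarray(reference, dtype=float)
    out = np.asarray(restored, dtype=float)
    mse = np.mean((ref - out) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform error of 0.1 on normalized images gives an MSE of 0.01 and hence a PSNR of exactly 20 dB.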
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: This paper proposes a novel method for inpainting in dealing with obstruction removal tasks.
Essential References Not Discussed: N/A.
Other Strengths And Weaknesses: - All the figures are small. Even though there is a zoomed-in area, the area itself is too small compared with the full image, so it is impossible to verify whether other parts of the images contain artifacts.
Other Comments Or Suggestions: Please refer to the reviews above. Specifically,
- Please provide a detailed description of how the mask detector works.
- Please add the naive diffusion inpainting baseline, at least the training-free one. Note that the prompt can be generated by a VLM, given the image and then removal of all descriptions of the obstruction.
This information is provided in the rebuttal. Therefore, I would like to raise the reviewing score from 2 to 3.
Questions For Authors: Please refer to the reviews above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer CuLC's valuable feedback. Our responses to the weaknesses and questions are listed below.
**R-W1 Mask detector design:** This concern appears to stem from a misunderstanding. We have already clarified the role and details of the mask detector in Appendix A.1 of the submission. The mask detector is an off-the-shelf component and not central to our contribution. Our primary focus is a novel, distribution-agnostic framework for obstruction removal, which introduces (1) adaptive soft/hard recovery strategies for both opaque and semi-transparent obstructions, and (2) multi-modal prompt integration for precise obstacle representation. The framework is fully plug-and-play and can seamlessly incorporate more advanced mask detectors to further enhance performance without modification.
**R-W2 Comparison with diffusion-based method:** Indeed, our submission demonstrates that diffusion-based models struggle with irregular and complex obstructions. In Table 4 and Figure 12 of Appendix C.3, we compared our method against **Repaint**, a representative diffusion-based inpainting approach, and validated the strong zero-shot generalization ability of our framework through both qualitative and quantitative results.
As requested, we further evaluated **DiffEdit** for unseen obstacle removal (PSNR↑/SSIM↑):
| Method | Rain Streak | Snow | Stroke | Average |
| - | - | - | - | - |
| DiffEdit | 23.88/0.6561 | 24.23/0.6732 | 11.65/0.6072 | 19.92/0.6455 |
| Instruct2See | **29.82**/**0.8907** | **34.85**/**0.9283** | **29.45**/**0.9067** | **31.37**/**0.9086** |
The results show that DiffEdit performs poorly. This stems from its design as an image editing tool, which strictly constrains changes to masked regions and lacks the flexibility to handle irregular-shaped holes. Even when provided with detailed VLM-generated prompts, DiffEdit fails to achieve meaningful completions and often hallucinates entirely new objects. These limitations highlight the inadequacy of diffusion-based editing methods for this task and underscore the effectiveness of our proposed framework.
**R-W3 Usage of input strategy like Instruct-Pix2Pix:** We conducted a direct comparison between our final strategy and the channel-wise concatenation approach (original + masked image), as in Instruct-Pix2Pix. The results are shown in the table below.
| Method | PSNR↑ | SSIM↑ |
| - | - | - |
| Masked Image | **30.93** | **0.9250** |
| Original + Masked Images | 30.19 | 0.9173 |
Incorporating the raw image as input significantly degrades performance rather than improving it. This is because our training data includes three types of degradations, and direct access to the original image encourages the model to memorize specific degradation patterns, which harms zero-shot generalization.
While we agree that conventional inpainting methods struggle with semi-transparent obstructions due to the occluded content being partially visible yet unknown, our method addresses this effectively through a soft-masking strategy. Moreover, original image features are already embedded implicitly via CLIP’s encoder, enabling semantic completion without overfitting to training-specific obstructions.
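The distinction between hard and soft recovery can be illustrated with a generic per-pixel blend (a hypothetical sketch of the general idea, not the paper's exact formulation): a hard mask replaces obstructed pixels outright, while a soft mask weights the inpainted content by obstruction opacity, so partially visible content under semi-transparent obstructions still contributes to the output.

```python
import numpy as np

def recover(original, inpainted, mask, soft=True):
    """Blend an inpainted result back into the original image.

    `mask` holds per-pixel obstruction opacity in [0, 1]. Hard recovery
    binarizes the mask and fully replaces obstructed pixels; soft
    recovery blends proportionally to opacity.
    """
    mask = np.asarray(mask, dtype=float)
    m = mask if soft else (mask > 0.5).astype(float)
    return m * np.asarray(inpainted, float) + (1.0 - m) * np.asarray(original, float)
```

Under a semi-transparent obstruction (e.g., opacity 0.3), soft recovery keeps 70% of the original signal at that pixel, whereas hard recovery discards it entirely.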
**R-W4 Description of the restoration model:** The restoration model follows a standard encoder-decoder architecture, a widely adopted design in image processing tasks. As this component is not the core focus of our work (the key is the distribution-agnostic framework), we provided only a brief overview in the main text. However, to ensure methodological transparency and reproducibility, we will include a detailed description of the network architecture in the appendix.
**R-W5 More evaluation metrics:** As suggested, we incorporated three additional evaluation metrics—**LPIPS↓**, **CLIP Score↑**, and **User Study (US)↑** (0–1)—to provide a more comprehensive assessment of obstacle removal performance across both **seen** (Fence, Flare, RainDrop) and **unseen** (Rain Streak, Snow, Stroke) scenarios. Due to space limitations, we reported results for three representative methods. Even under these expanded metrics, our method consistently outperforms baselines. Full comparisons across all metrics and methods will be included in the final submission.
| Method | Seen Obstruction | Unseen Obstruction |
| - | - | - |
| Restormer | 0.0746/0.9421/0.74 | 0.1984/0.8833/0.80 |
| Histoformer | 0.0798/0.9365/0.84 | 0.1328/0.8885/0.70 |
| XRestormer | 0.0924/0.9360/0.68 | 0.1873/0.8820/0.60 |
| Instruct2See | **0.0694**/**0.9442**/**0.92** | **0.1071**/**0.9140**/**0.84** |
**R-W6 Too small image size:** Thank you for your suggestion. We will modify the experimental images to enhance the comparability between methods. | null | null | null | null |
Ringmaster ASGD: The First Asynchronous SGD with Optimal Time Complexity | Accept (poster)
Summary: This paper studies federated learning where each client has different computation resources.
The authors first showed that it is optimal to run Asynchronous SGD on the fastest $m^\star$ clients. (Theorem 2.1)
This naive approach is optimal, but it does not work well when the computation power of each client changes over time.
Then, the authors proposed Ringmaster ASGD, showing that Ringmaster ASGD can also achieve the optimal convergence rate while it does not require prior knowledge of the computation power of each client.
Claims And Evidence: The proposed method and the derived convergence results sound reasonable for the reviewer.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The reviewer did not check the proof, while the claim of this paper sounds reasonable.
Experimental Designs Or Analyses: No experimental results are shown in this paper.
Supplementary Material: No
Relation To Broader Scientific Literature: The proposed method, Ringmaster ASGD sounds novel, and it sounds novel that Ringmaster ASGD does not require the prior knowledge of computation powers of each client.
Essential References Not Discussed: * The reviewer feels that the relationship between this paper and existing papers is a bit unclear. Some of the results shown in this paper have been already proposed in the existing papers. Specifically, Theorem 2.1 has been already shown in [1], while the reviewer feels that the authors claimed that these results are also novel results in this paper.
## Reference
[1] Alexander Tyurin, Peter Richtárik, Optimal Time Complexities of Parallel Stochastic Optimization Methods Under a Fixed Computation Model, NeurIPS 2023
Other Strengths And Weaknesses: See other sections.
Other Comments Or Suggestions: N/A
Questions For Authors: * In Sec. 2.2, the authors claimed that "Naively selecting the fastest $m^\star$ workers at the start of the method and keeping this selection unchanged may therefore lead to significant issues in practice". However, since there is no need to select the same clients over time, the reviewer is wondering if Algorithm 3 really does not work well when the client speeds change over time.
* The reviewer feels that this paper is claiming Algorithm 3 and Theorem 2.1 are novel, while these results have already been shown in [1]. Thus, the main contribution of this paper is Sec. 3. The reviewer would like to suggest that the authors clarify which parts of this paper are novel.
* The primary contribution of this paper is theoretical, while the reviewer would like to suggest that the authors verify their results by demonstrating the experimental results.
## Reference
[1] Alexander Tyurin, Peter Richtárik, Optimal Time Complexities of Parallel Stochastic Optimization Methods Under a Fixed Computation Model, NeurIPS 2023
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the review.
> This paper studied federated learning where each client has a different computation resource.
Our setup is relevant not only to Federated Learning but also to datacenter environments, where heterogeneous GPU clusters are common. Even in datacenters with identical GPUs, failures become inevitable as the number of GPUs increases, introducing additional heterogeneity. For example, see [0], Section 3.3.4.
[0] Grattafiori, Aaron, et al. "The llama 3 herd of models." arXiv preprint arXiv:2407.21783 (2024).
> No experimental results are shown in this paper.
Experiments are presented in Section F of the appendix. We included them there because the primary focus of our paper was to refine the method and establish that it is not only practical but also theoretically optimal. That said, Figures 1 and 2 further demonstrate the practicality of our method, with Figure 2 specifically showing that it outperforms existing benchmarks in speed.
> The reviewer feels that the relationship between this paper and existing papers is a bit unclear. Some of the results shown in this paper have been already proposed in the existing papers. Specifically, Theorem 2.1 has been already shown in [1], while the reviewer feels that the authors claimed that these results are also novel results in this paper.
Let us clarify the relationship between [1] and our work. [1] establishes lower bounds for the time complexity of first-order methods, while we propose a fully asynchronous method and derive an upper bound that matches these lower bounds. Additionally, [1] analyzes Rennala SGD, which also attains the lower bound, but the key difference is that Rennala SGD is not fully asynchronous—it operates as Minibatch SGD (which performs synchronous model updates) combined with an asynchronous minibatch collection mechanism. We discuss this distinction in detail in Section 1.2. Our main contribution lies in closing the gap by developing a fully asynchronous method that achieves the lower bound on time complexity.
Regarding Theorem 2.1, it is not explicitly stated in prior work, though it follows as a direct consequence of existing results. The analysis of Asynchronous SGD (Algorithm 1) appears in [2] and [3], while [1] establishes its time complexity. We formally state this in lines 175-183 and then show that Theorem 2.1 naturally follows with a different choice of workers, leading to the optimal time complexity. Notably, this specific choice of workers and its role in achieving optimal time complexity has not been shown in prior work.
[1] Alexander Tyurin, Peter Richtárik, Optimal Time Complexities of Parallel Stochastic Optimization Methods Under a Fixed Computation Model, NeurIPS 2023
[2] Koloskova, A., et al. Sharper convergence guarantees for Asynchronous SGD for distributed and federated learning. NeurIPS 2022.
[3] Mishchenko, K., et al. Asynchronous SGD beats minibatch SGD under arbitrary delays. NeurIPS 2022.
> In Sec. 2.2, the authors claimed that "Naively selecting the fastest workers at the start of the method and keeping this selection unchanged may therefore lead to significant issues in practice". However, since there is no need to select the same clients over time, the reviewer is wondering if Algorithm 3 really does not work well when the client speeds change over time.
It is possible to reselect workers each time the $\tau_i$ values change, but this would introduce additional computational overhead due to the need for continuous selection. Plus, it is not obvious when we should reselect workers. Additionally, a bigger issue is that $\tau_i$ is typically unknown, making this approach impractical. Instead, we propose a simpler solution using a threshold-based approach, as presented in Algorithms 4 and 5, which does not require $\tau_i$ and automatically and adaptively chooses the fastest subsets of workers.
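The threshold-based idea described above can be illustrated with a toy event-driven simulation (a hypothetical sketch, not the authors' exact Algorithms 4/5): each arriving gradient carries a delay, gradients with delay at least $R$ are discarded, and slow workers are thereby sidelined automatically without any knowledge of the $\tau_i$ values.

```python
import numpy as np

def delay_thresholded_asgd(grad_fn, x0, worker_times, R, lr=0.1, steps=300):
    """Toy simulation of asynchronous SGD with a delay threshold R.

    Each worker computes a gradient at the model snapshot it last read;
    an arriving gradient whose delay (model versions elapsed since the
    read) is >= R is discarded rather than applied. Returns the final
    iterate and the number of discarded gradients.
    """
    x = np.asarray(x0, dtype=float)
    n = len(worker_times)
    finish = np.array(worker_times, dtype=float)   # arrival time of each in-flight gradient
    snapshot = [x.copy() for _ in range(n)]        # model version each worker is working on
    read_version = np.zeros(n, dtype=int)
    version, discarded = 0, 0
    for _ in range(steps):
        i = int(np.argmin(finish))                 # next gradient to arrive
        if version - read_version[i] < R:          # fresh enough: apply it
            x = x - lr * grad_fn(snapshot[i])
            version += 1
        else:                                      # too stale: ignore it
            discarded += 1
        snapshot[i] = x.copy()                     # worker re-reads the current model
        read_version[i] = version
        finish[i] += worker_times[i]               # schedule its next computation
    return x, discarded
```

With two fast workers and one very slow one, the slow worker's gradients arrive with large delays and are discarded, while the fast workers drive convergence; no per-worker speeds had to be supplied up front.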
> The reviewer feels that this paper is claiming Algorithm 3 and Threom 2.1 are novel, while these results have already been shown in [1]. Thus, the main contribution of this paper is Sec. 3. The reviewer would like to suggest that the authors clarify which parts of this paper are novel.
No, these results are not shown in prior work, as discussed above.
> The primary contribution of this paper is theoretical, while the reviewer would like to suggest that the authors verify their results by demonstrating the experimental results.
Indeed, our work is primarily theoretical, as mentioned earlier. However, we provide experimental results in Section F of the appendix. | Summary: This paper discusses the characteristics of asynchronous parallel algorithms when a delay upper bound is provided, specifically Algorithm 4 and Algorithm 5. Asynchronous parallel SGD was the focus of SGD research from 2014 to 2020. With the popularity of the Adam algorithm, research on SGD has begun to decline. This paper offers rigorous proofs for Algorithm 4 and Algorithm 5, and simple experiments are provided in the appendix.
I believe the biggest issue with this paper is that it merely adds a delay constraint on top of Algorithm 3 without altering any of its other mathematical properties. Providing such an analysis requires only minor modifications to the analysis of Algorithm 3. Therefore, the paper should have extensively discussed the shortcomings of Algorithm 3, particularly in Section 2.2. However, the content in Section 2.2 does not convince me that Algorithm 3 has significant flaws. Furthermore, the analysis of Algorithm 3 itself, such as that of the Hogwild! algorithm, already includes discussions on maximum delay. Consequently, I find the contribution of this paper to be insufficient.
Claims And Evidence: This paper does indeed present two asynchronous parallel algorithms and provides relatively complete proofs and very simple experiments for these algorithms. However, from an experimental perspective, it is difficult to support the arguments made in this paper.
Methods And Evaluation Criteria: From a theoretical perspective, this paper provides a fairly complete proof. From an experimental perspective, it is difficult to support the arguments made in this paper.
Theoretical Claims: I have basically reviewed the overall proof framework.
Experimental Designs Or Analyses: There are almost no experiments in this paper.
Supplementary Material: I read full appendix.
Relation To Broader Scientific Literature: The main conclusions of this paper are closely related to the analysis of Algorithm 3. Given existing analyses of Algorithm 3, such as that of the Hogwild! algorithm, I believe the analysis presented in this paper is quite straightforward.
Essential References Not Discussed: no
Other Strengths And Weaknesses: This paper discusses an asynchronous parallel algorithm with a bounded delay, which essentially adds a delay constraint to Algorithm 3. The proof and analysis of Algorithm 3 are already very well-established, and those proofs already state their delay requirement in terms of the maximum delay. Therefore, the algorithm and analysis presented in this paper have limited value.
In practical industrial scenarios, significant fluctuations in node delays are relatively rare. Spending additional computational resources to control delay may result in higher performance losses than those caused by the delay itself. Hence, from both algorithm design and theoretical analysis perspectives, the value of this paper is quite limited.
Other Comments Or Suggestions: no
Questions For Authors: Show more discussion of the differences between the analyses of traditional ASGD and Algorithms 4/5.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for the review.
> Asynchronous parallel SGD was the focus of SGD research from 2014 to 2020. With the popularity of the Adam algorithm, research on SGD has begun to decline.
We respectfully disagree with this statement. One of the important works [1] in stochastic optimization (a field that includes both SGD and Adam) was published only in 2023 (preprint in 2019). [1] proves that SGD is an optimal method in the nonconvex stochastic setting, meaning that, theoretically, Adam cannot be better than SGD. At the same time, we agree that Adam is a very important method that requires further investigation, but it is based on SGD. Without understanding SGD, it would be much more difficult to understand Adam.
> I believe the biggest issue with this paper is that it merely adds a delay constraint on top of Algorithm 3 without altering any of its other mathematical properties.
Adding this delay constraint in Alg. 4 is a non-trivial and elegant fix for the original Async SGD. Surprisingly, this simple yet essential algorithmic improvement was previously overlooked.
> Furthermore, the analysis of Algorithm 3 itself, such as that of the Hogwild! algorithm, already includes discussions on maximum delay .. The main conclusions of this paper are closely related to the analysis of Algorithm 3. Based on the analysis of Algorithm 3, such as the Hogwild! algorithm analysis, I believe that the analysis presented in this paper is quite straightforward ...
Note that the analysis of Async SGD in the Hogwild! paper is not the tightest previous result. [2,3] establish a better rate. We further improve it to the optimal time complexity, which is unimprovable due to the lower bounds.
The mathematical properties change significantly. Instead of the classical asynchronous SGD, we analyze a new version in which outdated and irrelevant data are ignored. First, we analyze the sum $\sum_{k=0}^K \mathbb{E}[\|x^k - x^{k-\delta^k}\|^2]$ in Lemma C.2 more carefully and improve upon the analysis in [2,3]. Next, Lemma 4.1, Theorem 4.2, Lemma 5.1, and Theorem 5.1 are entirely new results, which show the optimality of the Async SGD approach for the first time in the literature!
> Providing such an analysis requires only minor modifications to the analysis of Algorithm 3. Therefore, the paper should have extensively discussed the shortcomings of Algorithm 3 ... the content in Section 2.2 does not convince me that Algorithm 3 has significant flaws.
Algorithm 3 presents at least two key flaws. First, it requires computation times as input and assumes that these times are fixed, an assumption that is clearly unrealistic in practical scenarios. Second, as detailed in Section 2.2, there is another significant flaw: Algorithm 3 is neither robust nor adaptive to adversarial computation environments.
> This paper does indeed present two asynchronous parallel algorithms and provides relatively complete proofs and very simple experiments for these algorithms. However, from an experimental perspective, it is difficult to support the arguments made in this paper.
The goal of this paper was to prove the theoretical optimality of the Asynchronous SGD approach. This paper closes an important theoretical question from the optimization field. This paper should be read primarily as a theoretical paper.
> In practical industrial scenarios, significant fluctuations in node delays are relatively rare.
This observation does not always hold; significant fluctuations appear in federated learning. Even when fluctuations are minimal, we still obtain improved and optimal theoretical guarantees (see Table, Thm. 4.2, 5.1).
> Spending additional computational resources to control delay may result in higher performance losses than those caused by the delay itself.
Additional computations due to control of the delays are negligible compared to stochastic gradient computation times. How can a simple comparison $\delta^k < R$ of two integers decrease the performance of modern computation systems?
> Hence, from both algorithm design and theoretical analysis perspectives, the value of this paper is quite limited.
**We strongly believe obtaining an optimal Async SGD method is an important task for the ICML community. This is the first paper to demonstrate the optimality of asynchronous SGD in terms of time complexity—a point we believe was overlooked by the reviewer. The corresponding complexity is significantly better than that of previous variants (see Table 1). We believe this work closes an important open problem in asynchronous optimization.**
[1] Arjevani, Y, et al. "Lower bounds for non-convex stochastic optimization." Mathematical Programming 199.1 (2023): 165-214.
[2] Koloskova, A., et al. Sharper convergence guarantees for Asynchronous SGD for distributed and federated learning. NeurIPS 2022.
[3] Mishchenko, K., et al. Asynchronous SGD beats minibatch SGD under arbitrary delays. NeurIPS 2022.
---
Rebuttal Comment 1.1:
Comment: The author's rebuttal and other reviewers' opinions addressed some of my concerns. I do think that this approach might have some practical significance in the field of federated learning, so I have slightly increased my score. However, the improvements to asynchronous SGD presented in the paper are very minimal, if not expected. Table 1 shows that the modifications and optimizations are basically built upon the work of Koloskova et al.; thus, I believe the innovation is quite limited. From the perspective of clusters and data centers, this approach is rather conventional. As I mentioned in my review, the impact of delays has long been detailed in theorems in early studies of asynchronous SGD, which already guides the choice of cluster size (since the maximum delay scales with the number of workers, i.e., parallelism). For instance, in industrial applications like ad CTR model training, cluster sizes typically do not exceed 300 workers, because the higher delays brought by increased parallelism would severely hinder convergence speed.
Regarding experiments, theoretically, I expect to see a match between theoretical curves and experimental results (since the experiments in this paper are based on very simple analyzable functions). Practically, I hope to see outcomes from neural networks, even relatively simple ones like ResNet20. Considering the main contribution of this paper lies in its theory, I am merely offering suggestions; issues with experiments are not the reason for my disapproval of this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for increasing the score! Let us clarify the remaining concerns:
> However, the improvements to asynchronous SGD mentioned in the paper are very minimal, if not expected. Table 1 shows that the modifications and optimizations are basically built upon the work of Koloskova et al., thus, I believe the innovation is quite limited.
It is true that our work builds upon the results of Koloskova et al., but this is a natural progression in mathematical and optimization sciences. There is a connection between the work of Koloskova et al. and the earlier work [1], which in turn may rely on techniques introduced in [2], and so on. All these papers, including ours, study the same method, each improving upon the results of the previous ones. Our work builds on Koloskova et al., their work may build on [1], [1] may build on [2], and so forth.
In this line of progress, we provide a new analysis that improves all previous results. Furthermore, we show that our analysis is tight and can not be improved further by proving matching lower bounds, which we believe is an important contribution.
---
> As I mentioned in my review, the impact of delays has long been detailed in theorems in early studies of asynchronous SGD, which already guides the choice of cluster size (since the minimum delay equates to maximum the number of workers, i.e., parallelism). For instance, in industrial applications like AD CTR model training, cluster sizes typically do not exceed 300 workers because increased parallelism leading to higher delays would severely hinder convergence speed.
Note that in modern large-scale training, the number of workers can far exceed 300. For example, Llama 3 was reportedly trained using 16,000 GPUs. At such a scale, delays and interruptions become increasingly common. Moreover, the number of GPUs used in training continues to grow and is rapidly approaching 100,000 (if it hasn’t already).
In [3], a 16K GPU cluster experienced 419 unexpected interruptions over a 54-day period. The following excerpt from the paper illustrates this: "During a 54-day snapshot period of pre-training, we experienced a total of 466 job interruptions. Of these, 47 were planned interruptions due to automated maintenance operations such as firmware upgrades or operator-initiated operations like configuration or dataset updates. The remaining 419 were unexpected interruptions, which are classified in Table 5."
For more details, see [3], Section 3.3.4.
As the number of GPUs increases, delays and interruptions become more frequent; our work offers both new practical and theoretical guidance on how to effectively manage significant delays.
---
We believe that establishing optimality and improving the results of [1, 2, 4] (and many other related works) for Asynchronous SGD is an important objective for the ICML community. Our work is the first to achieve optimal time complexity — a key contribution that may have been overlooked by the reviewer.
---
> Regarding experiments, theoretically, I expect to see a match between theoretical curves and experimental results (since the experiments in this paper are based on very simple analyzable functions). Practically, I hope to see outcomes from neural networks, even relatively simple ones like ResNet20. Considering the main contribution of this paper lies in its theory, I am merely offering suggestions; issues with experiments are not the reason for my disapproval of this paper.
We ran a small MNIST experiment using a 2-layer neural network with ReLU activation. See results
[here](https://anonymous.4open.science/api/repo/nn_exp-17E3/file/real_data.pdf?v=695e36eb)
---
Thank you for your review!
Best regards,
Authors
---
[1] Sebastian U. Stich and Sai Praneeth Karimireddy. The error-feedback framework: SGD with delayed
gradients. Journal of Machine Learning Research, 21(237):1–36, 2020
[2] Alekh Agarwal and John C Duchi. Distributed delayed stochastic optimization. In Advances in Neural
Information Processing Systems 24, pages 873–881. Curran Associates, Inc., 2011.
[3] Grattafiori, Aaron, et al. “The llama 3 herd of models.” arXiv preprint arXiv:2407.21783 (2024).
[4] Koloskova, A., et al. Sharper convergence guarantees for Asynchronous SGD for distributed and federated learning. NeurIPS 2022.
---
Summary: This paper proposes a method called Ringmaster ASGD to achieve the optimal time complexity for asynchronous methods as described in [1]. Ringmaster ASGD is a simple modification of vanilla asynchronous SGD. In Ringmaster ASGD, gradients with large delays (>R) are discarded.
[1] Tyurin, A. and Richtárik, P. Optimal time complexities of parallel stochastic optimization methods under a fixed computation model. Advances in Neural Information Processing Systems, 36, 2023.
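The delay-threshold rule summarized above can be sketched with a small event-driven simulation. This is a toy illustration on a quadratic objective with made-up per-worker compute times and parameter choices, not the authors' implementation:

```python
import heapq
import numpy as np

def ringmaster_asgd(grad, x0, taus, R, lr=0.1, iters=200, sigma=0.1, seed=0):
    """Event-driven toy simulation: a gradient started at server iteration j
    and arriving at iteration k is applied only if its delay k - j <= R;
    otherwise it is discarded and the worker restarts on the fresh iterate."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    k = 0
    # one in-flight gradient per worker, all started at the initial iterate
    events = [(tau, i, 0, grad(x) + sigma * rng.standard_normal(x.shape))
              for i, tau in enumerate(taus)]
    heapq.heapify(events)
    applied = discarded = 0
    while k < iters:
        t, i, j, g = heapq.heappop(events)    # next worker to finish
        if k - j <= R:                        # fresh enough: apply the update
            x = x - lr * g
            k += 1
            applied += 1
        else:                                 # too stale: drop it
            discarded += 1
        # worker i immediately starts computing at the current iterate
        g_new = grad(x) + sigma * rng.standard_normal(x.shape)
        heapq.heappush(events, (t + taus[i], i, k, g_new))
    return x, applied, discarded

grad = lambda x: 2.0 * x                      # f(x) = ||x||^2
x, applied, discarded = ringmaster_asgd(grad, [5.0, -3.0],
                                        taus=[1.0, 1.3, 7.0], R=3)
print(np.linalg.norm(x), applied, discarded)
```

With these illustrative times, the slow third worker's gradients arrive with delays far above R and are dropped, while the two fast workers drive the iterate toward the minimizer.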
Claims And Evidence: The claims are partially supported by evidence. Please see details below.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem.
Theoretical Claims: As shown in Theorem 4.2 and Theorem 5.1, the value of the delay threshold does not depend on the computation times. Does this imply that the same value of R can be used across different distributed systems? This conclusion is somewhat confusing and counter-intuitive. It seems that the optimal value of R should be related to the computing capability of the workers in the cluster.
Experimental Designs Or Analyses: Only one very simple problem, a simulated convex problem, is used in the experiments. Furthermore, only one specific value of R is evaluated. More experiments with more complex models and real datasets, under different settings of R, are needed to validate the efficiency of Ringmaster ASGD.
Supplementary Material: Yes. I have reviewed both proof details and experiments.
Relation To Broader Scientific Literature: The key contributions of the paper are related to federated learning.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Algorithm 5 is a modification of Algorithm 4 that stops irrelevant computations. The update rules of these two algorithms are equivalent. Hence, the statement from line 319 to line 328 is sufficient, and the details of Algorithm 5 can be moved to the appendix for simplicity.
Other Comments Or Suggestions: No.
Questions For Authors: Please refer to the above issues.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: Thank you for the review.
> As shown in Theorem 4.2 and Theorem 5.1, the value of the delay threshold does not depend on the computation times. Does this imply that the same value of R can be used across different distributed systems? This conclusion is somewhat confusing and counter-intuitive. It seems that the optimal value of R should be related to the computing capability of the workers in the cluster.
To clarify, in the proof of Theorem 4.2, we select $R$ to minimize the upper bound on time complexity (Equation 11), up to a universal constant. The exact minimizer—without this constant—depends on the worker times $\tau_i$. However, the resulting time complexity differs only by a constant factor.
Consider an optimization problem with varying worker times $\tau_i$. Let’s examine two extreme cases:
- $\tau_i = \infty$ for all $i > 1$ and $\tau_1 = \tau > 0$ (only one active worker) – The optimal choice here is simply $R = 1$.
- All $\tau_i$ are equal – The optimal $R$ in this case is potentially larger than $1$.
While the optimal $R$ varies across these scenarios, the final time complexities remain within a constant factor of each other.
This property is, in fact, quite powerful: as you pointed out, the same value of $R$ can be used across different data centers, leading to, at most a constant-factor difference in performance. We appreciate this observation and will clarify it further in the camera-ready version of the paper.
> Only one very simple problem, a simulated convex problem, is used in experiment. Furthermore, only one specific value of R is evaluated. More experiments with more complex models and real datasets, under different settings of R, are needed to validate the efficiency of Ringmaster ASGD.
The primary goal of this paper is to establish the **theoretical optimality** of Asynchronous SGD. Our work addresses a fundamental open problem in optimization, providing the first proof of **optimal time complexity** for asynchronous SGD. We firmly believe that developing an optimal Asynchronous SGD method is essential for the ICML community. This work represents a significant step in that direction, offering theoretical guarantees that pave the way for future practical advancements.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the rebuttal.
From the rebuttal, it is difficult to reach the conclusion that the resulting time complexity differs only by a constant factor. Furthermore, it is counter-intuitive that the delay threshold depends on neither the computation times nor the number of nodes in the cluster. Given a special case in which the computation times are totally different for each node, it seems that a more reasonable choice is to set the delay threshold to be proportional to the number of nodes.
In addition, even experiments on simple non-convex problems like neural networks with two or three layers (not necessarily large models on large datasets) could make the paper more convincing. For the convex problem in the paper, other methods like variance-reduction-based asynchronous SGD can achieve much faster convergence than the method proposed in this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for engaging with us. It seems there may be a misunderstanding, and we are happy to clarify.
> From the rebuttal, it is difficult to get the conclusion that the resulting time complexity differs only by a constant factor.
In Theorem 4.2, we provide an upper bound on the time complexity (eq. 8), expressed using big $\mathcal{O}$ notation, which hides universal constants (i.e., constants independent of any problem parameters). Reference [1] (specifically Theorem 6.4) gives a matching lower bound for the same class of functions (smooth, nonconvex) for first-order asynchronous methods. This lower bound coincides with our upper bound in eq. 8, up to universal constants.
This implies that our method, with the threshold selection from Theorem 4.2, is unimprovable up to constant factors. In other words, while it is possible that for certain functions and specific time distributions ($\tau_i$), a different choice of the threshold $R$ might lead to better time complexity, the improvement can only be by a constant factor (e.g., twice as fast). It cannot exceed that because of the matching lower bound established in [1].
For example, in the case where $\tau_i=\infty$ for $i >1$, the optimal choice of $R$ is 1, which gives a better time complexity than the choice in Theorem 4.2. However, even in this extreme case, the improvement remains within a constant factor.
---
[1] Tyurin, A. and Richtárik, P. Optimal Time Complexities of Parallel Stochastic Optimization Methods Under a Fixed Computation Model. NeurIPS 2023.
> Furthermore, it is counter-intuitive that the delay threshold does not depend on the computation times and the number of nodes in the cluster.
In fact, the optimal choice of $R$ does depend on the computation times, as we previously mentioned. However, there is no closed-form expression for the best $R$ in general—it depends on the specific values of $\tau_i$ and must be determined on a case-by-case basis.
If desired, the optimal threshold $R$ can be written as
$$
\arg\min_{R\geq1}\left\\{t(R)\left(1+\frac{\sigma^2}{R\varepsilon}\right)\right\\},
$$
which comes from eq. 11 after removing constants that do not depend on $R$. Here, $t(R)$ denotes the time required for $R$ consecutive iterations, and it is upper bounded by eq. 7.
Plugging in the bound from eq. 7, we obtain the following expression for the optimal $R$
$$R = \max\left\\{\sigma\sqrt{\frac{m^*}{\varepsilon}},1\right\\},$$
where
$$m^*=\arg\min_{m\in [n]}\left\\{\left(\frac{1}{m}\sum_{i=1}^m\frac{1}{\tau_i}\right)^{-1}\left(1+2\sqrt{\frac{\sigma^2}{m\varepsilon}}+\frac{\sigma^2}{m\varepsilon}\right)=:T_m\right\\}.$$
As you can see, the optimal $R$ depends on $m^*$, which in turn depends on the time distribution through the $\tau_i$ values.
The choice of $R$ in Theorem 4.2 was made for simplicity—it has a closed-form expression and avoids dependence on the time distribution. While not always optimal, it achieves time complexity within a small constant factor of the best possible. This makes it a practical and robust choice, especially in the dynamic setting considered in Theorem 5.1.
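Plugging illustrative numbers into the two expressions above makes the argument concrete. In this sketch the values of $\sigma$, $\varepsilon$, and the $\tau_i$ are made up for illustration:

```python
import numpy as np

def optimal_R(taus, sigma, eps):
    """Evaluate m* = argmin_m T_m and R = max(sigma * sqrt(m*/eps), 1)
    following the formulas above; taus are sorted so the harmonic mean
    runs over the m fastest workers."""
    taus = np.sort(np.asarray(taus, dtype=float))
    best_T, m_star = np.inf, 1
    for m in range(1, len(taus) + 1):
        harm = m / np.sum(1.0 / taus[:m])   # (1/m * sum_{i<=m} 1/tau_i)^(-1)
        r = sigma**2 / (m * eps)
        T = harm * (1.0 + 2.0 * np.sqrt(r) + r)
        if T < best_T:
            best_T, m_star = T, m
    R = max(int(np.ceil(sigma * np.sqrt(m_star / eps))), 1)
    return m_star, R

# tau_i = i^p with p = 2: the optimal m* (and hence R) stops growing with n.
for n in (10, 100, 1000):
    print(n, optimal_R([i**2 for i in range(1, n + 1)], sigma=1.0, eps=0.01))
```

For this time distribution the reported $(m^*, R)$ pair is identical across the three cluster sizes, illustrating the point that the threshold is not intrinsically tied to the total number of nodes.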
---
> Given a special case that the computation times are totally different for each node, it seems that a more reasonable choice is to set the delay threshold to be proportional to the number of nodes.
Could you please clarify what you mean by ‘totally different’?
In the rebuttal, we gave an example where $\tau_i=\infty$ for $i>1$, and the optimal choice is $R = 1$, which clearly does not depend on $n$. Even if the $\tau_i$ values are not infinity but are just very large (larger than the total convergence time using only one worker) and arbitrarily different from each other, the threshold is still $R=1$.
Let’s consider a less extreme case: suppose $\tau_i=i^p$ for any $p\geq1$. Using the formula for $R$ based on $m^*$, we see that $T_m$ becomes an increasing function of $m$ beyond a certain $m$, so $m^*<n$. Consequently, even as the number of nodes increases, the value of $m^*$—and thus the optimal $R$—does not grow. Again, the delay threshold is not intrinsically tied to the total number of nodes.
Finally, the idea that $R$ should scale with the number of workers is exactly what prior works assumed. For instance, [2] shows that the average delay grows with $n$ and proposes a delay-adaptive method to improve the convergence rate (eq. 4). However, our work argues that including updates from all workers may hinder convergence. We demonstrate that carefully controlling the delay threshold, even if it means excluding slower workers, leads to a faster rate (eq. 3). This highlights a key insight of our work: the delay threshold should not be simply scaled with the number of workers.
> On experiments
We ran extra experiments with a neural network. See the results
[here](https://anonymous.4open.science/api/repo/nn_exp-17E3/file/real_data.pdf?v=695e36eb).
---
[2] Koloskova, A., et al. Sharper convergence guarantees for Asynchronous SGD for distributed and federated learning. NeurIPS 2022.
---
Summary: In the setting where all clients compute the same function, the paper introduces a family of Asynchronous SGD algorithms:
-- A trivial algorithm which chooses the optimal number of fastest machines. The paper shows that this algorithm achieves the optimal convergence rate. The downside of the algorithm is that it doesn’t handle the case when clients have variable performance.
-- Two algorithms which ignore or cut off computations which last more than some specified number of rounds. For these algorithms, the paper shows optimal convergence rate for the settings when the client performances are constant and when they are variable.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: -- I don’t think paragraph “Proof techniques” actually provides any insights about the proof
-- The techniques feel incremental, with the main proof being completed mainly using Lemmas from Koloskova et al (2022).
Experimental Designs Or Analyses: Ok
Supplementary Material: Ok
Relation To Broader Scientific Literature: Ok
Essential References Not Discussed: Ok
Other Strengths And Weaknesses: Ok
Other Comments Or Suggestions: I would like to recommend restructuring the paper as follows:
-- Move some of the discussion right before 1.1 to “Related work”
-- Move section 1.3 earlier, before section 1.1
-- Move paragraph “Why do we ignore the old stochastic gradients?” earlier, maybe around Equation (3)
-- You might want to clarify that there are other definitions of ε-stationary point (in particular, ones which would use ε^2)
-- You don’t refer to Table 1 in the text
-- Footnote 2 might not be obvious to a reader
-- You assume that all v_i are continuous. This might be an unrealistic assumption; moreover, I don’t think you use this assumption.
-- In Theorem 4.2, since you know the value of R, you can simplify the statement
Questions For Authors: Your algorithm uses a binary decision for every gradient: use it if the delay is less than R, and ignore it otherwise. This is fine if one knows a proper value of R, which introduces an additional hyperparameter. Do you think it’s possible to avoid using such a hyperparameter, e.g. by scaling the gradients inversely proportionally to the delay?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: Thank you for the review.
> The techniques feel incremental, with the main proof being completed mainly using Lemmas from Koloskova et al (2022).
We acknowledge that the proof is not complicated, but we see this as an advantage rather than a limitation. A small yet impactful change can often be more valuable than a more complex modification achieving the same result.
Our work provides the first proof of the optimality of asynchronous SGD in the literature. We establish this through a simple yet non-trivial and elegant idea: introducing a threshold on gradient delays. Surprisingly, this fundamental algorithmic improvement had been previously overlooked.
Moreover, our approach significantly alters the mathematical properties of the method. Unlike classical asynchronous SGD, it discards outdated and irrelevant data, leading to a refined theoretical analysis. Specifically, we analyze the sum $\sum_{k=0}^K \mathbb{E}[\|x^k - x^{k-\delta^k}\|^2]$ in Lemma C.2 more carefully, improving upon the analysis in [1,2]. Additionally, Lemma 4.1, Theorem 4.2, Lemma 5.1, and Theorem 5.1 present entirely new results.
> I would like to recommend restructuring the paper as follows:
Thank you for the suggestions. We will make the changes for the camera-ready version of the paper.
> You assume that all $v_i$ are continuous. This might be an unrealistic assumption; moreover, I don't think you use this assumption.
We assume that $v_i$ is non-negative and continuous almost everywhere. This means that $v_i$ can be discontinuous on a countable set of points, and our analysis still works. We believe that this is general enough, since it allows the computation powers to "jump" at a countable set of times. We use this assumption (non-explicitly) when integrating $v_i$. Under this assumption, $v_i$ is Riemann integrable. Notice that we could instead assume that $v_i$ is measurable and use the Lebesgue integral, but for clarity, we work with the Riemann integral.
> Your algorithm uses a binary decision for every gradient: use it if the delay is less than R, and ignore it otherwise. This is fine if one knows a proper value of R, which introduces an additional hyperparameter. Do you think it's possible to avoid using such a hyperparameter, e.g. by scaling the gradients inversely proportionally to the delay?
We think it may be possible to make $R$ adaptive by estimating it in an online fashion, but this is beyond the scope of our current research and is left for future work.
Regarding scaling. Prior work [1,2] explored this approach by scaling gradients inversely proportionally to their delay, a method known as delay-adaptive ASGD. However, this does not achieve optimal time complexity—some outdated gradients must be ignored. We discuss this in the “Comparison to Previous Work” section (lines 306-318).
That said, introducing an additional hyperparameter is unavoidable. However, the choice of $R$ is not particularly sensitive. Specifically, in Theorem 4.2, setting $R = \max\\{1, \lceil c \frac{\sigma^2}{\varepsilon} \rceil \\}$ for any absolute constant $c$ still ensures optimal time complexity up to a constant factor depending on $c$.
[1] Koloskova, A., et al. Sharper convergence guarantees for Asynchronous SGD for distributed and federated learning. NeurIPS 2022.
[2] Mishchenko, K., et al. Asynchronous SGD beats minibatch SGD under arbitrary delays. NeurIPS 2022.
Title: Variational Rectified Flow Matching
Paper Decision: Accept (poster)
---
Summary: This paper introduces a variational rectified flow matching method. Instead of learning a deterministic mean velocity at time $t$, the paper explicitly models a distribution over the velocity $v_t$, grounding the approach in VAE theory.
Claims And Evidence: Partially, the baselines do not include recent methods. See the detailed comments in the Weaknesses section.
Methods And Evaluation Criteria: Yes. They make sense.
Theoretical Claims: Yes, I checked Claim 1 and the proof.
Experimental Designs Or Analyses: Yes. I checked the expriments on synthetic data and CIFAR10 and ImageNet.
Supplementary Material: No. No supplementary material was found in the submission.
Relation To Broader Scientific Literature: The paper addresses the ambiguous nature of the marginal velocity field, which is critical for the performance of FM models. Previous methods tackling this challenge mainly focus on using different noise-data couplings or distillation-based methods to straighten the trajectories. This paper takes an orthogonal approach.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: ### Strengths:
1. The paper addresses an important problem in flow matching: the ambiguity of the marginal velocity field.
2. The method demonstrates strong performance compared to the baseline OT-FM and I-CFM methods.
### Weaknesses:
1. Novelty: VAEs are already well-established, which positions this work as an application of VAEs to flow matching. It would be beneficial if the authors could further highlight the paper's unique contributions beyond the direct application of VAEs. (To be clear, this is more of an open-ended question than a significant limitation. If VAEs are shown to be a good tool for the problem the authors address, the paper can still be considered strong.)
2. Sampling in Algorithm 1: In Algorithm 1, how many samples of $z$ are used for large-scale experiments (e.g., ImageNet)? How does the sample size affect performance? Furthermore, during inference, are the sampled $z^{(i)}$ first averaged and then fed into the velocity predictor, or are they individually fed into the velocity predictor to predict $v^{(i)}$, which are then averaged to obtain the final velocity? Clarifying this is crucial, as these two approaches have significantly different implications for inference speed.
3. KL Divergence Regularization: Based on experience, VAEs are often sensitive to the choice of the KL divergence regularization weight (denoted as $\lambda$). This sensitivity is also apparent in Table 1. The authors should provide an analysis of the impact of $\lambda$ on the learned velocity field and discuss the intuition behind its effect.
4. Implementation Details: The authors did not provide source code, preventing a deep dive into implementation details. Specifically, there are several options for implementing the pipeline:
(A) The encoder $q_{\phi}$ shares parameters with *a part of* the velocity predictor $v_{\theta}$. In this case, the initial part (i.e., the first several layers) of the velocity predictor could be used to predict $\mu_{\phi}$ and $\sigma_{\phi}$, while the latter part predicts the velocity field.
(B) The encoder $q_\phi$ uses a similar structure to $v_\theta$ but is a separate network, and they are jointly trained.
If (B) is the case, it could introduce a substantial number of extra parameters, potentially limiting scalability. However, Table 1 shows that the total number of parameters is 37M, compared to 36.5M for the baseline. This discrepancy needs further clarification.
5. Related to point 4, it would be very helpful if the authors could provide a code snippet illustrating the implementation of lines 342-357.
6. Limited Baselines: The baselines only include vanilla OT-FM and I-CFM. Other relevant methods, such as distillation-based consistency models (Song et al. 2023) and shortcut models (Frans et al. 2024), may also address velocity field ambiguity by encouraging "stepping over" ambiguous regions of $x_t$ with merged steps. If possible, these methods should also be considered; including a broader range of baselines would further strengthen the paper. E.g., ``Towards Hierarchical Rectified Flow'' from Zhang et al. (ICLR 2025) addressed a similar multimodal problem.
Other Comments Or Suggestions: ### Suggestions:
1. In Eq. (4), $||$ should be used as the separator between $q$ and $p$ in the KL divergence instead of $|$.
### Minor questions:
1. What resources have you used for training SiT-XL on ImageNet-1k with 256x256 resolution? How long is the training schedule?
Questions For Authors: See the questions in the Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: Thanks for detailed feedback and for recognizing the importance of addressing ambiguity in the marginal velocity field, and our strong results.
**1. Paper's unique contributions**
We study a method for capturing multi-modal velocity vector fields. We show that incorporating an unobserved continuous latent variable z via a variational formulation (akin to a VAE) enables the velocity model to learn a multi-modal vector field. Experiments across diverse data and models show that our method outperforms classic approaches.
**2. How many z used for large-scale experiments. Averaging**
To keep the comparison fair, and as shown in Algorithm 2, we sample a single $z$ per data point and keep it fixed throughout integration. No averaging is performed over $z$ or the predicted velocity.
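A minimal sketch of this inference scheme, with a hypothetical two-mode toy network standing in for the trained $v_\theta$ and plain Euler integration assumed:

```python
import numpy as np

def sample_v_rfm(v_theta, z_dim, x_dim, nfe=100, seed=0):
    """Euler-integrate dx/dt = v_theta(x, t, z) from t=0 to t=1 with a
    single latent z ~ N(0, I) drawn once and held fixed at every step."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(z_dim)   # sampled once per data point
    x = rng.standard_normal(x_dim)   # source sample x_0
    dt = 1.0 / nfe
    for step in range(nfe):
        t = step * dt
        x = x + dt * v_theta(x, t, z)
    return x

# Toy stand-in: the latent picks one of two modes (+3 or -3), and the
# velocity points along the straight line from x_t to the chosen mode.
def toy_v_theta(x, t, z):
    target = np.where(z[0] > 0, 3.0, -3.0)
    return (target - x) / max(1.0 - t, 1e-8)

x1 = sample_v_rfm(toy_v_theta, z_dim=1, x_dim=1, nfe=100, seed=1)
print(x1)   # lands on one of the two modes, selected by the fixed z
```

Because $z$ is fixed, each integration follows a single (here, straight) path; resampling $z$ selects a different mode.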
**3. Impact of KL weight**
We summarize the impact of the KL weight $\lambda$ based on our experimental findings:
1. The model successfully captures velocity ambiguity and predicts crossing flows when $\lambda$ is in a reasonable range (in [0.1,10.0]).
2. When $\lambda$ is large (e.g., 100.0), the latent posterior is forced to match a standard Gaussian, so the latent z carries minimal useful information and the velocity network behaves similarly to one obtained with classic rectified flow. This is also apparent in the loss: the KL loss diminishes, and the velocity reconstruction loss is comparable to the baseline loss. The resulting flow cannot capture ambiguity.
3. When $\lambda$ is small (e.g., 0.01), the model can exploit excessive information from the latent. This leads to a very low velocity reconstruction loss but a very high KL loss. The resulting flow appears as straight lines, but the endpoint distributions do not match the target data due to the mismatch between the predicted posterior and prior.
In our experiments, we didn't tune the KL regularization weight much, but instead scaled the KL loss with the dimension of the latent variable. E.g., for ImageNet SiT experiments, we directly used the KL weight that we employed for CIFAR-10.
**4. Implementation of $q_\phi$ and $v_\theta$**
As stated in **Sec 4.4 (L342 - 343)**, $q_\phi$ and $v_\theta$ share a similar structure but are separate nets. We will clarify this to avoid confusion. Regarding the increase in parameters, as described in **Sec 3.3 and Sec 4 (L262, 348, 372 right column)**, during inference, $q_\phi$ is not used. Instead, we sample the latent variable from a prior. The only increase in parameters comes from the two MLP layers to fuse the latent z in $v_\theta$. This design ensures that our velocity network remains comparable in size to the baseline, with less than a 2% parameter increase for CIFAR-10 and less than a 0.3% increase for ImageNet.
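A rough back-of-the-envelope check of why this overhead stays small; the dimensions below are hypothetical, not the paper's actual layer sizes:

```python
# Hypothetical sizes: a latent of width d_z fused into a backbone of width d_h
# through two extra MLP layers (weights + biases).
d_z, d_h = 64, 512
fusion_params = (d_z * d_h + d_h) + (d_h * d_h + d_h)
backbone_params = 36_500_000   # baseline parameter count reported in Tab. 1
print(fusion_params, fusion_params / backbone_params)
```

Even with these generous widths, the fusion layers contribute well under 2% of the baseline's parameters, consistent with the small increases reported above.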
**5. Code Snippet**
We will release code and models. Note, we provided more implementation details in **App D**. We are happy to address further questions during the rebuttal phase.
**6. More baselines: consistency models, shortcut models, HRF**
Consistency model:
A detailed comparison to consistency models, particularly distillation models, is included in **App B**. We used the recently developed consistency flow matching [1]. It improves upon consistency models [2] and is more closely related to flow matching. We summarized the results in **App C.1**. Our key findings:
* The consistency flow matching model performs well at low function evaluation regimes (i.e., with NFEs of 2 or 5).
* Its performance degrades as NFEs increase.
* Its best performance across all NFEs remains below classic rectified flow matching and our variational rectified flow matching.
We also highlight an exciting future research direction: combining variational flow matching with consistency models, which could further enhance results.
Shortcut model:
We evaluate the Shortcut Model (XL), trained for 800k iterations, using the FID score and following the same evaluation protocol used in **Tab 2**. Our results show that our method consistently outperforms it.
| Model | Params (M) | FID |
| - | - | - |
| SiT-XL | 675 | 13.1 |
| Shortcut-XL | 676 | 19.752 (128 NFE, reproduced) / 19.630 (250 NFE, reproduced) |
| V-SiT-XL | 677 | 10.6 |
| SiT-XL (cfg=1.5) | 675 | 3.43 |
| Shortcut-XL (cfg=1.5) | 676 | 3.8 (128 NFE, from paper) / 4.709 (128 NFE, reproduced) / 4.707 (250 NFE, reproduced) |
| V-SiT-XL (cfg=1.5) | 677 | 3.22 |
Hierarchical Rectified Flow (HRF): This concurrent work also aims to model multi-modal velocity and acceleration fields, but uses a hierarchical rectified flow. Their method requires multiple integrations during inference, making it slower than our approach. Also, HRF does not support the semantic disentangling of flows that we demonstrate in **Fig 6 and 7** for MNIST and CIFAR-10.
[1] Yang, L. et al. (2024) Consistency flow matching.
[2] Song, Y. et al. (2023) Consistency models.
**7. || should be used instead of | in KL**
Thanks for spotting this typo, we'll fix.
**8. resources for SiT-XL on ImageNet 256**
We used 8 H100 GPUs and trained the model for about 3.5 days.
---
Rebuttal Comment 1.1:
Comment: Thank you for the explanation. My concerns are addressed and I will raise the score. Looking forward to seeing the release of the code.
---
Summary: # Update
In the rebuttal, the authors have addressed many of my questions and criticisms. I feel that the main issue has not been adequately addressed, so I decided not to change the score.
To elaborate, I feel that the method adds complexity to diffusion models. The added complexity has to serve some purpose in order to be worthwhile. In light of the paper, the added complexity has two practical benefits.
(1) It improves the scores over the base models.
(2) It allows a form of conditional sampling.
For (1), while improvements are consistent across sampling steps, they are not very pronounced.
* For CIFAR-10, the scores are very close to the baselines from NFE=10 onward. For NFE=2 or 5, improving from 166 to 104 or from 36 to 25 cannot really be considered a practical improvement, because the images produced by both V-RFM and the baselines are still of low quality. Distillation techniques, on the other hand, reduce 166 to a single-digit figure at NFE=2 and NFE=5. This is what I meant when I said the improvements cannot be compared to distillation techniques.
* For the ImageNet datasets, while there is quite a significant gap when not using CFG, the gap gradually becomes smaller as training gets longer (although the authors show that the percentage improvement still increases), and it is significantly reduced when CFG is applied. This means that techniques already employed on non-variational models are already quite effective, and one has to wonder whether adding a variational component to the model is worth the trouble.
Benefit (2), on the other hand, is much more interesting to me because it is a feature that a normal diffusion model lacks: VAE-style latent codes. This is something that I feel is worth the trouble of making the model more complex, so I think stressing this benefit should become a bigger part of the paper. However, from reading the rebuttal, while the authors did experiments on interpolating the latents, it seems they have not investigated how to use latent codes to control the outputs with more degrees of certainty. As a result, it is unlikely that the final version of the paper would contain more material in this direction.
Because of these concerns, I decided not to change my evaluation.
# Old Summary
The paper proposes "variational rectified flow matching," an extension to (rectified) flow matching. The latter trains a neural network $v_\theta(x,t)$ that predicts the expected value of velocities induced from velocity fields that continuously transform one Gaussian distribution (whose mean comes from a "source" distribution) to another Gaussian distribution (whose mean comes from a "target" distribution). The paper observes that there is a distribution of velocity vectors $p(v|x,t)$ at each $(x,t)$ point and explores modeling it as a part of training the flow matching model instead of just estimating the distribution's mean as is done by standard rectified flow matching.
To model the velocity distribution at each point $(x,t)$, the paper casts the flow matching model as a latent variable model, much like a variational autoencoder. The flow matching model now accepts a latent variable $z$ and becomes $v_\theta(x,t,z)$. The latent is supposed to come from a prior distribution $p(z)$, taken to be the standard multivariate Gaussian distribution. To train the model, one needs to model the conditional distribution $q(z|x,t)$, which the paper models with an encoder network, much like what a VAE does.
The velocity distribution, conditioned on the latent $z$, is modeled as a Gaussian distribution around the predicted value: $p(v|x,t,z) = \mathcal{N}(v; v_\theta(x,t,z), I)$. This gives $p(v|x,t) = \int \mathcal{N}(v; v_\theta(x,t,z), I) p(z)\, \mathrm{d}z$ where $p(z)$ is the prior distribution of $z$, which is taken to be the standard Gaussian distribution. The flow matching model, together with the encoder, can be trained like a VAE with a loss derived from the ELBO of $\log p(v|x,t)$. The KL divergence term between $q(z|x,t)$ and $p(z)$ remains the same, but the reconstruction term in the VAE loss is replaced by the conditional flow matching loss instead. To sample with a flow matching model trained this way, one must first sample $z$ from $p(z)$; then one can use the flow matching model to generate a sample as usual, with the exception of feeding $z$ to it at every integration step.
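For concreteness, the training objective described above can be sketched as follows. This is a toy reconstruction under my reading of the paper, not the authors' code: the tanh "networks", the latent size, and the conditioning of the recognition model on $(x_0, x_1, t)$ are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x0, x1, t):
    # Recognition model: mean and log-variance of a diagonal Gaussian
    # over the latent z. The conditioning and the tanh "network" are
    # placeholders, not the paper's architecture.
    h = np.tanh(np.concatenate([x0, x1, [t]]))
    d = 2  # latent dimension (an arbitrary choice here)
    return h[:d], h[d:2 * d]

def velocity_net(xt, t, z):
    # v_theta(x, t, z): toy velocity model conditioned on the latent.
    return np.tanh(np.concatenate([xt, [t], z]))[: len(xt)]

def vrfm_loss(x0, x1, t, beta=1.0):
    # Rectified-flow interpolant and its conditional target velocity.
    xt = (1 - t) * x0 + t * x1
    v_target = x1 - x0
    # Reparameterized sample z ~ q(z | ...).
    mu, logvar = encoder(x0, x1, t)
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
    # ELBO-style loss: CFM reconstruction term plus KL(q || N(0, I)).
    recon = np.sum((velocity_net(xt, t, z) - v_target) ** 2)
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return recon + beta * kl

x0, x1 = rng.standard_normal(3), rng.standard_normal(3)
loss = vrfm_loss(x0, x1, t=0.5)
```

At sampling time, the encoder is discarded and $z$ is drawn once from the prior and held fixed across all integration steps.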
The paper demonstrates that the method has several benefits.
(1) It yielded better evaluation scores on various datasets and model architectures compared to vanilla flow matching. In particular, the gap is wider when a low number of NFEs is used to generate samples or when the training time is shorter.
(2) It yields flow matching models that can better model the velocity distributions at each $(x,t)$ point. In aggregate, such models can model sampling trajectories, and these trajectories seem to be less curved than the non-intersecting trajectories of vanilla flow matching models.
(3) By varying $z$ at test time, one can control the generation output.
Claims And Evidence: I believe claim (1) is supported by enough evidence as the paper contains experiments on 5 datasets, and three types of architecture. Performance gaps on ImageNet generation without guidance are quite significant.
Claim (2) is supported by showing that, on the 1D dataset, the vanilla flow matching model often collapses the velocity distribution. The experiment on the 2D dataset clearly shows in Figure 4(c) that the proposed method can model intersecting trajectories. However, while Figure 4(b) shows trajectories that seem to be more curved than those in Figure 4(c), it would be better to quantify the average curvature of the trajectories and show the numbers along with the pictures.
For Claim (3), the paper shows that the generated samples change when $z$ is changed in Figure 6 (MNIST dataset) and Figure 7 (CIFAR-10 dataset). While one must accept that the outputs do change, it is not quite clear whether these changes are useful or intuitive. In Figure 6, different areas of the unit square seem to correspond to different digits, but the paper does not explicitly show how one can obtain a desired digit through controlling $z$. In Figure 7, different latent codes seem to yield different overall brightness of the outputs, but it is unclear whether one can arbitrarily control the brightness through varying $z$ either. Several simple experiments where the latent codes are interpolated to get the desired outcomes would make this claim stronger.
Methods And Evaluation Criteria: The method seems to make sense for the problem at hand.
The 1D and 2D datasets are used to effectively show the ability of the trained models to better capture velocities of distributions. However, I think it is better to quantify the average curvature of 2D trajectories instead of just showing pictures.
MNIST, CIFAR-10, and ImageNet are widely used datasets to benchmark generative models. The paper also uses appropriate architectures for these datasets.
The metrics (log-likelihood for the 1D/2D datasets and FID for the image datasets) also make sense.
Theoretical Claims: Section 3 is easy to read and seems sound. However, the most important theoretical claim, that the paper's training method preserves the marginal data distribution, lacks a full formal proof. What is provided in Appendix A is a proof sketch where what is to be proven is supported by statements such as "one can show equivalence" and "equivalence can be shown via" without any work being shown. A reader would have to go to Liu's paper and follow all the logic by themselves. I suggest the authors write down the proof in Appendix A for completeness.
Experimental Designs Or Analyses: To my understanding, the paper compares its training method, variational rectified flow matching (VRFM), against two other training methods:
(1) vanilla flow matching (OT-FM), proposed by Lipman et al. (2023), and
(2) independent coupling flow matching (I-CFM), proposed by Tong et al. (2024).
The mathematical formulations of these algorithms are slightly different, and they are almost equivalent if their source distributions are the standard Gaussian $\mathcal{N}(0,I)$. As a result, I find the comparison with I-CFM for the CIFAR-10 dataset redundant. In fact, the numbers of OT-FM and I-CFM in Table 1 are very close. Moreover, the comparison with I-CFM is only available for the CIFAR-10 dataset.
The paper would feel more consistent if either (a) comparison with I-CFM is removed or (b) it also provides comparison with I-CFM training method for other datasets.
Supplementary Material: I skimmed the supplementary material, mainly to look for details that are missing from the main paper. I found that:
(1) Section A does not contain the complete proof of the main theoretical claim of the paper.
(2) Figure 12, which shows the FID scores for the MNIST dataset, should have been turned into a table and included in the main paper.
Relation To Broader Scientific Literature: The paper proposes a new extension to flow matching that allows sampling paths to be chosen based on a latent vector. In a sense, it is an interesting way to combine a VAE with a flow matching model.
Essential References Not Discussed: (1) I believe that the idea of modeling distributions of values inside a diffusion sampling process has been explored previously, and this paper is one instance of it. An example that comes to mind is the Denoising Diffusion GANs paper by Xiao et al. [1], which uses conditional GANs to model the denoising distribution at each step of the diffusion process.
(2) There is another way to combine a diffusion model with a VAE, and it involves using the former to model the latent space of the latter [2].
(3) The opposite idea to the one proposed in the paper is to regard the distribution of target values that the neural network's output is matched against as a kind of noise and to seek to eliminate it. Stable Target Field by Xu et al. [3] implements this idea.
1. Zhisheng Xiao, Karsten Kreis, Arash Vahdat. Tackling the Generative Learning Trilemma with Denoising Diffusion GANs. ICLR 2022.
2. Arash Vahdat, Karsten Kreis, Jan Kautz. Score-based Generative Modeling in Latent Space. NeurIPS 2021.
3. Yilun Xu, Shangyuan Tong, Tommi Jaakkola. Stable Target Field for Reduced Variance Score Estimation in Diffusion Models. ICLR 2023.
Other Strengths And Weaknesses: I believe this paper presents a new and interesting formulation of flow matching models. However, I do not feel that its benefits are compelling. Being able to model the velocity distribution is clearly a novelty, but rather a conceptual one. The most concrete benefit is the improvement in metrics, which diminishes as the number of function evaluations becomes larger. Still, for image datasets such as CIFAR-10 and MNIST (and perhaps ImageNet), these improvements are small and not comparable to improvements achieved by distillation methods.
Another benefit claimed by the paper is the ability to control the output through the latent code $z$. However, to make the paper stronger, I think the paper should do more experiments to highlight this aspect. This can include a simple method to sample a specific number from the MNIST dataset or a way to control the brightness of CIFAR-10 samples.
Other Comments Or Suggestions: I suggest replacing the term "data-domain-time-domain" with "data-time space" or "$(x,t)$-space," which should be more concise.
Questions For Authors: (1) In Table 2, it seems that the performance gap between SiT-XL (the baseline) and V-SiT-XL diminishes as training becomes longer. Can you show the FID scores at 1200K steps and/or 1600K steps to confirm that the gap still exists there as well?
(2) It would be interesting to see how the ImageNet models perform at NFEs lower than 250. Please include those numbers in Table 2 if possible.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the feedback and for highlighting our theoretical contributions and strong results across datasets and models.
**1. Quantify the average curvature of 2D trajectories**
We calculated the curvature for 2D data results (**Sec 4.2**) and find significantly lower curvature for our method:
| | Mean/Max Curvature|
| - | - |
|Baseline (rectified flow)| 21.03/171.35|
|Ours| 0.98/4.23|
**2: How to obtain desired digits/brightness via $z$. Latent interpolation would strengthen the claim**
Great suggestion. We conducted interpolation experiments and summarize findings verbally as images can't be uploaded. For MNIST, interpolating latents leads to smoothly transitioning digits (e.g., 1 → 7 → 8 → 3 → 2 → 0). Note, **Fig 6** illustrates that each digit corresponds to a specific latent $z$. Intermediate digits emerge naturally when interpolating. For CIFAR-10, we observe analogous effects—interpolating between two latents leads to smooth transitions in brightness and color patterns.
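The interpolation experiment can be sketched as follows; Euler integration, the helper names, and the toy velocity field are illustrative assumptions rather than our actual model. The source noise $x_0$ is held fixed so that only the latent varies.

```python
import numpy as np

def euler_sample(v_theta, x0, z, steps=100):
    """Integrate dx/dt = v_theta(x, t, z) from t=0 to t=1 with Euler steps."""
    x, dt = x0.copy(), 1.0 / steps
    for i in range(steps):
        x = x + dt * v_theta(x, i * dt, z)
    return x

def interpolate_samples(v_theta, x0, z_a, z_b, n=5):
    """Generate samples along a linear path between two latent codes,
    keeping the source noise x0 fixed so only z varies."""
    return [
        euler_sample(v_theta, x0, (1 - a) * z_a + a * z_b)
        for a in np.linspace(0.0, 1.0, n)
    ]

# Toy check with a velocity field that simply points toward the latent:
toy_v = lambda x, t, z: z - x
x0 = np.zeros(2)
outs = interpolate_samples(toy_v, x0, np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

With a trained V-RFM model in place of `toy_v`, decoding the intermediate samples produces the smooth digit and color transitions described above.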
**3: Marginal data distribution preservation lacks a proof**
The proof in **App A** refers to results established by Liu et al. to provide adequate credit. This may require readers to reconstruct parts of the proof using the work of Liu et al. To make the paper self-contained, we will expand it to add the missing steps (the derivation of $\frac{d}{dt} E[h(X_t)]$ and the equivalence $0 = E_Z \int (\dots)$).
**4. Formulations of OT-FM and I-CFM are almost equivalent. Redundant comparison with I-CFM**
OT-FM/I-CFM differ in $x_t$ and the conditional vector field $u_t$. Specifically, OT-FM defines $x_t$ as $\mathcal{N}(t x_1, 1-(1-\sigma) t)$ and $u_t$ is $\frac{x_1-(1-\sigma)x_t}{1-(1-\sigma)t}$. I-CFM defines $x_t$ as $\mathcal{N}(t x_1+(1-t)x_0, \sigma)$ and $u_t$ as $(x_1-x_0)$. SiT uses the rectified flow objective, which is equivalent to I-CFM. We'll follow the suggestion to remove OT-FM for consistency.
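The two parameterizations quoted above can be written out directly. The sketch below is illustrative only; the final check mirrors the "almost equivalent" observation: with a standard Gaussian source, $\sigma = 0$, and the same noise draw, the two interpolants and conditional vector fields coincide.

```python
import numpy as np

rng = np.random.default_rng(0)

def ot_fm(x1, t, sigma, eps):
    # OT-FM: x_t ~ N(t*x1, (1-(1-sigma)*t)^2 I), with eps ~ N(0, I)
    xt = t * x1 + (1 - (1 - sigma) * t) * eps
    ut = (x1 - (1 - sigma) * xt) / (1 - (1 - sigma) * t)
    return xt, ut

def i_cfm(x0, x1, t, sigma, eps):
    # I-CFM: x_t ~ N(t*x1 + (1-t)*x0, sigma^2 I), target velocity x1 - x0
    xt = t * x1 + (1 - t) * x0 + sigma * eps
    ut = x1 - x0
    return xt, ut

# With sigma = 0 and the Gaussian source sample x0 = eps, both reduce to
# xt = t*x1 + (1-t)*eps and ut = x1 - eps.
x1, eps = rng.standard_normal(4), rng.standard_normal(4)
xt_a, ut_a = ot_fm(x1, 0.3, 0.0, eps)
xt_b, ut_b = i_cfm(eps, x1, 0.3, 0.0, eps)
```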
**5. Figure 12 of MNIST FID score should be table**
We'll convert **Fig 12** into a table.
**6. Discussions of related work**
Thanks for highlighting these works. We'll add a full discussion. Below is a brief summary:
1. Denoising Diffusion GANs replace the Gaussian model in the denoising step with a multimodal distribution. Unlike our method, a conditional GAN with a separate discriminator models the distribution. But GANs face mode collapse and stability issues. In contrast, our method uses rectified flow matching, preserving the maximum likelihood benefits.
2. Score-based Generative Modeling uses a VAE to map raw data $x_0$ into latent space $z_0$, with the VAE jointly trained with score-based generative modeling (SGM). Unlike our approach, SGM still faces ambiguity issues due to its use of a uni-modal Gaussian distribution.
3. Stable Target Field notes that the posterior distribution is multi-modal. To model this distribution, the paper reduces training target variance using a reference batch. In contrast, our method directly models this multi-modal posterior via a recognition model.
**7. Improvements between SiT-XL and V-SiT-XL diminish as training continues**
A diminishing gap is expected as absolute values decrease, a trend also presented in Fig 2 of the SiT paper (comparing the gap of DiT v.s. SiT). However, the relative improvement remains strong. We extend training to 1200k steps and report both absolute and percentage improvements. We observe the percentage improvements increase with more training iterations.
||200k |400k |600k |800k |1200k |800k (cfg=1.5) |1200k (cfg=1.5)|
| - | - | - | - | - | - | - | - |
|SiT-XL |26.09 |17.84 |14.77 |13.15 |11.26 |3.43 |2.97|
|V-SiT-XL |23.34 |14.60 |12.00 |10.62 |8.97 |3.22 |2.76|
|abs diff |2.75 |3.24 |2.78 |2.53 |2.29 |0.21 |0.20|
|percent diff |10.53% |18.16% |18.79% |19.24% |20.31% |6.12% |6.86%|
**8. Improvements are small, not comparable to distillation**
We respectfully disagree. Our improvement is solid, while strictly following the open-source SiT training. Results are consistent with the SiT-to-DiT improvement, demonstrating a comparable level of progress (19.5 for DiT-XL, 17.2 for SiT-XL, and 14.6 for V-SiT-XL at 400k steps, as shown in **Tab 2**).
Also note, our key contribution is to model the velocity distribution. We find this to consistently improve evaluation metrics across all NFEs. While distillation methods may show improvements for low NFEs, our method achieves better results across both low and high NFEs.
Additionally, as discussed in **App B**, V-RFM focuses on single-stage training to capture a multi-modal velocity distribution from “ground-truth” data without leveraging pre-trained models. Exploring distillation for V-RFM is an exciting avenue for future research, particularly when the interest is to improve results for low NFEs.
**9. Replace "data-domain-time-domain"**
Great suggestion. We'll revise.
**10. ImageNet performance below 250 NFEs**
**Fig 8** shows those results, revealing a consistent boost, further highlighting our method's effectiveness. | Summary: The paper introduces Variational Rectified Flow Matching (VRFM), a novel approach that integrates techniques from Rectified Flow Matching (RFM) and Variational Autoencoders (VAEs). This design aims to address the vector ambiguity issue inherent in the original RFM method. Through extensive experiments, the authors demonstrate that VRFM improves data generation quality, producing samples that more closely align with ground truth compared to standard RFM. Additionally, the proposed method effectively mitigates vector ambiguity to a significant extent. Empirical evaluations on benchmark image datasets, including CIFAR-10 and ImageNet, reveal that VRFM consistently achieves superior Fréchet Inception Distance (FID) scores compared to baseline models, highlighting its effectiveness in generative modeling.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: I reviewed the proofs quickly, and they appear to be correct.
Experimental Designs Or Analyses: Yes, there are no issues.
Supplementary Material: I reviewed part A.
Relation To Broader Scientific Literature: The proposed Variational Rectified Flow Matching (VRFM) builds upon previous work in Rectified Flow Matching (RFM) and Variational Autoencoders (VAEs). A key limitation of RFM is vector ambiguity, which can hinder generative performance. VRFM addresses this issue by integrating a variational framework, a well-established technique in generative modeling for learning more effective latent representations. Experimental results show that VRFM achieves improved Fréchet Inception Distance (FID) scores compared to RFM, indicating enhanced generative quality beyond what was previously achievable with RFM alone.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. The paper is well-written, with clear and easy-to-follow explanations.
2. The authors conduct extensive experiments to evaluate the proposed method comprehensively.
3. The proposed approach effectively addresses vector ambiguity and demonstrates superior generative performance compared to baseline models.
Weaknesses:
1. The method requires additional parameters to compute the latent representation during training, increasing computational complexity.
2. Some claims rely on empirical observations; providing stronger theoretical proofs would further strengthen the paper.
3. While the proposed method outperforms baseline models, it still lags behind the current state-of-the-art models.
Other Comments Or Suggestions: No
Questions For Authors: 1. In Figure A, the visualization depicts the ground truth data, including the source data distribution, target data distribution, and the mapping between them. I assume that the mapping between source and target data points is randomly generated, meaning any point in the source could potentially correspond to any point in the target. If this assumption is correct, then a definitive ground truth mapping may not exist. Could the authors clarify how they obtained the ground truth mapping used in the visualization?
2. The claim that the proposed method resolves vector ambiguity is primarily supported by visualizations using toy data. Would it be possible to provide a rigorous theoretical proof to substantiate this claim?
3. How does the choice of latent variable affect the generative path? Specifically, would the generative trajectory be different when using different numbers of latent variables during inference?
Ethical Review Concerns: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks a lot for your detailed feedback and for recognizing our well-written paper, extensive experiments, and comprehensive evaluation of performance. We also appreciate the acknowledgment of V-RFM’s effectiveness in addressing velocity ambiguity and its superior performance. Below, we address questions:
**1. The method requires additional parameters to compute the latent representation during training, increasing computational complexity.**
Yes, extra computation is used during training. It is essential to extract latent information within the velocity network, ultimately enhancing generation quality and expressiveness. As discussed in **Sec 3.3 and 4**, the latent encoding network is not used during inference—we directly sample $z$ from the prior distribution. Furthermore, the increase in parameters is minimal (e.g., less than 0.3% on ImageNet), and the impact on inference speed is negligible while delivering superior results. The ablation study on the size of the posterior model summarized in **App Tab 5** shows that performance remains consistent across variations, demonstrating the robustness and flexibility of our approach. This allows users to balance training efficiency and runtime quality based on their computational constraints.
**2. While the proposed method outperforms baseline, it lags behind the current SOTA.**
The primary contribution of our work is not in surpassing the current state-of-the-art, but rather in introducing a methodological innovation, i.e., capturing the multimodal velocity field, which offers new avenues for improvement in the field. While our method lags behind the state-of-the-art, it can be combined with innovations put forth by current SOTA SiT-XL/2 + MG [1] and SiT-XL/2 + REPA [2], both built upon the SiT framework. As detailed in our experiments (**Sec 4.5, L364-367 right column**), we strictly followed the original training recipe from the open-source SiT repository and replicated the process outlined in the SiT paper for a fair comparison. Our experimental results provide empirical evidence of the effectiveness of our approach.
[1] Tang, Z. et al. (2025). *Diffusion Models without Classifier-free Guidance*.
[2] Yu, S. et al. (2024). *Representation alignment for generation: Training diffusion transformers is easier than you think*.
**3. A definitive ground truth mapping may not exist.**
A definitive “ground-truth” does not exist indeed. The (“ground-truth”) velocities form a distribution for every $(x_t, t)$-location. To see this we independently sample from both the source distribution and the target data distribution, calculate the rectified flow interpolants for each pair, and visualize them in **Fig 1** and the velocity distribution in **Fig 3(a)**. Our method models this velocity distribution at every $(x_t, t)$-location, while standard rectified flow matching cannot capture it. In the abstract and the main paper, we use quotation marks around “ground-truth” to emphasize this distinction. We will correct any missing instances in the final version.
**4. Provide a rigorous theoretical proof to substantiate the claim that the proposed method resolves vector ambiguity.**
In **Sec 3**, we show that the proposed approach leads to a mixture model for the velocity distribution (**L179-180**). A mixture model is theoretically capable of capturing multi-modality. Following classic expectation maximization or variational inference, we introduce the recognition model, derive the lower bound of the marginal likelihood for an individual data point (**L189-192**), and present the variational flow matching objective (**L197-199**). To further substantiate our approach, in **App A**, we show how to prove that the distribution learned by the variational objective preserves the marginal data distribution. Empirically, on 1D data we visualize the learned velocity distribution in **Fig 3**, showing that the method indeed learns the velocity ambiguity. Further, during training on high-dimensional data, we observe that our method achieves better velocity reconstruction losses (**App Fig 15**) compared to standard rectified flow, indicating that the predicted velocities more accurately approximate the “ground-truth” velocities.
**5. How does the choice of latent variable affect the generative path.**
We studied the role of $z$ for MNIST data (**Fig 6**) and CIFAR-10 data (**Fig 7**). As noted in **Sec 4.3 and 4.4**, we observe clear patterns in the generated samples based on $z$. Specifically, images conditioned on the same latent $z$ exhibit consistent color patterns, while images at the same grid location show similar content. These observations validate the effectiveness of the latent variable $z$ in influencing and controlling the generated samples. | Summary: This paper proposes Variational Rectified Flow Matching (V-RFM), a generative model that integrates Variational Autoencoders (VAEs) with Rectified Flow Matching (RFM). Unlike conventional RFM, which struggles to capture the multimodal nature of the ground-truth velocity vector field and learns only a single averaged direction, V-RFM introduces an encoder-based architecture to enable modeling of multimodal velocity fields. This capability allows V-RFM to theoretically achieve straighter sampling trajectories compared to traditional RFM frameworks.
## update after rebuttal
I thank the authors for their response and I will maintain my score as Weak Accept. I suggest that the authors add discussion and evidence on training stability and convergence in an updated paper. Also, it would be valuable to report more results in an updated version.
Claims And Evidence: Yes, the major claim of this submission is supported by experiments.
Methods And Evaluation Criteria: Yes, the proposed method makes sense for the current problem.
Theoretical Claims: I checked the correctness of the proof of Claim 1 of the main paper. The proof looks correct to me.
Experimental Designs Or Analyses: I suggest adding more baselines in the experiments in Tables 1 and 2, such as flow matching and 1/2/3-Rectified Flow, which can increase the soundness/validity of experimental designs. It is also recommended to increase qualitative comparison with those baselines.
Supplementary Material: I reviewed the experimental part of the supplementary material.
Relation To Broader Scientific Literature: No.
Essential References Not Discussed: The paper provides a full discussion of related work.
Other Strengths And Weaknesses: Strengths
+ The proposed V-RFM is novel for me, and is promising as a generative model.
+ The introduction of encoder enables V-RFM to have the ability to infer latent codes given the data samples.
Weaknesses
- I am more concerned about the training stability and effectiveness of the method as a generative model. V-RFM introduces a VAE into RFM. Relatively speaking, the training stability and effectiveness of VAEs are not as good as RFM because they may encounter the posterior collapse problem. However, this paper does not seem to discuss whether the encoder will encounter this problem and how it affects generation quality.
Other Comments Or Suggestions: N/A
Questions For Authors: Please see Weaknesses and Experimental Designs Or Analyses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks a lot for your detailed feedback and for recognizing V-RFM's novelty and promise as a generative model. See below for answers to your questions:
**1. Training stability of V-RFM. VAE training is not as stable as RFM because it may encounter the posterior collapse problem.**
Great question. During our studies we observed that stability can be controlled by the architecture. Specifically, as stated in **L353-357**, bottleneck sum, which fuses the latent $z$ with activations of the velocity network, and adaptive normalization, which explicitly scales and offsets the latent $z$ at multiple layers of the velocity network, are effective at ensuring that the latent variable $z$ sampled from the posterior is not ignored. As shown in **App Fig 15**, the reconstruction losses of V-RFM remain consistently lower than the baseline, demonstrating that the latent variable contributes meaningfully to reducing the reconstruction loss. Furthermore, **Fig 6 and 7** confirm that modifying the latent $z$ alters the predicted image, further verifying that our latent representation remains informative and utilized throughout training. Lastly, we conducted an ablation study on the size of the posterior model, summarized in **App Tab 5**. The results show that performance remains consistent across variations, indicating that even with a very small encoder (6.7% of its original size), the latent information remains informative and helps achieve competitive FID scores.
**2. More baselines in Tables 1 and 2, such as flow matching and 1/2/3-Rectified Flow.**
Note, the OT-FM/I-CFM baselines in **Tab 1** and the SiT baseline in **Tab 2** employ the flow matching objective with differences in the parameterization of $x_t$ and the conditional vector field $u_t$. Specifically, OT-FM parameterizes $x_t$ as $\mathcal{N}(t x_1, 1-(1-\sigma) t)$, while I-CFM defines it as $\mathcal{N}(t x_1+(1-t)x_0, \sigma)$. The conditional vector field $u_t$ is $\frac{x_1-(1-\sigma)x_t}{1-(1-\sigma)t}$ for OT-FM, while it is simply $(x_1-x_0)$ for I-CFM. We also note that 1-Rectified Flow is equivalent to I-CFM.
We have added a comparison with 2/3-Rectified Flow via Reflow. We find that while Reflow achieves strong FID scores in the low-NFE regime, it does so at the cost of limiting peak performance at high NFE. We emphasize that Reflow is a supplementary technique applied on top of Rectified Flow Matching (RFM)—primarily aimed at fast sampling rather than improved sample quality. It also requires $N$ times longer training and a significantly larger fine-tuning dataset, where $N$ denotes the number of Reflow rounds. These differences make a direct comparison with our V-RFM less fair. Hence, RFM without Reflow is a more appropriate baseline. Additionally, Reflow can be applied to our method as well, potentially improving results at the cost of increased training overhead. We will clarify these points in the final version and include additional qualitative comparisons for a more precise evaluation.
|Methods |# Params |2 |5 |10 |50 |100 |1000 |Adaptive|
|:---------:|:--------:|:--------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|
|RFM w.o. Reflow / I-CFM |36.5M |168.654 |35.489 |13.788 |5.288 |4.461 |3.643 |3.659|
|RFM w. 1 Reflow |36.5M |7.512 |5.906 |5.513 |5.283 |5.276 |5.276 |5.275|
|RFM w. 2 Reflow |36.5M |7.559 |6.925 |6.776 |6.729 |6.733 |6.752 |6.752| | null | null | null | null | null | null |
SCISSOR: Mitigating Semantic Bias through Cluster-Aware Siamese Networks for Robust Classification | Accept (poster) | Summary: This paper presents a novel debiasing architecture that leverages siamese networks and clustering techniques to mitigate spurious correlations in learned embeddings. The proposed approach remaps the embedding space to discourage unwanted dependencies between inputs and outputs while preserving meaningful semantic clusters.
A key advantage of this method is that it eliminates the need for extensive data augmentation or text rewriting, making it a more efficient alternative for debiasing.
The architecture introduces a debiasing module designed to disrupt semantic shortcuts, which are often responsible for model biases. This module is integrated between a pre-trained model and the classification head.
Extensive experiments on both image and text datasets demonstrate the effectiveness of this approach, showcasing its ability to enhance generalization by reducing the impact of spurious correlations.
Claims And Evidence: The authors have done a good job in presenting lemmas on the existence of semantic biases, deriving insightful theoretical observations, and validating their claims empirically. The combination of theoretical derivations and experiments strengthens their argument and provides evidence for their claims.
Methods And Evaluation Criteria: While the paper effectively presents its method, there are some concerns regarding the reliance on Markov clustering. If the clustering does not accurately reflect the semantic structure of the embeddings, it may fail to suppress shortcuts effectively. Exploring alternative clustering methods such as Spectral Clustering, DBSCAN, or Gaussian Mixture Models could enhance robustness, particularly for larger models.
While the authors claim that larger models exhibit a weaker cluster effect, as indicated by a high Hopkins statistic, it is worth considering that this may be due to the difficulty of identifying clusters in high-dimensional spaces rather than an inherent weakening of the clustering effect itself. This could be a reason why SCISSOR seems to work best on smaller architectures.
Theoretical Claims: The paper’s theoretical contributions are well-structured, particularly in defining semantic biases and how they influence shortcut learning.
I have skimmed the proof of the lemmas in the appendix.
Experimental Designs Or Analyses: The experiments effectively demonstrate the performance improvements of SCISSOR across multiple benchmarks, showing significant gains over baselines. However, it would be valuable to evaluate alternative clustering techniques and compare their impact on model performance, particularly for larger architectures where clustering appears weaker.
Supplementary Material: I have read it all.
Relation To Broader Scientific Literature: This paper contributes to literature on spurious correlations and shortcut learning. The methodology aligns with recent work on debiasing techniques and latent space interventions.
Essential References Not Discussed: Not to the best of my knowledge.
Other Strengths And Weaknesses: Strengths: The proposed model is novel, especially in how it remaps the latent space to suppress spurious clusters. It also forces the model to focus on real discriminative features rather than cluster membership, improving generalization.
Weaknesses: The reliance on pre-trained embeddings being well-structured at the start of training could limit the method’s applicability to scenarios where embeddings are noisy or uninformative.
Other Comments Or Suggestions: 1. Typos:
1. Line 127, left column: "A employs LLMs"
2. Line 266 right column: "debasing" -> "debiasing"
2. What do the different background colors in the table in Step 2 of Figure 2 represent?
Questions For Authors: Have you considered alternative clustering techniques to improve performance on larger models?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **1. On Clustering Method (with Larger Models)**
We conducted an ablation experiment using DBSCAN with LLaMA3, the largest model we used. The results are as follows:
| Dataset | ACC (GYAFC) | F1 (GYAFC) | | ACC (Yelp) | F1 (Yelp) |
|---------------|------------|------------|--|------------|-----------|
| LLaMA3 | 89.37 | 0.89 | | 94.57 | 0.95 |
| LLaMA3 (w/ DBSCAN) | 89.91 | 0.90 | | 95.41 | 0.95 |
We observed that the improvement of DBSCAN and MCL on LLaMA3 is similar, both being approximately 1%. This indicates that SCISSOR exhibits robustness to variations in clustering algorithms.
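To make the ablation concrete, here is a minimal stdlib-only toy sketch of the DBSCAN procedure used as the alternative clustering step (illustrative only; the actual experiments presumably run a library implementation on LLaMA3 embeddings):

```python
import math
import random

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one cluster id per point, -1 for noise."""
    n = len(points)
    neighbours = [[j for j in range(n)
                   if math.dist(points[i], points[j]) <= eps]
                  for i in range(n)]
    labels = [None] * n
    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        if len(neighbours[i]) < min_pts:
            labels[i] = -1  # noise (may later be claimed as a border point)
            continue
        labels[i] = cluster
        frontier = list(neighbours[i])
        while frontier:
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point of this cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neighbours[j]) >= min_pts:  # core point: keep expanding
                frontier.extend(neighbours[j])
        cluster += 1
    return labels

rnd = random.Random(0)
blobs = [(rnd.gauss(cx, 0.1), rnd.gauss(cy, 0.1))
         for cx, cy in [(0, 0), (5, 5), (10, 0)] for _ in range(30)]
labels = dbscan(blobs, eps=1.0, min_pts=4)
print(len(set(labels) - {-1}))  # number of clusters found
```

Unlike MCL, DBSCAN leaves low-density points unassigned (label -1), which is one reason robustness across clustering algorithms is worth checking.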
**2. On Semantic Bias and Initial Embeddings**
Since most contemporary debiasing works rely on pre-trained language models, we followed their setup. In future work, we will consider debiasing methods that do not rely on this initial pre-trained information.
**3. On Background Colors in the Table in Step 2 of Figure 2**
We apologize for the confusion. The background colors are used to distinguish samples (i.e., Anchor, Positive, Intermediate, Negative). Their function is the same as that of the colors of the "Sun" and "Moon" markers. We will improve the readability of Figure 2 in the camera-ready version.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal; I will keep my accept score | Summary: ## Summary
This work aims to learn an adapter on pretrained representations to "filter out classification-irrelevant semantic features" and improve out-of-distribution robustness. The authors propose a complicated approach incorporating clustering, reweighting, contrastive learning, and the creation of "intermediate samples".
## Strengths
- Out-of-distribution generalization is a challenging and interesting topic.
## Weaknesses
- This work claims to be the "first to identify and demonstrate, both theoretically and empirically, that imbalances in the semantic distribution of samples can also lead to the shortcut problem." However, the link between imbalance and shortcut learning is not new. Theoretical work includes [1]; experimental works include GroupDRO and [3].
Please note that the "semantic distribution" notion in this paper does not make the claim novel, because one can treat the "semantic embedding", i.e. the representation of the pretrained model, as a preprocessed input and apply all the imbalance techniques from the literature.
- The proposed method is complicated and hard to understand; an algorithm table might help. Several parts of the method are unclear, including how intermediate examples are chosen and how datasets with multiple classes are supported.
- There is a lack of experimental comparison with naive baselines, such as simple reweighting, IRM [2], and their many variations. Tables 2 and 3 are not very helpful.
[1] Chaudhuri, K., Ahuja, K., Arjovsky, M. & Lopez-Paz, D.. (2023). Why does Throwing Away Data Improve Worst-Group Error?. Proceedings of the 40th International Conference on Machine Learning
[2] Arjovsky, M., Bottou, L., Gulrajani, I., & Lopez-Paz, D. (2019). Invariant risk minimization. arXiv preprint arXiv:1907.02893.
[3] Kirichenko, P., Izmailov, P., & Wilson, A. G. (2022). Last layer re-training is sufficient for robustness to spurious correlations. arXiv preprint arXiv:2204.02937.
Claims And Evidence: check summary
Methods And Evaluation Criteria: check summary
Theoretical Claims: check summary
Experimental Designs Or Analyses: check summary
Supplementary Material: no
Relation To Broader Scientific Literature: check summary
Essential References Not Discussed: check summary
Other Strengths And Weaknesses: check summary
Other Comments Or Suggestions: check summary
Questions For Authors: check summary
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: **1. On the Lack of Novelty**
We respectfully clarify that, to our knowledge, shortcut learning stemming from a semantic imbalance in pre-trained models (which requires unsupervised analysis) is not widely studied. The theoretical reference [1] you mentioned deals with label quantity bias, rather than semantic distribution. In fact, our experiments (lines 192-196) use balanced label distributions. Likewise, GroupDRO also focuses on imbalances in categories, not the distribution of semantic features. The debiasing in our paper refers to mitigating shortcuts that occur universally across all groups in the training set, rather than focusing on the worst group. Meanwhile, [2] requires prior knowledge of shortcuts and access to an unbiased dataset, both of which can be impractical in real-world scenarios [3][4][5].
Consequently, our assertion that “an imbalance in semantic distribution can lead to shortcuts” remains valid and distinct from the biases explored in [1], GroupDRO, and [2].
**2. On Baseline Comparisons**
As noted in our Introduction, existing shortcut-mitigation research typically addresses label imbalance or focuses on token/pixel-level biases. Please note that direct reweighting methods, for example, are not applicable here because our training samples are already balanced by label. This underscores our key statement: “Shortcuts can persist even when labels are balanced.”
To address concerns about baselines, we have added a comparative experiment with IRM; please see our “to reviewer gyYJ” response for details. We also compare our approach with RAZOR, a leading method that already surpasses several simpler baselines. Because our method outperforms RAZOR, we did not repeat comparisons with those baselines, especially since their focus is on token-level biases, whereas our work targets semantic-level biases.
**3. On the Usefulness of Tables 2 and 3**
In response to concerns about the value of Tables 2 and 3, our main motivation arises from the imbalance in semantic distribution, which the Hopkins Statistic measures effectively. By comparing Tables 4 and 5, we illustrate that our method’s effectiveness increases as semantic imbalance grows, demonstrating that our semantic debiasing approach genuinely shifts how samples are represented.
**4. Clarity Concerns**
**4.1 Choosing Intermediate Samples**
In lines 214-215, we explain that “Intermediate samples share the same label and cluster as the anchor,” and this process is also illustrated in Figure 2. Nevertheless, we will try to better highlight this aspect in our paper.
**4.2 Multi-Class Classification**
Our “Problem Formulation” section does not restrict us to binary classification. Figure 1 uses a binary example solely for illustrative purposes. In fact, we have already evaluated multi-class scenarios in the paper, such as the Not-MNIST dataset, which has ten classes (A-J) as noted in lines 291-293.
Finally, we emphasize that shortcut learning is multifaceted; recent works on style-based shortcuts [6] and in-context learning shortcuts [7] highlight various dimensions of this issue. We believe their contributions are valuable, as they address different forms of shortcuts that are similar to our notion of semantic shortcuts.
To conclude, we emphasize that our work tackles semantic shortcuts, a fundamentally different challenge from label-based or count-based bias mitigation. Addressing these subtler cues demands deeper scrutiny than simply balancing label distributions or removing obvious spurious correlations. Consequently, our approach necessitates unsupervised techniques to effectively uncover and mitigate these hidden biases.
[1] Chaudhuri, K., Ahuja, K., Arjovsky, M. & Lopez-Paz, D. (2023). Why does Throwing Away Data Improve Worst-Group Error? Proceedings of the 40th International Conference on Machine Learning
[2] Kirichenko, P., Izmailov, P., & Wilson, A. G. (2022). Last layer re-training is sufficient for robustness to spurious correlations. arXiv preprint arXiv:2204.02937.
[3] Weizhi Xu, Qiang Liu, Shu Wu, Liang Wang (2023), Counterfactual Debiasing for Fact Verification
[4] Shuo Yang, Bardh Prenkaj, Gjergji Kasneci. (2025). RAZOR: Sharpening Knowledge by Cutting Bias with Unsupervised Text Rewriting
[5] Zeming Chen, Qiyue Gao, Antoine Bosselut, Ashish Sabharwal, Kyle Richardson. (2024). DISCO: Distilling Counterfactuals with Large Language Models
[6] Yuqing Zhou, Ruixiang Tang, Ziyu Yao, Ziwei Zhu. (2024). Navigating the Shortcut Maze: A Comprehensive Analysis of Shortcut Learning in Text Classification by Language Models
[7] Joonwon Jang, Sanghwan Jang, Wonbin Kweon, Minjin Jeon, Hwanjo Yu. (2024). Rectifying Demonstration Shortcut in In-Context Learning | Summary: This paper proposes SCISSOR, a debiasing approach that mitigates semantic biases in classifiers by disrupting semantic clusters that create shortcut learning. Using a Siamese network with Markov Clustering, it creates contrastive learning pairs to remap the semantic space, and through experiments, showed strong improvements across six models and four datasets.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The benchmark datasets and model selection cover vision and language, making the results broadly applicable. The evaluation criteria align well with the problem of semantic bias mitigation.
Theoretical Claims: The paper outlines two lemmas that explain how semantic clusters affect classification. Formal proofs are also included in the appendix.
Experimental Designs Or Analyses: The experimental design is well-structured, which includes comparison with related methods across a wide range of benchmarks and backbones. Multiple metrics are reported, and a computational efficiency analysis is also included.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: In comparison to previous works, this work is motivated by the fact that balanced data can still have semantic biases. The proposed method is instead designed to remap the embedding space itself, complementing both dataset-centric and DRO-based methods by specifically targeting label-skewed clusters.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Please see sections above.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive feedback on our work. We sincerely appreciate your time and support. We'll address any further questions you should have during the discussion period to improve our paper, and make it through the finish line. | Summary: This work introduces SCISSOR (Semantic Cluster Intervention for Suppressing Shortcut), a Siamese network-based debiasing approach that remaps the semantic space by discouraging latent clusters exploited as shortcuts. Shortcut learning is a critical issue that undermines model generalization to out-of-distribution data. Through extensive evaluation on various models and benchmarks, SCISSOR demonstrates its effectiveness in mitigating shortcut learning and promoting more robust machine learning models.
Claims And Evidence: The paper’s main claim that semantic bias can be mitigated through cluster-aware Siamese networks, is grounded in both theoretical observations and empirical experiments.
Methods And Evaluation Criteria: The semantic cluster intervention makes sense for suppressing shortcuts, with its performance evaluated using accuracy and F1 metrics on multiple datasets that are suitable for this assessment.
Theoretical Claims: I reviewed the lemma discussing the existence of semantic bias and the theory-grounded observations, and they appear logically sound and correct to me.
Experimental Designs Or Analyses: The experimental designs involve six models, and the results and analysis demonstrate both the validation of semantic shortcuts and the effectiveness of the proposed method.
Supplementary Material: I reviewed the lemma’s proofs and the discussions explaining why the lemma is relevant for demonstrating bias, along with the additional experiments. These supplementary materials support the main claim of the biases are commonly observed, and the importance of mitigating semantic bias through cluster-aware Siamese networks.
Relation To Broader Scientific Literature: This work is closely related to shortcut learning for generalization under distribution shifts, biases, and spurious correlations issues that are critical in the field of robust machine learning.
Essential References Not Discussed: Mitigating shortcut learning and spurious correlations is fundamental to achieving robust machine learning. Existing literature (for example [1, 2]) has introduced various strategies to address these challenges. A thorough, in-depth comparison with such works is necessary to highlight the position of the proposed approach in the broader research field.
[1] Invariant Risk Minimization. 2019.
[2] Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization. 2020.
Other Strengths And Weaknesses: Advantages
1. This work addresses the critical issue of shortcut learning in robust machine learning, which is an important and challenging research area.
2. The proposed method appears reasonable and is supported by experimental details that demonstrate its effectiveness.
Weaknesses
1. The theoretical analysis primarily focuses on identifying biases and discussing related observations. A more in-depth and stylized analysis of the cluster-aware Siamese network for robust classification would further strengthen the theoretical foundation.
2. Figure 1 illustrates sentiment classification, but more visualizations using real data would help showcase the semantic space and better demonstrate the proposed approach’s effectiveness.
3. A comprehensive evaluation on large-scale, real-world datasets (such as [1, 2]) would provide stronger evidence of the method’s practical applicability.
[1] WILDS: A Benchmark of in-the-Wild Distribution Shifts. ICML 2021.
[2] In Search of Lost Domain Generalization. 2020.
Other Comments Or Suggestions: None
Questions For Authors: Refer to the Weaknesses section for more detailed questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: **1. We added a comparative experiment with Invariant Risk Minimization (IRM) [1] and will include the results as well as the corresponding references in the Related Work section.**
| Dataset | ACC (GYAFC) | F1 (GYAFC) | | ACC (Yelp) | F1 (Yelp) |
|---------------|------------|------------|--|------------|-----------|
| BERT | 70.40 | 0.70 | | 81.17 | 0.81 |
| BERT (w/IRM) | 77.82 | 0.77 | | 87.29 | 0.87 |
| BERT (w/Ours) | 78.20 | 0.78 | | 90.65 | 0.91 |
| RoBERTa | 73.66 | 0.73 | | 75.76 | 0.76 |
| RoBERTa (w/IRM) | 79.61 | 0.79 | | 84.53 | 0.84 |
| RoBERTa (w/Ours) | 81.34 | 0.81 | | 87.79 | 0.88 |
| LLaMA | 89.40 | 0.89 | | 95.00 | 0.95 |
| LLaMA (w/IRM) | 83.65 | 0.83 | | 94.87 | 0.95 |
| LLaMA (w/Ours) | 89.46 | 0.89 | | 95.20 | 0.95 |
|Dataset|ACC (Chest-XRay)|F1 (Chest-XRay)| |ACC (Not-MNIST)|F1 (Not-MNIST)|
|---------------|------------|------------|--|------------|-----------|
| ViT | 72.38 | 0.72 | | 88.87 | 0.89 |
| ViT (w/IRM) | 80.47 | 0.80 | | 89.37 | 0.89 |
| ViT (w/Ours) | 83.92 | 0.84 | | 90.89 | 0.91 |
| SWIN | 84.54 | 0.84 | | 92.72 | 0.93 |
| SWIN (w/IRM) | 76.92 | 0.76 | | 92.12 | 0.92 |
| SWIN (w/Ours) | 88.65 | 0.89 | | 92.74 | 0.93 |
| DINOv2 | 68.94 | 0.66 | | 85.40 | 0.85 |
| DINOv2 (w/IRM) | 69.64 | 0.67 | | 85.39 | 0.85 |
| DINOv2 (w/Ours) | 73.59 | 0.72 | | 85.75 | 0.86 |
We observe that although IRM does mitigate shortcuts in many cases, our method still significantly outperforms it across all tests. Moreover, IRM performs worse than the baseline on small datasets, such as Chest-XRay (w/SWIN) and GYAFC (w/LLaMA). We attribute this to IRM assigning excessive training weight to features that remain invariant, which prevents other useful features from being accurately identified and utilized.
**2. Adding a Visual Example of Semantic Bias**
We used PCA to reduce the BERT embeddings of the Yelp dataset to two dimensions and visualized three clusters, as shown in the anonymous GitHub repository linked in the paper. In the first image (https://anonymous.4open.science/r/SCISSOR-3F55/unb.svg), the colors represent cluster labels, while in the second (https://anonymous.4open.science/r/SCISSOR-3F55/unb_l.svg), colors indicate task labels (positive/negative).
We observe that different clusters are located in distinct regions. Furthermore, the task labels within each cluster are imbalanced: in two of the three clusters, positive samples are more prevalent, while in the remaining cluster, negative samples dominate. Nevertheless, the total number of positive and negative samples across all three clusters is equal.
In this scenario, negative samples belonging to clusters with more positive samples are theoretically more likely to be misclassified as positive by the models (as suggested by the Lemma 1 and Lemma 2), leading to the semantic shortcut we discuss.
Furthermore, in the image (https://anonymous.4open.science/r/SCISSOR-3F55/pca_SCISSOR.svg), we show the semantic representations of samples after being remapped by our debiasing module. Using SCISSOR, we observe that the task-irrelevant cluster information has been successfully removed. In contrast, classification information is preserved, naturally leading to a distinct classification boundary among the samples. To better illustrate the separability of the samples, we visualize the three-dimensional PCA results of the output of our debiasing module (https://anonymous.4open.science/r/SCISSOR-3F55/pca_SCISSOR_3d.svg).
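For readers who want to reproduce this kind of projection, here is a minimal numpy-based PCA sketch (illustrative only, on toy stand-ins for the BERT embeddings; this is not the authors' code):

```python
import numpy as np

def pca(X, k=2):
    """Project rows of X onto their top-k principal components."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
# Toy stand-in for sentence embeddings: three Gaussian clusters in 16-D
X = np.vstack([rng.normal(c, 0.2, size=(60, 16)) for c in (0.0, 2.0, 4.0)])
Z = pca(X, k=2)
print(Z.shape)  # one 2-D coordinate per sample, ready for a scatter plot
```

Coloring `Z` once by cluster label and once by task label, as done in the linked images, then makes any label skew within clusters directly visible.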
**3. Experiments on Large-Scale benchmarks**
Thank you for these valuable pointers. We are running experiments on the WILDS and DomainBed benchmarks. However, due to the large scale of these experiments, we may not be able to obtain results within the rebuttal period (we might make it within the discussion period). Nevertheless, we emphasize that our current experiments already include 4 commonly used datasets, which we believe provide a solid basis for evaluating our approach and demonstrating its reliability.
[1] Arjovsky, M., Bottou, L., Gulrajani, I., & Lopez-Paz, D. (2019). Invariant risk minimization. arXiv preprint arXiv:1907.02893. | null | null | null | null | null | null |
An Efficient Search-and-Score Algorithm for Ancestral Graphs using Multivariate Information Scores for Complex Non-linear and Categorical Data | Accept (poster) | Summary: This paper introduces a greedy search-and-score algorithm for Ancestral Graphs (AGs), which contain directed and bidirected edges to account for latent variables. The key innovation is a normalized likelihood score based on multivariate information over ac-connected subsets (subsets of vertices connected through collider paths within their ancestor set). The proposed method outperforms state-of-the-art causal discovery techniques (MIIC, M3HC, and GFCI) on benchmark datasets.
Claims And Evidence: - The claim that the likelihood of ancestral graphs can be explicitly decomposed using multivariate cross-information over ac-connected subsets is formally proven in Theorem 1.
- The claim that proposed search-and-score method outperforms MIIC, M3HC, and GFCI on benchmark datasets is supported by experimental results (Figures 2 and 3).
Methods And Evaluation Criteria: - The methodology is well-structured: derives a novel likelihood function for ancestral graphs, and provides an efficient search-and-score algorithm
- Experiments include synthetic Bayesian networks and benchmark datasets. Comparisons with state-of-the-art methods (MIIC, GFCI, M3HC) are well-presented.
Theoretical Claims: The theoretical contributions include:
- Theorem 1: Likelihood decomposition for ancestral graphs.
- Corollary 2: ac-connected subsets define Markov equivalence classes.
The mathematical formulations and proofs appear sound.
Experimental Designs Or Analyses: - The experimental design is generally valid, with well-chosen benchmarks and valid evaluation metrics.
- The comparisons with MIIC, M3HC, and GFCI are meaningful, as these represent reasonable baselines.
Supplementary Material: I did not check the supplementary material carefully.
Relation To Broader Scientific Literature: Structure learning is widely used in scientific research for exploratory analysis. Developing efficient algorithms for this task is significant for more accurate and scalable discovery.
Essential References Not Discussed: Not sure due to limited familiarity.
Other Strengths And Weaknesses: **Strengths**:
- It provides a novel likelihood function decomposition for ancestral graphs.
- The proposed method outperforms state-of-the-art methods on complex datasets.
- A computationally efficient heuristic for large graphs is provided.
**Weaknesses**:
- The paper is dense in notations, concepts, and terminology. It is extremely hard to process for average readers.
Other Comments Or Suggestions: - It would be better to provide a simple running example to illustrate the proposed method.
Questions For Authors: - Is there any theoretical guarantee can be said on the proposed scoring algorithm?
- How do we expect the sparsity of the graph to affect the performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive review and interesting suggestions to improve the manuscript.
We agree that a running example could illustrate the proposed method and its two-step implementation. We will include such an example in the final revised version to outline the main ideas of the method and facilitate the understanding of the paper, which might be a bit "dense in notations, concepts, and terminology". We have posted a first tentative Figure with this running example at the following double-blind github repository: https://tinyurl.com/34ej6j78
Concerning the theoretical guarantees of the approach, we will also clarify in the revised version the conditions under which theoretical guarantees exist. They concern models with (i) a maximum of two-collider paths in the method setting presented here and, more generally, the use of the proposed likelihood score (Eq.12) in conjunction with (ii) an exhaustive search-and-score method over small-sized MAG models or (iii) an MCMC algorithm to efficiently sample larger MAG models, as outlined in our response to Reviewer 7jRW.
Finally, following your suggestion, we will discuss the effect of the sparsity of the graphs on the method performance, which is already visible in Figure E.2. Namely, graphs with lower average connectivity lead to better performance for all methods, including MIIC_search&score. This is expected, as sparser models correspond to (much) fewer variable combinations to consider (e.g. to estimate local likelihood contributions of relevant ac-connected subsets), leading to more robust rankings between alternative sparse models and therefore to better performance at predicting correct models.
---
Rebuttal Comment 1.1:
Comment: I appreciate the response from the authors. They address some of my questions. Due to my limited knowledge of this field, I keep my original score. By the way, the response has duplicated paragraphs.
---
Reply to Comment 1.1.1:
Comment: We would like to thank all Reviewers for their time, expertise and constructive feedback.
We have addressed all comments and suggestions in our detailed responses above, notably with new benchmark comparisons with DAG-GNN included in the revised Figure E2 at the following double-blind github repository: https://tinyurl.com/34ej6j78.
In particular, we have addressed both questions by this Reviewer concerning 1) theoretical guarantees and 2) the effect of sparsity. We apologize for duplicating by mistake the last paragraph in the last reply. We also followed the interesting suggestion to provide a running example with an additional Figure available at the following double-blind github repository: https://tinyurl.com/34ej6j78
(We have uploaded a revised legend concerning the edge orientations). | Summary: This paper proposes a greedy hybrid search-and-score algorithm to learn ancestral graphs from data with some latent variables marginalized out.
For this purpose, the authors first provide an explicit decomposition of the likelihood function of ancestral graphs in terms of multivariate cross-information over relevant 'ac-connected' subsets of variables. Then, the authors use this decomposition to design a two-step approach using local information scores restricted to surrounding vertices and edges.
The proposed method is shown to outperform several state-of-the-art causal discovery methods.
---
## update after rebuttal:
My concerns about the usage of MAGs and the generalizability to cases with selection bias have been well addressed. My concern about identifiability, though still present, is somewhat alleviated by the comparisons to other score-based works. I have therefore raised my score from 3 to 4.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I read the theorems, the running examples in Figure 1 in details, and skimmed through the proofs. The results seem correct to me but I cannot guarantee.
Experimental Designs Or Analyses: Yes. The experimental design to show the empirical gain on causal discovery accuracy (especially on recall) seems reasonable to me.
Supplementary Material: I skimmed through the proofs.
Relation To Broader Scientific Literature: /
Essential References Not Discussed: /
Other Strengths And Weaknesses: Strengths:
1. The problem studied (greedy learning of graphs involving latent variables) is very crucial and needed in the field.
2. The proposed decomposition of the likelihood function of ancestral graphs is new to me. The decomposition over ‘ac-connected’ subsets of variables show a clear trajectory generalized from DAG case (Figure 1 G and F).
3. The experimental results show a generally better performance than other existing MAG learning algorithms.
Weaknesses:
1. There lacks an identifiability or correctness guarantee of the results.
- It seems that there is no guarantee on the results (at least I did not see any theoretical claims in Section 3), but please correct me and formally state such a guarantee if I am wrong.
- The authors state that, for computational efficiency, the proposed algorithm only involves two steps searching over local scores of each node and edge. Then, in the first place, when ignoring efficiency, do the authors have an oracle version of the algorithm that traverses ac-components of all sizes and has a correctness guarantee? Or, can the proposed likelihood decomposition be applied to some exhaustive search method that has such a guarantee?
- Following the proposed score decomposition, are there any good properties of the score to support the greedy search (e.g., that such greedy search won't lead to local optima), just like the local consistency of the BIC score in the Bayesian network and linear Gaussian case?
2. The proposed score decomposition, though being new, needs more justification.
- The authors say that their ac-component-based decompositions "do not rely on the head-and-tail factorization but coincide with the parametrizing sets (Hu & Evans, 2020) derived from the head-and-tail factorization...". But what is the exact advantage of this score decomposition over existing ones, both in the characterization itself and for estimation in the algorithms? Is the "head-and-tail factorization" something unwanted that we need to avoid using?
- Other existing scores under latent variables, but with specific parametric assumptions like discrete variables, exponential or stratified Gaussian families, should also be discussed. It would be interesting to see how this information theoretical score characterization generalizes them or differs from them.
Other Comments Or Suggestions: Throughout this paper the term "MAG" is used. However, the authors actually only allow for latent variables but not selection bias, and thus only -> and <-> edges occur, without -- edges. In this case, for convention, maybe the authors should turn to the term "ADMG" instead? Or are there any specific reasons to use "MAG"? Can the score characterization and decomposition also be applied to cases with selection bias?
Questions For Authors: /
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your detailed review. Thank you also for underlining that "the problem studied (greedy learning of graphs involving latent variables) is very crucial and needed in the field" and that "the decomposition over ‘ac-connected’ subsets of variables shows a clear trajectory generalized from DAG case (Figure 1 G and F)". We reply to your comments and suggestions below.
Concerning your first point about the guarantees of the results, it is true that the greedy approach does not come with guarantees unless the actual model has no ac-connected subsets with more-than-two-collider paths. In this case, the local decomposition gives a correct estimate of the global likelihood, based on Proposition 3, due to the local consistency of the likelihood score (Eq.12) just like in the Bayesian network case. However, as suggested by this reviewer, an oracle version of the approach could also be proposed, as the general likelihood decomposition (Eq. 12) could in principle be used in conjunction with an exhaustive search-and-score algorithm over MAGs or PAGs, which can be generated rather efficiently (Hu et al 2020). Alternatively, and in practice for graphs with more than about 10-15 nodes, an MCMC algorithm based on the likelihood decomposition (Eq.12) could also be used to efficiently search for high-scoring MAGs or PAGs.
The connection with head-and-tail factorization is discussed in some details in Appendix C. In brief, these head-and-tail factorizations introduced by Richardson in 2009 are definitely useful to enable the parametrization of the joint probability distribution with independent parameters for ancestrally closed subsets of vertices (see Richardson 2009). However, head-and-tail factorizations are also somewhat involved, as they do not correspond to a single factorized equation but instead to multiple factorized equations for a given ancestral graph. Moreover, head-and-tail factorizations (as well as the c-component decompositions introduced by Tian & Pearl 2002) cannot simply be used to estimate the likelihood function in terms of empirical distribution p(x), as shown and discussed in Appendix C. This limits their utility for the purpose of scoring MAGs and PAGs. Hence, the likelihood decomposition over ‘ac-connected’ subsets of variables (Eq.12) is particularly useful for complex non-linear data (Figs. 2 & E2), as discussed in our reply to Reviewer kxfn, as BIC scores have been shown to give good approximation of likelihood functions for linear Gaussian data.
Concerning the definition of MAGs, it is true that the original definition of MAGs in (Richardson & Spirtes 2002) also includes undirected edges, which are obtained by conditioning on selection variables in DAGs, while bidirected edges are obtained by marginalizing common causes in DAGs. However, following the usual terminology in the field (eg adopted in Triantafillou et al 2016 or Hu et al 2020), we also designate MAGs without undirected edges as MAGs throughout the paper, even though they are in fact a subclass of MAGs. Note, however, that this subclass of MAGs without undirected edges is more restricted than the class of Acyclic Directed Mixed Graphs (ADMGs), which also contain directed and bidirected edges but are not necessarily marginalized from DAGs. In particular, ADMGs, unlike MAGs, allow for "almost directed cycles", where X->...->Y with X<->Y. In addition, the proposed likelihood decomposition (Eq.12) can only be applied to MAGs without undirected edges, as shown in the third section (iii) of the proof, Appendix B.
Yet, as pointed out by this reviewer, there are other existing scores under latent variables for specific parametric assumptions, such as discrete variables, exponential or stratified Gaussian families. We will recall this in the final revised paper. In particular, discrete chain graph models (Drton 2009), fully bidirected graph models (Drton and Richardson 2008), and discrete nested Markov models (Richardson et al. 2017) have been shown to be curved exponential models, which can be scored consistently using BIC scores. Interestingly, one extension of these BIC scores has recently been proposed by (Bellot, Zhang, Bareinboim, AAAI 2024) to allow for the distinction between different ADMGs from (Verma & Pearl 1990), see Fig. 1c & d in (Bellot, Zhang, Bareinboim, AAAI 2024). These ADMGs imply the same set of conditional independence constraints and yet are distinguishable because they imply an equality between different functionals of the probability distribution, see Eq. 1 in (Bellot, Zhang, Bareinboim, AAAI 2024). It would definitely be interesting, in future follow-up studies, to further explore how our information-theoretic score (Eq.12) generalizes or differs from these other existing scores under latent variables, as suggested by this reviewer. | Summary: This paper presents a greedy search-and-score algorithm for ancestral graphs with latent variables, using multivariate information for efficiency. Experimental results verify that it outperforms existing methods in causal discovery like M3HC and MIIC.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I looked through the theoretical claims and did not find any specific issues.
Experimental Designs Or Analyses: Yes. The benchmark datasets come from hiding several variables from existing datasets. This should be reasonable provided there is no relationship between the proposed method and this setting. However, as mentioned in the paper, *"the proposed method is limited to ac-connected subsets of vertices with a maximum of two-collider paths"*. Is the hiding process related to this constraint? Is it possible that the proposed approach underperforms the baselines in other settings?
Supplementary Material: Yes, I looked through the supplementary material. It contains some proofs and experimental settings.
Relation To Broader Scientific Literature: The contributions have potential for the broader scientific literature, where causal discovery with latent variables is important.
Essential References Not Discussed: I do not know any related works that are essential to understanding the paper.
Other Strengths And Weaknesses: - The paper is well-written
- The theoretical claims are solid
Other Comments Or Suggestions: I do not have any other comments or suggestions
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your careful review and for underlining that the "theoretical claims are solid" and "the contributions have potential for the broader scientific literature, where causal discovery with latent variables is important".
Concerning your question about the generation of the benchmark datasets, the proposed setting based on hiding some variables from DAG models is not related to the limitation of the method in terms of a maximum of two-collider paths. Indeed, this setting comes from the very definition of ancestral graphs (Richardson and Spirtes 2002), which are obtained by marginalizing or conditioning on DAG vertices, as further discussed in our reply to Reviewer 7jRW. The same benchmark setting based on hiding some vertices in DAGs has been used in many earlier studies on constraint-based or score-based methods to discover MAGs or their Markov equivalence class representatives, the PAGs.
In the present article, we have used two versions of this usual setting projecting DAGs onto MAGs. First, one simple version with toy models and, then, one more general version for all other more advanced benchmarks.
The first simple setting introduces hidden common causes explicitly in benchmark toy models, as detailed in Appendix E. Fig. E.1 shows the three simple ancestral models we have used to test whether MIIC_search&score's orientation scores (Table 1) effectively predict bidirected orientations when the end nodes do not share the same parents (Model 1), share some parents (Model 2), or when the bidirected edge is part of a longer-than-two-collider path (Model 3). The predictions of the edge orientation scores are summarized in Table E.1 and show good predictions for large enough datasets.
The second more general setting, which has also been used in a number of earlier similar studies benchmarking constraint-based or score-based methods including latent variables, is to hide a fraction of the nodes from an underlying DAG model to reconstruct the corresponding MAG and PAG, which can be theoretically obtained. The results are reported in Figs. 2 & 3 and Figs. E2 & E3.
Importantly, the benchmark PAGs used to score the causal discovery methods with increasing proportions of latent variables (Figs. 2 & 3 and Figs. E2 & E3) include not only bidirected edges originating from hidden common causes but also additional directed or undirected edges arising, in particular, from indirect effects of hidden variables with observed parents. Irrespective of their orientations, all these additional edges originating from indirect effects of hidden variables generally correspond to weaker effects (i.e. lower mutual information of indirect effects due to the Data Processing Inequality) and are more difficult to uncover than the edges of the original DAG model without hidden variables. This explains the steady decrease in recall for complex ancestral models with higher proportions of hidden variables, while precision remains essentially unaffected, Figs. 2 & E2 and Figs. 3 & E3 (this is clearly apparent for complex random models with average degree 5 and complex real-world models, Insurance, Barley, Mildew). | Summary: This paper presents a novel search-and-score algorithm for discovering ancestral graphs - a class of graphical models used to represent causal relationships with latent variables. The key contribution is a new normalized likelihood score based on multivariate information measures applied to ac-connected subsets of variables.
Claims And Evidence: Most claims are supported by clear and convincing evidence.
Here are some potential limitations:
* In linear Gaussian settings, the method performs worse than M3HC and GFCI.
* The paper does not discuss how the method scales beyond moderate-sized graphs (e.g., 100+ nodes).
* The likelihood function approximation assumes limited higher-order dependencies, which may fail in highly non-linear causal systems.
Methods And Evaluation Criteria: Yes. The proposed method is evaluated on both Simulated datasets (both continuous and categorical) and Real-world datasets (Alarm, Insurance, Barley, Mildew). The Evaluation Metrics include Precision, Recall, F1-score, and Computational runtime.
The results would be more complete if they also discussed how the method scales beyond moderate-sized graphs (e.g., 100+ nodes).
Also, there is a lack of discussion on the robustness to noise and missing data.
Theoretical Claims: I did not check the proofs.
Experimental Designs Or Analyses: The experimental designs and analyses are reasonable in general. Two missing parts: 1. No analysis of performance on very large graphs. 2. No ablation study to separate the impact of local vs. global refinement steps.
Supplementary Material: I did not look closely at the supplementary material.
Relation To Broader Scientific Literature: This paper builds on Bayesian networks and ancestral graph theory, and is closely related to mutual information-based causal discovery
Essential References Not Discussed: The paper does not cite some recent scalable causal discovery methods.
Other Strengths And Weaknesses: Strengths
* The paper introduces a new likelihood function based on multivariate cross-information over ac-connected subsets.
* The local node-based scoring + edge refinement strategy makes causal discovery more scalable than exhaustive search.
* The paper is well-structured and easy to follow, with a clear explanation of the proposed method.
Weaknesses
* Scalability to large graphs is unclear
* The method underperforms M3HC and GFCI in linear Gaussian models, but the reason for this is not discussed.
* No comparison to deep learning-based approaches
Other Comments Or Suggestions: N/A
Questions For Authors: The paper evaluates the proposed method on relatively small- to medium-sized graphs. How does the approach scale to larger graphs (e.g., 100+ nodes)?
The experimental results indicate that the method performs worse than M3HC and GFCI in linear Gaussian models. Could you clarify why this is the case?
The paper mainly compares the proposed method with traditional search-and-score approaches. How does it compare with deep learning-based causal discovery methods such as DAG-GNN (Zheng et al., 2018) or causal representation learning (Ke et al., 2019)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed review and for underlining that "the paper is well-structured and easy to follow, with a clear explanation of the proposed method" and that "most claims are supported by clear and convincing evidence".
- Following your suggestions, we have extended our benchmarks to larger networks including up to 150 nodes and performed additional comparisons with the DL-based causal discovery method DAG-GNN (Zheng et al 2019). The results are included in the revised Fig.E2 accessible at the following double-blind github repository: https://tinyurl.com/34ej6j78
- MIIC_search&score is shown to outperform all other tested causal discovery methods on complex non-Gaussian datasets, including non-linear couplings between variables, while GFCI and DAG-GNN are the best performers on linear Gaussian datasets. These results demonstrate the clear advantage that GFCI, M3HC and DAG-GNN gain from assuming linear Gaussian distributions when analyzing multivariate Gaussian datasets. However, they also highlight the clear limitation of this assumption when analyzing more complex datasets with non-linear couplings between variables, which go beyond the variable transformations of post-nonlinear causal models. By contrast, MIIC_search&score and MIIC make no particular assumption on the data distributions and achieve similarly good performance across a broad range of Gaussian or non-Gaussian multimodal distributions (Figs 2 & E2) as well as on complex categorical datasets (Figs 3 & E3).
- MIIC_search&score scalability is primarily limited by the quadratic complexity of MIIC with respect to the number of nodes (see Fig S5 in Verny et al 2017). This is actually optimal, considering that MIIC tests all pairs of variables for conditional independence (with only a small time increase when including latent variables, Fig S5 in Verny et al 2017, thanks to MIIC's greedy approach). By contrast, the two-step search-and-score scheme of MIIC_search&score is essentially linear in the numbers of nodes (step 1) and edges (step 2) for a fixed degree. In practice, Steps 1 and 2 take a similar running time as MIIC does to provide the starting graph for MIIC_search&score. We will discuss the method's scalability in the final revised version of the paper and also cite "some recent scalable causal discovery methods", notably those based on differentiable causal discovery and continuous optimization techniques (eg Lopez et al NeurIPS 2022, Montagna et al CLeaR 2023, Amin et al ICML 2024).
- Concerning "a lack of discussion on the robustness to noise and missing data", we have actually performed a bootstrap sensitivity analysis to sampling noise in Fig. E3, which shows that our method is rather robust to sampling noise. We will extend the discussion about these results in the final revised version of the paper. Likewise, considering that MIIC allows for missing data, MIIC_search&score can also allow for missing data. However, we have not investigated this specificity in the present paper which focuses on the comparison of MIIC_search&score with alternative approaches, such as M3FC, GFCI and DAG-GNN, that do not allow for missing data.
- Finally, we are not sure we understand the following two comments. We would be grateful for further clarification if our tentative responses below fail to address your concerns. Here are the two points:
- "The likelihood function approximation assumes limited higher-order dependencies, which may fail in highly non-linear causal systems." The likelihood function approximation implemented in the two-step algorithm concerns the maximum length of the collider paths defining the ac-connected subsets of vertices included in the likelihood function estimate (Eq.12). Hence this likelihood approximation actually concerns the hidden variables from the underlying DAG models, not the linear or non-linear nature of the causal relations. In particular, vertices with many parent nodes define large "star-like" ac-connected subsets whose likelihood contributions are all included in MIIC_search&score's two-step likelihood estimate (ie independently from their linear or non-linear combinatorial relations with their parents). Likewise, the approach recovers the correct likelihood of Bayesian networks (Eq.2) in absence of latent variables (ie regardless of the linear or non-linear nature of the underlying relations between variables).
- "No ablation study to separate the impact of local vs. global refinement steps." It seems to us that an "ablation study" might not be required here as both MIIC and MIIC_search&score are intrinsically local methods without "local vs global refinement steps". In particular, MIIC_search&score's two-step approximation effectively separates the selection of relevant edges (step 1) and their orientation (step 2). Ablating the first step tends to retain more FP edges (with a corresponding loss in Precision), while ablating the second step returns an undirected graph. | null | null | null | null | null | null |
Efficient Network Automatic Relevance Determination | Accept (poster) | Summary: The paper introduces **Network Automatic Relevance Determination (NARD)**, an extension of Automatic Relevance Determination (ARD) designed for linear probabilistic models. NARD aims to simultaneously model sparse relationships between input features X and output responses Y, while capturing correlations among the outputs Y. The method employs a **matrix normal prior** with a sparsity-inducing parameter to identify and discard irrelevant features, thereby promoting sparsity in the model.
The algorithm iteratively updates both the precision matrix and the relationship between Y and refined inputs. To address computational inefficiencies associated with the high cost per iteration, the authors propose two enhancements:
1. **Sequential NARD**: Evaluates features sequentially to reduce computational overhead.
2. **Surrogate Function Method**: Uses an efficient approximation of the marginal likelihood, simplifying determinant and matrix inverse calculations.
By combining these approaches, the computational complexity is further reduced. The paper demonstrates that these methods achieve significant improvements in computational efficiency while maintaining comparable predictive performance on both synthetic and real-world datasets.
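As a point of reference for the ARD mechanism underlying NARD: in the single-output case, each regression weight receives its own prior precision α_j, and maximizing the marginal likelihood drives α_j to infinity for irrelevant features, which prunes them. A minimal sketch with MacKay-style fixed-point updates (the classical building block only, not the paper's matrix-variate NARD; the cap and tolerances are illustrative choices):

```python
import numpy as np

def ard_regression(X, y, n_iter=100, alpha_cap=1e6):
    """Classical ARD for Bayesian linear regression: w_j ~ N(0, 1/alpha_j),
    with per-feature precisions alpha_j learned by evidence maximization."""
    n, d = X.shape
    alpha = np.ones(d)   # per-feature prior precisions
    beta = 1.0           # noise precision
    for _ in range(n_iter):
        # Posterior over weights: Sigma = (beta X^T X + diag(alpha))^-1
        Sigma = np.linalg.inv(beta * X.T @ X + np.diag(alpha))
        mu = beta * Sigma @ X.T @ y
        # MacKay fixed-point updates of the evidence
        gamma = 1.0 - alpha * np.diag(Sigma)           # effective dof per weight
        alpha = np.minimum(gamma / (mu ** 2 + 1e-12), alpha_cap)
        beta = (n - gamma.sum()) / max(np.sum((y - X @ mu) ** 2), 1e-12)
    return mu, alpha < alpha_cap   # posterior mean, surviving-feature mask

# Toy run: only features 0 and 2 of 10 are relevant
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.normal(size=200)
mu, relevant = ard_regression(X, y)
```

The posterior mean concentrates on the two informative features; NARD plays the same game with a matrix normal prior, so all outputs share the relevance decisions while output correlations are captured by the precision matrix.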
## Update after rebuttal
Thank you for your response. I will maintain my positive score.
Claims And Evidence: The claims made in the paper are generally well-supported by clear evidence, as outlined below:
1. **Sparse Feature Selection via ARD Prior**
The paper claims that placing an ARD prior on the regression coefficient matrix enables effective feature selection by identifying relevant input features for predicting outputs.
2. **Sparsity in Output Dependencies via L1 Penalty**
The use of an L1 penalty on the precision matrix to model dependencies among outputs is a reasonable and theoretically sound approach. Sparse precision matrices are widely used in multi-output regression to capture conditional independence relationships between outputs.
3. **Computational Challenges in High Dimensions**
The claim that standard ARD methods incur $O(d^3)$ computational costs due to matrix inversion is accurate and consistent with the literature.
4. **Efficiency of Proposed Algorithms**
The paper introduces Sequential NARD, Surrogate NARD, and Hybrid NARD to reduce computational complexity. The stated reductions to $O(m^3+p^3)$, $O(m^3+d^2)$, and $O(m^3+p^2)$, respectively, are plausible given the described modifications (e.g., sequential updates and surrogate function approximations).
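On point 2 above, the role of the sparse precision matrix can be made concrete: zeros of Ω encode conditional independences among the outputs, and its support is recoverable from data. A minimal numpy sketch, with naive thresholding of the inverse empirical covariance standing in for an L1-penalized estimator such as graphical lasso (the chain structure and threshold are illustrative):

```python
import numpy as np

# Ground-truth precision over 4 outputs: a chain y0 - y1 - y2 - y3, so e.g.
# Omega[0, 2] = 0 means y0 and y2 are conditionally independent given the rest.
omega = np.array([[ 2., -1.,  0.,  0.],
                  [-1.,  2., -1.,  0.],
                  [ 0., -1.,  2., -1.],
                  [ 0.,  0., -1.,  2.]])

rng = np.random.default_rng(0)
Y = rng.multivariate_normal(np.zeros(4), np.linalg.inv(omega), size=20000)

# The inverse empirical covariance recovers the sparsity pattern of Omega;
# hard thresholding stands in for the L1 penalty used in the paper.
omega_hat = np.linalg.inv(np.cov(Y, rowvar=False))
support = np.abs(omega_hat) > 0.3
```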
Methods And Evaluation Criteria: The evaluation criteria and datasets are generally appropriate for the problem. TPR and FPR on synthetic data effectively measure feature selection performance, while the Jaccard index is suitable for comparing biological associations in the absence of ground truth. The use of TCGA data validates NARD’s real-world applicability. However, the empirical evaluation has notable gaps:
1. **Limited Dataset Diversity**: The evaluation focuses heavily on biological datasets (e.g., TCGA and aging phenotype data). Including experiments on non-biological datasets (finance?) would better demonstrate the generalizability of NARD across diverse domains.
2. **Limited Baselines**: Comparisons are limited to MRCE and HS-GHS. Incorporating additional state-of-the-art baselines would strengthen validation and contextualize NARD's performance more comprehensively.
Addressing these issues would significantly enhance the robustness and completeness of the evaluation.
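For reference, the evaluation quantities mentioned above (TPR, FPR, Jaccard index) reduce to simple set comparisons between true and estimated association sets; a minimal sketch with toy sets and hypothetical names:

```python
def selection_metrics(true_set, est_set, universe):
    """TPR/FPR over a universe of candidate associations, plus Jaccard index."""
    tp = len(true_set & est_set)
    fp = len(est_set - true_set)
    fn = len(true_set - est_set)
    tn = len(universe) - tp - fp - fn
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    jaccard = len(true_set & est_set) / len(true_set | est_set)
    return tpr, fpr, jaccard

# Toy example: 10 candidate feature-output pairs, 4 truly associated
universe = {(i, j) for i in range(5) for j in range(2)}
truth = {(0, 0), (1, 0), (2, 1), (3, 1)}
estimate = {(0, 0), (1, 0), (2, 1), (4, 0)}   # one miss, one false positive
tpr, fpr, jaccard = selection_metrics(truth, estimate, universe)
```

On real data without ground truth, only the Jaccard index between two methods' association sets is computable, which is how the paper compares against a baseline.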
Theoretical Claims: The theoretical claims in the paper are well-supported by the proofs provided in the text. Key claims include:
1. Matrix Normal Prior and ARD Framework
2. Complexity Reductions
3. Theorem 3.1 (Sequential NARD)
4. Surrogate Function Approximation
Experimental Designs Or Analyses: The experimental designs and analyses in the paper are sound and appropriate for the stated objectives:
1. **Synthetic Data**: The use of synthetic datasets with controlled sparsity is well-suited for validating the feature selection and dependency modeling capabilities of NARD and its variants.
2. **Real-World Applications**: The use of Aging Phenotype data (evaluated via Jaccard index) and TCGA cancer data (focusing on biological associations) aligns with the goals of demonstrating NARD's applicability to high-dimensional, multi-output regression problems.
3. **Baseline Comparisons**: Comparisons with established methods like MRCE and HS-GHS are appropriate for benchmarking both computational efficiency and estimation performance. The inclusion of time as a metric strengthens the analysis of computational complexity claims.
Issues are discussed below.
Supplementary Material: Yes. I reviewed the following parts:
- Detailed proof of Theorem *3.1* in Appendix C.1
- Appendix C.2.
- Network plots in Appendix F.3.
- choice of hyperprior is provided in Appendix D.
Relation To Broader Scientific Literature: The paper builds on foundational concepts in Automatic Relevance Determination (ARD), introduced by MacKay (1992), extending it to multi-output regression with a matrix normal prior and sparsity-inducing penalties. It addresses computational challenges in high-dimensional settings, improving efficiency through Sequential and Surrogate methods inspired by Tipping's work (2003). The framework aligns with broader literature on Bayesian sparse modeling, such as MRCE (Rothman et al., 2010) and graphical lasso techniques (Friedman et al., 2008), while contributing novel algorithmic advancements for scalable multi-output regression.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths:**
1. **Originality**: The paper introduces novel extensions to ARD, including Sequential NARD, Surrogate NARD, and Hybrid NARD, addressing computational inefficiencies in high-dimensional regression tasks.
2. **Significance**: The framework is highly relevant for sparse modeling in biological and genomic datasets, making it impactful for real-world applications.
3. **Clarity**: The paper clearly explains the theoretical foundations, algorithms, and experimental results, ensuring accessibility for readers familiar with Bayesian modeling.
**Weaknesses:**
1. **Numerical Stability**: Surrogate NARD exhibits instability during precision matrix estimation in high-dimensional datasets, which could limit its reliability in practice.
2. **Linear Assumptions**: The reliance on linear models may restrict applicability to scenarios involving complex non-linear relationships.
3. **Limited Dataset Diversity**: While synthetic and biological datasets are used, additional experiments on diverse real-world datasets could enhance generalizability.
4. Comparisons with baseline methods (MRCE and HS-GHS) are limited in scope. Including more state-of-the-art baselines would provide a clearer picture of NARD's relative performance.
Other Comments Or Suggestions: - Include experiments on non-biological datasets to evaluate generalizability across domains.
- Include more detailed comparisons with alternative methods beyond MRCE and HS-GHS to strengthen the empirical validation.
- Address the scalability of Hybrid NARD explicitly in larger datasets.
- Provide a more detailed analysis of how sparsity-inducing priors affect interpretability in biological applications.
- Suggestion for Images: It would be better to move some images, such as protein networks for COAD, to the supplementary materials. This would declutter the main text and allow the remaining figures to be enlarged for improved readability.
Questions For Authors: - How does Hybrid NARD balance computational efficiency and predictive accuracy compared to Sequential and Surrogate NARD? Could you provide additional insights into its practical advantages?
- Have you considered extending the framework to non-linear models or incorporating kernel methods for capturing complex relationships? If not, what challenges do you foresee?
- Could you elaborate on how the sparsity-inducing priors affect interpretability in biological applications like TCGA cancer data?
- To improve the empirical evaluation, could you expand the experiments to include non-biological datasets to assess generalizability across diverse domains, and explicitly address the scalability of Hybrid NARD on larger datasets, including runtime and performance metrics?
- Why are Hybrid NARD and Sequential NARD not included in Table 2, where the impact of data size on performance is analyzed? Including these methods would provide a more complete comparison of their scalability and efficiency relative to other approaches.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and constructive comments. We appreciate your positive remarks on the novelty of our methods and the clarity of the paper.
>Dataset diversity
We have expanded our experiments to include 2 non-biological datasets: the Beijing multi-site air quality dataset (https://archive.ics.uci.edu/dataset/501/beijing+multi+site+air+quality+data) and an A-shares stock dataset.
For the air quality dataset, we performed data imputation and timestamp alignment, then analyzed the relationships among 11 key indicators. The results show a strong correlation between PM2.5 and humidity, supporting the environmental principle that higher humidity promotes the adhesion of fine particles, leading to increased PM2.5 levels. This aligns with prior studies on atmospheric dynamics.
For the A-shares dataset, we collect nearly 7 years of daily trading data and use the previous 5 days' information to predict the next day's opening price. This results in a dataset of 3032 stocks. Our experiments show that Bayesian methods, such as HS-GHS and JRNS, could not complete their calculations within 4 days, while our approach demonstrates excellent scalability. Analysis of the precision matrix reveals significant block structures, indicating that stocks from the same sector or industry tend to show similar trends in price movement.
Through these 2 experimental datasets, we have demonstrated the effectiveness of our method in both environmental and financial domains.
Since there is no ground truth, we used MRCE as a baseline algorithm for comparison. We reported the Jaccard index as a benchmark. Additionally, we presented the computational time to highlight the computational advantages of our method.
Table R4: Associations of A-shares stocks. ($m=3032, d=60640, N=1696$)
| Method | MRCE |CAPME |HS-GHS |JRNS | NARD |Sequential NARD | Surrogate NARD | Hybrid NARD |
| --- | --- | --- | --- | --- | --- |--- | --- |--- |
| # of association |97939|98335| -|-|99671|100309|99105|99475|
| Jaccard index |-|0.869|-|-|0.881|0.891|0.893|0.890|
| Time per iteration (second) | ~1200 | ~1300 | - | - | ~1000 | - | 255 | - |
| Time all (h) | ~17 | ~16.5 | - | - | ~14.5 | ~8 | ~5.5 | ~3 |
>Interpretability in biological applications
Sparsity-inducing priors like ARD enhance interpretability in biological applications, such as TCGA cancer data, by identifying key features. In our analysis across 7 tumor types, ARD highlighted important genes and proteins linked to signaling pathways. In Figure 3, sparsity revealed consistent pathways across cancer types, exposing cancer-specific translational effects. In Figure 4, for COAD, sparsity highlighted critical protein interactions within pathways and cross-talk between them, aiding biological interpretation. In COAD, the PI3K/AKT pathway was highlighted by the interaction between GSK3ALPHABETAPS21S9 and AKTPS473. This association indicates a key regulatory role in tumor growth and survival. The AKT signaling axis, activated by various upstream kinases like GSK3, has been implicated in colon cancer progression, making it a valuable target for further investigation and therapeutic development.
>Question about Table 2
We understand your concern about the exclusion of Hybrid NARD and Sequential NARD in Table 2. Table 2 shows single update step times, while Table 1 presents total computation time. Since both methods involve iterative updates with varying step times, direct comparison in Table 2 is difficult. We apologize for any confusion and will clarify this in the final version.
>Linear assumptions
We have considered extending the framework to non-linear models, which can be easily adapted to sparse kernel regression without significant changes to NARD. Relevant experiments and results are discussed in our rebuttal to Reviewer gDpz.
>Numerical stability
We appreciate your attention to this issue, which we have discussed in detail in Appendix F.4. We have provided additional analysis in our rebuttal to Reviewer gDpz, which you can refer to for a more in-depth discussion.
>Discussion about Hybrid NARD
We have included further analysis on this issue in our rebuttal to Reviewer AaWq, which provides a more detailed discussion. | Summary: This paper introduces the Network Automatic Relevance Determination (NARD) framework for linearly probabilistic models. It proposes three novel algorithms, i.e. Sequential NARD, Surrogate NARD, and Hybrid NARD, which significantly reduce the computational complexity. These methods maintain comparable performance on synthetic and real-world datasets, effectively handling the sparse relationships between inputs and outputs while capturing output correlations.
Claims And Evidence: I didn't find any obvious problem.
Methods And Evaluation Criteria: Generally speaking, the proposed methods and evaluation criteria in the paper make sense for the problem at hand. My main concern is whether the proposed methods are state of the art. The benchmark methods used for comparison are relatively limited and, as far as I know, there are numerous methods for handling sparse multivariate regression and learning sparse graph structures. It would be better if the authors could explain why they chose MRCE and HS-GHS as the comparison techniques.
Theoretical Claims: Sorry, I didn't check the correctness of proofs.
Experimental Designs Or Analyses: 1. The paper only compares NARD and its variants with two baseline methods (MRCE and HS-GHS). Given the large number of methods available for sparse multivariate regression and graphical model estimation, this limited comparison may not fully establish the superiority of the proposed methods.
2. The authors assume linear relationships in their models. While this is a common starting point, real-world data, especially in biology, may contain complex nonlinear relationships. The experimental designs do not explore how well the methods perform in the presence of nonlinearity.
3. The Surrogate NARD method shows instability in computations, especially when dealing with large, high-dimensional real-world datasets.
Supplementary Material: Sorry, I didn't review the supplementary material.
Relation To Broader Scientific Literature: In the related work section, the authors establish connections between the content of this paper and the broader scientific literature. Additionally, in the experimental part, authors compares NARD and its variants with two baseline methods (MRCE and HS-GHS).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback. We appreciate your positive remarks and the concerns raised, which will help us refine both the theoretical and empirical aspects of our work.
>Baseline comparison
As suggested, we add additional baseline methods: CAPME[1] and JRNS[2]. MRCE and CAPME are frequency-based representative methods, while HS-GHS and JRNS are Bayesian sampling-based algorithms. Our experiments show that NARD still outperforms these methods, which further strengthens the case for the proposed approach. The results are shown in Table R2.
In the original paper, we carefully selected MRCE and HS-GHS as the comparison methods because they represent two well-established approaches in the field: MRCE is a frequentist method, and HS-GHS is a Bayesian approach. Both of these methods have been compared with several other techniques in their respective papers, and in those comparisons, MRCE and HS-GHS consistently performed well. We believe these methods serve as strong baselines for our study, providing a comprehensive comparison between frequentist and Bayesian paradigms in sparse multivariate regression and graphical model estimation.
Table R2: Performance Comparison of Various Methods.
| Method | d | m | N | TPR | FPR | Time |
| --- | --- | --- | --- | --- | --- |--- |
|MRCE | 5000| 1500|1500 | 0.9083| 0.0072 | 53 |
|**CAPME** | 5000| 1500|1500 | 0.8972| 0.0124 | 52 |
|HS-GHS | 5000| 1500|1500| 0.9463| 0.0033 | >3000 |
|**JRNS** | 5000| 1500|1500| 0.9485| 0.0037 | >3000 |
|NARD | 5000| 1500|1500 | 0.9483| 0.0062 | 49|
|Sequential NARD | 5000| 1500| 1500| 0.9459| 0.0067 | 35|
|Surrogate NARD |5000 |1500 |1500 | 0.9462| 0.0072 | 31|
|Hybrid NARD | 5000| 1500 | 1500| 0.9471| 0.0068 | 23|
We also include the results of experiments on aging phenotype data as shown in Table R3.
Table R3: Associations under different algorithms.
| Method | MRCE |CAPME |HS-GHS |JRNS | NARD |NARD(Polynomial) | NARD(RBF) |
| --- | --- | --- | --- | --- | --- |--- | --- |
| # of association | 15330 |15094| 14983|15066| 15101|15094| 15072|
| Jaccard index | 0.979 |0.977|-| 0.988 |0.988 | 0.990 |0.989 |
[1] Covariate-adjusted precision matrix estimation with an application in genetical genomics, Biometrika 2013.
[2] A generalized likelihood based Bayesian approach for scalable joint regression and covariance selection in high dimensions, Statistics and computing 2022.
>Non-linearity
In our paper, Section 6.1 focuses on synthetic data, which follows a linear structure by design. However, in Sections 6.2 and 6.3, we evaluate our method on real-world biological datasets with more complex, nonlinear relationships. Despite these nonlinearities, our approach performs well, suggesting that our sparse linear approximation effectively captures dominant interaction patterns. We will state this more clearly in the final version.
Additionally, our method can be naturally extended to address nonlinearity through kernel methods. Specifically, we consider the model
$$
Y = W\Phi(X) + \mathcal{E}, \quad \Phi(\cdot) \in \{ \text{Polynomial, RBF}, \ldots \}
$$
where $\Phi(X)$ represents a nonlinear feature mapping that transforms the input space into a higher-dimensional space, allowing for more flexible modeling of complex relationships.
To explore this extension, we consider two different kernel functions, the polynomial kernel and the Gaussian (RBF) kernel, and evaluate their performance on a real-world aging phenotype dataset.
As shown in Table R3, our approach with the polynomial and RBF kernels demonstrates competitive performance, achieving high Jaccard index values. The results are consistent with our expectation that kernel-based extensions allow the model to capture more complex, nonlinear relationships in the data, further validating the robustness and flexibility of our method.
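For illustration, a minimal version of this kernelized multivariate regression could look like the following; the explicit polynomial feature map and the ridge-regularized least-squares fit are simplifications standing in for the full NARD machinery, and all names (`poly_features`, `fit_multioutput`) are hypothetical.

```python
import numpy as np

def poly_features(X, degree=2):
    # Map each input column to [x, x^2, ..., x^degree] (no cross terms),
    # a simple explicit stand-in for Phi(.) in Y = W Phi(X) + E.
    return np.hstack([X ** d for d in range(1, degree + 1)])

def fit_multioutput(Phi, Y, ridge=1e-6):
    # Ridge-regularized least squares: W = (Phi^T Phi + ridge I)^-1 Phi^T Y.
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + ridge * np.eye(d), Phi.T @ Y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Ground truth has a quadratic term that a purely linear model misses.
Y = X @ np.array([[1.0], [0.0], [0.0]]) + 2.0 * X[:, :1] ** 2

Phi = poly_features(X, degree=2)
W = fit_multioutput(Phi, Y)
residual = float(np.linalg.norm(Phi @ W - Y) / np.linalg.norm(Y))
```

Because the quadratic target lies in the span of the degree-2 features, the relative residual is essentially zero, which is the sense in which the feature map "captures" the nonlinearity.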
>Numerical stability in Surrogate NARD
We appreciate your attention to this issue, which we discuss briefly in Appendix F.4. It arises from numerical challenges encountered during the iterative optimization process, particularly when estimating the precision matrix. For large datasets, the covariance matrix can be ill-conditioned, leading to instability in the precision matrix estimation. This stems from the inherent properties of the data. As we mentioned in the discussion, one potential solution is to apply a more robust initialization for the precision matrix, which may help mitigate this issue. We will explore this aspect more deeply in future work. | Summary: This paper introduces Network Automatic Relevance Determination (NARD), an extension of Automatic Relevance Determination (ARD) designed for multiple-output regression in high-dimensional settings. NARD integrates a matrix normal prior with a sparsity-inducing mechanism to simultaneously select relevant input features and capture output dependencies.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I have checked the theoretical derivations in Sections 3 and 4, and I find them reasonable. For example, in Section 4, I verified Lemma 4.2, which establishes an upper bound on $ \text{Tr}[g(W)] $ using a majorization argument. The proof correctly applies Lipschitz continuity, with $ L = 2\|XX^\top\| = 2\rho $, and a first-order Taylor approximation to derive the bound. The use of $ \rho $ (the largest eigenvalue of $ XX^\top $) ensures a valid approximation, confirming the correctness of the inequality.
Experimental Designs Or Analyses: I have checked the soundness and validity of the experimental designs and analyses, and I find them reasonable. For example, in the synthetic data experiments, the covariance and precision matrices were generated using an Erdős-Rényi random graph, ensuring a structured yet realistic sparsity pattern. The metrics used (TPR and FPR) are appropriate for evaluating feature selection.
Supplementary Material: I have reviewed the supplementary material, specifically Appendix C.1, which provides the proof for Theorem 3.1. The proof considers two cases for $ \eta_i := \text{Tr}(q_i q_i^\top V^{-1}) - m s_i $, distinguishing between $ \eta_i > 0 $ and $ \eta_i \leq 0 $. This case analysis ensures that the sequential update rule for $ \alpha_i $ correctly determines whether a feature should be included or pruned.
Relation To Broader Scientific Literature: The paper extends Automatic Relevance Determination (ARD) by incorporating a matrix normal prior to model both feature sparsity and output dependencies, addressing limitations in traditional ARD. The paper further improves computational efficiency with Sequential NARD, which employs a greedy approach that sequentially adds and removes features, and Surrogate NARD, which uses a surrogate function to approximate the marginal likelihood.
Essential References Not Discussed: I think all related works have already been cited.
Other Strengths And Weaknesses: The paper presents a novel extension of Automatic Relevance Determination (ARD) by incorporating a matrix normal prior. The introduction of Sequential NARD and Surrogate NARD significantly improves computational efficiency over traditional ARD methods, which is a notable strength.
The paper could provide more clarity on hyperparameter selection. Additionally, the paper could further discuss potential trade-offs in Hybrid NARD.
Other Comments Or Suggestions: Theorem 3.1 could benefit from additional explanation for edge cases and it would be helpful to provide more details on how hyperparameters were tuned.
Questions For Authors: The Hybrid NARD approach combines Sequential and Surrogate NARD. Are there any cases where this hybrid method performs worse than using either method individually? How does it balance efficiency vs. accuracy in practice? Do you have more experiments for this point?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and constructive feedback on our submission.
>Edge case of Theorem 3.1
Recall from Theorem 3.1 that $s_i$ is called the sparsity and $q_i$ the quality of $\varphi_i$. The sparsity measures the extent to which the basis function overlaps with the other basis vectors in the model, while the quality measures the alignment of the basis vector with the error between the training set values and the predictions of the model with that vector excluded. The term $\eta_i = \text{Tr}(q_i q_i^{\top}V^{-1}) - m s_i$ thus captures the trade-off between the alignment quality of the basis vector and its sparsity relative to the covariance structure. For $L(\alpha_i)$, when $\eta_i > 0$, the function first increases and then decreases, attaining its maximum at a stationary point. When $\eta_i \le 0$, it is monotonically increasing, and the maximum is approached asymptotically as $\alpha_i \rightarrow \infty$, consistent with the proof of Theorem 3.1. Furthermore, as $\alpha_i \to \infty$, the part of $L(\alpha_i)$ that depends on $\alpha_i$ vanishes, and $L(\alpha_i) = 0$ corresponds to the situation where the feature can be pruned from the model.
>Tuning the $\lambda_{\text{glasso}}$ Parameter in the ARD Framework
To select the optimal $\lambda$, we employ a 5-fold cross-validation procedure. The dataset is partitioned into 5 disjoint subsets, and in each iteration, 1 subset is held out as the validation set while the remaining 4 subsets are used for model estimation. The objective function for selecting $\lambda_{\text{glasso}}$ is defined as:
$$
\lambda_{\text{glasso}} = \arg\min_{\lambda} \sum_{l=1}^{5}
\bigg[
\operatorname{Tr}(\tilde{V}_l\Omega_{-l}) - \log |\Omega_{-l}| + \lambda \sum_{i \neq j} |\omega_{ij}|
\bigg].
$$
Here $\tilde{V}_l$ is the empirical covariance estimator computed from the held-out $l$-th fold, and $\Omega_{-l}$ is the precision matrix estimated from the remaining folds.
The objective is evaluated on each fold, and the $\lambda$ that minimizes this cross-validated criterion is chosen. A grid search is performed over a range of candidate values for $\lambda$, and the value that yields the best performance across all folds is selected for the final model and evaluation.
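For illustration, the selection procedure above could be sketched as follows. This is a hedged sketch: `estimate_precision` is a simple placeholder (the inverse of a $\lambda$-shrunk covariance) standing in for graphical lasso, the validation score is the held-out negative log-likelihood, and all names are hypothetical.

```python
import numpy as np

def estimate_precision(V, lam):
    # Placeholder precision estimator, NOT graphical lasso: invert a
    # lambda-shrunk covariance so the sketch stays self-contained.
    d = V.shape[0]
    return np.linalg.inv(V + lam * np.eye(d))

def cv_select_lambda(X, lambdas, k=5):
    n = X.shape[0]
    folds = np.array_split(np.arange(n), k)
    scores = []
    for lam in lambdas:
        s = 0.0
        for l in range(k):
            # Fit on the other k-1 folds, score on the held-out fold.
            train = np.concatenate([folds[j] for j in range(k) if j != l])
            Omega = estimate_precision(np.cov(X[train].T), lam)
            V_val = np.cov(X[folds[l]].T)
            sign, logdet = np.linalg.slogdet(Omega)
            s += np.trace(V_val @ Omega) - logdet  # held-out neg. log-lik.
        scores.append(s / k)
    return lambdas[int(np.argmin(scores))], scores

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
lambdas = [0.01, 0.1, 1.0]
best_lam, scores = cv_select_lambda(X, lambdas)
```

Swapping `estimate_precision` for an actual graphical lasso solver recovers the grid search described in the rebuttal.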
>Discussion about Hybrid NARD
In our experiments, we did not observe cases where the Hybrid NARD underperformed compared to either Sequential or Surrogate NARD individually. Despite incorporating approximations, the hybrid approach consistently provided a good balance between accuracy and efficiency. In the synthetic data shown in Table 1, we found that NARD and its variants performed similarly, with no significant differences. However, in terms of time efficiency, the Hybrid NARD was approximately twice as fast as the standard NARD. This aligns with our theoretical expectations, where combining sequential optimization with surrogate modeling helps leverage their respective advantages.
We acknowledge that performance trade-offs may become more pronounced in extreme cases. For smaller datasets, we hypothesize that the hybrid approach may not always outperform NARD. For instance, when tested with $d=80, m=50,N=100$ and $d=50, m=20,N=50$, the results (shown in Table R1) demonstrated that Hybrid NARD still performed well, highlighting the robustness of our method. Despite our efforts to identify challenging edge cases, the algorithm exhibited a notable degree of stability.
We appreciate your suggestion and will explore these scenarios further in future work.
Table R1: Performance Comparison of Our Methods on Small datasets.
| Method | d | m | N | TPR | FPR |
| --- | --- | --- | --- | --- | --- |
| NARD | 80| 50| 100 | 0.9695| 0.0031 |
| Sequential NARD | 80| 50| 100| 0.9689| 0.0033 |
|Surrogate NARD |80 |50 |100| 0.9693| 0.0029 |
|Hybrid NARD | 80| 50 | 100| 0.9689| 0.0035 |
| NARD | 50| 20|50 | 0.9542| 0.0051 |
| Sequential NARD | 50| 20| 50| 0.9544| 0.0046 |
|Surrogate NARD |50 |20 |50| 0.9540| 0.0050 |
|Hybrid NARD | 50| 20 | 50| 0.9540| 0.0049 |
---
Rebuttal Comment 1.1:
Comment: Dear the Authors,
Thank you for your thorough response, which addresses most of my concerns. Therefore, I decided to increase the rating accordingly. I encourage the authors to revise the manuscript as discussed above.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and for increasing the rating. We appreciate your suggestions and will revise the manuscript accordingly. Your comments have been very helpful in improving our work. | null | null | null | null | null | null | null | null |
Lifelong Learning of Video Diffusion Models From a Single Video Stream | Reject | Summary: This work proposes learning a video diffusion model from a single video stream using lifelong learning, specifically through experience replay. The results demonstrate improved performance compared to standard diffusion training. Additionally, three benchmarks are introduced to support the experiments.
Claims And Evidence: Yes, the experimental results strongly support the proposed lifelong learning claim.
Methods And Evaluation Criteria: The method is simple and aligns well with lifelong learning. However, the proposed datasets seem somewhat limited, as the scenarios covered are relatively narrow.
Theoretical Claims: This work does not propose any theoretical claims.
Experimental Designs Or Analyses: The experimental results are robust across the three proposed datasets.
Supplementary Material: No supplementary material is attached; however, multiple Google Drive links are provided for video samples.
Relation To Broader Scientific Literature: This work is the first to propose training a video diffusion model in a lifelong learning framework within the community. However, I question the necessity of this approach.
Essential References Not Discussed: I have concerns regarding the relevance of this work to the video prediction and world model literature.
Other Strengths And Weaknesses: Strengths:
1. The paper introduces a lifelong learning paradigm for training video diffusion models.
2. It proposes three datasets specifically designed to address the problem.
Weaknesses:
1. Limited Comparisons: This work is highly related to world models such as Genex, GameFactory, GameGen-X, and other world model-based video generation approaches. Additionally, video prediction is another closely related field. However, the paper does not provide comparisons with these works, despite the fact that the proposed datasets share strong similarities with them.
2. Necessity of This Approach: In foundational video generation training, the primary challenge is the lack of a single long continuous video stream, as datasets typically contain frequent scene changes, resulting in clips of only a few seconds. While continual learning and lifelong learning are important topics, their relevance to video generation seems most crucial in game generation settings, such as Genie. The necessity of applying lifelong learning to general video generation remains unclear.
Other Comments Or Suggestions: No.
Questions For Authors: Please include video samples in the supplementary material rather than providing Google Drive links.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Before addressing specific concerns, we note that several reviewers have entirely missed the Google Drive links [[a](https://drive.google.com/drive/folders/1ToqSvdFsXJm0UqJZlRURI1uwsIvbHYHn),[b](https://drive.google.com/drive/folders/1IopUyb98v0ybqlaCtimayc9RXnMG63Y-),[c](https://drive.google.com/drive/folders/1Lz54gCnavXmsoFYv9EQEcd0n4mIt91Gi),[d](https://drive.google.com/drive/folders/19CQQShbv3dm04kVk5n4Xf8DvfmJo-9uR)] (Section 5 Footnotes 2,3,4,5) containing video samples from Lifelong and Offline Learning models for every dataset (lower quality videos viewable online [here](https://drive.google.com/drive/folders/1nlc1NZG8lFZIE7w0hYsa2Np-YXjKjSJl)). Some reviewers also missed the reference to the quantitative results for two more baselines in the appendix for every dataset (Section 5 Line 398, Appendix D). These oversights have led to lower scores and skepticism regarding our main claim—that lifelong learning of video diffusion models is not only feasible but can achieve performance comparable to offline training given the same number of gradient steps. We kindly ask that you visit these resources should you question the validity of our claim and re-evaluate your scores in light of these results.
To substantiate our main claim even further, we have incorporated new results on a real-world driving dataset despite operating within the limits of resources available to an academic lab. Please refer to the rebuttal addressed to Reviewer KLJW for the experiment setup and qualitative and quantitative results. These results support the validity of our claim on complex real-world video data.
We now address specific questions and concerns.
> The paper does not provide comparisons with world models and video prediction models, despite the fact that the proposed datasets share strong similarities with them.
We agree that our lifelong learning of autoregressive video generative models is related to those two problems. However, it is difficult to directly compare our lifelong models to the baselines from those problems. World models require action conditioning, and the introduced video datasets do not contain actions. Video prediction models are incapable of generating multiple data samples, so it's not straightforward to compare against them via video generative modeling metrics. However, given the relevance to our problem setup, we will cite these world model papers in the camera-ready draft.
> While continual learning and lifelong learning are important topics, the relevance of the proposed single long, continuous video streams seems more crucial in game generation settings, such as in Genie. The necessity of applying lifelong learning to general video generation remains unclear.
We are glad that the reviewer finds single long, continuous video streams valuable to game generation settings and highlight that the proposed datasets (ex. Lifelong PLAICraft) could also benefit the game generation community. As the reviewer noted earlier, autoregressive video modeling is intimately related to world modeling since we can view the former as the latter marginalized over actions. The key motivation behind this work is to make progress toward lifelong learnable vision-based embodied AI whose world model can be updated in real-time. We believe that agents with updatable world models that are combined with planners, for example a language model as in GenEx, is key to having flexible embodied AI agents that are not constrained to the behaviors associated with pretraining checkpoints or can only be updated periodically. We chose to work with video modality in the beginning because it is much easier to generate datasets that work purely on video modalities, and we expect action conditioning to not significantly perturb our results.
> Please include video samples in the supplementary material rather than providing Google Drive links.
Thank you for your comment. We will update this for the camera-ready draft.
---
We appreciate that the reviewer believes that our experimental results are robust and that they strongly support our claim. We hope that our rebuttal experimental results, which further support the fact that lifelong learning of video diffusion models is possible for complex, real-life datasets, address the reviewer’s concern that the previously covered scenarios are relatively narrow. In addition, we hope that our rebuttal’s clarification on the motivation of our work being the first step toward lifelong learnable vision-based embodied AI clarifies the value of our work. Thank you again for your engagement, and we would be delighted to have further discussions and address any other questions. | Summary: This work shows that autoregressive video diffusion models can be effectively trained from a single continuous video stream, matching the performance of standard offline training given the same number of gradient steps. The authors further demonstrate that this result can be achieved using experience replay with a limited subset of past frames. Additionally, the authors introduce three new datasets designed for evaluating lifelong video model learning: Lifelong Bouncing Balls, Lifelong 3D Maze, and Lifelong PLAICraft. Each dataset consists of over a million consecutive frames, capturing environments of increasing complexity.
Claims And Evidence: The paper's central claim—that training video diffusion models in a lifelong manner from a single continuous video stream is as effective as offline training—is not sufficiently supported by robust evidence. The experiments rely on a narrow set of synthetic datasets, which fail to capture the complexity and variability of real-world video streams, making the generalizability of the approach highly questionable. Furthermore, the evaluation lacks critical ablations on forgetting, stability over long horizons, and model degradation, leaving the key claims speculative rather than convincingly demonstrated.
Methods And Evaluation Criteria: The proposed method—experience replay with limited memory—aligns well with the goal of lifelong video learning, as it helps mitigate catastrophic forgetting while maintaining efficiency.
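For concreteness, a minimal sketch of such a limited-memory replay scheme might look as follows: a bounded buffer maintained by reservoir sampling over the frame stream, with each training batch mixing replayed past frames with the newest frame. This illustrates the general technique rather than the paper's implementation, and all names are hypothetical.

```python
import random

class ReplayBuffer:
    """Bounded buffer holding an approximately uniform subsample of a stream."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        # Reservoir sampling: after i items, each survives with prob. capacity/i.
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k):
        return self.rng.sample(self.items, min(k, len(self.items)))

buf = ReplayBuffer(capacity=100)
for frame_id in range(10_000):  # stand-in for streaming video frames
    buf.add(frame_id)
    # A training batch mixes the newest frame with replayed past frames.
    batch = [frame_id] + buf.sample(7)
```

The buffer retains only 1% of the stream here, yet each batch still exposes the model to data spread across the whole history, which is the intuition behind mitigating forgetting with limited memory.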
Theoretical Claims: No theoretical claims are required for review in this work.
Experimental Designs Or Analyses: The evaluation metrics (FVD, minADE, ColorKL) are standard for assessing generative video models, but additional robustness tests on long-term temporal consistency and adaptation to domain shifts would enhance credibility. While the synthetic datasets capture key challenges like non-stationarity and temporal correlation, real-world video streams with more diverse dynamics would better reflect practical deployment scenarios.
Supplementary Material: Yes
Relation To Broader Scientific Literature: While it provides an empirical proof-of-concept that lifelong training can approximate offline training, its reliance on experience replay without deeper investigation into forgetting, stability, or adaptation makes it a limited contribution.
Essential References Not Discussed: Not found
Other Strengths And Weaknesses: Strength:
1. The paper investigates lifelong learning for video diffusion models, a relatively unexplored area within generative modeling. While the approach is simple, demonstrating that lifelong learning can approximate offline training is an interesting empirical finding that may inspire future research in more complex settings.
2. The paper presents a well-structured experimental setup, including a fair comparison between lifelong and offline training using the same number of gradient steps.
Weakness:
1. The proposed method primarily relies on experience replay, a well-established technique in continual learning, without introducing significant modifications or improvements. The lack of new architectural contributions or theoretical insights reduces the paper’s originality and impact on the field.
2. Synthetic Datasets Reduce Practical Relevance: the evaluation relies solely on synthetic datasets, which, while structured, fail to reflect the complexity, variability, and challenges of real-world video streams. Without testing on more diverse and dynamic environments, it remains unclear whether the proposed lifelong learning approach generalizes beyond controlled settings.
3. Overclaim: The paper claims that lifelong learning performs comparably to offline training, but does not provide sufficient ablations on long-term stability, forgetting, or sensitivity to different training conditions. The absence of experiments on how the model adapts to distribution shifts or scales with increasing data exposure weakens the validity of its claims.
Other Comments Or Suggestions: No
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Before addressing specific concerns, we note that several reviewers have entirely missed the Google Drive links [[a](https://drive.google.com/drive/folders/1ToqSvdFsXJm0UqJZlRURI1uwsIvbHYHn),[b](https://drive.google.com/drive/folders/1IopUyb98v0ybqlaCtimayc9RXnMG63Y-),[c](https://drive.google.com/drive/folders/1Lz54gCnavXmsoFYv9EQEcd0n4mIt91Gi),[d](https://drive.google.com/drive/folders/19CQQShbv3dm04kVk5n4Xf8DvfmJo-9uR)] (Section 5 Footnotes 2,3,4,5) containing video samples from Lifelong and Offline Learning models for every dataset (lower quality videos viewable online [here](https://drive.google.com/drive/folders/1nlc1NZG8lFZIE7w0hYsa2Np-YXjKjSJl)). Some reviewers also missed the reference to the quantitative results for two more baselines in the appendix for every dataset (Section 5 Line 398, Appendix D). These oversights have led to lower scores and skepticism regarding our main claim—that lifelong learning of video diffusion models is not only feasible but can achieve performance comparable to offline training given the same number of gradient steps. We kindly ask that you visit these resources should you question the validity of our claim and re-evaluate your scores in light of these results.
To substantiate our main claim even further, we have incorporated new results on a real-world driving dataset despite operating within the limits of resources available to an academic lab. Specifically, we evaluated Offline and Lifelong Learning on a continuous video stream comprising 550K training and 20K test frames, recorded from a single car’s dashcam over multiple driving sessions totalling 8 hours. We refer to this dataset as **Lifelong Drive** and will release it upon acceptance. The dataset consists of 1 to 40-minute driving sessions, with transitions occurring when the car is started and parked. Fade-in and out animations are applied at the video concatenation boundaries to ensure smooth session transitions. Frames are encoded into 4×64×64 latents using the Stable Diffusion encoder. Other experiment details are identical to the Lifelong PLAICraft experiment.
Lifelong Drive qualitative results are presented [here](https://drive.google.com/drive/folders/1BP6Uqr8R7979ZvbEU9JTGkW83v_lr2jq), and the quantitative results are below. The samples produced by Offline and Lifelong Learning are qualitatively indistinguishable and quantitatively comparable.
|Method|Train FVD|Train KVD|Train Loss|Test FVD|Test KVD|Test Loss|
|-|-|-|-|-|-|-|
|Offline Learning|25.9 ± 0.6|2.7 ± 0.1|0.0299 ± 0.0002|36.2 ± 2.3|12.8 ± 3.2|0.0311 ± 0.0002|
|Lifelong Learning|23.0 ± 0.3|0.9 ± 0.2|0.0303 ± 0.0004|33.7 ± 1.2|9.8 ± 1.7|0.0316 ± 0.0003|
In the style of Appendix E, we also report p-values from two-sided T-tests where the null hypothesis is that there is no difference between the test stream performance metrics from Offline and Lifelong Learning. The tests fail to reject the null hypothesis that the performances of the two algorithms are not different ($\alpha=0.05$).
|Dataset|FVD|KVD|Loss|
|-|-|-|-|
|Lifelong Drive|0.407|0.466|0.219|
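For reference, a minimal sketch of such a two-sided test: Welch's t statistic with a large-sample normal approximation for the p-value. The rebuttal presumably uses the exact t distribution; the Python standard library has no t CDF, so this is only an approximation, and the sample values below are made up for illustration.

```python
from statistics import NormalDist, mean, variance

def welch_two_sided_p(a, b):
    # Welch's t statistic for two samples with unequal variances.
    na, nb = len(a), len(b)
    se = (variance(a) / na + variance(b) / nb) ** 0.5
    t = (mean(a) - mean(b)) / se
    # Two-sided p-value under the standard normal approximation.
    return 2.0 * (1.0 - NormalDist().cdf(abs(t)))

# Illustrative FVD-like values for two training regimes (not real results).
offline = [36.0, 35.1, 37.4]
lifelong = [33.5, 34.9, 32.8]
p = welch_two_sided_p(offline, lifelong)
```

Failing to reject at $\alpha = 0.05$ means $p > 0.05$, i.e. the observed gap between the two regimes is consistent with noise.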
We now address specific questions and concerns.
> There are no improvements to experience replay nor ablations on forgetting.
Please refer to the last and first points of our rebuttal addressed to Reviewer gPNY.
> There are no ablations on long horizon stability nor sensitivity to different training conditions.
We present new qualitative results that show the similarity in the quality of Lifelong and Offline learning's long video samples [here](https://drive.google.com/drive/folders/1KlHgtNsP_p1q6-rzJ_Dq-acXqyWa0Bkm).
Our original results also show that Lifelong and Offline Learning perform comparably across datasets, highlighting the former's robustness to training conditions.
> The absence of experiments on how the model adapts to distribution shifts or scales with increasing data exposure weakens the validity of its claims.
Distribution shift is synonymous with nonstationarity. The Lifelong Bouncing Balls (O) and (C) results in Appendix D localize the effect of color nonstationarity on all baselines, while Lifelong PLAICraft measures the effect of distribution shift on multiple timescales. Appendix G shows the forgetting behaviors associated with distribution shift. Lastly, Appendix F illustrates how model performance on the test stream scales with increasing data exposure for all datasets.
> FVD, minADE, ColorKL are standard for assessing generative video models.
We note that minADE (a trajectory metric) and ColorKL (a metric we introduce) are not standard video metrics.
---
We appreciate that the reviewer finds our experiments well-structured and our findings interesting. We hope that our rebuttal results and analysis from the appendix will help the reviewer share Reviewer YRxK’s sentiment that our results strongly support the claim that training autoregressive video diffusion models from a single video stream can be as effective as offline training given the same number of gradient steps. | Summary: This work investigates ability to learn a video diffusion model in non-iid setting - from a single continuous video stream.
The overall method is an autoregressive UNet-based video diffusion model trained on a continuous stream of data and equipped with a replay buffer. The authors also introduce a collection of three synthetic datasets to validate their method and show performance on par with offline (iid) training.
Claims And Evidence: This paper claims that learning a video diffusion model on a video stream works. They do demonstrate this on a single architecture and lifelong learning setup (stream+replay buffer), and do not provide any actual generated video outputs - which is not fully convincing.
Methods And Evaluation Criteria: The overall method makes sense, and the datasets and metrics are also meaningful, but having the iid method as the only baseline seems insufficient to get a full understanding.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Given that there is no theoretical analysis of any sort, or any claim on technical novelty, it seems like experimental evaluation is severely lacking - no ablation study is present.
Supplementary Material: N/A
Relation To Broader Scientific Literature: Method combines a relatively recent VDM with an existing approach for lifelong learning (replay buffer).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: + The key result of the paper is indeed interesting.
- This work does not claim or introduce any technical novelties, apart from applying an existing technique to a somewhat different context.
- Experimental evaluation does not include any ablations (on architecture / objective / context length) - and thus is not particularly informative.
Other Comments Or Suggestions: I find it strange that no visual results are provided in a supplementary.
Questions For Authors: - How critical is a replay buffer?
- Would the choice of the underlying architecture matter?
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Before addressing specific concerns, we note that several reviewers have entirely missed the Google Drive links [[a](https://drive.google.com/drive/folders/1ToqSvdFsXJm0UqJZlRURI1uwsIvbHYHn),[b](https://drive.google.com/drive/folders/1IopUyb98v0ybqlaCtimayc9RXnMG63Y-),[c](https://drive.google.com/drive/folders/1Lz54gCnavXmsoFYv9EQEcd0n4mIt91Gi),[d](https://drive.google.com/drive/folders/19CQQShbv3dm04kVk5n4Xf8DvfmJo-9uR)] (Section 5 Footnotes 2,3,4,5) containing video samples from Lifelong and Offline Learning models for every dataset (lower quality videos viewable online [here](https://drive.google.com/drive/folders/1nlc1NZG8lFZIE7w0hYsa2Np-YXjKjSJl)). Some reviewers also missed the reference to the quantitative results for two more baselines in the appendix for every dataset (Section 5 Line 398, Appendix D). These oversights have led to lower scores and skepticism regarding our main claim—that lifelong learning of video diffusion models is not only feasible but can achieve performance comparable to offline training given the same number of gradient steps. We kindly ask that you visit these resources should you question the validity of our claim and re-evaluate your scores in light of these results.
To substantiate our main claim even further, we have incorporated new results on a real-world driving dataset despite operating within the limits of resources available to an academic lab. Please refer to the rebuttal addressed to Reviewer KLJW for the experiment setup and qualitative and quantitative results. These results support the validity of our claim on complex real-world video data.
We now address specific questions and concerns.
> The only baseline is the iid method and no ablation study is present (related: how critical is the replay buffer?).
We summarize the importance of replay buffer in the main text (Section 5.4, Line 398) where we point the readers to Appendix D where, for all datasets, we report and elaborate on the quantitative results for two additional baselines: naive sliding window lifelong learning that ablates replay loss from the replay objective (No Replay) and unlimited memory experience replay (Full Replay). Additional replay buffer size results for Lifelong 3D Maze are presented in Table 8. Furthermore, Appendix F and G respectively showcase how fast different baselines can learn to perform well on the test stream and how much forgetting affects the baselines’ final models on the train stream for all datasets.
> The paper demonstrates its finding on a single architecture. Would the choice of the underlying architecture affect the results?
Given related work [1] demonstrating that both U-Net and Transformer-based non-generative models can be trained online on a data stream made by stitching together short, unrelated videos, we expect the choice of architecture not to significantly affect the results. We underscore that we have demonstrated that lifelong-learned video diffusion models with tens of millions of parameters can achieve performance comparable to offline learning across multiple datasets and two parameter sizes, a finding that was not previously evident.
> This work does not claim or introduce any technical novelties, apart from applying an existing technique to a somewhat different context.
We believe that the machine learning community will nonetheless benefit from being aware of our novel investigation into a different kind of video generative model training regime, for both its carefully designed datasets and its surprising findings. Our streaming learning setup is particularly relevant to the development of lifelong-learnable vision-based embodied AI that can update its generative predictive model in real time. In addition, although our focus is on investigating a new problem setup, to our knowledge no prior work has introduced batch-level duplication of the latest sliding window with different noising levels for better estimation of the streaming diffusion loss in Equation (3).
---
We appreciate that the reviewer finds the key result of the paper interesting. We hope that our video model samples, additional baselines, analysis, and results will help the reviewer share Reviewer YRxK’s sentiment that our results strongly support the claim that training autoregressive video diffusion models from a single video stream can be as effective as offline training given the same number of gradient steps.
[1] Carreira, J., King, M., Patraucean, V., Gokay, D., Ionescu, C., Yang, Y., Zoran, D., Heyward, J., Doersch, C., Aytar, Y., Damen, D., and Zisserman, A. Learning from one continuous video stream, 2024. | Summary: This study shows that autoregressive video diffusion models can be effectively trained from a single, continuous video stream, matching the performance of standard offline methods given the same number of gradient steps. The key lies in using experience replay that retains only a subset of preceding frames. Additionally, the authors introduce three new lifelong video model learning datasets—Lifelong Bouncing Balls, Lifelong 3D Maze, and Lifelong PLAICraft—each containing over a million consecutive frames of increasing complexity.
Claims And Evidence: I don't think the claims in the submission are supported by clear and convincing evidence, due to unclear and extreme experiment settings. Please see the weaknesses section for details.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: NO Theoretical Claims.
Experimental Designs Or Analyses: Yes. I checked the experiments, but I don't think they are clear and sound. Please see the weaknesses section for details.
Supplementary Material: NO Supplementary Material provided.
Relation To Broader Scientific Literature: If these claims were substantiated by validated experiments, then lifelong learning for training a video diffusion model could make a valuable contribution to the broader scientific literature; however, that unfortunately is not the case.
Essential References Not Discussed: Missing several important references.
1. Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion. NeurIPS
2. GameGen. ICML
3. Genie. ICML
Other Strengths And Weaknesses: Strengths
The idea is interesting and novel.
The writing is clear and easy to follow.
Weaknesses
It appears that t in Equation (3) for lifelong learning exactly matches the total duration of the training video. In the comparison between lifelong learning and offline training, does i in Equation (2) cover the entire duration of the training video despite being randomly selected? Ensuring this would make the comparison fair.
Since the right-hand side of Equation (4) is also randomized, if the training steps greatly exceed the actual t in the training video (e.g., by a factor of 10), there is no difference between the lifelong loss and the offline loss. How does the comparison between lifelong and offline methods hold under these much longer training steps?
Most open-sourced video models currently exceed 500M parameters. It is unclear whether the same outcomes would apply for larger video U-Nets (beyond 10M or 100M parameters) rather than the smaller ones used in the paper.
Several recent papers (e.g., Diffusion Forcing, Genie, Genie 2, GameGen) demonstrate that offline training for autoregressive + diffusion models (conditioning on past frames) can generate robust, infinite single-game simulations, which are more complex than the datasets presented here. These relevant works are missing from the paper.
There is a lack of visual results, which are essential for a video-focused study.
Other Comments Or Suggestions: NA
Questions For Authors: Please see weakness part
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Before addressing specific concerns, we note that several reviewers have entirely missed the Google Drive links [[a](https://drive.google.com/drive/folders/1ToqSvdFsXJm0UqJZlRURI1uwsIvbHYHn),[b](https://drive.google.com/drive/folders/1IopUyb98v0ybqlaCtimayc9RXnMG63Y-),[c](https://drive.google.com/drive/folders/1Lz54gCnavXmsoFYv9EQEcd0n4mIt91Gi),[d](https://drive.google.com/drive/folders/19CQQShbv3dm04kVk5n4Xf8DvfmJo-9uR)] (Section 5 Footnotes 2,3,4,5) containing video samples from Lifelong and Offline Learning models for every dataset (lower quality videos viewable online [here](https://drive.google.com/drive/folders/1nlc1NZG8lFZIE7w0hYsa2Np-YXjKjSJl)). Some reviewers also missed the reference to the quantitative results for two more baselines in the appendix for every dataset (Section 5 Line 398, Appendix D). These oversights have led to lower scores and skepticism regarding our main claim—that lifelong learning of video diffusion models is not only feasible but can achieve performance comparable to offline training given the same number of gradient steps. We kindly ask that you visit these resources should you question the validity of our claim and re-evaluate your scores in light of these results.
To substantiate our main claim even further, we have incorporated new results on a real-world driving dataset despite operating within the limits of resources available to an academic lab. Please refer to the rebuttal addressed to Reviewer KLJW for the experiment setup and qualitative and quantitative results. These results support the validity of our claim on complex real-world video data.
We now address specific questions and concerns.
> The t in Equation (3) for lifelong learning exactly matches the total duration of the training video. Does i in Equation (2) cover the entire duration of the training video despite being randomly selected? Ensuring this would make the comparison fair.
Thank you for the great question. Equation (3)’s t matches the number of training frames observed so far by the model. Equation (2)’s i covers the entire duration of the training video in our experiments to ensure that the comparison is fair. We will clarify this in the camera-ready draft.
> If the training steps exceed the actual t in the training video, there is no difference between the lifelong and offline losses. How does the comparison between the two methods hold under these much longer training steps?
Because our lifelong learning setup requires at least one minibatch index at every timestep to be a real-time video frame, Lifelong Learning and Offline Learning can no longer be compared once we have sequentially performed a single gradient step for all sliding windows of the video stream.
> Will the same outcome apply for larger video U-Nets?
While larger models could not be evaluated in this preliminary investigation due to compute restrictions, given DeepMind’s recent work [1] showing that online-learned discriminative models with 8 and 350 million parameters can match IID-trained model performance, we expect the same outcome to apply to larger video diffusion models. We note that our diffusion models have the same parameter count as the original paper's models [2]. Regardless, we have shown that lifelong learning can achieve performance comparable to offline learning across a wide range of datasets.
> Recent papers demonstrate that offline training for autoregressive diffusion can generate robust infinite single-game simulations for complex datasets. These works are missing from the paper.
We believe that lifelong learning of advanced diffusion models capable of robustly generating long videos is an exciting next step and have included these references in the updated paper. We note that even the largest of these models, Genie 2, only stably generates videos up to a minute [3].
---
We appreciate that the reviewer believes that the lifelong learning of video diffusion models is novel and can make a valuable contribution to the literature. We hope that our qualitative results, which highlight the indistinguishability of lifelong and offline learning samples, together with the comparison-fairness clarifications, will help the reviewer share Reviewer YRxK’s sentiment that our results strongly support the claim that training autoregressive video diffusion models from a single video stream is not only possible but can also be as effective as offline training given the same number of gradient steps.
[1] Carreira, J., King, M., Patraucean, V., Gokay, D., Ionescu, C., Yang, Y., Zoran, D., Heyward, J., Doersch, C., Aytar, Y., Damen, D., and Zisserman, A. Learning from one continuous video stream, 2024.
[2] Harvey, W., Naderiparizi, S., Masrani, V., Weilbach, C., & Wood, F. (2022). Flexible diffusion modeling of long videos. Advances in Neural Information Processing Systems, 35, 27953-27965.
[3] Genie 2: A large-scale foundation world model. (2025, March 25). Google DeepMind. deepmind.google/discover/blog/genie-2-a-large-scale-foundation-world-model/
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttals. However, comparing models under limited training steps isn’t a fair evaluation. In most cases, the offline model can be trained for more steps. The scalability is still unproven, so I’ve decided to keep my original rating. | null | null | null | null | null | null |
Bivariate Causal Discovery with Proxy Variables: Integral Solving and Beyond | Accept (poster) | Summary: This paper aims to test the conditional independence relation 𝑋⊥𝑌∣𝑈 in the presence of an unobserved latent confounder U. Due to the unobservability of U, this conditional independent relation cannot be directly tested. The authors show that conditional independence can still be assessed using a proxy variable Z or W, thereby transforming the hypothesis test into solving an integral equation and evaluating whether the residuals are zero. The paper provides theoretical results on the solution of such an integral equation and an asymptotic analysis of the corresponding test statistic. These theoretical contributions extend previous discretization-based methods to a more robust approach.
Claims And Evidence: The claims are generally well-supported by theoretical analysis.
Methods And Evaluation Criteria: The proposed method is reasonable and well-motivated.
Theoretical Claims: The theoretical claims are clearly presented, and the proofs are well-structured.
Experimental Designs Or Analyses: The experimental design and analyses are generally sound.
Supplementary Material: The supplementary material provides detailed proofs for the theoretical results. I read the part of them.
Relation To Broader Scientific Literature: The proposed method contributes to the problem of CI testing in the presence of latent confounders.
Essential References Not Discussed: The author gives a well-reviewed on the related literature.
Other Strengths And Weaknesses: Strengths:
1. This paper addresses an important yet challenging problem in causal discovery involving latent confounders.
2. The proposed method is supported by rigorous theoretical analysis.
3. Experimental results demonstrate that the proposed approach outperforms state-of-the-art methods.
Weaknesses:
1. The method relies on a partially known prior structure, specifically that U causes both X and Y, and that Z and W serve as proxy variables for U. This assumption may be restrictive, as in practice, we may not have direct knowledge of the existence of U or whether Z and W are valid proxies.
2. The asymptotic properties of the proposed method depend on a set of assumptions, which may limit its applicability in real-world scenarios.
Other Comments Or Suggestions: NAN
Questions For Authors: 1. The uniqueness of the solution (not only the existence) to the integral equation seems crucial for the hypothesis test. Could the authors provide an intuitive explanation or illustration of its role?
2. If U is a vector rather than a single variable, does the proposed theoretical result still hold?
3. The theoretical contribution is intriguing! I am curious whether the proposed method can be viewed as a generalization of Tetrad constraints when considering Z and W as proxy variables.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate your efforts and valuable suggestions in reviewing our paper. We address your concerns below.
**Q1.** The method relies on a partially known prior structure, specifically that $U$ causes both $X$ and $Y$, and that $Z$ and $W$ serve as proxy variables for $U$. This assumption may be restrictive, as in practice, we may not have direct knowledge of the existence of $U$ or whether $Z$ and $W$ are valid proxies.
**A.** This is a standard setting in proximal causal inference. Indeed, since the observational causal inference/discovery typically involves unobserved measurements, it is essential to introduce substitution variables, such as instrumental variables and proxy variables, as considered here. In many scenarios, these assumptions naturally hold and such proxy variables are easy to obtain, for example, they can be noisy measurements of the confounders [1] or as descendants in time-series data [2].
**Q2.** The asymptotic properties of the proposed method depend on a set of assumptions, which may limit its applicability in real-world scenarios.
**A.** The key assumption is completeness, which is standard in proximal causal inference and easily holds in our scenario. The others are regularity conditions that are also standard for kernel regression.
**Q3.** The uniqueness of the solution (not only the existence) to the integral equation seems crucial for the hypothesis test. Could the authors provide an intuitive explanation or illustration of its role?
**A.** As claimed in lines 168-176 (right column), uniqueness is not required in our procedure, since our goal is to determine whether a solution exists. Among all solutions to the integral equation, our estimate converges to the least-norm solution (Lemma D.19 in Appendix D.5).
**Q4.** If $U$ is a vector rather than a single variable, does the proposed theoretical result still hold?
**A.** Our procedure remains valid even when $U$ is a high-dimensional variable, provided the proxy variable $W$ satisfies the completeness condition (Assumption 4.1). This implies that the dimension of $W$ is greater than that of $U$; when this holds, completeness is easy to satisfy.
**Q5.** The theoretical contribution is intriguing! I am curious whether the proposed method can be viewed as a generalization of Tetrad constraints when considering $Z$ and $W$ as proxy variables.
**A.** Thank you for your insightful question. Our method can be viewed as a generalization of Tetrad constraints. The tetrad constraint was introduced to test whether $X, Y, Z, W$ are conditionally independent given $U$ under linear models. Specifically, the constraint is:
$$
\mathrm{cov}( X,Y ) \mathrm{cov}( Z,W )= \mathrm{cov}( X,Z ) \mathrm{cov}( Y,W )= \mathrm{cov}( X,W ) \mathrm{cov}( Y,Z ).
$$
When proxies $W,Z$ are assumed to satisfy $W \perp Y|U$ and $Z \perp X|U$, the constraint degenerates to:
$$
\mathrm{cov}( X,Y ) \mathrm{cov}( Z,W )= \mathrm{cov}( X,W ) \mathrm{cov}( Y,Z ),
$$
and it can be used to test $\mathbb{H}_0: X \perp Y|U$. In contrast, our constraint for testing $\mathbb{H}_0$ is formulated by integral equations (1) and (10). Under the linear Gaussian setting with standard normal exogenous noise, our constraint gives rise to the degenerate constraint above. Specifically, suppose $U \sim \mathcal{N}(0,1)$, $W=\mu_UU +\varepsilon_W$, $Z=\beta_UU +\varepsilon_Z$, $X=\alpha_UU +\varepsilon_X$, and $Y=\gamma_UU+\varepsilon_Y$, where $\varepsilon_W,\varepsilon_Z, \varepsilon_X, \varepsilon_Y$ are standard normal. We have $h(W) = \frac{\gamma_U}{\mu_U}W$. Since $\mathrm{cov}( X,Y ) = \alpha_U \gamma_U$, $\mathrm{cov}( Z,W ) = \beta_U \mu_U$, $\mathrm{cov}( X,W ) = \alpha_U \mu_U$, and $\mathrm{cov}( Z,Y ) = \beta_U \gamma_U$, we have $\mathrm{cov}( X,Y ) \mathrm{cov}( Z,W )=\gamma_U \alpha_U \mu_U \beta_U = \mathrm{cov}( X,W ) \mathrm{cov}( Y,Z )$.
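As an illustrative sanity check (ours, not from the paper or the rebuttal), the degenerate tetrad constraint can be verified numerically by simulating the linear Gaussian model above; the coefficient values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# Arbitrary coefficients for the linear Gaussian model sketched above
alpha_U, beta_U, gamma_U, mu_U = 0.8, 1.2, 0.5, 0.9

U = rng.standard_normal(n)
W = mu_U * U + rng.standard_normal(n)
Z = beta_U * U + rng.standard_normal(n)
X = alpha_U * U + rng.standard_normal(n)
Y = gamma_U * U + rng.standard_normal(n)  # no direct X -> Y effect, so H0 holds

def cov(a, b):
    """Sample cross-covariance of two 1-D arrays."""
    return np.cov(a, b)[0, 1]

lhs = cov(X, Y) * cov(Z, W)  # population value: alpha*gamma*beta*mu = 0.432
rhs = cov(X, W) * cov(Y, Z)  # same population value under H0
```

With `n` this large, `lhs` and `rhs` agree up to small sampling error, matching the closed-form products of coefficients.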
[1] Kuroki, M. and Pearl, J. Measurement bias and effect restoration in causal inference. Biometrika, 101(2):423-437, 2014.
[2] Liu, M., Sun, X., Hu, L., and Wang, Y. Causal discovery from subsampled time series with proxy variables. Advances in neural information processing systems, 2023.
---
Rebuttal Comment 1.1:
Comment: Thank you for the author response—The rebuttal addressed my questions.
Maybe one can establish the graphical implication of your "generalized Tetrad constraints", which would be interesting in the causal discovery task. I believe this is an important contribution of this work.
Overall, I will keep my score leaning towards acceptance. | Summary: The paper concerns bivariate causal discovery in the presence of unobserved confounders with the assumption that certain "proxy variables" are observed. Existing literature translates the absence of a direct effect of treatment on outcome to the existence of a solution to a certain integral equation and proposes parametric approaches that involve discretization for continuous variables given that a proxy variable that affects the outcome is observed. This paper proposes a nonparametric approach to test the existence. While the integral equation is only a necessary condition (under certain assumptions) for the absence of a direct effect, the paper shows that the integral equation is not sufficient in the linear models case. By assuming that another proxy variable that affects the treatment, they show that a condition involving the existence of a solution for a modified integral equation is necessary and sufficient for the absence of direct effect.
Claims And Evidence: Most of the theoretical claims are clear in the statement but I have not checked the proofs. There are some unsubstantiated claims.
1) For example, in Section 3, lines 125-127 state that the proposed test is sample-efficient compared to discretization. I don't see any evidence, in the form of a comparison, supporting this claim.
2) "Besides our power approximates to one as n increases" (line 422) - Figure 3b does not show the same.
Methods And Evaluation Criteria: The paper draws heavily from previous experimental setups in Liu et. al. 2023 which makes comparison easier. However, it is to be noted that these are synthetic datasets and perhaps evaluations on real-world datasets could make the paper stronger.
Theoretical Claims: No
Experimental Designs Or Analyses: 1) I am not sure why the orange line in Figure 3 does not achieve type I error level in the paper's experiments whereas it does in Liu et. al. 2023. Apart from the sqrt function, the function classes and the noise types are the same in both papers.
Supplementary Material: No
Relation To Broader Scientific Literature: This work continues the recent thread of research in proximal causal inference where the exchangeability assumption is relaxed by assuming that certain proxy variables are observed. The absence of a direct effect is then translated in terms of conditions on joint distribution of the treatment, outcome and proxy variables. Recently, Miao et. al. 2023 introduced one such condition that concerned the existence of a solution of an integral equation but used a parametric approach to testing. This work introduces a nonparametric approach.
Essential References Not Discussed: None that I am familiar with.
Other Strengths And Weaknesses: The paper finds an appropriate set of well-motivated assumptions under which a non-parametric test is constructed for bivariate causal discovery assuming access to a proxy variable that affects the outcome. It seems believable that under the nonparametric setup, nonidentifiability is an issue which the paper shows is true even under a linear model. Despite this a condition is proposed under which identifiability is restored. The paper combines multiple ideas that come together for the non-parametric test such as using the characteristic function instead of first order moments like previous work, using a weight function to convert the conditional restriction to an unconditional one.
Regarding weaknesses, 1) I think the writing of the paper is a major weakness and needs to be improved greatly. Currently, the flow of ideas is abrupt and a few claims/choices don't seem to be well-explained in the text. For e.g., Pg 4, the part before "Equivalently speaking" does not seem to be relevant to the final estimate $\hat{H}^{\lambda}(w,t)$, the regularization in (5) is not motivated, the term 'bridge function' is never defined in line 172 (see questions for more such instances).
2) Through each successive approximation, the null hypothesis is being enlarged. While the paper provides a power analysis, it was not clear to me whether there are any finite-sample guarantees for type I error control, which makes the earlier claim about sample efficiency untestable.
3) Some claims don't seem to be supported by evidence (see claims and evidence section)
Other Comments Or Suggestions: 1. Line 101 - "allowing test" should be "allowing to test"?
2. Please specify $\mathbb{H}_1$ in line 118.
Questions For Authors: 1) Does (4) hold if (1) does not hold under the assumptions in Thm 4.5?
2) Like assumption 4.1, does assumption 6.1 say that all variability in U is captured by X? That seems like a very strong assumption and makes a "confounder" immaterial.
3) "There exists $H^0(w,t)$" (line 234). Could you explain why this is a "there exist" statement. The motivation behind this form of the local alternative seems unclear.
4) It is not clear to me why, in Line 129, you need h(w,y) to be square-integrable. The text prior does not seem to motivate this requirement.
5) What does "feasibility of solutions" mean in Line 204? Feasibility in what sense?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We appreciate your efforts and suggestions in reviewing our paper. We address your concerns below.
**Q1.** About the type I error level in Liu's paper.
**A.** Liu's type-I error control requires a Lipschitz-smooth function, but sqrt does not satisfy this. To make the condition hold, we conduct experiments with the sigmoid function. As shown, Liu's method approaches the nominal type-I error level as $n$ increases. It is also seen that our method has better sample efficiency than theirs.
|Sample|200|400|600|800|1000|1200|
|-|-|-|-|-|-|-|
|Our|0.06|0.05|0.05|0.06|0.05|0.05|
|Liu|0.15|0.13|0.09|0.09|0.08|0.08|
**Q2.** No evidence to support sample efficiency.
**A.** The table above shows our method's superior sample efficiency in type-I error control. As claimed in lines 107-109, this is because Liu's method requires the bin number to go to infinity to make the discretization error vanish.
**Q3.** The power does not approach to one in Fig. 3b.
**A.** We further conduct experiments as $n$ increases. Combined with Fig. 3b, they show a clear trend of power approaching one. When $n=4000$, the power is 0.97.
|Sample|2000|3000|4000|
|-|-|-|-|
|Power|0.89|0.94|0.97|
**Q4.** About definitions and claims.
**A.** **About $\hat{H}^\lambda(w,t)$ and its relation to the paragraph before "Equivalently speaking".** As mentioned in lines 201-211, $\hat{H}^\lambda(w,t)$ is the minimizer of (5), which is designed to recover the least-norm solution introduced in lines 176-181. In lines 182-208, we explain how the risk (5) is derived: we first introduce the risk in lines 192-193, followed by its equivalent form in lines 197-198. Indeed, the risk (5) is its empirical version, augmented with a regularization term. Additionally, due to space limitations in the main text, we provide a more detailed explanation in Appx. C.4-C.5. We will offer further clarifications in the updated version.
**About the motivation of Tikhonov regularization.** The regularization provides a way to solve ill-posedness problems and is commonly adopted in the literature of kernel regression. We have explained in more detail in Appx. C.5.
**About the bridge function.** The bridge function is the solution to the integral equation used in proximal causal inference, serving a similar role to $H$ in (4). It is a fundamental concept in proximal causal inference, and we have provided references in lines 167-168 due to space limits.
**Q5.** About power analysis after successive approximation.
**A.** Our power analysis focuses on alternatives in (4), which are defined based on characteristic restrictions. By utilizing the characteristic function, our approach demonstrates significantly better power compared to first-order moment methods in the literature (Fig. 4).
**Q6.** Does (4) hold if (1) does not hold under the assumptions in Thm 4.5?
**A.** We are sorry that we're unsure if we fully understand the question. Under assumptions in Thm. 4.5, both (1) and (4) should hold, as (1) is the conclusion of Thm. 4.5, and (4) naturally follows if (1) holds. We would appreciate your clarification if we've misunderstood.
**Q7.** Does assump. 6.1 say that all variability in U is captured by X? That seems to make a "confounder" immaterial.
**A.** Yes, it means the variability in $U$ is captured by $X$. However, we want to clarify that this assumption easily holds and does not make $U$ immaterial.
First, the assumption applies to a wide range of models, as long as the dimension of $X$ is greater than that of $U$. Under this condition, [Andrews et al. 2017] demonstrated that completeness generically holds. Even when this condition holds, the confounding problem still matters, as it also causes bias: completeness does not imply that $p(x)$ determines (up to transformation) $p(x,u)$ (this is known as the "equivalence condition" in Miao et al. 2023), and hence it does not determine the distribution $p(u)$. In this regard, we still cannot determine whether $p(y|do(x)) \neq p(y|x)$ when $U$ is unobserved.
**Q8.** About "there exist" statement in $H^0(w,t)$" (line 234).
**A.** A local alternative refers to alternatives that are very close to the null hypothesis. That is, it adds a small deviation $r(X)/n^\alpha$ ($\alpha > 0$) to some $H^0$. To ensure that it does not degenerate to $\mathbb{H}_0$, we require that $r(X)/n^\alpha$ cannot be written as $E(H-H^0|X)$ for any $H$, as claimed in lines 239-241.
**Q9.** Why need h(w,y) to be square-integrable?
**A.** If $h(w, y)$ is not square-integrable, it cannot be solved using regression methods, as regression approaches minimize the squared loss that involves the second-order moment of the solution. Besides, our method and analysis are built upon the completeness assumption, which is fundamental in proximal causal inference and is imposed on the square-integrable function class.
**Q10.** What does "feasibility of solutions" mean in Line 204?
**A.** It refers to the capability of regression methods to achieve consistency for integral solving (Lines 200-203). | Summary: The paper proposes a nonparametric procedure for bivariate causal discovery for determining $X \perp Y \mid U$, where $U$ is an unmeasured confounder of $X$ and $Y$. It introduces the Proxy Maximum Characteristic Restriction (PMCR) method to solve an integral equation where a proxy variable is available to determine $X \perp Y \mid U$. The paper also introduces a second proxy (NCE) and derive an additional integral equation for identifiability. Theoretical results establish asymptotic validity and power, while experiments demonstrate improved type-I error control and power compared to baselines in synthetic settings.
Claims And Evidence: Yes, claims are backed up by theoretical statements (e.g., Thm. 4.5) and experimental results.
Methods And Evaluation Criteria: Synthetic datasets generated under varying conditions (with/without direct proxy effects) are used to evaluate type-I error and power. The evaluation criteria are appropriate for assessing causal discovery methods in the bivariate setting.
Theoretical Claims: Yes, I checked the proof of Theorem 4.5 in Appendix B.
Experimental Designs Or Analyses: Yes, the ones in the main text.
Supplementary Material: Yes, Appendix A and B.
Relation To Broader Scientific Literature: It extends prior work by addressing the sample efficiency limitations of discretization and by clarifying non-identifiability conditions when using a single proxy.
Essential References Not Discussed: None to the best of my knowledge.
Other Strengths And Weaknesses: **Strengths**
- Comprehensive theoretical analysis with detailed asymptotic results.
- Clear identification and resolution of non-identifiability issues via a second proxy.
- Thorough experimental results (albeit all synthetic).
**Weaknesses**
- The paper is very dense and details are hard to grasp without reading parts of the 45+ page appendix!
Other Comments Or Suggestions: 1. In line 289 it should read "bootstrapped statistic."
2. The numbering of lemmas in the main text and the appendix is inconsistent (e.g., Theorem 4.5 in the main text is Theorem B.4 in the appendix). Consider using a LaTeX package like `restatable` to maintain consistent numbering.
Questions For Authors: 1. How sensitive is PMCR to kernel and bandwidth choices?
2. What is the computational efficiency of the proposed approach? How well does it scale to larger datasets and/or high-dimensional variables?
3. Are the non-identifiability results (Prop. 5.1) generalizable beyond linear models?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the positive assessment and valuable suggestions about our paper. We address your questions below. We will correct typos and make the number of lemmas consistent in the updated version.
**Q1.** How sensitive is PMCR to kernel and bandwidth choices?
**A.** We would like to clarify that PMCR requires kernels to be bounded, continuous, and integrally strictly positive definite (Assumptions C.2 and C.3). Typical types of kernels satisfying these conditions include Gaussian RBF and Laplacian kernels. We adopt the Gaussian kernel because it is commonly used in the literature. Its bandwidth is typically initialized via the median distance heuristic, ensuring robust performance.
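As a minimal sketch (ours, not code from the paper) of the median distance heuristic mentioned above, assuming a Gaussian RBF kernel $k(x, x') = \exp(-\|x-x'\|^2/(2h^2))$:

```python
import numpy as np

def median_heuristic_bandwidth(X):
    """Median of pairwise Euclidean distances: a standard default
    bandwidth h for the Gaussian RBF kernel exp(-||x-x'||^2 / (2 h^2))."""
    if X.ndim == 1:
        X = X[:, None]  # treat 1-D input as n samples of a scalar variable
    dists = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    off_diag = dists[~np.eye(X.shape[0], dtype=bool)]
    return np.median(off_diag)

samples = np.random.default_rng(0).standard_normal((100, 2))
h = median_heuristic_bandwidth(samples)
```

Because the heuristic scales with the typical distance between samples, it adapts the kernel to the data's scale without tuning, which is one reason it tends to give robust performance.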
**Q2.** What is the computational efficiency of the proposed approach? How well does it scale to larger datasets and/or high-dimensional variables?
**A.** PMCR has a time complexity of $ O(pn^3) $, where $ n $ is the sample size and $ p $ is the dimension of $ W $. The complexity is linearly proportional to $ p $ because, for multivariate $ W $, the kernel can be constructed as the product of scalar kernels applied to each input dimension $ w $, i.e., $ k(w, w') = \prod_{j=1}^{p} k(w[:, j], w[:, j]') $, where $ w[:, j] $ denotes the $ j $-th feature. The complexity with respect to $ n $ can be reduced using standard Cholesky techniques when computing the inverse. We record the running time of a single trial as $ n $ varies (when $ W $ is univariate). As shown, the actual running time increases much more slowly than $ O(n^3) $.
| Sample size $n$ | 200 | 400 | 600 | 800 | 1000 | 1200 |
|----------------------|------|------|------|------|------|------|
| Computational time (s) | 0.39 | 0.97 | 2.64 | 5.62 | 9.65 | 14.84 |
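As an illustrative sketch of this construction (our own simplified code, not the PMCR implementation, and the function names `median_bandwidth`, `rbf`, and `product_kernel` are ours), using scalar Gaussian kernels with median-heuristic bandwidths multiplied across coordinates:

```python
import math
from itertools import combinations
from statistics import median


def median_bandwidth(xs):
    """Median distance heuristic: bandwidth = median pairwise distance."""
    return median(abs(a - b) for a, b in combinations(xs, 2))


def rbf(x, y, bw):
    """Scalar Gaussian RBF kernel with bandwidth bw."""
    return math.exp(-((x - y) ** 2) / (2.0 * bw ** 2))


def product_kernel(w, w_prime, bws):
    """Kernel for multivariate W as a product of scalar kernels,
    one per coordinate: k(w, w') = prod_j k(w[:, j], w'[:, j])."""
    return math.prod(rbf(a, b, s) for a, b, s in zip(w, w_prime, bws))
```

The product structure is what keeps the dependence on $p$ linear: each coordinate contributes one scalar kernel evaluation.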
**Q3.** Are the non-identifiability results (Prop. 5.1) generalizable beyond linear models?
**A.** We believe that nonlinear models also suffer from the non-identifiability issue when $W$ has a strong effect on $Y$, but the theoretical analysis is challenging as there is no closed-form solution for non-linear models.
To empirically verify this point, we consider a non-linear setting: $U \sim \mathcal{N}(0,1)$, $W=U +\varepsilon_W$, $X=U +\varepsilon_X$, and $Y=X^2+\gamma_W W^2+U+\varepsilon_Y$, where $\varepsilon_W, \varepsilon_X, \varepsilon_Y$ are standard normal. We generate $n=800$ samples and report the average power over 20 repetitions. The power decreases as $\gamma_W$ increases, which suggests the difficulty of identifying the causal relation when $W$ strongly affects $Y$.
| $\gamma_W$ | 0 | 0.5 | 1 | 3 | 5 | 7 | 9 | 11 |
|---------------|------|------|------|------|------|------|------|------|
| Power | 0.85 | 0.88 | 0.85 | 0.68 | 0.58 | 0.32 | 0.14 | 0.07 |
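The data-generating process of this experiment can be sketched as follows (the power computation itself requires running the PMCR test and is omitted; `simulate_scm` is a name we introduce):

```python
import random


def simulate_scm(n, gamma_w, seed=0):
    """Draw n samples (W, X, Y) from the nonlinear SCM above:
    U ~ N(0,1), W = U + eps_W, X = U + eps_X,
    Y = X^2 + gamma_w * W^2 + U + eps_Y,
    with eps_W, eps_X, eps_Y standard normal."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        u = rng.gauss(0.0, 1.0)
        w = u + rng.gauss(0.0, 1.0)
        x = u + rng.gauss(0.0, 1.0)
        y = x ** 2 + gamma_w * w ** 2 + u + rng.gauss(0.0, 1.0)
        samples.append((w, x, y))
    return samples
```

Increasing `gamma_w` inflates the $W^2$ term in $Y$, which is the regime where the rebuttal reports the test power dropping.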
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and for adequately addressing my questions and comments. | null | null | null | null | null | null | null | null |
Randomized Dimensionality Reduction for Euclidean Maximization and Diversity Measures | Accept (poster) | Summary: The paper studies several network design problems, including maximum weight matching, maximum TSP, and subgraph diversity on Euclidean doubling space.
In brief, the paper shows that Gaussian JL dimensionality reduction with $O(\lambda \log(1/\epsilon) / \epsilon^2)$ dimensions, where $\lambda$ is the doubling dimension of the Euclidean space, suffices to achieve a $(1 + \epsilon)$-approximation for these problems. The paper also develops lower bounds showing that the dependence on $\log(1/\epsilon) / \epsilon^2$ is essential for $(\sqrt{2} - \epsilon)$-approximations.
Previously, similar theoretical results (e.g., reduced dimensions dependent on $\lambda$) exist on approximate near neighbor search (Indyk & Naor 07) and clustering (Narayanan et al. 21, Jiang et al. 24).
Though it studies several related problems, the main upper bound result for the maximum weight matching problem (Lemma 2.2), i.e., that a reduced dimension of $O(\lambda / \epsilon^2)$ suffices to achieve a $(1 + \epsilon)$-approximation, is the key tool for showing similar results for the other problems in Table 1, including max TSP, max k-hyper-matching, max spanning tree, and max k-coverage.
Claims And Evidence: Unfortunately, I do not have expertise in these computational problems or in the difficulty of proving their approximation guarantees, so I could not judge the significance of the contributions of this work.
However, I found an issue while trying to understand the proofs of Theorem 2.1.
The max matching found on $G(P)$ differs from that found on $P$, as the random projection matrix $G$ distorts the pairwise distances within a $(1 \pm \epsilon)$ factor (say).
Hence, I do not understand how $opt(G(P)) \geq cost(G(S))$ given that $S$ is the optimal solution in $P$ but $G(S)$ is not the optimal solution of $G(P)$ (Line 120).
Methods And Evaluation Criteria: The empirical results on max matching compare the quality and running time of the max-matching solver on the original data vs. the reduced-dimension data. That makes sense, since the contribution of the paper is to show that we can achieve similar results in the reduced-dimensional space.
Theoretical Claims: I have not checked the proofs since I do not have expertise in these problems.
Experimental Designs Or Analyses: The empirical design is quite straightforward and the size of the data is small, $n = 1000$.
Since max-matching solvers have high complexity in $n$ (NP-hard in geometric spaces?), I wonder about the significance of the reduced-dimensionality method vs. fast approximation algorithms for the max-matching problem that run in $poly(n)$ time.
Supplementary Material: None
Relation To Broader Scientific Literature: I do not know, since I am not familiar with these problems or the hardness of achieving a $(1 + \epsilon)$-approximation.
Essential References Not Discussed: None
Other Strengths And Weaknesses: I lean toward weak reject, as I am uncertain about the significance of the contribution. I will engage in the discussion to understand the problem better and will change my decision based on other reviewers' comments.
---------------
I change the score to Weak Accept after reading the rebuttal messages.
Other Comments Or Suggestions: As several theoretical results in the paper are derived from the maximum matching problem in Euclidean space, defining the max-matching problem and its $(1 + \epsilon)$-approximation would be helpful to readers.
Lemma 2.4: "a data set $P$ with radius $r$"? Perhaps this should read "$P$ covered by a ball of radius $r$".
Theorem 3.5: $GP$ should be $G(P)$
Theorem 4.1: $g: R^d \mapsto R$ should be $g: R^d \mapsto R^t$
One of the highlight contributions is showing the existence of a randomized dimensionality reduction for diversity maximization, why would this result be placed in the appendix?
Questions For Authors: Q1) What are the definitions of a max matching and its $(1 + \epsilon)$-approximation?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading and comments. We address the weaknesses they mentioned below.
**Significance of the contribution:** We give an example for obtaining a significantly faster algorithm for estimating weighted matchings, but similar examples exist for all the other problems considered here. Given a weighted graph with $n$ nodes and $m$ edges, one can estimate the size of a matching up to a $(1+\varepsilon)$ factor in time $O(m+n)$, ignoring lower order terms [1]. In Euclidean spaces, this is always $O(n^2)$, as all edges exist and computing all of these edges takes time $O(n^2d)$. This is clearly not a linear time algorithm as the input has only size $O(nd)$. A fast alternative using coresets exists [2], but it requires time $O(n\cdot\exp(d))$, which for worst case instances does not offer an improvement even when using previous dimension reduction bounds. Specifically, our lower bound shows that we can only replace $d$ with $O(\log n)$ in the worst case. However, if the doubling dimension is constant, we can use the coreset in combination with our new dimension reduction bounds to obtain a linear time algorithm, which to the best of our knowledge was not previously known.
In general, we prove that for a wide range of optimization problems, any dependency on $d$ may be replaced with a dependency on the doubling dimension. This almost always speeds up an algorithm, as the running time of the random projection is inexpensive. But it is particularly useful if the algorithm runs in time $\exp(d)$, as illustrated for the matching example above. Such examples are common in literature for the other problems considered here as well, due to the proverbial curse of dimensionality. The running time also gets reduced significantly in practice. Our experiments show that the runtimes are reduced by up to $10-100$x as shown in Table 2 in the appendix, and the reduction in solution quality is small.
- [1] R. Duan and S. Pettie. Linear-Time Approximation for Maximum Weight Matching. J. ACM 2014.
- [2] G. Frahling and C. Sohler. Coresets in Dynamic Geometric Data Streams. STOC 2005.
**Definition of max-matching:** A matching is a set of pairs of points, with no point belonging to more than one pair; the cost of a matching is the sum of distances between the paired points; and a maximum matching is a matching with the maximum cost. A $(1+\epsilon)$-approximate matching is a matching whose cost is a $(1+\epsilon)$-approximation to the cost of a max-matching. In other words, it is an approximation whose cost is approximately as large as the best possible cost.
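To make the definition concrete, here is a small self-contained sketch (our own illustrative code, not from the paper): a brute-force maximum-weight matching, feasible only for a handful of points, together with a Gaussian JL projection, so that the matching cost before and after projection can be compared.

```python
import math
import random
from itertools import permutations


def dist(p, q):
    """Euclidean distance between two points given as tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))


def max_matching_cost(points):
    """Brute-force maximum-weight matching cost (tiny n only):
    consecutive pairs of each permutation form a candidate matching."""
    best = 0.0
    for perm in permutations(range(len(points))):
        cost = sum(dist(points[perm[2 * i]], points[perm[2 * i + 1]])
                   for i in range(len(points) // 2))
        best = max(best, cost)
    return best


def gaussian_projection(points, t, seed=0):
    """Map d-dimensional points to t dimensions via a random Gaussian
    matrix, scaled by 1/sqrt(t) as in the JL lemma."""
    rng = random.Random(seed)
    d = len(points[0])
    G = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(t)]
    return [tuple(sum(row[j] * p[j] for j in range(d)) / math.sqrt(t)
                  for row in G) for p in points]
```

Comparing `max_matching_cost(points)` with `max_matching_cost(gaussian_projection(points, t))` on a small point set illustrates the quantity that the paper's guarantees control.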
**Relation to broader scientific literature:** We would like to point out that many papers related to the JL lemma and random projection based dimensionality reduction have recently appeared in top ML venues. See below for a small selection.
- Beyond Worst-Case Dimensionality Reduction for Sparse Vectors. ICLR 25
- MUVERA: Multi-Vector Retrieval via Fixed Dimensional Encodings NeurIPS 24
- Sparse Dimensionality Reduction Revisited. ICML 24
- Dynamic Metric Embedding into lp Space. ICML 24
- Simple, Scalable and Effective Clustering via One-Dimensional Projections. NeurIPS 23
- Fast Optimal Locally Private Mean Estimation via Random Projections. NeurIPS 23
- Dimensionality Reduction for General KDE Mode Finding. ICML 23
- Dimensionality reduction for Wasserstein barycenter. NeurIPS 21
- Randomized dimensionality reduction for facility location and single-linkage clustering. ICML 21
- Dimensionality Reduction for the Sum-of-Distances Metric. ICML 21
**Explaining the proof of Theorem 2.1:** We consider maximum matching, so the cost of an optimal solution is at least the cost of any other solution. Opt$(G(P))$ is the optimal solution in the projected space by definition, so its cost must be at least that of the solution given by $G(S)$, where $S$ is the optimum in the original dimension.
**Regarding placing the proof for diversity maximization in the appendix:** Ideally, all proofs should be in the main body. However, due to the page limit, we chose to provide the entire proof for the max-matching problem, rather than providing fragmented proofs for all the claims.
We believe we have addressed your main concern about the significance of the contribution. Please let us know if you have any further concerns or questions; we are very happy to provide further clarifications!
Many thanks,
The authors
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed feedback. I have increased my score to Weak Accept as I am familiar with a few papers related to this work. | Summary: The paper studies randomized dimensionality reduction for a range of Euclidean optimization problems, including max-matching,
max-spanning tree, max TSP, max k-coverage, and subgraph diversity. In particular, the paper relates the target dimension to the doubling dimension $\lambda_X$ of the dataset $X$ and shows that $O(\lambda_X)$ dimensions suffice to approximately preserve a near-optimal solution. The paper also provides a lower bound on the target dimension for a $\sqrt{2}$-approximation. Finally, the paper gives an empirical evaluation that shows the speed-up of the proposed dimensionality reduction and demonstrates that the effect of the doubling dimension is an empirically observable phenomenon that can be quantitatively measured.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The proof in the main body looks correct to me, but I did not carefully check the proof in the appendix. I have some question about the claim made in Table 1 (for details, see the below question).
Experimental Designs Or Analyses: The experimental design looks reasonable to me.
Supplementary Material: I took a brief look, but did not carefully check the correctness of the proof.
Relation To Broader Scientific Literature: See "Summary".
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
- The paper is technically solid. The paper relates the target dimension to the doubling dimension $\lambda_X$ and gives a careful analysis, which is interesting to me.
- The paper gives a detailed empirical evaluation that demonstrates the power of the proposed dimensionality reduction and shows that the effect of the doubling dimension is an empirically observable phenomenon that can be quantitatively measured.
- The organization of the paper is generally good and easy to follow.
Weaknesses:
I currently do not see any other major weakness of the paper.
Other Comments Or Suggestions: See the next question.
Questions For Authors: In Table 1, the paper claims that they prove a $\lambda$ dimension lower bound for these optimization problems. However, it seems to me they only give an $O(\log n)$ lower bound, which corresponds to the special case ($\lambda = \log n$) but not for a general range of $\lambda$?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > However, it seems to me they only give an $O(\log n)$ lower bound, which corresponds to the special case ($\lambda = \log n$) but not for a general range of $\lambda$?
We thank the reviewer for their careful reading and comments.
The lower bound can be extended to general $\lambda$ as follows. For every $\lambda$, consider the pointset that consists of the $2^\lambda$ first basis vectors, and duplicate each point $n/2^\lambda$ times. If you are concerned with having a set with multiplicities, you can move the copies slightly along one axis (this will not affect the doubling dimension). | Summary: This paper studies randomized dimensionality reduction for geometric optimization problems such as max-matching, max-TSP, and max-spanning tree. It introduces a novel approach where the reduction is based on the doubling dimension of the dataset instead of the dataset size. The authors prove that reducing the dimension to O(λX), where λX is the dataset's doubling dimension, is sufficient to preserve the value of near-optimal solutions. They provide both theoretical proofs and experimental results to validate this claim. The experiments show that this method maintains solution quality while significantly improving computational efficiency.
Claims And Evidence: Some theoretical claims, particularly on the optimality gap after dimensionality reduction, lack detailed derivations.
The experimental validation is limited to specific datasets (e.g., MNIST, CIFAR). It is unclear how well the method generalizes to other domains, such as NLP or structured data.
The paper does not directly compute the doubling dimension (λX) but estimates it through experimental trends. This indirect approach makes it unclear how to determine λX efficiently in real-world applications.
Methods And Evaluation Criteria: The method relies on estimating the doubling dimension, but it is not clear how to precisely compute λX for arbitrary datasets. In some cases, especially for high-sparsity data (e.g., NLP embeddings), estimating λX may be difficult or computationally expensive.
Theoretical Claims: The method assumes that estimating λX is feasible, but does not provide a practical way to compute it efficiently.
It would be helpful to discuss whether similar guarantees hold in non-Euclidean spaces.
Experimental Designs Or Analyses: 1. The paper mainly uses image-based datasets, which may not reflect the challenges of high-dimensional sparse data.
2. The paper does not provide a direct computation method for λX but relies on experimental behavior to infer its impact. It is unclear how well this estimation method applies to other types of data.
3. The paper mainly compares against JL transforms but does not benchmark against other adaptive dimensionality reduction techniques (e.g., PCA).
Supplementary Material: The supplementary material provides additional proof details and experimental settings. However, additional experiments on larger and more diverse datasets would improve the paper.
Relation To Broader Scientific Literature: The paper contributes to the intersection of dimensionality reduction and combinatorial optimization. It builds on the Johnson-Lindenstrauss lemma, extending it to optimization problems.
Essential References Not Discussed: The paper does not include comparisons with manifold learning techniques, which also reduce dimensions while preserving geometric structure.
Other Strengths And Weaknesses: Strengths
1. Theoretical novelty: The paper introduces an innovative way to determine the target dimension using the doubling dimension.
2. Computational efficiency: The method shows a clear advantage in reducing computational costs.
3. General applicability: It applies to a variety of geometric optimization problems.
Weaknesses
1. Limited empirical validation: The experiments do not cover enough diverse datasets.
2. Unclear computation of doubling dimension: The paper does not provide a direct method to compute λX, which limits its practical use.
3. Missing comparisons: Other adaptive dimensionality reduction techniques (e.g., PCA, deep learning-based methods) are not compared.
Other Comments Or Suggestions: Please see the above weakness
Questions For Authors: Please see the above weakness
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading and comments. We address the weaknesses they mentioned below.
**1) Datasets:** We focused on datasets that have been used in prior empirical studies on diversity maximization and dimensionality reduction (Tenenbaum et al., 2000; Naeem et al., 2020). Furthermore, we believe our data is quite high-dimensional; e.g., the ResNet embeddings we use are in dimension > 6000.
As for sparse NLP datasets, **we performed a new experiment** on maximum matching on TF-IDF embeddings for the 20 newsgroup dataset. These resulted in extremely sparse vectors with dimension > 170,000 (TF-IDF is based on word frequencies). We selected a set of 2000 vectors. The average sparsity was 100. As seen in the figure in this anonymous link (https://ibb.co/2YsR6kXS), the qualitative behaviour is the same as in our submission. By just projecting to 500 dimensions (~ 0.2% of the original dimension), we can preserve the max-matching cost up to relative error < 5%. In fact, since our bounds depend on the doubling dimension (always at most $O(\log n)$), their effect becomes increasingly pronounced as the ambient dimension of the dataset increases.
**2) Comparison to manifold learning:**
Our paper focuses on proving worst-case theoretical guarantees. We do this by using data-oblivious maps based on the Johnson-Lindenstrauss (JL) Lemma.
In contrast, the methods suggested by the reviewer have no such theoretical guarantees for Euclidean distance, and can perform very poorly. This is well known, e.g. see [Ref 1] where it is shown theoretically and empirically that JL has better performance over PCA and Isomap heuristics. The underlying intuition is that manifold learning (such as Isomap) considers data points that are in a lower dimensional manifold embedded in a high dimensional space. Their goal is to recover the low dimensional manifold by approximating the geodesic distance along the unknown manifold. However, the geodesic distance could be very far from Euclidean. This deviates from our setting where we aim to approximate high dimensional Euclidean distances.
For concreteness, here is a simple example where PCA catastrophically fails: consider the dataset X consisting of all basis vectors and negations. Weight the first $n/2$ basis vectors and their negatives by a factor of $2$. This has doubling dimension $O(\log(|X|))$ and JL guarantees that all pairwise distances (and thus all the optimization problems we consider) is preserved up to $1\pm \epsilon$ factor when projected to $O(\log(|X|)/\epsilon^2)$ dimensions. However, for any $k < n/2$, the top $k$ PCA directions align with the first $k$ basis vectors. When we project onto them, this maps all other basis vectors to $0$, so all information about them is lost, and max-matching on this PCA projected dataset has large distortion.
[Ref 1]: Dimensionality reduction: theoretical perspective on practical measures. NeurIPS 19.
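The PCA failure described above can be verified numerically. The sketch below (our own illustrative code) uses a 4-dimensional instance of the construction; since the covariance of this dataset is diagonal, the top-2 PCA directions can be written down directly rather than computed.

```python
def project(points, dirs):
    """Project points onto a set of orthonormal directions."""
    return [tuple(sum(p[j] * d[j] for j in range(len(p))) for d in dirs)
            for p in points]


def basis_vec(i, scale, d=4):
    """Scaled i-th standard basis vector in R^d."""
    return tuple(scale if j == i else 0.0 for j in range(d))


# Small instance of the example: basis vectors and their negations,
# with the first two axes weighted by a factor of 2.
data = ([basis_vec(i, s * 2.0) for i in (0, 1) for s in (1, -1)]
        + [basis_vec(i, s * 1.0) for i in (2, 3) for s in (1, -1)])

# The covariance of this dataset is diagonal, so the top-2 PCA
# directions are exactly the two heavily weighted basis vectors.
top2 = [basis_vec(0, 1.0), basis_vec(1, 1.0)]
proj = project(data, top2)

# All points supported on the remaining axes collapse to the origin:
# the distance between e3 and e4 (sqrt(2) originally) becomes 0.
assert all(q == (0.0, 0.0) for q in proj[4:])
```

This is the large-distortion outcome described above: the PCA projection destroys all pairwise-distance information among the lighter basis vectors, whereas an oblivious Gaussian projection would preserve it with high probability.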
We also refer to the response to Reviewer jiNF for many works related to the JL lemma that have recently appeared in top ML venues.
**3) Doubling dimension:** The way JL transforms are applied is by determining a desired target dimension and performing the projection. In practice, users will use as many dimensions as are affordable and for many problems, worst case bounds are significantly larger than the dimensions that end up being used. Naturally, we are interested in understanding why this phenomenon occurs. An explanation is that data sets have low “intrinsic dimension”, which the doubling dimension captures and models. This is a popular approach in the literature which has led to development of several practically efficient algorithms (e.g. Cover Tree by Beygelzimer et al.). Our real-data experiments exemplify this, and demonstrate that data sets with low ``intrinsic dimension'' can be projected to very low dimension without significantly reducing the accuracy. This is why we selected real data sets (e.g. MNIST 2) that were studied in prominent works studying embeddings of data sets with low intrinsic dimension (Tenenbaum et al, Science'00).
We emphasize that the algorithm does not have to know the doubling dimension and as long as the target dimension is larger than the doubling dimension, it is always guaranteed to succeed with high probability. Works in this line of research typically only have to show that a given target dimension is sufficient, for a specific task, as the algorithm itself is oblivious to the dataset. This sets random projections apart from PCA and manifold learning and similar methods that are computationally expensive, have to know properties of the data set, and typically perform very poorly as metric embedding algorithms (see earlier example on PCA). If it is nevertheless desired by the user, we could compute an approximation of the doubling dimension in linear time (see Sect. 9 in https://arxiv.org/pdf/cs/0409057), but the efficiency of our method does not rely on this. | null | null | null | null | null | null | null | null |
Haste Makes Waste: A Simple Approach for Scaling Graph Neural Networks | Accept (poster) | Summary: This paper proposes a simple yet highly effective training algorithm (REST) to effectively reduce feature staleness. The proposed REST significantly improves performance and convergence across varying batch sizes, especially when staleness is predominant. Experiments demonstrate that REST achieves a 2.7% and 3.6% performance enhancement on the ogbn-papers100M and ogbn-products dataset.
## update after rebuttal
I thank the authors for their rebuttal. I would like to keep my original evaluations.
Claims And Evidence: Theorem 3.1 gives the approximation errors via Equation (5). However, Equation (5) may not hold, as the assumption of Lipschitz continuity of $\nabla_{\theta} L$ is not reasonable. Consider a two-layer GCN message passing on a graph $(\{v_1,v_2\}, \{(v_1,v_2)\})$. The message passing is $h_1^{(2)}=W_2 h_2^{(1)}=W_2\sigma(W_1 h_1^{(0)})$. We construct two subgraphs $S_1, S_2$ induced by $\{v_1\}$ and $\{v_2\}$, respectively. Suppose that the historical embeddings are equal to the true embeddings, i.e., $\hat{h}_i^{(j)}=h_i^{(j)}$. The gradient with the historical embeddings is $\nabla_{W_1}L(\hat{h}_1^{(2)})=\nabla_{W_1}\hat{h}_2^{(1)} \, \nabla_{\hat{h}_2^{(1)}}\hat{h}_1^{(2)} \, \nabla_{\hat{h}_1^{(2)}}L(\hat{h}_1^{(2)})=\nabla_{W_1}\hat{h}_2^{(1)} \cdot 0 \cdot \nabla_{\hat{h}_1^{(2)}}L(\hat{h}_1^{(2)})=0$, while the gradient of the true embeddings satisfies $\nabla_{h_2^{(1)}} h_1^{(2)} \neq 0$.
Methods And Evaluation Criteria: The authors propose REST to reduce feature staleness, which is useful to accelerate convergence.
Theoretical Claims: 1. Same as the concern about Equation (5) described above under Claims And Evidence.
Experimental Designs Or Analyses: The authors may want to report the standard deviation in Table 1.
Supplementary Material: I have reviewed all supplementary material.
Relation To Broader Scientific Literature: The related work mainly focuses on graphs with less than three million nodes (e.g., ogbn-products) [Ref1][Ref2], while the proposed REST can scale to ogbn-papers100M and MAG240M, which contain at least one hundred million nodes.
[Ref1] Gnnautoscale: Scalable and expressive graph neural networks via historical embeddings. ICML 2021.
[Ref2] Lmc: Fast training of gnns via subgraph sampling with provable convergence. ICLR 2023.
Essential References Not Discussed: The cited references are sufficient.
Other Strengths And Weaknesses: Strengths:
1. The proposed REST is simple yet highly effective.
2. Experiments demonstrate strong scalability of REST to large-scale datasets, such as ogbn-papers100M, ogbn-products, and MAG240M.
Weaknesses:
1. Equation (5) may not hold; see the counterexample given above under Claims And Evidence.
2. The authors may want to report the standard deviation in Table 1.
Other Comments Or Suggestions: See Weaknesses.
Questions For Authors: See Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **Anonymous link: https://anonymous.4open.science/r/REST_ICML2025-0972/REST_ICML_2025_Reviewer_y9S9.pdf**
**A1** Counterexample of Theorem 3.1
We sincerely appreciate the reviewer's careful and thoughtful comments, and we are happy to clarify this counterexample.
Based on our understanding of the review, a special graph was constructed with only two nodes ($v_1, v_2$), where each induced subgraph contains exactly one node. Under this extreme scenario, each subgraph includes only the node itself. The reviewer further assumes that the historical embeddings exactly match the true embeddings, which leads to the gradient becoming zero under these special conditions:
$\nabla_{\hat{h}_2^{(1)}}\hat{h}_1^{(2)}= 0$
In the full batch training scenario, due to the existence of actual message passing, we have:
$\nabla_{h_2^{(1)}} h_1^{(2)} \neq 0$
Thus, the statement argues that the assumption of Lipschitz continuity may not hold.
However, we would like to clearly point out that this scenario does **not** align with the historical embedding or neighbor aggregation logic used in our method (and in the broader family of historical embedding methods):
Firstly, we want to clarify the definition of "out-of-batch" nodes commonly used in all historical embedding methods. Existing methods typically first sample a mini-batch (or subgraph) of nodes termed "in-batch" nodes, whose embeddings are updated every iteration. Then, they select all direct one-hop neighbors of these "in-batch" nodes as "direct one-hop out-of-batch" nodes (simply referred to as "out-of-batch" nodes in existing literature) and use their historical embeddings to approximate the true embeddings during aggregations to reduce computation bias. All other nodes that are not directly connected to the batch are simply discarded and referred to as "other out-of-batch nodes." Therefore, since the message passing is given by
$h_1^{(2)} = W_2 h_2^{(1)} = W_2 \sigma(W_1 h_1^{(0)})$, there is an edge between $v_1$ and $v_2$.
If we consider $v_1$ as an in-batch node, then $v_2$ should be treated precisely as a "direct one-hop out-of-batch node" whose historical embedding is used in aggregation (even though the two nodes are not in the same subgraph), rather than simply being discarded as an "other out-of-batch node." We also provide a figure in the anonymous link that better illustrates this definition.
Specifically, in most historical embedding methods, the absence of node $v_2$ explicitly from the subgraph does **not** imply the complete loss of its influence on node $v_1$. Instead, historical embedding methods utilize historical embeddings $\hat{h}_2^{(1)}$ to preserve neighbor information, and gradients in the backward pass will still propagate through these stored embeddings. For "out-of-batch" nodes, their previously stored activation values are retrieved from memory to ensure their influence can still be incorporated. A typical form can be represented as:
$h_i^{(l)} = \text{Agg}\left(\{h_j^{(l-1)} \mid j \in \text{in-batch}\} \cup \{\hat{h}_k^{(l-1)} \mid k \in \text{out-of-batch}\}\right)$
Thus, the update of $v_1$'s embeddings still references $\hat{h}_2^{(1)}$, even if $v_2$ is not explicitly included in the sampled mini-batch/subgraph. Consequently, node $v_1$ will indeed incorporate $\hat{h}_2^{(1)}$, preserving
$\nabla_{\hat{h}_2^{(1)}}(\hat{h}_1^{(2)})\neq 0$,
and $\nabla_{W_1}L$ does **not** simply vanish but instead includes the gradient contributions from $\hat{h}_2^{(1)}$. Thus, the proposed counterexample does not hold under historical embedding methods.
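The aggregation rule above can be sketched as follows (our own simplified code, with mean aggregation and plain Python containers; in the actual methods this runs inside a GNN layer with autograd on the in-batch terms):

```python
def aggregate(node, neighbors, in_batch, fresh, cache):
    """Mean-aggregate layer-(l-1) embeddings for `node`: in-batch
    neighbors contribute freshly computed embeddings (`fresh`), while
    direct one-hop out-of-batch neighbors contribute their cached
    historical embeddings instead of being discarded."""
    msgs = [fresh[j] if j in in_batch else cache[j]
            for j in neighbors[node]]
    return tuple(sum(m[k] for m in msgs) / len(msgs)
                 for k in range(len(msgs[0])))
```

With $v_1$ in-batch and $v_2$ out-of-batch, the cached $\hat{h}_2^{(1)}$ still enters the update of $v_1$, so the neighbor's influence is preserved rather than dropped; whether a gradient flows through the cached value is then determined by the backbone method.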
Moreover, **the Lipschitz continuity assumption on the gradients is a general and widely accepted assumption in the literature, e.g., in LMC [1].** It has consistently proven valid empirically across a wide range of realistic datasets and practical training scenarios.
**A2** The standard deviation in Table 1.
We conducted multiple runs and included the standard deviation in Table 1. Please see the updated table in the anonymous link.
We appreciate your feedback and have worked diligently to address all your concerns. If you have any further questions, please let us know, and we kindly request that you consider adjusting the scores in light of our revisions.
[1] LMC: Fast Training of GNNs via Subgraph Sampling with Provable Convergence
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. My remaining concerns are as follows.
1. Does REST compute $\nabla_{W_1} \hat{h}_2^{(1)}$? In my opinion, as the historical embedding $\hat{h}_2^{(1)}$ is directly pulled from offline storage, the forward pass $\hat{h}_2^{(1)}=\sigma(W_1 \hat{h}_1^{(0)})$ and the corresponding backward pass are missing. If $\nabla_{W_1} \hat{h}_2^{(1)}=0$, then $\nabla_{W_1} L(\hat{h}_1^{(2)}) \neq \nabla_{W_1} L(h_1^{(2)})$.
2. In LMC [1] and VR-GCN [2], the Lipschitz continuity assumption is made with respect to the gradients of the GNN parameters (i.e., $W_1$ and $W_2$ in the example) rather than the GNN embeddings (i.e., $h_i^{(1)}$ and $h_i^{(2)}$ in the example).
[2] Stochastic Training of Graph Convolutional Networks with Variance Reduction.
---
Reply to Comment 1.1.1:
Comment: We appreciate your quick reply, and we are happy to answer all your further questions.
**A1**
First, we would like to note that the proposed REST is a general training framework designed to address the staleness issue inherent in all related works by decoupling forward and backward propagation, so **the gradient flow during the backprop entirely follows that of the backbone historical embedding models** (e.g., GAS, GraphFM, and LMC). This design renders our model general and applicable to any existing approach. Therefore, whether $\nabla_{W_1} \hat{h}_2^{(1)}$ is computed depends entirely on the backbone historical embedding model with which REST is combined, rather than on REST itself.
Nonetheless, we are happy to provide additional details on this matter.
Consider the gradient of the loss $L$ with respect to $W_1$ expanded via the chain rule:
$$
\nabla_{W_1} L = \sum_{i\in B}\sum_{j \in N(i)} \frac{\partial L}{\partial h_i^{(2)}} \frac{\partial h_i^{(2)}}{\partial h_j^{(1)}} \frac{\partial h_j^{(1)}}{\partial W_1}
$$
For GAS and GraphFM, only the in-batch nodes participate in gradient computation and backprop. Since the historical embeddings from each layer are retrieved from a cache, the backward prop cannot flow through these cached values. Hence, the gradients with respect to the historical embeddings satisfy $\nabla_{W_1} \hat{h}_2^{(1)} = 0 $ and they do not affect the model parameter updates.
In other words, $\nabla_{W_1} L(\hat{h}_1^{(2)}) \neq \nabla_{W_1} L(h_1^{(2)})$, which is the gradient bias introduced by GAS and GraphFM.
Formally, in GAS and GraphFM, for the term $\frac{\partial h_j^{(1)}}{\partial W_1}$: if node $j$ is an in-batch node, then $h_j^{(1)}$ requires a gradient and participates in backprop, so $\frac{\partial h_j^{(1)}}{\partial W_1} \neq 0$. However, if $j$ is an out-of-batch neighbor (its embedding is cached as $\hat{h}_j^{(1)}$), then $\frac{\partial \hat{h}_j^{(1)}}{\partial W_1} = 0$.
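To make the vanishing chain-rule term concrete, here is a toy scalar example (our own illustration, not code from any of the papers discussed): a cached layer-1 embedding is a constant with respect to the current $W_1$, so the gradient path through it contributes zero.

```python
# Toy scalar "two-layer GNN": h1 = w1 * x, y = w2 * h1, loss L = 0.5 * y**2.
# Hypothetical sketch of why a cached layer-1 embedding blocks the gradient to w1.
w1, w2, x = 0.5, 2.0, 1.0

# Fresh forward pass: gradient flows through h1, so dL/dw1 = y * w2 * x.
y_fresh = w2 * (w1 * x)
grad_fresh = y_fresh * w2 * x          # = 2.0 here

# GAS-style pass: h1 is pulled from a cache computed under an old w1.
# The cached value is a constant, so the chain-rule term dh1/dw1 vanishes.
w1_old = 0.3
h1_cached = w1_old * x                 # no dependence on the current w1
y_stale = w2 * h1_cached
grad_stale = y_stale * w2 * 0.0        # dh1_cached/dw1 = 0  =>  dL/dw1 = 0
```

The stale gradient is exactly zero along the cached path, while the fresh gradient is not, matching the bias discussed above.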
**In contrast**, for LMC, this drawback of losing the gradient for historical embeddings in GAS motivates their approach. It maintains a memory table for the historical gradients and explicitly retrieves and compensates for the discarded messages during backprop. By employing a gradient momentum technique, LMC proactively compensates for the gradients of out-of-batch nodes, thereby avoiding the loss of these gradients (Equation 12 in the paper). Consequently, $\nabla_{W_1} \hat{h}_2^{(1)}$ is maintained and approximated. In other words, during the backprop, LMC maintains the gradient dependencies related to $\hat{h}_2^{(1)}$ using the proposed compensation formulas (Equations 11–13 and Figure 1), which significantly reduces the gradient bias and provides convergence guarantees (Theorems 2 and 3). This is why LMC achieves accurate gradients and better performance. Therefore, if REST is applied to LMC, that gradient is still preserved (Appendix E).
In REST, we focus on the root cause of staleness, as detailed in Section 2. The improved performance and accelerated convergence observed with GAS in the main text, and with GraphFM and LMC in Appendices I and E, demonstrate that REST is applicable to any scenario—such as the two cases regarding gradient flow discussed above—without requiring modifications to the workflow of the underlying model, thereby highlighting the generalizability and novelty of our approach.
**A2** We first note that the two papers adopt different analytical perspectives:
(1) In LMC, to analyze convergence and how the gradients change with respect to the parameters $\theta$, the Lipschitz assumption is made on the gradients with respect to the parameters.
(2) In REST, to analyze how the discrepancy between the historical and true embeddings affects the final gradient, we make a Lipschitz assumption on the gradient evaluated on embeddings $h$.
Secondly, we emphasize that our assumption is also reasonable, since several key components of GNNs satisfy Lipschitz continuity. Consider a GNN with $L$ layers; a typical layer can be expressed as
$$
h^{(\ell+1)}_v = \text{UPDATE}\Bigl( h^{(\ell)}_v, \; \text{AGG}\bigl(\{h^{(\ell)}_u : u \in \mathcal{N}(v)\}\bigr)\Bigr).
$$
Then, the following assumptions usually hold in the literature (e.g., in GAS):
(1) The aggregation function evaluated on the embeddings, $\text{AGG}$, is $\alpha$-Lipschitz, where functions such as mean, sum, and max are all Lipschitz continuous.
(2) The update function evaluated on the embeddings, $\text{UPDATE}$, is $\beta$-Lipschitz.
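For completeness (our own addition, treating $\text{UPDATE}$ as a function of the aggregated message alone for simplicity), the standard composition bound behind these two assumptions is:

$$
\bigl\| \text{UPDATE}(\text{AGG}(x)) - \text{UPDATE}(\text{AGG}(y)) \bigr\| \le \beta \, \bigl\| \text{AGG}(x) - \text{AGG}(y) \bigr\| \le \alpha \beta \, \| x - y \|,
$$

so stacking $L$ such layers yields an $(\alpha\beta)^L$-Lipschitz map of the input embeddings.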
Furthermore, common activation functions and loss functions with respect to the embeddings are also Lipschitz continuous. When these components are composed to form the full GNN, the overall Lipschitz continuity is preserved. Consequently, the gradient evaluated on the embeddings is Lipschitz continuous. This assumption is standard and reasonable in the analysis of GNNs, particularly when investigating how errors in the node embeddings affect the overall gradient estimation.
---
Summary: This paper presents a simple yet effective training approach called REST for scaling GNNs. The authors analyze the issue of embedding staleness in historical embedding methods, demonstrating that stale features negatively impact model convergence and performance. REST addresses this issue by decoupling forward and backward propagations and adjusting their execution frequencies, significantly reducing feature staleness. Experimental results indicate that REST achieves superior performance and faster convergence on several large-scale benchmark datasets.
## Update after rebuttal
In the second-round rebuttal, the authors provided a clearer account of the differences between their work and existing works, so I have changed my assessment from "weak reject" to "neutral". I defer to the AC and the other reviewers for further discussion of the outcome.
Claims And Evidence: The proof of Theorem 3.2 is missing. Hence the claim “REST achieves a faster convergence rate theoretically” is not convincing.
Methods And Evaluation Criteria: Yes
Theoretical Claims: The proof of Theorem 3.2 is missing. Although there are some references proving the upper bound of the expectation of gradients’ norm, its complex form seems not trivial to prove. A more detailed or self-contained proof should be provided.
Experimental Designs Or Analyses: Yes. The experiments are designed scientifically and the results can support the claim.
Supplementary Material: The proof of theorems and the extra experimental results are reviewed.
Relation To Broader Scientific Literature: Starting from the scalable issues in GNN, this paper builds its contribution on the drawbacks of other GNN methods which utilize the historical embeddings. It also highlights its novelty in directly addressing the embedding staleness issue at its root by decoupling forward and backward propagations.
Essential References Not Discussed: The related works are discussed in the appendix. However, the most related works such as GAS are not discussed in detail in the main context. Consequently, the differences between the proposed work and previous works are not well explained and compared.
Other Strengths And Weaknesses: Strengths:
1. The experiments are well set, and the results seem promising with respect to accuracy.
2. The issues in the scalable GNN are well-targeted. In other words, the motivation of this paper is built properly.
Weakness:
1. This paper should add more explanation to some specific phrases. For example, this paper can provide the definition for the words such as “in-batch” and “out-batch”. The lack of explanation will make this paper hard to follow.
2. The novelty of this paper is not properly highlighted. It seems that this paper incorporates many insights from the paper of GAS (Fey et al., 2021), including the reference of Theorem 3.1. From my point of view, this paper only modifies a minor implementation detail of GAS and has not proposed enough theoretical contribution. In order to highlight the main contribution, this paper should provide a detailed introduction of strongly related works and compare them theoretically.
3. The detailed proof of theorem 3.2 is missing.
Other Comments Or Suggestions: None
Questions For Authors: Please see the weakness part.
Code Of Conduct: Affirmed.
Overall Recommendation: 2
---
Rebuttal 1:
Rebuttal: **Anonymous link**: https://anonymous.4open.science/r/REST_ICML2025-0972/REST_ICML_2025_Reviewer_1EQa.pdf
**A1** Proof of Theorem 3.2
Please check the detailed proof in the anonymous link. We would like to clarify that the main result in Theorem 3.2 shares foundational assumptions with previous works, which we explicitly cited in line 253. To avoid misinterpretation, we initially chose to cite the references rather than include our proofs in the submission.
**A2** Difference between GAS and REST
We believe that the essential technical aspects of GAS are adequately covered in **Section 2.1**. Given the space constraints of the rebuttal, please refer to our main text, and we are happy to address any further questions you may have.
Difference: **Overall, GAS and other related works are specific historical embedding methods, whereas REST is a novel, general training framework that effectively solves the most notable staleness issue in these methods, boosting performance and accelerating convergence.** In GAS, most node embeddings remain stale because these methods inherently suffer from a frequency mismatch between cache updates and model parameter updates. Our work tackles this issue by introducing a general training framework that can be integrated with any related approach, enabling more frequent refreshing of historical embeddings and thereby eliminating staleness without altering the model architecture—rather than merely designing a model that differs from GAS.
**Thus, the target of our work is not the introduction of a new model, but the development of a widely applicable training framework that addresses a key limitation—staleness—in all related works. Moreover, it provides promising solutions for other tasks involving asynchronous update scenarios in machine learning.**
**A3** Specific Phrases
In this work, we follow the definitions used in existing related works. Due to space constraints, we did not reiterate these definitions in our submission. However, please refer to the figure in the anonymous link for a more visual explanation.
**A4** Contribution Highlight
We would like to respectfully disagree with the statement. We hope to clarify any misunderstandings regarding our objectives and highlight the significant novelty and contributions:
First, the main focus of our work is **not** to introduce a new historical embedding model architecture, but rather to **propose a general, effective, efficient, and simple training framework that fundamentally resolves the staleness issue present in all existing methods.** In other words, our model aims to address the critical shortcomings in GAS and other related works, rather than to design a model that differs from GAS. **The fact that only minor implementation modifications are required precisely demonstrates our model's strength rather than its weakness.** REST exhibits strong generalizability and can significantly improve performance and accelerate convergence by eliminating staleness in historical embeddings, without requiring any changes to the underlying model.
Second, our work is the first to provide a comprehensive analysis of staleness and to reveal that a mismatch in update frequencies is its root cause—a research aspect that has not been explored before. We not only employ Theorem 1 to directly link the frequency mismatch to staleness, but also present extensive empirical results demonstrating that existing methods inevitably suffer from severe staleness issues.
Third, REST introduces a previously unexplored method: decoupling forward and backward prop so that the memory table can refresh at a frequency different than the parameter updates. **To the best of our knowledge, this core innovation does not appear in GAS or any other existing method. Rather than merely being a modification, our approach can be viewed as a more general form of GAS.** Our experiments demonstrate substantial performance improvements as well as significantly faster convergence. These empirical results not only validate our theoretical claims but also highlight practical advantages that GAS does not achieve. Moreover, this idea can be applied to any staleness issue arising from asynchronous updates—a common challenge in various machine learning tasks—providing promising solutions for future work that hold significant value for the entire community. Furthermore, we introduce a novel variant, REST-IS, which employs a unique importance sampling strategy to further mitigate staleness, showcasing additional methodological innovation.
Fourth, Theorem 1 in our paper diverges from GAS. We also introduce Theorem 3.2, which substantiates the empirical observation that REST achieves faster convergence. This rigorous theoretical contribution clearly differentiates our work from GAS. Please refer to the anonymous link for the proof.
We believe our work offers a more substantial advancement and provides promising solutions for future research, rather than merely proposing a specific model design.
---
Rebuttal Comment 1.1:
Comment: Thanks for your explanation; I now understand your work's position better.
In your rebuttal, you emphasize that "GAS and other related works are specific historical embedding methods, whereas REST is a ... framework". However, GAS also describes itself as a framework, and its experiments integrate various existing models, blurring the distinction between GAS and REST. Moreover, both approaches address training optimization, which may further diminish the uniqueness of your contribution. As such, I am not fully convinced that REST represents a significant advancement over the baseline GAS.
---
Reply to Comment 1.1.1:
Comment: We appreciate your reply, and we want to clarify the difference between GAS and REST from both the "training optimization" and "framework" perspectives mentioned in your reply.
(1) Conceptually different training optimization:
Characterizing both approaches simply as "training optimization" oversimplifies their distinct contributions. Under such broad categorization, most training innovations could be similarly dismissed. **REST operates at a different level than GAS - it's not an alternative to historical embedding methods but rather a complementary framework for improving them.** Specifically:
**GAS introduces a fixed training optimization method aimed primarily at enhancing the scalability of GNN backbones (e.g., GCN) while mitigating bias from traditional sampling methods.** It achieves this by caching historical embeddings to reduce the variance in mini-batch approximations. However, during training, its inflexible update strategy causes significant embedding staleness—historical embeddings update far more slowly than the model parameters, creating a fundamental performance bottleneck. In other words, GAS mainly focuses on reducing memory costs and achieving better performance via historical embeddings, with staleness emerging as an unintended side effect of this and all subsequent historical embedding methods.
In stark contrast, **REST provides a fundamentally novel training optimization strategy whose primary optimization goal is to exactly solve the staleness issue introduced by GAS and other historical embedding methods, explicitly addressing the root cause of embedding staleness.** We emphasize that instead of specifically focusing on modifying the computation graph during message passing by using cached embeddings like in GAS, REST focuses on general machine learning training optimization. It dynamically decouples the frequencies of forward and backward propagation, enabling frequent and independent updates. This strategy drastically reduces staleness, as clearly proven by our theoretical analysis and extensively validated by empirical results. **Moreover, REST is not limited to a fixed scalable training method like GAS; rather, it serves as a flexible and general optimization framework compatible with virtually all existing historical embedding-based methods (including GAS itself)**, significantly enhancing their performance and convergence speed. It can also be applied to scenarios with asynchronous updates, such as distributed training and federated learning, as mentioned in the rebuttal to Reviewer uPqW.
In summary, **the training optimization objectives, methods, and motivations of the works are completely different. REST should be considered a complementary framework for enhancing historical embedding methods rather than a modification to them.**
(2) Completely different framework scopes:
We want to emphasize that the training framework also operates at a different and more general level—similar to the training optimization discussed above:
In GAS, the authors describe it as a framework, meaning that it can only be combined with different GNN backbones (e.g., GCN, GCNII), which indicates that the proposed caching strategy does not require a specific message passing pattern, as observed in their experiments. However, it still relies on a specific, flawed update strategy and graph partitioning to achieve its objective, which is why we refer to it as a specific historical embedding model.
In contrast, **REST is a meta-framework that operates at a higher level to improve any historical embedding method, including GAS itself. This hierarchical relationship clearly distinguishes REST from GAS.** As shown in Table 2 and 3, our model can be integrated with various historical embedding methods, different GNN backbones, and diverse sampling strategies—an approach that is entirely different from GAS and cannot be achieved by GAS.
Thus, the fundamental distinction is clear: **GAS is a specific historical embedding-based scalable training method, limited by its fixed update and caching strategy, while REST is a universally applicable optimization framework providing a general and principled approach to embedding updates on any historical embedding methods, fundamentally resolving a key bottleneck (staleness) inherent in methods like GAS.** This substantial conceptual difference in flexibility, generality, and staleness-awareness decisively positions REST as a significant advancement over GAS.
Based on the two points above, **GAS and REST address completely different problems, with distinct training optimization objectives and techniques. REST also operates at a higher hierarchical level than GAS with respect to training.** We believe that REST offers a broader contribution to existing work and provides a promising solution for future research.
We hope we have addressed all your concerns. We kindly request that you consider adjusting your scores in light of our revisions, as this is very important to us.
---
Summary: The paper studies the problem of scaling graph neural networks to large graphs. Existing techniques make use of historical features, which may become outdated. To address this, the paper introduces the REST algorithm, which contains the influence of the outdated features. The new method can also be merged with current pipelines, improving their performance and convergence.
## Update after rebuttal: I thank the authors for their response about the F1 metric. After the extensive answers to the other reviews of the paper, I am acknowledging the deeper expertise of the other reviewers and basing my opinion on their comments.
Claims And Evidence: The paper is clear to read and carefully written for ideas and their materialization to `REST Technique'.
Methods And Evaluation Criteria: In the two comparison tables, the authors report the accuracy of the graph classification task. How about other metrics, e.g., AUC-ROC, mean squared error, precision/recall, precision/recall AUC, or F1 with 0.5 as the threshold (or with varying thresholds)?
Theoretical Claims: The paper proposes to contain the influence of outdated features by decoupling the forward and backward propagation of the neural network weight updates and dynamically adjusting their execution frequencies, permitting the memory table to be renewed faster than the model parameters.
Experimental Designs Or Analyses: The authors run extensive experiments with their solution (`REST') on 5 datasets and report the accuracy metric, which is higher than that of the existing techniques (VR-GCN, MVS-GCN, GCN, APPNP, GCNII).
Supplementary Material: Not reviewed.
Relation To Broader Scientific Literature: The suggested approach seems novel from a first overview of the GNN literature.
Essential References Not Discussed: After a first study, there are not any references missing.
Other Strengths And Weaknesses: No more comments.
Other Comments Or Suggestions: NA.
Questions For Authors: No more questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 2
---
Rebuttal 1:
Rebuttal: We sincerely appreciate your insightful comments and thoughtful questions.
**A1** Other Metrics of performance:
We choose to use accuracy as the metric in our submission since all other baselines use it, ensuring a fair comparison. Since the major task is a multi-class classification problem, we follow your suggestion and further include micro and macro F1 scores as additional metrics to better highlight the advantages of REST/REST-IS. Due to the limited time available for the rebuttal, we currently report the performance of state-of-the-art baselines, GAS and GraphFM on ogbn-arxiv and ogbn-products. We use GCN as our GNN backbone for all methods including REST. We will include similar results for other baselines on other large-scale datasets in our revision of the main text.
From the results in the following table, we observe that REST outperforms state-of-the-art baselines across various evaluation metrics on all datasets, demonstrating superior performance in large-scale training. This is consistent with the conclusions in our main submission. Moreover, while historical embedding methods still suffer from the staleness problem—resulting in significant performance drops on large-scale datasets such as ogbn-products—REST addresses the problem at its source, delivering significant performance enhancements that further validate our main text's findings.
| Models | ogbn-arxiv Micro-F1 | ogbn-arxiv Macro-F1 | ogbn-products Micro-F1 | ogbn-products Macro-F1 |
|----------|---------------------|---------------------|------------------------|------------------------|
| Sage | 71.5 ± 0.2 | 52.1 ± 0.1 | 78.7 ± 0.1 | 37.0 ± 0.1 |
| GAS | 71.7 ± 0.2 | 52.5 ± 0.1 | 76.7 ± 0.2 | 35.3 ± 0.1 |
| GraphFM | 71.8 ± 0.2 | 52.6 ± 0.2 | 76.8 ± 0.2 | 35.5 ± 0.1 |
| REST | 72.2 ± 0.2 | 53.3 ± 0.1 | **79.6 ± 0.1** | **39.1 ± 0.1** |
| REST-IS | **72.3 ± 0.1** | **53.4 ± 0.1** | 78.6 ± 0.1 | 38.6 ± 0.1 |
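For readers comparing the two columns: micro-F1 aggregates true/false positives over all classes (and equals accuracy in single-label multi-class classification), while macro-F1 averages per-class F1 scores. A minimal pure-Python computation (our own sketch, no sklearn dependency):

```python
from collections import Counter

def f1_scores(y_true, y_pred, labels):
    # Tally per-class true positives, false positives, and false negatives.
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1

    def f1(tp_, fp_, fn_):
        denom = 2 * tp_ + fp_ + fn_
        return 2 * tp_ / denom if denom else 0.0

    # Macro-F1: unweighted mean of per-class F1 scores.
    macro = sum(f1(tp[c], fp[c], fn[c]) for c in labels) / len(labels)
    # Micro-F1: pool the counts first, then compute a single F1.
    micro = f1(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    return micro, macro

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 2, 2, 2]
micro, macro = f1_scores(y_true, y_pred, [0, 1, 2])
```

On this toy example micro-F1 equals the accuracy (5/6), while macro-F1 is lower because the minority-class errors weigh more heavily.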
We appreciate your feedback and have worked diligently to address all your concerns. If you have any further questions, please let us know, and we kindly request that you consider adjusting the scores in light of our revisions.
---
Summary: In this paper, the author proposes an algorithm to mitigate the issue of feature staleness.
## Update after rebuttal
I thank the authors for the rebuttal; my score remains the same.
Claims And Evidence: The claims appear to be valid.
Methods And Evaluation Criteria: N/A
Theoretical Claims: The given theory looks solid.
Experimental Designs Or Analyses: The overall experimental design seems reasonable. The author compares various baselines across datasets of different scales. The results suggest that for large-scale datasets, the proposed method achieves better performance while maintaining a better convergence speed.
However, in terms of memory efficiency, Table 4 and Table 5 suggest that REST and REST-IS do not show a significant advantage. It would be helpful if the author provided a more in-depth analysis of this aspect, discussing potential trade-offs and explaining why the proposed method does not yield substantial improvements in memory efficiency.
Supplementary Material: I mainly checked the experiment-related part of the appendix.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The paper is well-organized and easy to follow.
The proposed method is simple and can be applied to existing GNNs.
The main concern is that the experimental results do not demonstrate significant improvements in terms of computational complexity. Additionally, the applicability of the proposed method seems restricted to scenarios where the memory table updates at a different frequency than the model.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. In the experiment session, Table 4 and Table 5 suggest that REST and REST-IS do not show a significant advantage. It would be helpful if the author provided a more in-depth analysis of this aspect, discussing potential trade-offs and explaining why the proposed method does not yield substantial improvements in memory efficiency.
2. Do you think the difference in update frequencies is a general issue, or is it specific to historical embeddings? Could your method be extended to other scenarios involving asynchronous updates?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: **A1** Memory Efficiency:
First, we highlight that REST consistently demonstrates significant advantages over GraphSAGE in terms of memory efficiency across all datasets, as shown in Tables 4 and 5.
When comparing REST with GAS and VR-GCN, REST maintains a similar memory cost. This is because our work aims to introduce a novel, general training framework that can be applied to any historical embedding model to address the serious staleness issue by adjusting the execution frequency of forward propagation, rather than altering the model architecture, thereby naturally preserving the memory efficiency of the chosen baselines. This design choice makes our approach compatible with any historical embedding methods.
We want to emphasize that the baseline methods come with drawbacks more notable than memory efficiency—degraded performance and slow convergence. Therefore, our work focuses on resolving these critical issues to benefit all existing methods and provide a promising solution for future research, rather than further reducing memory cost.
**A2** Computational Complexity:
We would like to emphasize that REST still demonstrates significant computational advantages over the baselines. In terms of time complexity, our model consistently achieves much faster convergence and requires considerably less running time compared with SOTA historical embedding baselines such as GAS (Tables 4 and 12; Figures 5, 6, 10, and 11), VR-GCN (Table 5; Figures 12 and 13), and LMC (Table 7). It maintains a comparable running time to GraphSAGE while achieving superior performance. Regarding memory complexity, please refer to the previous answer. Overall, since related works are more severely impacted by staleness than by memory cost, our work focuses on addressing the critical staleness problems in these methods. REST aims to enhance performance and accelerate convergence while preserving the high scalability that these methods have achieved.
**A3** Applicability of REST:
We would like to clarify that the memory table being updated at a different frequency from the model is a key advantage of our method rather than a limitation. Based on our finding that frequency mismatch is the root cause of staleness, our model is designed to decouple the forward and backward operations, enabling the memory table to be refreshed at a flexible frequency that existing works cannot achieve. In other words, REST actively assigns different update frequencies, thereby alleviating staleness. If we set the frequency $f$ to 0, it degenerates into the conventional training mode. Thus, our approach is actually a general form of the existing methods rather than an imposed restriction.
In summary, REST is not limited to any specific case; rather, it represents a novel strategy that enables flexible adjustment of the forward and backward operations to address the root cause of staleness. Moreover, it is adaptable to existing training frameworks and broader techniques, making it both highly valuable and widely applicable.
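The frequency decoupling can be sketched as a toy schedule (a hypothetical illustration, where `f` plays the role of the extra forward-only refreshes per training step): with $f = 0$ it reduces to conventional training, where the cache refreshes exactly once per parameter update.

```python
def run_schedule(num_updates, f):
    """Count cache refreshes vs. parameter updates for refresh frequency f."""
    refreshes, updates = 0, 0
    for _ in range(num_updates):
        for _ in range(f):      # extra forward-only passes: refresh the
            refreshes += 1      # historical-embedding cache, no gradients
        refreshes += 1          # forward pass of the actual training step
        updates += 1            # backward pass + parameter update
    return refreshes, updates

print(run_schedule(10, 3))  # (40, 10): cache refreshed 4x per update
print(run_schedule(10, 0))  # (10, 10): conventional training
```

The point of the sketch is only the ratio: decoupling lets the cache-refresh count grow independently of the number of parameter updates.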
**A4** Broad Impact and asynchronous scenario:
REST addresses the general issue of staleness arising from asynchronous updates—a common challenge in various scenarios, not just historical embeddings—even though our focus in this work is on large-scale GNN training. We provide several other common scenarios in which REST can be applied:
(1) Distributed Training:
Asynchronous updates often arise in distributed training due to communication overhead. Nodes may operate on potentially stale global parameters from central servers, leading to parameter staleness. Applying REST in this scenario means performing extra forward passes without gradient calculation asynchronously, which refreshes local embeddings (or activation caches) more frequently. Consequently, when the node eventually computes gradients or synchronizes with the central server, the resulting update is less affected by staleness, improving overall convergence.
(2) Federated Learning:
Federated learning suffers from parameter staleness due to infrequent client–server communications. REST’s decoupling strategy can be applied by letting the clients (or server) perform additional forward-only steps between global synchronizations. These extra forward passes keep local representations up to date with the client’s current model version, so that when the client finally computes gradients and communicates them back, the cached embeddings are no longer heavily stale. These additional asynchronous forward updates serve as opportunities for refreshing beyond global synchronization. This mitigates the mismatch arising from stale parameters and promotes faster convergence in federated learning.
This broader applicability indicates REST is not merely a method specific to historical embeddings, but a generally beneficial framework for addressing asynchronous update issues widely prevalent in machine learning training. We believe that this idea can offer new insights for various scenarios. | null | null | null | null | null | null |
ConceptAttention: Diffusion Transformers Learn Highly Interpretable Features
Paper Decision: Accept (oral)
---
Summary: The paper introduces a method that repurposes the attention mechanisms of multi-modal diffusion transformers (DiTs) to generate highly precise and interpretable saliency maps. Instead of relying solely on traditional cross attention, CONCEPTATTENTION leverages both cross and self attention in the output space of DiT layers to produce contextualized concept embeddings. These embeddings effectively map textual concepts (like “cat” or “sky”) onto corresponding regions in images. The method operates without additional training and is lightweight, making it a practical tool for enhancing the interpretability of diffusion models. Empirical results show that CONCEPTATTENTION achieves state-of-the-art performance in zero-shot image segmentation tasks on benchmarks such as ImageNet-Segmentation and PascalVOC, outperforming several existing approaches.
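As a rough illustration of the mechanism the summary describes (a hypothetical sketch, not the authors' implementation): score each per-patch output embedding against a concept embedding and normalize over patches to obtain a saliency map.

```python
import math

def saliency(concept, patches):
    # Hypothetical sketch: dot each patch embedding with the concept
    # embedding, then softmax over patches to get a saliency distribution.
    scores = [sum(c * p for c, p in zip(concept, patch)) for patch in patches]
    m = max(scores)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

concept = [1.0, 0.0]                          # toy "cat" concept embedding
patches = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
sal = saliency(concept, patches)              # highest mass on patch 0
```

The patch most aligned with the concept receives the largest saliency weight, which is the behavior the segmentation benchmarks then evaluate.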
Claims And Evidence: The experimental evidence largely backs the paper’s central claims. In particular, the authors support the claim that using the attention output space (via a combination of cross and self attention) produces sharper and more transferable saliency maps by demonstrating significant improvements in zero‐shot segmentation benchmarks (as shown in multiple tables and qualitative comparisons). The ablation studies further clarify that the combination of both attention types is crucial for the observed performance gains.
However, a couple of points could benefit from additional evidence:
• The claim that these representations are “highly interpretable” is mainly evaluated through segmentation metrics. Although improved segmentation performance is a strong indicator, a more in-depth human evaluation or analysis on other interpretability aspects could further substantiate this claim.
• The broader assertion regarding the transferability of DiT representations to other downstream tasks is demonstrated only in the context of segmentation. Additional experiments on diverse tasks would help confirm the generality of this transferability.
Overall, while the core experimental results are convincing, the claims about interpretability and broad transferability might be seen as slightly overreaching without further supporting evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-aligned with the problem. The paper targets the challenge of interpreting diffusion models through precise, concept-specific saliency maps, and it does so by leveraging both cross and self attention within multi-modal DiTs. The use of standard zero-shot segmentation benchmarks like ImageNet-Segmentation and PascalVOC provides a robust framework to quantitatively and qualitatively assess the quality of these saliency maps. Additionally, the comprehensive ablation studies help confirm that the specific design choices, such as the combination of attention types, directly contribute to improved performance. This setup effectively demonstrates the method's utility for the intended application without introducing unnecessary complexity.
Theoretical Claims: The paper does not include formal proofs for its theoretical claims. Instead, the authors provide algorithmic descriptions, equations, and intuitive justifications—such as the use of linear projections in the attention output space (e.g., Equation 13)—to support the conceptual basis of the method. The claim that this approach yields sharper and more transferable saliency maps is primarily validated through extensive empirical experiments and ablation studies, rather than through rigorous theoretical proofs. Consequently, there were no formal proofs to verify for correctness.
Experimental Designs Or Analyses: I reviewed the experimental setups and analyses, and overall they appear sound and well-aligned with the paper’s objectives. For example:
- Segmentation Benchmarks:
The use of standard zero-shot segmentation benchmarks (ImageNet-Segmentation and PascalVOC) to evaluate the quality of the saliency maps is appropriate. These benchmarks provide widely accepted metrics (mIoU, pixelwise accuracy, mAP) that serve as a robust proxy for assessing how well the method localizes textual concepts in images.
- Ablation Studies:
The paper includes ablations that isolate the contributions of using just cross attention, just self attention, and their combination. This analysis clarifies that the integration of both mechanisms is key to achieving superior performance. Additionally, the study on the influence of diffusion timesteps helps understand how noise levels affect segmentation performance.
- Layer-wise Analysis:
The experiments also examine the impact of using features from different layers of the model. This layered analysis is useful for demonstrating that deeper layers contribute more refined representations, and that aggregating information across layers further improves the results.
One potential concern is that while segmentation performance is an effective proxy for interpretability, it does not fully capture all aspects of what makes a model’s internal representations interpretable from a human perspective. A complementary human study or alternative qualitative analysis might have provided additional validation. However, given the context and common practices in this research area, the experimental designs and analyses are both reasonable and convincing.
Supplementary Material: n/a
Relation To Broader Scientific Literature: The paper’s contributions build directly on and extend several strands of prior work in model interpretability, transformer architectures, and diffusion models. Specifically:
• Previous research has shown that attention mechanisms in models like UNet-based diffusion models can yield useful cross attention maps for localizing textual concepts (e.g., Tang et al., 2022). This work extends that idea by demonstrating that the output space of multi-modal diffusion transformers can be repurposed—using both cross and self attention—to generate even sharper, more transferable saliency maps.
• In the broader literature on transformer interpretability, methods such as GradCAM, Layer-wise Relevance Propagation, and Attention Rollout have been applied to vision transformers (including models like CLIP and DINO) to visualize and understand model decisions. The proposed CONCEPTATTENTION method builds on these insights by leveraging the rich, multi-modal representations inherent to diffusion transformers, thereby offering a new perspective on how internal representations can be made more interpretable.
• The paper also connects to recent work that explores how the representations of diffusion models can be utilized for downstream tasks such as segmentation. By showing that the same representations can be interpreted through concept embeddings to achieve state-of-the-art zero-shot segmentation performance, the paper bridges the gap between generative modeling and practical image analysis.
Overall, the work synthesizes ideas from transformer-based interpretability and diffusion model research, advancing the understanding of how multi-modal attention mechanisms can be manipulated to yield more precise and meaningful explanations of model behavior.
Essential References Not Discussed: The paper is satisfactory, but a few additional references would help frame the contributions even better. For example:
• TCAV (Testing with Concept Activation Vectors by Kim et al., 2018) is a seminal work on concept-based interpretability. It shows how high-level concepts can be used to explain model decisions, which directly relates to the paper’s idea of using concept embeddings to generate saliency maps. Including a discussion of TCAV would help readers see how the current approach builds on or differs from established concept-based methods.
• The critique “Attention is not Explanation” by Jain and Wallace (2019) offers important context for any work that leverages attention mechanisms for interpretability. Although the authors argue that the attention output space in DiTs yields sharper and more reliable saliency maps, contrasting their findings with the limitations highlighted in that work would provide a more nuanced perspective.
Including these related works would better situate the paper’s contributions within the broader literature on interpretability and help clarify how its proposed method advances beyond previous approaches.
Other Strengths And Weaknesses: Other Strengths:
- Originality:
The paper creatively repurposes the attention mechanisms in multi-modal diffusion transformers to generate interpretable concept embeddings without requiring additional training. This inventive combination of ideas from diffusion models and attention-based interpretability represents a fresh perspective that advances the state of the art.
- Significance:
By demonstrating state-of-the-art performance on zero-shot segmentation tasks, the work highlights the practical impact of its method. Its ability to produce sharp, transferable saliency maps not only deepens our understanding of DiT representations but also has potential implications for enhancing the transparency and controllability of generative models.
- Clarity:
The paper is generally well-structured and clearly written, with detailed descriptions of the methodology, comprehensive experimental evaluations, and helpful pseudo-code that clarifies the proposed approach. The extensive ablation studies further reinforce the clarity of the experimental design and results.
Other Weaknesses:
- Generality:
The method is demonstrated on multi-modal DiTs, and it remains somewhat unclear how well the approach would generalize to tasks beyond image segmentation. A discussion of these limitations could provide a more balanced perspective.
- Theoretical Underpinning:
The paper could benefit from a deeper theoretical analysis of why the attention output space yields superior saliency maps compared to traditional cross-attention methods. While the empirical results are convincing, additional theoretical insights would enhance the overall robustness of the claims.
Overall, the paper makes a compelling contribution with its original approach and significant empirical findings, though further exploration in the areas noted above would provide additional depth and context to its contributions.
Other Comments Or Suggestions: n/a
Questions For Authors: 1. Could you provide further insights—either theoretical or through additional experiments—on why the attention output space yields sharper and more transferable saliency maps compared to traditional cross-attention methods?
2. How sensitive is the method to the selection and number of concept tokens? For example, how does varying the vocabulary size or the choice of specific tokens affect the segmentation performance and interpretability?
3. Can you comment on the generalizability of CONCEPTATTENTION beyond multi-modal DiTs and the specific segmentation tasks evaluated? Have you explored or do you foresee its applicability to other architectures or downstream tasks?
4. Have you considered or conducted any human-centric evaluations of the interpretability provided by the saliency maps (e.g., user studies or qualitative assessments beyond segmentation metrics)?
5. Are there specific failure cases or limitations of CONCEPTATTENTION, particularly when dealing with images containing multiple overlapping or ambiguous objects?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful comments. After reading all of the reviews, we have implemented many of the requested experiments at [this anonymous website](https://concept-attention-anonymous.github.io/) and we will incorporate these updates into the camera ready paper. We are glad that the reviewers highlight the strengths of our work:
1. ConceptAttention is a **simple and elegant approach** (JHK8, XwNm) that produces **high quality saliency maps** with compelling empirical results (JHK8, XwNm, Njw6)
2. ConceptAttention **requires no additional training** (JHK8, XwNm, FGxX, Njw6)
3. and has the potential for **practical impact** to the community (JHK8, FGxX, Njw6) and is **well written and communicated** (JHK8, XwNm, FGxX, Njw6)
We hope our responses below address your specific concerns.
---
> a few additional references would help frame the contributions even better.
We will absolutely include your suggested references in the related works section of the final manuscript.
> it remains somewhat unclear how well the approach would generalize to tasks beyond image segmentation. A discussion of these limitations could provide a more balanced perspective
We actually found that ConceptAttention [generalizes seamlessly to video generation models](https://concept-attention-anonymous.github.io/#a)! We implemented ConceptAttention on the CogVideoX MMDiT video generation model and it generates qualitatively better saliency maps than the cross attention maps. Certainly, ConceptAttention has limitations, and we are happy to discuss more in the paper. For example, see our response to `2.` below.
> 1. Could you provide further insights—either theoretical or through additional experiments—on why the attention output space yields sharper and more transferable saliency maps compared to traditional cross-attention methods?
Great question! Textual information initially flows from the prompt tokens to the image patches. However, after the initial layers the image tokens themselves will encode the rich semantic information from the prompt. Cross attention only captures the *direct* contributions of text tokens to the image patches. Our approach captures both this information and the *indirect* semantic information flowing through the other image patches.
> 2. How sensitive is the method to the selection and number of concept tokens?
ConceptAttention is designed to pick the best concept for each patch out of those available, in much the same way that a zero-shot CLIP classifier would. This may lead to misattribution when there are very few concepts and none match the image contents. See the picture of a bike in [Fig F](https://concept-attention-anonymous.github.io/#f) for example. If the concepts “car” and “background” are chosen then “car” will be assigned to the bike as it is more similar than "background". However if both “car” and “bike” are given then the correct concept “bike” will be chosen.
On the other hand, when there are many concepts and several have overlapping meanings, then ConceptAttention will still pick the one it decides is "best". This can result in one concept (i.e. "mountain") overpowering another, perhaps correct, concept like "tree". See [Fig G](https://concept-attention-anonymous.github.io/#g) for this example.
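This per-patch "pick the best available concept" behavior can be sketched with a small hypothetical numpy example (the dimensions, data, and function name are made up for illustration; this is not the actual implementation):

```python
import numpy as np

def assign_concepts(patch_outputs, concept_outputs):
    """Assign each image patch to its most similar concept, zero-shot-classifier style.

    patch_outputs:   (num_patches, d) attention-output vectors for image patches
    concept_outputs: (num_concepts, d) attention-output vectors for concept tokens
    """
    scores = patch_outputs @ concept_outputs.T  # dot-product saliency scores
    return scores.argmax(axis=1), scores

# Toy example: 4 patches, 2 concepts (say "car" and "background").
rng = np.random.default_rng(0)
patches = rng.normal(size=(4, 8))
concepts = rng.normal(size=(2, 8))
labels, scores = assign_concepts(patches, concepts)
```

Because each patch is always assigned its argmax concept, a patch with no good match (e.g., a bike when only "car" and "background" are offered) still receives whichever concept scores highest.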
> 3. Can you comment on the generalizability of ConceptAttention beyond multi-modal DiTs and the specific segmentation tasks evaluated? Do you foresee its applicability to other architectures or downstream tasks?
As mentioned above, we found that ConceptAttention [generalizes seamlessly to video generation models](https://concept-attention-anonymous.github.io/#a). Additionally, we found it also generalizes to Stable Diffusion 3.5 Turbo, another T2I MMDiT model. We quantitatively evaluated ([see Table B](https://concept-attention-anonymous.github.io/#b)) it using the same protocol from Tab 1 in the manuscript and found it outperforms existing baselines. See [Fig C](https://concept-attention-anonymous.github.io/#c) for qualitative results.
> 4. Have you considered or conducted any human-centric evaluations of the interpretability provided by the saliency maps (e.g., user studies or qualitative assessments beyond segmentation metrics)?
A human-centric evaluation of our approach compared to other zero-shot interpretability methods would be a great line of future work. Of particular interest would be identifying if ConceptAttention can be used by non-experts to debug models, identifying why a model may not generate a proper image that aligns with the given prompt.
> 5. Are there specific failure cases or limitations of ConceptAttention?
Please see our answer to question 2 above.
---
Once again, we thank the reviewer for their feedback and we hope our responses answered your remaining questions. | Summary: - This paper presents ConceptAttention, a method that leverages the attention of diffusion transformers (DiTs) to generate saliency maps for localizing textual concepts in images.
- By repurposing a pre-trained DiT's attention weights, the approach produces more accurate segmentation maps without requiring extra training.
- The work is timely, as DiTs are widespread, yet investigations to their attention were limited; this offers a fresh perspective with both scientific and practical impact.
Claims And Evidence: - l.295-297: ‘However, these have a key limitation in that their vocabulary is limited to the tokens in the user’s prompt.’, I don’t think this is the right claim, as the concept is also the user’s prompt anyway, as stated in l.246-247.
- The work provides a simple, training-free approach to visualising DiT attention and is, in general, innovative. There are sufficient visual results to support the claim. However, the quantitative evaluations are relatively weak, with unclear dataset specifications and missing comparisons to key related works (specified in the following).
Methods And Evaluation Criteria: - The overall evaluation is on the right track, but some specifications need clarification:
- The threshold used to generate segmentation masks from saliency maps is a crucial hyperparameter that significantly impacts results. However, this parameter is not reported in the methodology or experiments.
- The multiclass evaluation is particularly questionable. The proposed method should naturally handle an arbitrary number of classes, yet the primary quantitative results focus on a simplified single-class setting (Table 1), with only limited results for the multiclass setting (Table 4).
- The setup in Table 4 is unclear. For instance, how many classes from PascalVOC are included? Given the various PascalVOC versions, specifying these details is essential. Additionally, key baseline methods, such as OVAM [1] and CLIPasRNN [2], should be compared, as both provide PascalVOC results and share baseline models with this work (e.g., DAAM).
Theoretical Claims: - A key finding of ConceptAttention is that, in multi‐modal diffusion transformers (DiT), the prompt embeddings are dynamically updated alongside the image tokens, yet the concept tokens are designed to receive information from image tokens without feedback. This one‐way update mechanism allows the concept tokens to act as a semantic “anchor”—enabling the extraction of high-fidelity saliency maps that accurately localize textual concepts while preserving the image’s appearance. In contrast, U-Net–based diffusion models use static prompt embeddings, which simplifies visualization but lacks the flexible decoupling achieved in DiT.
- One key question arises: **Why is disabling the feedback from concept tokens to image tokens so important?** The authors argue this is a key design innovation, but neither explain it theoretically nor provide an empirical ablation to support the claim.
- Another key question: although the authors treat concept tokens as distinct from prompt tokens, they are essentially the same (both come from user-provided text and are encoded with the same text encoder). The only difference is how they interact with the image tokens (i.e., equations (9) and (10)). **Therefore, my question is: what if the authors just replace the concept token with the prompt token and compute the saliency in the same way as ConceptAttention**, e.g. $o_p = \mathrm{softmax}(q_p k_{xp}^T)v_{xp}$?
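For concreteness, the one-way attention flow under discussion can be sketched as follows (a hypothetical numpy simplification of equations (9)–(10); names and dimensions are made up, and concept self-attention is omitted):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d, n_img, n_concepts = 8, 16, 3
q_c = rng.normal(size=(n_concepts, d))  # concept queries
k_x = rng.normal(size=(n_img, d))       # image keys
v_x = rng.normal(size=(n_img, d))       # image values

# One-way flow: concepts attend to image tokens, but image tokens
# never attend to concepts, so the generated image is unchanged.
o_c = softmax(q_c @ k_x.T) @ v_x        # (n_concepts, d) concept outputs
```

Under this sketch, the reviewer's proposal amounts to computing the same quantity with prompt queries $q_p$ in place of the concept queries.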
Experimental Designs Or Analyses: - To validate how the threshold impacts the proposed method, it would be helpful to plot the ROC curve comparing it with some key baseline methods (e.g. DAAM and Rollout CLIP).
- Important and closely related baseline methods need to be compared, e.g. OVAM [1] and CLIPasRNN [2].
Supplementary Material: All.
Relation To Broader Scientific Literature: See below the ‘Essential References Not Discussed’ section.
Essential References Not Discussed: - The following key literature is missing:
- OVAM [1] is highly relevant to the proposed method, as both share the same core architecture—using a parallel "concept prompt" to extract attention from a pre-trained diffusion model without training. The key difference is that [1] is implemented on a U-Net-based diffusion model, while ConceptAttention is based on DiT. Given this similarity, it is crucial to include [1] in both the related works section and the quantitative evaluation. Currently, the reported results in ConceptAttention’s Table 4 are not comparable to Table 2 in [1]. For example, DAAM achieves an mIoU of 66.2–79.7 in Table 2 of [1], whereas in Table 4 of ConceptAttention, it is only 10.97. While the specific PascalVOC subset used remains unclear, such a large discrepancy is unexpected.
- CLIPasRNN [2], another training-free approach, should also be included in the evaluation. Specifically, zero-shot image segmentation results can be compared against Table 1 in [2].
[1] Marcos-Manchón, P., Alcover-Couso, R., SanMiguel, J.C. and Martínez, J.M., 2024. Open-vocabulary attention maps with token optimization for semantic segmentation in diffusion models. CVPR 2024.
[2] Sun, S., Li, R., Torr, P., Gu, X. and Li, S., 2024. Clip as rnn: Segment countless visual concepts without training endeavor. CVPR 2024.
Other Strengths And Weaknesses: The proposed approach, despite its similarity to [1], addresses a timely and important problem—investigating attention in DiT. The provided visual results sufficiently support this claim. The main concern lies in unclear details and missing quantitative results. Once these are clarified, I would be happy to reconsider my score.
Other Comments Or Suggestions: The main paper is well written, but additional details, such as the experimental setup, should be included in the appendix.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful comments. After reading all of the reviews, we have implemented many of the requested experiments at [this anonymous website](https://concept-attention-anonymous.github.io/) and we will incorporate these updates into the camera ready paper. We are glad that the reviewers highlight the strengths of our work:
1. ConceptAttention is a **simple and elegant approach** (JHK8, XwNm) that produces **high quality saliency maps** with compelling empirical results (JHK8, XwNm, Njw6)
2. ConceptAttention **requires no additional training** (JHK8, XwNm, FGxX, Njw6)
3. and has the potential for **practical impact** to the community (JHK8, FGxX, Njw6) and is **well written and communicated** (JHK8, XwNm, FGxX, Njw6)
We hope our responses below address your specific concerns.
---
> 'However, [existing models are] limited to the tokens in the user’s prompt.’, I don’t think this is the right claim as the concept is also the user’s prompt anyway
Cross attention maps are by default restricted to the tokens in the user’s prompt. However, when generating images it is often desirable to segment concepts (e.g., "background") not explicitly in the prompt. We somehow need to add these new concepts to the prompt without impacting the generated image's appearance. Our method allows this.
> Why is disabling the feedback from concept token to image token so important?
ConceptAttention is a method for *interpreting the representations of MMDiT models during generation,* but we can’t use it as a tool for interpretation if our concepts change the image we are studying. Hence, we need to decouple concepts from image tokens.
> [Can we just use the output vectors of prompt tokens?]
Yep! Our first discovery was that the output space of MMDiT attention layers encode highly interpretable features. However, these maps are restricted to the prompt vocabulary (see previous answer). Our one-way attention flow removes this restriction.
> The threshold used to generate segmentation masks [...] is not reported in the methodology or experiments.
We hope to clarify this:
1. We choose the mean value of our saliency maps as the threshold. This choice was made to strictly adhere to the evaluation protocol laid out in (Chefer et al., CVPR 2021) and used in (Gandelsman et al., ICLR 2024), which both use the mean. We will improve our description in the final paper.
2. We were also concerned that a particular choice of threshold could favor certain methods. Thus, we included the mean Average Precision (mAP) metric (Tab 1, 2, and 3) which is a *threshold agnostic metric* of segmentation performance that measures the weighted mean of precisions achieved across multiple thresholds.
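The two evaluation ingredients described above can be sketched in a few lines of numpy (a toy illustration with made-up data, not the paper's evaluation code; the AP formula here is the standard mean of precisions over the positive ranks):

```python
import numpy as np

def binarize_at_mean(saliency):
    """Binarize a saliency map at its mean value, per the protocol of Chefer et al. (2021)."""
    return saliency >= saliency.mean()

def average_precision(saliency, gt_mask):
    """Threshold-agnostic AP: mean precision over positives, ranked by saliency."""
    order = np.argsort(-saliency.ravel())
    gt = gt_mask.ravel()[order].astype(float)
    precision = np.cumsum(gt) / np.arange(1, gt.size + 1)
    return float((precision * gt).sum() / gt.sum())

# Toy 2x2 map whose top row is the ground-truth object.
saliency = np.array([[0.9, 0.8], [0.2, 0.1]])
gt = np.array([[1, 1], [0, 0]], dtype=bool)
mask = binarize_at_mean(saliency)     # mean is 0.5, so only the top row is kept
ap = average_precision(saliency, gt)  # a perfect ranking yields AP of 1.0
```

Because AP depends only on the ranking of saliency values, it sidesteps the choice of threshold entirely.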
> The primary quantitative results focus on a simplified single-class setting (Tab 1), with only limited results for the multiclass setting (Table 4).
Our experiments focus on a single-class setting to directly compare to the many zero-shot interpretability baselines which are only capable of generating single predictions. It would be unfair to expect methods like DINO to predict sensible maps for images with multiple classes. Of the subset of methods which can produce open vocabulary saliency maps (i.e., DAAM, TextSpan, and Cross Attention) our approach outperforms each of them.
> Given the various PascalVOC versions, specifying these details is essential.
Thank you for your feedback. We will include extensive experimental details in the appendix. Our single-class experiments cover all 20 classes in PascalVOC, but are restricted to 930 images with only one class present in them. Our multi-class experiments cover all 20 classes and all examples in the entire dataset of 1464 images, many containing multiple classes.
> The reported results in ConceptAttention’s Tab 4 are not comparable to Tab 2 in OVAM (Marcos-Manchón et al., CVPR 2024).
Tab 2 in the OVAM paper shows the evaluation of DAAM and OVAM on a synthetically generated dataset (introduced by the authors) called "VOC-sim", which is distinct from the VOC dataset we evaluate on. VOC-sim consists of images synthetically generated with prompts “a photograph of a {classname}” (Sec 4.1 of OVAM). This dataset is completely different from the VOC dataset we used.
> It would be helpful to plot the ROC curve
While not showing the ROC curve, the mean Average Precision (mAP) metric does capture the area under the precision-recall curve.
> Baseline methods need to be compared [OVAM and CLIPasRNN]
Following the reviewer's suggestions we implemented OVAM and CLIPasRNN as additional baselines. We found that our method outperforms both of these (see [Tab B](https://concept-attention-anonymous.github.io/#b)).
---
Thanks again! If our responses above are satisfactory, we would greatly appreciate the reviewer increasing their score to reflect their increased confidence in our work. | Summary: The paper introduces ConceptAttention, a novel method for generating saliency maps based on user-defined textual concepts. These maps are of high quality and achieve state-of-the-art performance on zero-shot image segmentation benchmarks, surpassing other interpretability methods. Notably, ConceptAttention does not require any retraining and is easy to understand. This demonstrates that the features of multi-modal diffusion transformers (MMDiTs) are highly transferable and potentially beneficial for various downstream vision tasks.
Claims And Evidence: Claims are well supported by experiments.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes, experimental designs and analysis are good. Ablation studies are interesting and insightful.
Supplementary Material: Yes, the code is included in the supplementary.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: 1) DINOv2: Learning Robust Visual Features without Supervision, TMLR 2024: Although comparison to DINO features were explored in the paper, it would be nice to also compare to the latest version of DINO.
2) Vision Transformers Need Registers, ICLR 2024: This work improves the DINOv2 features even further. Again, it would be great to see the performance against this method.
Other Strengths And Weaknesses: Strengths:
1) The paper is well written and easy to read.
2) The idea is simple and effective.
3) The claims are well supported empirically.
4) The proposed method is interesting from the interpretability perspective and potentially useful in downstream vision tasks.
Weaknesses:
1) Some missing references and comparisons (please see above).
2) Limited model evaluation, i.e., only the Flux-Schnell model has been validated so far. It would be great to see the method's performance with other MMDiTs too.
Other Comments Or Suggestions: 1) In Section 4.1, in equations (4), (5), and (6), there are in total $k$ concepts which I believe should be $r$ instead as it was mentioned in the beginning of the paragraph: "The user specifies a set of $r$ single token concepts...".
Questions For Authors: 1) How about other MMDiT models except Flux-Schnell (e.g., Stable Diffusion 3)? Is the performance of the method equally good?
2) Will the method work with usual DiT-based models (e.g., PixArt family of models)? Is there a way to make it work with this architecture?
3) I also wonder about video generation DiT models? Do you think it is possible to extend your method to them? What kind of information can be extracted from there?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments. After reading all of the reviews, we have implemented many of the requested experiments which you can see at this [anonymous website](https://concept-attention-anonymous.github.io/). We are glad to see that multiple reviewers recognize the strengths of our work:
1. ConceptAttention is a **simple and elegant approach** (JHK8, XwNm) that produces **high quality saliency maps** with compelling empirical results (JHK8, XwNm, Njw6)
2. ConceptAttention **requires no additional training** (JHK8, XwNm, FGxX, Njw6)
3. and has the potential for **practical impact** to the community (JHK8, FGxX, Njw6) and is **well written and communicated** (JHK8, XwNm, FGxX, Njw6)
The major changes of particular relevance to reviewer XwNm are:
1. **ConceptAttention works on a Video Generation model!** Watch the demo [here](https://concept-attention-anonymous.github.io/#a).
2. **ConceptAttention generalizes to Stable Diffusion 3.5 Turbo.** Quantitative results are shown in [Fig B](https://concept-attention-anonymous.github.io/#b) and qualitative results are shown in [Fig C](https://concept-attention-anonymous.github.io/#c).
3. **ConceptAttention is now compared against additional baselines.**
Following reviewer suggestions, we implemented: DINOv2, DINOv2 with registers, iBOT, OVAM, and CLIP as RNN and found that our method outperforms each of them. [See Table B](https://concept-attention-anonymous.github.io/#b).
These updated results will all be incorporated into our camera ready paper. Below we aim to address the particular concerns of reviewer XwNm.
---
> Although comparison to DINO features were explored in the paper, it would be nice to also compare to the latest version of DINO. `[...]` This work improves the DINOv2 features even further.
The reviewer raises an interesting request for comparison against DINOv2 (Oquab et al., TMLR 2024) and DINOv2 with Registers (Darcet et al., ICLR 2024), both of which are highly relevant. Sec. 5 of our original submission evaluated the performance of the self-attention maps from both of these methods, and we found that ConceptAttention outperforms both of them.
Our further analysis showed that DINOv2 actually has less interpretable self-attention maps than DINOv1, despite performing better on downstream tasks. See a table summarizing these results in [Table B](https://concept-attention-anonymous.github.io/#b). Surprisingly, we also found that DINOv2 with registers under-performed compared to DINOv2. We also provided qualitative results which subjectively match the self-attention results shown in each respective paper in [Fig D](https://concept-attention-anonymous.github.io/#d).
> 1. How about other MMDiT models except Flux-Schnell (e.g., Stable Diffusion 3)? Is the performance of the method equally good?
Yes! We followed your suggestion and implemented our approach on a Stable Diffusion 3.5 Turbo model and found that it produces competitive results on the same quantitative evaluation we conducted for Table 1 of the manuscript. ConceptAttention on SD3.5 Turbo beats all tested baselines on both ImageNet-Segmentation and PascalVOC, though ConceptAttention on the Flux-Schnell architecture is slightly better on most metrics. See [Table B](https://concept-attention-anonymous.github.io/#b) for quantitative results and [Fig C](https://concept-attention-anonymous.github.io/#c) for qualitative results.
> 2. Will the method work with usual DiT-based models (e.g., PixArt family of models)? Is there a way to make it work with this architecture?
Our approach hinges upon MMDiT models that leverage multi-modal attention layers to jointly process both text and image modalities, and thus will not work with cross-attention-based T2I architectures such as the PixArt family.
> 3. Do you think it is possible to extend your method to [video generation models]?
Yes! To answer your question, we implemented ConceptAttention on CogVideoX (Yang et al., ICLR 2025) and found that our approach seamlessly generalizes to video generation models. The only difference is that we also average information over the temporal dimension. We found that ConceptAttention produces qualitatively better results than cross attention maps from the same model. See the [video demonstration](https://concept-attention-anonymous.github.io/#a).
---
Once again, we thank the reviewer for their feedback. If our responses and new results are satisfactory, we would greatly appreciate the reviewer increasing their score to reflect their increased confidence in our work. | Summary: The authors present a new method to extract well-refined saliency maps from pre-trained DiT models without having to perform any additional training, mainly by directly leveraging the attention weights of the multi-modal model in a clever way to establish correspondences to a set of provided ‘concepts’ that might appear in the image – providing a neat approach for improved (layer-wise) insights into these blackbox models.
Claims And Evidence: The main claims made during the early stages of the paper are all well substantiated through experimental evidence.
The only critical point I see is the claim that ConceptAttention has “minimal impact on model latency” (l 217 right) – this is only true for a small set of concept embeddings; since this set is included in a self-attention operation, larger sets will inevitably cause larger latencies due to the quadratic complexity of this operation!
Methods And Evaluation Criteria: The task of zero-shot image segmentation as main basis for the evaluation of the ‘concept maps’ is well-chosen, as it is a reasonable way to quantify the object-specific salience maps;
The comparative baselines might be slightly skewed to the authors’ advantage, see ‘experimental designs’ section below.
The choice of the threshold as the mean value to produce binary segmentation masks for the quantitative analysis is an understandable but potentially suboptimal choice that could distort the results – the mean is, after all, highly affected by outliers/extreme values; Choice of median, and/or top-x % as a cut-off might be more reliable (see questions);
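To make the outlier concern concrete, here is a small illustrative sketch (the saliency scores are hypothetical, not data from the paper): the mean threshold shifts sharply when a few extreme values are present, while the median and a top-X% percentile cut-off barely move.

```python
import numpy as np

rng = np.random.default_rng(0)
saliency = rng.beta(2, 5, size=1000)  # hypothetical saliency scores in [0, 1]
# Same scores plus a handful of extreme outliers:
saliency_out = np.concatenate([saliency, np.full(10, 50.0)])

for name, s in [("clean", saliency), ("with outliers", saliency_out)]:
    print(name,
          "mean:", round(s.mean(), 3),           # shifts noticeably with outliers
          "median:", round(np.median(s), 3),     # barely moves
          "top-30% cut:", round(np.percentile(s, 70), 3))  # barely moves
```

A method whose maps contain a few extreme activations thus gets a very different binary mask under the mean threshold than under the median or a percentile cut-off.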
Theoretical Claims: No theoretical claims beyond well-known/established formulas.
Experimental Designs Or Analyses: The comparative baselines in Fig. 5 and Table 1 might slightly be skewed to the authors’ advantage, e.g. the choice of DINO instead of the much-newer and more powerful DINOv2, or other alternatives that commonly return better saliency maps (e.g. iBOT, …);
As mentioned previously:
In Section 5.1, the choice of the threshold as the *mean value* to produce binary segmentation masks for the quantitative analysis is an understandable but potentially suboptimal choice that could distort the results – the mean is, after all, highly affected by outliers/extreme values;
$\rightarrow$ Choice of median, and/or top-x % as a cut-off might be more reliable (see questions);
Minor: Additional ablation regarding the choice of a simple dot-product to produce saliency maps could be interesting to justify this choice (L 258 right).
Supplementary Material: The appendix provides some helpful insight into how to easily implement the idea in the form of pseudo-code, as well as additional visualisations;
The authors also provide the code, which I haven't checked in detail though.
Relation To Broader Scientific Literature: Relation to broader literature is sufficient; The authors also discuss their constraints in terms of not comparing to methods trained on large datasets like SAM.
Essential References Not Discussed: None that come to mind in direct relation to the work's main contributions;
Potential updates to Table 1 could be DinoV2 (Oquab et al., TMLR 2024) or iBOT (Zhou et al., ICLR2022)
Other Strengths And Weaknesses: **Strengths**:
*Originality & Significance:*
- The authors provide a simple and neat but powerful approach which yields high-quality saliency maps and allows a variety of query-concepts to be tested for, hence provides a good measure of flexibility on this axis
- The authors’ method repurposes the already trained parameters of the underlying multi-modal DiT model, which entirely removes any need for additional training and/or fine-tuning – providing great benefit to the community
*Clarity:*
- The paper is generally well written and easy to follow, with a good number of clear visualisations (e.g. Figure 4) supporting the contributions and explanations
---
**Weaknesses**:
- Missing discussion of cases where concepts are queries that are in fact NOT in the image – see questions.
- Missing discussion and/or analysis of behaviour dependent on number of concept queries, as well as potential partial overlap of provided concepts – see questions
- Quantitative evaluation might be skewed towards certain methods that have a clearer separation around the mean – which is the threshold the authors choose; This could be improved upon by additionally evaluating using the median or top-X% -- see questions;
- Quality of the manuscript should be improved – there are several typos and grammatical errors that can (and should) easily be corrected (see comments)
- Minor: Comparative methods (Table 1) are mostly pre-2022, e.g. DINO v1 instead of the more powerful v2; see questions
Other Comments Or Suggestions: I’d suggest the authors go through their manuscript in detail and correct the typos / grammatical mistakes, e.g.
- L 142 right: “a diffusion models” ($\rightarrow$ model)
- L 173 left: “line of work attempts perform” ($\rightarrow$ attempts to or performs)
- L 241 right: “at the end of attention operation” ($\rightarrow$ end of the attention ..)
- L 265 right: $\rightarrow$ Should start upper-case after period: This is..
- L 412 right: “pixewlise” ($\rightarrow$ pixelwise)
- L 414 right: “out performed” ($\rightarrow$ outperformed)
- …
Questions For Authors: *TLDR;* I do like the approach, as I think it is a very simple and elegant yet powerful method to provide insights! However, I'd like the authors to address a number of questions! Depending on the responses, I'm happy to update my rating!
**Major:**
1. What happens if concepts are provided as queries that are NOT contained in the image? I’d be curious to hear/see whether the model will be able to recognise their non-existence, or still pick out irrelevant areas as a saliency map! And are other concepts’ saliency maps negatively affected?
2. How does the quality of the saliency maps change if more or fewer concepts are provided? Is there a ‘sweet-spot’ in terms of number of concepts? What happens if overlapping concepts are provided, e.g. “landscape” and “mountain”/”grass”?
3. As previously mentioned, I feel like the mean as the threshold to create the binary decision / saliency map might skew the results towards methods that don’t produce outliers (which is, of course, still a valid choice). However: have the authors investigated how median as a metric, as well as smaller top-X% (e.g. top 30%) would change this?
**Others:**
4. Fig 6: is the ‘combined’ information from all layers, or the layers 10-18 as detailed previously in the experimental setup description?
Independent of this, why do the authors think the combined approach outperforms all individual layers? How exactly are the layers combined? I am slightly surprised about this result, since I’d expect e.g. the average to lie somewhere in-between the extremes; Or are the individual contributions simply making it more robust in terms of the threshold metric?
5. Although not necessarily in direct competition with the proposed approach, Table 1 does list methods like DINO – however, there have been significant improvements after 2021 in terms of DINOv2 as well as other methods like iBOT, which have been shown to often produce better saliency maps; It’d be good to see some results for these methods as well if possible, to get a feeling of how well their saliency maps perform (Note: I don’t expect your method to be better, but it would just provide more up-to-date insight to the reader!)
More of a suggestion:
6. I feel like the “*Impact of diffusion timestep on segmentation*” section in Section 5.2 would deserve more highlighting! It is quite interesting to see that the middle diffusion timesteps perform significantly better than both early and late ones!
Do the authors have more intuitions why this could be?
7. I think it would be interesting to the reader to include a visualisation, i.e. qualitative analysis, how the saliency maps progress across different layers throughout – in addition to the quantitative plot that’s currently shown.
---
---
## Post-Rebuttal Update:
Given that the authors have sufficiently addressed all my concerns and have provided many additional convincing insights, I am raising my score from 2 to 4 and recommend acceptance of this work.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thorough response. After reading all of the reviews, we have implemented many of the requested experiments at [this anonymous website](https://concept-attention-anonymous.github.io/) and we will incorporate these updates into the camera ready paper. We are glad that the reviewers highlight the strengths of our work:
1. ConceptAttention is a **simple and elegant approach** (JHK8, XwNm) that produces **high quality saliency maps** with compelling empirical results (JHK8, XwNm, Njw6)
2. ConceptAttention **requires no additional training** (JHK8, XwNm, FGxX, Njw6)
3. and has the potential for **practical impact** to the community (JHK8, FGxX, Njw6) and is **well written and communicated** (JHK8, XwNm, FGxX, Njw6)
We hope our responses below address your specific concerns.
---
> Only critical point I see is the claim that ConceptAttention has “minimal impact on model latency”
We agree this is imprecise wording. This statement indeed holds only when $c \ll p$ (for $c$ concepts and $p$ patches). Thankfully, for typical values of $c$ (i.e., 1, 10, 50), the $O(p^2)$ patch self-attention operations dominate the $O(pc)$ concept-attention operations. On a single NVIDIA A40, ConceptAttention takes 1.12, 1.14, and 1.20 seconds respectively to perform a forward pass with 1, 5, and 50 concepts.
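For intuition, a quick back-of-the-envelope check of this scaling argument (the patch count is a hypothetical value, not one from the paper): the extra work is a fraction of roughly $c/p$ of the patch self-attention cost.

```python
p = 4096  # hypothetical number of image patches
for c in (1, 10, 50):
    patch_ops = p * p      # O(p^2) patch self-attention pairs
    concept_ops = p * c    # O(pc) extra concept-attention pairs
    # The ratio equals c/p, so it stays tiny for typical concept counts:
    print(f"c={c}: overhead ratio = {concept_ops / patch_ops:.4f}")
```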
> 1. What happens if concepts are provided as a query that are NOT contained in the image?
This is a great question! ConceptAttention is designed to pick the most relevant concept out of those given, in the same way that a zero-shot CLIP classifier would. This means if the most similar concept out of those given is incorrect then it may still be chosen.
For example, take an image of a bike on the street: if the concepts “car” and “background” are given, then “car” will likely be assigned to the bike, as it is more similar than “background”. However, if both “car” and “bike” are given, then the correct concept “bike” will be chosen ([see Fig F](https://concept-attention-anonymous.github.io/#f)).
> 2. How does the quality of the saliency maps change if more or fewer concepts are provided? [...] What happens if overlapping concepts are provided?
ConceptAttention picks the best concept for each patch out of those available. This may lead to misattribution when there are very few concepts and none match the image contents (see previous answer). However, when there are many concepts and several have similar or overlapping meanings, ConceptAttention will still emphasize just one. This can result in one concept (i.e., "mountain") overpowering another valid concept like "tree" ([see Fig G](https://concept-attention-anonymous.github.io/#g)).
> 3. The choice of the threshold as the mean value [is] potentially suboptimal
Thank you for the opportunity to clarify our choice of threshold:
1. The mean value was chosen in an effort to strictly adhere to the evaluation protocol laid out in (Chefer et al., CVPR 2021), also used by (Gandelsman et al., ICLR 2024), which uses the mean.
2. To prevent a particular choice of threshold favoring certain methods, we included the mean Average Precision (mAP) metric (Table 1, 2, 3) which is a *threshold agnostic metric* measuring segmentation performance that takes the weighted mean of precisions achieved across multiple thresholds.
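For readers unfamiliar with the metric, here is a minimal sketch of one standard average-precision formulation (the scores and labels are made-up illustrative values): precision is evaluated at the rank of every positive and averaged, so no single cut-off is privileged.

```python
import numpy as np

def average_precision(scores, labels):
    """Threshold-free AP: precision averaged at each positive, ranked by score."""
    order = np.argsort(-scores)                 # sort by descending score
    labels = np.asarray(labels)[order]
    hits = np.cumsum(labels)                    # positives found up to rank k
    precision_at_k = hits / (np.arange(len(labels)) + 1)
    return precision_at_k[labels == 1].mean()   # average only at the positives

# Hypothetical per-pixel saliency scores and ground-truth mask labels:
scores = np.array([0.9, 0.8, 0.3, 0.2, 0.1])
labels = np.array([1,   1,   0,   1,   0])
print(average_precision(scores, labels))  # mean of precisions 1, 1, and 0.75
```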
> 4. Fig 6: is the ‘combined’ information from all layers?
We collect concept and image embeddings from each of these layers, compute their projections, and then average over the layer dimension. This improves robustness to noise from individual layers.
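A minimal sketch of this layer-combination step as described above (the tensor shapes and the dot-product projection via `einsum` are illustrative assumptions, not the authors' exact implementation): projections are computed per layer, then averaged over the layer dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
L, P, C, D = 9, 16, 4, 8  # hypothetical: 9 layers, 16 patches, 4 concepts, dim 8

img_embs = rng.normal(size=(L, P, D))      # per-layer image patch embeddings
concept_embs = rng.normal(size=(L, C, D))  # per-layer concept embeddings

# Dot-product projection of every patch onto every concept, per layer:
per_layer = np.einsum("lpd,lcd->lpc", img_embs, concept_embs)
combined = per_layer.mean(axis=0)  # average over the layer dimension
print(combined.shape)  # one saliency score per (patch, concept) pair
```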
> 5. Potential updates to Tab 1 could be DINOv2 or iBOT.
Thank you for the suggestion. We implemented iBOT, DINOv2 and DINOv2 with registers. We found that our approach outperformed each of them quantitatively on the same evaluation shown in Table 1. Intriguingly, the raw self-attention maps of DINOv2 underperform compared to the DINOv1 model. An example of maps from each of these methods is [shown in Fig D](https://concept-attention-anonymous.github.io/#d).
> 6. Do the authors have intuitions for why [the middle steps are better than late ones]?
This was an interesting result to us as well. We have observed that early steps shape the semantic, high-level structure of an image (with too much noise for high-quality segmentation maps), while later steps focus on high-frequency minor details. Thus, the middle steps likely offer a good balance between these two extremes.
> 7. It would be interesting to [show] how the saliency maps progress across different layers
This is a great suggestion! See [Fig E](https://concept-attention-anonymous.github.io/#e) for these results, which align with Fig 6 from the paper.
---
Thanks again for your feedback! If our responses and new results are satisfactory, we would greatly appreciate the reviewer increasing their score to reflect their increased confidence in our work.
---
Rebuttal Comment 1.1:
Comment: I'd like to congratulate the authors on the additional insights they have provided in the rebuttal, which (in my opinion) make the paper significantly stronger.
All my queries have been sufficiently addressed; I also couldn't spot any other prohibitive weaknesses when reading through the other reviews -- and hence, I'm updating my rating to recommend acceptance. | null | null | null | null | null | null |
Learning Compact Semantic Information for Incomplete Multi-View Missing Multi-Label Classification | Accept (poster) | Summary: The paper develops a model named COME for incomplete multi-view missing multi-label classification tasks. Unlike most existing methods, the approach aims to learn compact semantic information by minimizing task-independent redundant information. Additionally, a dual-branch soft pseudo-label generation strategy is introduced in the model to alleviate the negative impact of missing supervisory information.
Claims And Evidence: The claims are clear and convincing. Main claim: the proposed multi-view semantic consistency enhancement strategy learns compact multi-view shared information, which effectively alleviates the performance degradation caused by incomplete contrastive learning.
Methods And Evaluation Criteria: The authors used six evaluation metrics commonly used in multi-label classification, which are rational and common.
Theoretical Claims: The adequacy hypothesis of multiple views shared information proposed in the paper is reasonable.
Experimental Designs Or Analyses: A large number of experiments are carried out on five datasets. The experimental settings are reasonable, and the experimental analysis is sufficient.
Supplementary Material: Yes, all supplementary materials are reviewed.
Relation To Broader Scientific Literature: This paper studies the inadequacy of contrastive learning in dealing with incomplete multi-view data, which is worth further exploration. Unlike most existing methods, the proposed method aims to learn compact semantic information by minimizing task-independent redundant information.
Essential References Not Discussed: The literature citations of the article are reasonable, and the latest relevant theories and specific methods are introduced in detail.
Other Strengths And Weaknesses: Strengths:
1. The paper is well-written, with explicit problem definition and clear description. It’s readily understandable even without prior knowledge.
2. The paper provides comprehensive experimental results to validate the effectiveness and robustness of the proposed methods.
3. The setting of double incompleteness in views and labels is novel, and the proposed pseudo-label strategy alleviates the problem of missing labels.
Weaknesses:
1. Some minor mistakes should be revised, the reference of equation 14 in line 205 is incorrect and it should be capitalized in line 374: “In table 1”.
2. The code is not provided. To improve reproducibility, implementation details and code should be provided.
Other Comments Or Suggestions: I noticed that in the experimental results on the complete dataset in Figure 8, there are only results of UPDGD on the Corel5k and Pascal07 datasets. Could you provide more results on other datasets?
Questions For Authors: 1. Is the data still incomplete in the inference phase? I know that views and labels are not complete in the training, so is this still the case when inferencing?
2. In Figure 2(b), is this label distribution common? Are the patterns found in a single dataset universal?
3. From Figure 4, why is the negative impact of missing labels smaller than that of missing views? More discussions should be added.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your constructive reviews and suggestions. Below, we will address each of your questions.
> Q1: Some minor mistakes should be revised, the reference of equation 14 in line 205 is incorrect and it should be capitalized in line 374: “In table 1”.
Thanks for your correction. We have corrected the incorrect reference to Eq. (14) in line 205, capitalized “Table 1” in line 374, and addressed other minor errors throughout the manuscript to ensure consistency with academic standards.
> Q2: The code is not provided. To improve reproducibility, implementation details and code should be provided.
Thanks for your suggestion. The code will be made publicly available upon acceptance to support reproducibility.
> Q3: I noticed that in the experimental results on the complete dataset in Figure 8, there are only results of UPDGD on the Corel5k and Pascal07 datasets. Could you provide more results on other datasets?
We sincerely apologize for this omission. The complete experimental results on the five datasets, without any missing views or labels, are provided in Figure 1 of the PDF document (available via the anonymized link). It can be observed that our model (COME) achieves superior performance across six metrics compared to other methods on most datasets.
> Q4: Is the data still incomplete in the inference phase? I know that views and labels are not complete in the training, so is this still the case when inferencing?
We still maintain the views’ incompleteness in the inference phase, but we do not apply any missing settings to the labels, since the complete labels serve as the evaluation criteria for real performance.
> Q5: In Figure 2(b), is this label distribution common? Are the patterns found in a single dataset universal?
Thanks for your questions. The label distributions of the other datasets are summarized in Figure 2 of the PDF document. We observed that the number of labels per sample varies across datasets and within individual datasets. This inherent variability makes the Top-K pseudo-labeling strategy [1] unsuitable for such scenarios.
> Q6: From Figure 4, why is the negative impact of missing labels smaller than that of missing views? More discussions should be added.
Thank you for raising this point. Intuitively, in the fusion stage of multi-view representations using a Mixture of Experts (MoE), the learned weights reflect the relative importance of different view representations within the fused representation. The absence of key view representations causes rapid degradation in the quality of multi-view joint representations, which poses significant challenges to multi-label classifiers. Additionally, we introduce a dual-branch soft pseudo-label imputation strategy to mitigate the multi-label missing problem. We present the ablation results of the “dual-branch soft pseudo-label imputation” strategy under varying missing-label rates. The results of COME without the dual-branch soft pseudo-label imputation strategy are summarized in the following table:
| label-missing rate | AP | 1-HL | 1-RL | AUC |
|--------------------|-------|-------|-------|-------|
| 0% | 0.604 | 0.937 | 0.862 | 0.879 |
| 30% | 0.594 | 0.935 | 0.857 | 0.875 |
| 50% | 0.586 | 0.935 | 0.854 | 0.872 |
| 70% | 0.571 | 0.934 | 0.843 | 0.864 |
And the results of COME with dual-branch soft pseudo-label imputation strategy are summarized in the following table:
| label-missing rate | AP | 1-HL | 1-RL | AUC |
|--------------------|-------|-------|-------|-------|
| 0% | 0.602 | 0.937 | 0.859 | 0.878 |
| 30% | 0.596 | 0.936 | 0.859 | 0.876 |
| 50% | 0.590 | 0.935 | 0.855 | 0.873 |
| 70% | 0.580 | 0.933 | 0.852 | 0.870 |
For clarity, the results are presented in Figure 3 of the PDF. The experimental results demonstrate that the proposed strategy effectively mitigates the negative impact of missing labels, particularly under a high missing rate of 70%.
References:
[1] Class-Distribution-Aware Pseudo-Labeling for Semi-Supervised Multi-Label Learning. NeurIPS 2023.
Anonymous link:
https://anonymous.4open.science/r/6513/rebuttal.pdf
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply. I have no further questions and decide to keep my rating.
---
Reply to Comment 1.1.1:
Comment: We appreciate your careful review and valuable comments on the manuscript. We attach great importance to your feedback and incorporate your suggestions to improve the final version of the paper. | Summary: The method COME is developed to address the missing data in multi-view multi-label classification tasks, which pursues the maximization of cross-view information to compress the irrelevant information and develops a pseudo-label filling strategy to handle the unavailable labels. Besides, the authors claim that missing data leads to insufficient contrastive learning and then build an information theory-based model to handle it.
Claims And Evidence: 1. Claim: missing data results in insufficient contrastive learning. Evidence: shown in Figure 1.
2. Claim: all the task-relevant information is contained by multi-view shared information. Evidence: Assumption 2.1 and Proposition 2.2.
Methods And Evaluation Criteria: The authors propose a compact semantic learning framework, named COME, for iM3C task. Six metrics such as AUC, AP, OE and so on are used to evaluate the performance of the method.
Theoretical Claims: Assumption 2.1 and Proposition 2.2 provide the basic theoretical claims for the method, which is solid and reliable according to existing methods.
Experimental Designs Or Analyses: The authors compared eight methods across five datasets in both incomplete and complete cases. Ablation experiments and parameter analysis experiments were conducted. It would be beneficial for the authors to clarify through experiments why a dual-branch model is preferred compared to a single-branch structure.
Supplementary Material: Extra appendix is provided in the paper, and I have checked all the appendix.
Relation To Broader Scientific Literature: In contrast to prior studies, the manuscript explores how contrastive learning degenerates in multi-view multi-label classification when missing views and missing labels coexist. By learning compact representations, the degeneration issue is alleviated, thus enabling contrastive learning to remain robust even in the case of missing views. The authors proposed a novel dual-branch architecture to generate soft pseudo-labels, effectively addressing the label-missing problem.
Essential References Not Discussed: No
Other Strengths And Weaknesses: **Strengths**
1. Understanding the performance degradation mechanisms of contrastive learning under incomplete multi-view scenarios remains an open challenge, with significant implications for robust representation learning. The authors proposed an effective solution to this problem and I think this framework has the potential to be extended to other multi-view learning tasks.
2. Overall, the experimental results presented in this paper are rich and convincing.
3. The description of the method is clear and understandable, which I think will be helpful for readers to replicate the results.
4. The authors study the complexity of the multi-view multi-label classification problem from various aspects, including multi-view representation learning and multi-label classification, from feature extraction to pseudo-label generation, demonstrating a substantial amount of work.
**Weaknesses**
1. Some details need to be improved, such as the typo “we suppose all the task-relevant information contained by multi-view shared information.” in line 97. I suggest the authors to polish it to help improve the fluency.
2. Equations in the Appendix should not appear in the main text commonly, such as Eq. (14).
3. The authors claim that in pseudo-label generation, the use of hard pseudo-labels may lead to error accumulation and over-fitting, but there is no relevant comparative experiment in the paper. Please present the comparative experiments using hard labels and soft pseudo-labels.
Other Comments Or Suggestions: 1. The cross-view reconstruction in Eq.(8) does not use the joint representation z. Could you explain it? Additionally, is the joint representation z utilized in the classification tasks? I think the authors should provide a detailed clarification on this.
2. In Figure 7, the hyperparameters $\lambda_1$ and $\lambda_2$ appear to have little influence on the experimental outcomes. Could the authors elaborate on the empirical or theoretical rationale behind this observed insensitivity?
Questions For Authors: 1. The difference in Figure 4(b) is very small. Is this caused by using pseudo labels? How does the model perform at different missing rates if the pseudo-label module is disabled?
2. Figure 3 does not seem to be mentioned in the text. The font styling in Figure 6 is inconsistent.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We greatly appreciate your thoughtful and detailed feedback, and we will address your questions one by one.
> Q1: It would be beneficial for the authors to clarify through experiments why a dual-branch model is preferred compared to a single-branch structure.
Thank you for your suggestions. We carried out experiments on the Pascal07 dataset to investigate the differences between dual-branch and single-branch architectures. The results are summarized in the following table:
| Model | AP | 1-HL | 1-RL | AUC |
|------------------------|--------|--------|--------|--------|
| COME(dual-branch) | 0.590 | 0.935 | 0.855 | 0.873 |
| COME(single-branch) | 0.586 | 0.935 | 0.852 | 0.872 |
The experimental results demonstrate that the dual-branch architecture exhibits superior performance compared to its single-branch counterpart.
> Q2: Some details need to be improved, such as the typo “we suppose all the task-relevant information contained by multi-view shared information.” in line 97. I suggest the authors to polish it to help improve the fluency.
Thank you for your suggestion. In line 97, the original sentence has been revised to the following in the manuscript: “We suppose that all the task-relevant information is contained in the multi-view shared representation.”.
> Q3: Equations in the Appendix should not appear in the main text commonly, such as Eq. (14).
We apologize for this typo. The corresponding equation is Eq. (6) rather than Eq. (14). We have updated this in the revised manuscript.
> Q4: Please present the comparative experiments using hard labels and soft pseudo-labels.
Thank you for your suggestion. We compared the soft pseudo-labeling strategy with the hard pseudo-labeling strategy and summarize the results in the following table:
| Model | dataset | AP | 1-HL | 1-RL | AUC |
|------------------------------|-----------|-------|-------|-------|-------|
| COME with hard pseudo-labeling | Corel5k | 0.425 | 0.988 | 0.916 | 0.918 |
| COME with hard pseudo-labeling | Pascal07 | 0.585 | 0.934 | 0.852 | 0.872 |
| COME with soft pseudo-labeling | Corel5k | 0.432 | 0.988 | 0.917 | 0.920 |
| COME with soft pseudo-labeling | Pascal07 | 0.590 | 0.935 | 0.854 | 0.873 |
The experimental results demonstrate that the soft pseudo-labeling strategy exhibits superior performance compared to the hard pseudo-labeling approach.
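To illustrate the hard- vs soft-pseudo-label distinction in the abstract (a generic sketch with made-up numbers and a plain binary cross-entropy loss, not COME's actual formulation): thresholding a barely-confident prediction of 0.52 into a hard label of 1.0 asserts full confidence, so any later disagreeing prediction is penalized as strongly as a true error, whereas the soft label propagates the original uncertainty.

```python
import math

# Hypothetical borderline classifier output for one missing label:
prob = 0.52
hard = 1.0 if prob > 0.5 else 0.0   # hard pseudo-label: full confidence
soft = prob                          # soft pseudo-label: keeps the 52/48 split

def bce(y, p):
    """Binary cross-entropy of prediction p against (possibly soft) target y."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

pred = 0.3  # a later prediction that disagrees with the pseudo-label
print(bce(hard, pred))  # punished as if the label were certain
print(bce(soft, pred))  # smaller: the label's uncertainty attenuates the penalty
```

This is one common intuition for why hard pseudo-labels can accumulate errors while soft ones degrade more gracefully.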
> Q5: The cross-view reconstruction in Eq.(8) does not use the joint representation z. Could you explain it? Additionally, is the joint representation z utilized in the classification tasks? I think the authors should provide a detailed clarification on this.
Thank you for your question. Combining the Mixture of Experts (MoE) with Eq. (15), we derive Eq. (17). This implies that the mutual information between a given view and the others can be enhanced through cross-view representation reconstruction, thereby improving semantic consistency. Moreover, the joint representation z is indeed utilized in multi-label classification tasks, and we have supplemented this in the revised manuscript.
> Q6: In Figure 7, the hyperparameters $\lambda_1$ and $\lambda_2$ appear to have little influence on the experimental outcomes. Could the authors elaborate on the empirical or theoretical rationale behind this observed insensitivity?
Thank you for your question. We conducted sensitivity analysis experiments to investigate the influence of $\lambda_1$ and $\lambda_2$ on five datasets by varying their values. Subsequently, we identified hyperparameter configurations that ensured stable model performance and refined the intervals for detailed analysis. These results are visualized in Figure 4 of the PDF document (available via the anonymized link). The experiments show that the model maintains stable performance across a wide range of $\lambda_1$ and $\lambda_2$, demonstrating high robustness to these hyperparameters.
> Q7: The difference in Figure 4(b) is very small. Is this caused by using pseudo labels? How does the model perform at different missing rates if the pseudo-label module is disabled?
Thank you for your valuable discussion. We conducted ablation experiments on the pseudo-labeling strategy under varying label missing rates. The performance of COME without this strategy is summarized in Table 1 of the provided PDF document. And the results of COME with the proposed strategy are presented in Table 2 of the PDF. For a clear observation, we show the results in Figure 3 of the PDF. The experimental results demonstrate that the proposed strategy mitigates the negative effects caused by label missing.
> Q8: Figure 3 does not seem to be mentioned in the text. The font styling in Figure 6 is inconsistent.
Thanks to the suggestion. The manuscript will be revised to include extended annotations for Figure 3 and to ensure font style consistency in Figure 6.
Anonymous link:
https://anonymous.4open.science/r/6513/rebuttal.pdf
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses. They have addressed my concerns.
---
Reply to Comment 1.1.1:
Comment: We thank you for your thorough review of our paper and for providing constructive feedback that has significantly contributed to its improvement. Your insights have been invaluable in helping us refine our work. | Summary: The authors delve into the study of incomplete Multi-view Missing Multi-Label Classification (iM3C) and aim to address the inadequacy of contrastive learning in dealing with incomplete multi-view data and the negative impact of missing labels. To tackle this problem, they propose a consistent semantic representation learning framework named COME. Firstly, COME learns the minimum sufficient representation by maximizing the mutual information across views. Secondly, it employs a dual-branch soft pseudo-label cross-imputation strategy to mitigate the negative impact of missing supervisory information. To verify the effectiveness of COME, the authors conduct experiments across various datasets and missing settings.
Claims And Evidence: They claim that the proposed model solves the double-missing problem of views and labels, and experiments confirm its effectiveness.
Methods And Evaluation Criteria: The proposed method has obvious pertinence to the iM3C problem, and the metrics used in the paper are also used in many related literatures.
Theoretical Claims: The authors suppose that all the task-relevant information is contained in the multi-view shared information, which is intuitive and has been used in previous works.
Experimental Designs Or Analyses: The experimental design is detailed, and the experiments are conducted separately under different missing rates.
Supplementary Material: I have reviewed the supplementary material.
Relation To Broader Scientific Literature: This article provides a new approach to solving the problem of insufficient contrastive learning from the perspective of information theory. Based on existing works, it further studies the beneficial effects of mutual information maximization on multi-view learning.
Essential References Not Discussed: No essential references that were not discussed have been found.
Other Strengths And Weaknesses: Strengths:
1) The mutual information enhancement strategy effectively compresses redundant information while retaining task-relevant shared semantic information, improving representation learning for incomplete data.
2) The dual-branch soft pseudo-label generation innovatively reduces the negative impact of missing labels, avoiding error amplification compared to hard pseudo-labeling methods.
3) The paper is well-written and has high readability.
Weaknesses:
1) Three important hyperparameters in equation 13, \lambda_1, \lambda_2, \beta, need the subsequent experiment to analyze their impact.
2) The authors should check the full text carefully for grammatical errors, such as ‘consitent’ in line 107, “random” in line 116, to meet the high quality requirements of ICML.
3) In line 112, “As shown in Fig. 4”, what does that mean? You may want to say “Fig. 1”.
4) Subfigure needs captions as well, such as that in Fig. 2.
5) In line 319, “further technical” should be revised to “and further technical”.
Other Comments Or Suggestions: See weakness.
Questions For Authors: 1) Why did the author use two separate models for pseudo-tag generation and padding?
2) In Eq. 11, whether the selection of threshold value is empirical or not?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable review. We will address your questions one by one.
> Q1: Three important hyperparameters in equation 13, $\lambda_1$, $\lambda_2$, $\beta$, need the subsequent experiment to analyze their impact.
Thank you for the suggestion. In Appendix D, we investigate the impacts of hyperparameters $\lambda_1$ and $\lambda_2$ across three datasets. Experimental results indicate that our method is robust to hyperparameter variations. For visual clarity, Figure 4 in the PDF (provided via the anonymized link) illustrates heatmaps of the average precision (AP) on two datasets. Additionally, in Section 4.3, we analyze the hyperparameter $\beta$, which balances view-shared and view-specific information. When $\beta$ is too small, the model tends to focus excessively on view-specific representations, hindering its ability to learn effective shared representations. Conversely, when $\beta$ is too large, the model overemphasizes the consistency of shared representations, thereby compressing an excessive amount of information.
> Q2: The authors should check the full text carefully for grammatical errors, such as ‘consitent’ in line 107, “random” in line 116, to meet the high quality requirements of ICML.
Thank you for the correction. The typo “consitent” has been revised to “consistent” and “random” in context has been updated to “randomly” to ensure grammatical accuracy. We thoroughly reviewed the manuscript to detect and fix spelling errors, ensuring compliance with ICML’s high quality requirements.
> Q3: In line 112, “As shown in Fig. 4”, what does that mean? You may want to say “Fig. 1”.
We apologize for the confusion. We intended to illustrate the performance degradation caused by inadequate contrastive learning with "Fig. 1" rather than "Fig. 4".
> Q4: Subfigure needs captions as well, such as that in Fig. 2.
Thank you for emphasizing the need for clearer figure captions. In the revised manuscript, we have incorporated detailed captions for each subfigure in Fig. 2.
> Q5: In line 319, “further technical” should be revised to “and further technical”.
Thank you for your suggestion. We have revised the phrase “further technical” to “and further technical” in the manuscript as recommended.
> Q6: Why did the author use two separate models for pseudo-tag generation and padding?
Previous works show that training a model with pseudo-labels generated by the same model can lead to error accumulation once incorrect pseudo-labels are produced [1,2]. To address this issue, we propose a dual-branch soft pseudo-label generation strategy for missing label imputation. Experimental results on the Pascal07 dataset demonstrate that the dual-branch architecture exhibits superior performance and enhanced robustness compared to its single-branch counterpart, as shown in the following table:
| Model | AP | 1-HL | 1-RL | AUC |
|------------------------|--------|--------|--------|--------|
| COME(dual-branch) | 0.590 | 0.935 | 0.855 | 0.873 |
| COME(single-branch) | 0.586 | 0.935 | 0.852 | 0.872 |
> Q7: In Eq. 11, whether the selection of threshold value is empirical or not?
Thank you for your question. For the upper bound of the threshold $\tau_h$, we simply set it to 0.5, which is conventionally used as the threshold in classification tasks. We have conducted analysis experiments to investigate the influence of $\tau_l$ on five datasets. For the Pascal07 dataset, we varied $\tau_l$ over [0, 0.4] with an interval of 0.1. The results are summarized in the following table:
| Dataset \ $\tau_l$ | $\tau_l=0.4$ | $\tau_l=0.3$ | $\tau_l=0.2$ | $\tau_l=0.1$ | $\tau_l=0.0$ |
|-----------|---------------|---------------|---------------|---------------|---------------|
| Pascal07 | 0.5852 | 0.5901 | 0.5904 | 0.5873 | 0.5841 |
Based on these results, we set $\tau_l$ to 0.25 for the Pascal07 dataset. Intuitively, during the initial phases of training, a more stringent threshold is employed to ensure the high quality of pseudo-labels, while the threshold is gradually relaxed to extend the range of generated pseudo-labels.
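The schedule described above (a strict lower threshold that relaxes as training progresses, with a fixed upper threshold of 0.5) can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the names `tau_start`, `lower_threshold`, and `select_pseudo_labels` are hypothetical.

```python
# Illustrative sketch of the threshold schedule described in the rebuttal.
# All names are hypothetical; tau_start and tau_l are assumed endpoints.

def lower_threshold(epoch, total_epochs, tau_start=0.4, tau_l=0.25):
    """Linearly relax the lower threshold from tau_start down to tau_l."""
    frac = min(epoch / max(total_epochs - 1, 1), 1.0)
    return tau_start + frac * (tau_l - tau_start)

def select_pseudo_labels(probs, epoch, total_epochs, tau_h=0.5):
    """Keep confident positives (>= tau_h) and confident negatives (<= lower threshold)."""
    tau = lower_threshold(epoch, total_epochs)
    positives = [p >= tau_h for p in probs]
    negatives = [p <= tau for p in probs]
    return positives, negatives, tau
```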
References:
[1] Dual-Decoupling Learning and Metric-Adaptive Thresholding for Semi-supervised Multi-label Learning. ECCV 2024.
[2] Debiased self-training for semi-supervised learning. NeurIPS 2022.
Anonymous link:
https://anonymous.4open.science/r/6513/rebuttal.pdf | Summary: This paper proposes a multi-view multi-label learning approach by integrating compact semantic information learning and pseudo-labeling imputation to address the degenerative multi-view contrastive learning and missing label issues. The authors elaborate the failure of multi-view contrastive learning in view absence and introduce the compact semantic information extraction framework. Moreover, the soft label filling approach is used to improve classification performance.
Claims And Evidence: The authors claim to learn compact representations by maximizing mutual information, provide rigorous formula derivations, and demonstrate the model's superior performance through experiment results.
Methods And Evaluation Criteria: Experimental results demonstrate the model's robustness and superiority across diverse scenarios, with performance evaluated using standard metrics commonly adopted in multi-view multi-label classification tasks.
Theoretical Claims: The authors hypothesize that classification-relevant information is embedded within the multi-view shared representations. Building on this premise, they propose to learn such shared representations by maximizing cross-view mutual information, accompanied by mathematical derivations.
Experimental Designs Or Analyses: The authors have presented solid experimental results, including tests under varying missing rates and ablation studies. However, it is essential to clarify whether identical hyperparameters were applied to all datasets. Providing comprehensive experimental details (e.g., dataset-specific hyperparameters) or releasing the source code would significantly enhance the transparency of this work.
Supplementary Material: The appendix contains mathematical derivations, dataset descriptions, details of baseline methods, and supplementary experiments.
Relation To Broader Scientific Literature: The paper aims to design a novel network for the incomplete multi-view missing multi-label Classification (iM3C) task, exploring the implementation of compact cross-view representation learning and dual-model pseudo-label generation. On the basis of the existing information theory and hypothesis, the paper explores the problem of contrastive learning in multi-view learning.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Other Strengths:
1. This paper explores an interesting problem: in practical applications, view missing and label missing are very common, and how to learn the compact semantic information between different views is critical but challenging in multi-view learning.
2. This article provides a complete explanation of motivation, theoretical derivation and experimental verification.
3. The proposed compact information learning method is simple but efficient for multi-view classification.
Other Weaknesses:
1. Some statements are not clear enough: on Equation (16) and line 625, “Since we adopt the MoE fusion strategy to model $p(z| X^V)$, we have:” which is not very clear.
2. In the paper, the pseudo label generation models are able to effectively mitigate label missingness. However, beyond the ablation studies, I recommend further investigation into the efficacy of this claim.
Other Comments Or Suggestions: (Line 88) The period before “(Federici et al., 2020; Tasi et al., 2020)” should be removed. Double check for this manuscript is crucial.
Questions For Authors: 1.I suspect that the dual-branch design could incur high computational costs (e.g., training time or memory consumption). However, the authors do not provide runtime efficiency with other models. Including such experiments would strengthen the practical relevance of this work.
2.Why do the authors introduce some methods that can not handle the missing views and labels simultaneously?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your valuable suggestions! Below, we will address each of your questions.
> Q1: Providing comprehensive experimental details (e.g., dataset-specific hyperparameters) or releasing the source code would significantly enhance the transparency of this work.
Thank you for your suggestion. We will release the code and pre-trained checkpoints upon acceptance. For reproducibility, we build on the open-source implementation of COME.
> Q2: Some statements are not clear enough: on Equation (16) and line 625, “Since we adopt the MoE fusion strategy to model $p(z\vert X^V)$, we have:” which is not very clear.
We sincerely apologize for the confusion. Similar to previous work [1], we propose to factorize the joint variational posterior as a combination of unimodal posteriors using a Mixture of Experts (MoE); this yields Eq. (16).
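A minimal sketch of the MoE idea, in the spirit of the mixture-of-experts fusion in [1]: the joint posterior is modeled as an equal-weight mixture of the per-view posteriors, $p(z \mid X^V) = \frac{1}{V}\sum_v q(z \mid x^v)$. The sketch below assumes diagonal Gaussian unimodal posteriors; the function name is hypothetical and this is not the authors' code.

```python
import math

# Equal-weight Mixture-of-Experts fusion of per-view diagonal Gaussian
# posteriors: q(z | X^V) = (1/V) * sum_v N(z; mu_v, diag(sigma_v^2)).
def moe_log_density(z, means, stds):
    """Log density of z under an equal-weight mixture of diagonal Gaussians."""
    comps = []
    for mu, sigma in zip(means, stds):
        # log density of one Gaussian expert, summed over dimensions
        logp = sum(-0.5 * (((zi - mi) / si) ** 2) - math.log(si)
                   - 0.5 * math.log(2 * math.pi)
                   for zi, mi, si in zip(z, mu, sigma))
        comps.append(logp)
    # log-sum-exp over experts, then divide by the number of views V
    m = max(comps)
    return m + math.log(sum(math.exp(c - m) for c in comps)) - math.log(len(means))
```

For two identical standard-normal experts, the mixture density at the origin reduces to the single-Gaussian density, which is a quick sanity check of the log-sum-exp fusion.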
> Q3: In the paper, the pseudo label generation models are able to effectively mitigate label missingness. However, beyond the ablation studies, I recommend further investigation into the efficacy of this claim.
Thank you for your suggestion. We conducted an in-depth investigation of the dual-branch soft pseudo-labeling strategy under varying label-missing rates on the Pascal07 dataset, with the corresponding four metrics presented in Figure 3 of the PDF document provided via the anonymous link. The experimental results demonstrate that the dual-branch soft pseudo-labeling strategy exhibits superior performance under elevated label-missing conditions, particularly when the missing rate is 70%.
> Q4: (Line 88) The period before “(Federici et al., 2020; Tasi et al., 2020)” should be removed. Double check for this manuscript is crucial.
We appreciate the reviewer’s careful attention to detail. The period before the citations in line 88 has been removed in the revised manuscript to adhere to proper punctuation conventions. Thank you for highlighting this oversight.
> Q5: I suspect that the dual-branch design could incur high computational costs (e.g., training time or memory consumption). However, the authors do not provide runtime efficiency with other models. Including such experiments would strengthen the practical relevance of this work.
Thank you for your constructive suggestion. We conducted a fair comparison between COME and three other deep models capable of simultaneously addressing both missing views and missing labels under identical experimental protocols, and report their training time and inference time (Unit: seconds) in the following table:
| Dataset | Phase | DICNet | SIP | UGDPD-NET | COME (Ours) |
|-----------|-----------|----------|----------|-----------|----------|
| Corel5k | Training | 935.65 | 166.17 | 314.34 | 877.03 |
| Corel5k | Inference | 0.822 | 0.896 | 0.572 | 0.873 |
| Pascal07 | Training | 460.41 | 301.664 | 1480.44 | 1260.50 |
| Pascal07 | Inference | 5.096 | 5.467 | 3.572 | 4.637 |
As indicated in the table, COME requires a prolonged training period on both datasets. The additional computational overhead is primarily attributed to the dual-branch architecture in our method. In future work, we plan to explore efficient alternatives to the dual-branch soft pseudo-label generation strategy to optimize computational efficiency.
> Q6: Why do the authors introduce some methods that can not handle the missing views and labels simultaneously?
Thank you for this important question. Due to the scarcity of methods in iM3C that simultaneously address missing views and labels, we incorporated approaches handling single missing scenarios (e.g., missing views or labels) as comparative benchmarks. Specific details are provided in Appendix C. Moreover, the experimental results demonstrate that models handling individual missing scenarios exhibit suboptimal performance, further underscoring the inherent challenges of iM3C tasks.
References:
[1] Variational Mixture-of-Experts Autoencoders for Multi-Modal Deep Generative Models. NeurIPS 2019.
Anonymous link:
https://anonymous.4open.science/r/6513/rebuttal.pdf | null | null | null | null | null | null |
Sparse Causal Discovery with Generative Intervention for Unsupervised Graph Domain Adaptation | Accept (poster) | Summary: This paper studies unsupervised graph domain adaptation from a causal perspective. The authors claim that existing methods fail to achieve optimal performance due to the entanglement of causal-spurious features. To address this issue, the authors proposed SLOGAN for graph classification domain adaptation by sparse causal modeling and dynamic intervention mechanisms. Specifically, mutual information bottleneck is utilized to construct a sparse causal graph structure, then a generative intervention mechanism is designed to break local spurious couplings. Experimental results on 5 public graph classification datasets demonstrate that the proposed model can outperform recent baselines with different gains.
Claims And Evidence: The authors claim that they focus on sparse stability and dynamic robustness in unsupervised graph domain adaptation. However, there are no experimental results to support this claim. For instance, how stable and robust is the proposed model?
Methods And Evaluation Criteria: The authors fail to include synthetic graphs that are generated by causal factors and spurious factors. This could be a direct way to show the proposed model indeed could transfer among the causal factors.
Theoretical Claims: I did not fully check the correctness of the proof in the appendix.
Experimental Designs Or Analyses: The authors only use density to split the graphs to construct the domain discrepancies. However, lots of other perspectives are overlooked, i.e., feature shift, label shift. It is unclear whether the proposed model still can achieve satisfied performance under these settings.
Supplementary Material: I reviewed the experiment parts in the supplementary material.
Relation To Broader Scientific Literature: The key contribution combines causal discovery and graph domain adaptation. As we can see from the experimental results, the performance improvement is only marginal.
Essential References Not Discussed: The authors did not discuss and compare with the following paper:
[1] Yin, Nan, et al. "Deal: An unsupervised domain adaptive framework for graph-level classification." Proceedings of the 30th ACM International Conference on Multimedia. 2022.
[2] Zeng Z, Xie J, Yang Z, et al. TO-UGDA: target-oriented unsupervised graph domain adaptation[J]. Scientific Reports, 2024, 14(1): 9165.
Other Strengths And Weaknesses: Pros:
1. This paper investigates unsupervised graph domain adaptation from a causal discovery perspective, which is less explored in the community.
2. Theoretical analyses are given to prove the effectiveness of the proposed model.
3. Experiments on different datasets and ablation studies are given verify the effectiveness of the proposed model.
Cons:
1. The authors fail to include synthetic graphs that are generated by causal factors and spurious factors. This could be a direct way to show the proposed model indeed could transfer among the causal factors.
2. The authors claim that they focus on sparse stability and dynamic robustness in unsupervised graph domain adaptation. However, there are no experimental results to support this claim. For instance, how stable and robust is the proposed model?
3. As we can see from the experimental results, the performance improvement is only marginal. For instance, the improvement is less than $2\%$ in most datasets.
4. According to ablation studies in table 5, sometimes $L_{dis}$ is useless and the authors did not explain why it fails in these situations.
5. The authors only verify the GCN architecture. It is not clear whether the proposed model also works in other architectures like GAT and GIN.
6. The authors did not discuss and compare with the following papers:
[1] Yin, Nan, et al. "Deal: An unsupervised domain adaptive framework for graph-level classification." Proceedings of the 30th ACM International Conference on Multimedia. 2022.
[2] Zeng Z, Xie J, Yang Z, et al. TO-UGDA: target-oriented unsupervised graph domain adaptation[J]. Scientific Reports, 2024, 14(1): 9165.
Other Comments Or Suggestions: In Equation (17), it is not clear what is $L_{re}$, which is not defined in the paper.
Questions For Authors: Please refer to the weakness part above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thorough review. Below, we address each of your concerns in detail.
---
> Q1. Should test more domain split perspectives
Thank you for this valuable suggestion. Our cross-dataset experiments (Tables 1, 3, 4, 6) incorporate natural shift, feature shift, and label shift between datasets (e.g., PTC). To directly address feature-shift concerns, we conducted additional experiments on Letter-Med using clustering-based feature distributions to create domain shifts. Results demonstrate SLOGAN's robustness across varied shift scenarios. We will revise the paper to include the setup and a comprehensive discussion of these experiments.
Method|LF0→LF1|LF0→LF2|LF1→LF3|LF3→LF0|Avg
---|---|---|---|---|---
CoCo|79.9|75.6|69.8|61.3|71.7
MTDF|80.4|75.3|70.2|61.9|72.0
Ours|82.7|76.8|72.1|64.2|74.0
> Q2. Should verify in synthetic experiments
We appreciate your suggestion. To complement our evaluations, we implemented synthetic experiments following [3]. We constructed Erdős-Rényi graphs containing both causal factors (structural motifs consistent across domains) and spurious factors (domain-specific correlations). Our experimental design established two domains (D0 and D1). Analysis confirms that SLOGAN successfully conducts knowledge transfer based on causal factors. We will incorporate comprehensive details of this experiment in our revised manuscript.
Method|D0→D1|D1→D0|Avg
---|---|---|---
CoCo|74.2|73.7|74.0
MTDF|74.3|74.0|74.2
Ours w/o $L_{dis}$|76.8|76.0|76.4
Ours|79.1|78.4|78.8
> Q3. Should explain stability and robustness
Thanks for your comment. Our claims are supported by adaptation results (visualizations in Figures 5 and 6), ablation studies (Table 5), and theoretical guarantees (Section 3.5 on bounded target error).
Additionally, to directly address your question about robustness, we conducted additional experiments on Letter-Med by adding Gaussian noise (σ) to node features. Results show the robustness across noise levels. We will update the manuscript with these results.
Method|σ=0.1|σ=0.2|Avg
---|---|---|---
CoCo|67.5|64.4|66.0
MTDF|67.8|64.2|66.0
Ours|70.4|67.8|69.1
> Q4. Question on performance improvement
Thanks for your comment. SLOGAN shows consistent improvements across all six datasets, with gains reaching 2.3% and 2.2% on challenging benchmarks. In graph learning, particularly in UDA settings where labeled target data is unavailable, such consistent improvements are considered significant. For context, recent work TO-UGDA [2], although achieving relatively limited improvements on some datasets (e.g., 0.5% over CoCo), is still recognized for its methodological contributions. Notably, our synthetic experiments (Q2) show even more substantial gains, with SLOGAN outperforming baselines by 4.6%. Our work's value lies in both the performance gains and the novel perspective for graph domain adaptation theory.
> Q5. Should explain L_dis's effectiveness
Thanks for your comment. The effectiveness varies due to dataset characteristics, with greater benefits observed in datasets having cleaner feature distributions and more challenges under higher noise or complex feature correlations. This phenomenon aligns with our theoretical analysis, as disentanglement is inherently more effective when causal signals are clearer. Our newly added synthetic experiments (Q2) also support this. Across synthetic and real-world datasets (in the paper), $L_{dis}$ contributes meaningfully to overall performance (average 1.7%). We will enrich this discussion in the revised manuscript.
> Q6. Should test various architectures
Thank you for this suggestion. We chose GCN as the primary backbone for fair comparison with baselines. We have now conducted additional experiments, as shown below, which confirm the architecture-agnostic nature of SLOGAN. We will revise accordingly.
Method|TWITTER|NCI1|Letter-Med|PTC
---|---|---|---|---
Ours(GCN)|64.7|70.6|73.5|67.8
Ours(GAT)|64.5|70.4|75.6|66.4
Ours(GIN)|64.4|70.4|70.3|65.9
> Q7. Should enrich the comparison
Thank you for the constructive suggestion. We will add [1,2] into comparison and enrich the background section. The results on PTC are shown below, which shows SLOGAN's superiority.
Method|MR→MM|MM→MR|MR→FM|FM→MR|PTC Avg
---|---|---|---|---|---
DEAL|64.5|63.4|73.2|59.9|63.3
CoCo|65.1|63.8|73.0|60.3|63.8
TO-UGDA|66.2|64.2|73.8|61.5|65.0
Ours|71.1|65.7|74.6|66.8|67.8
> Q8. Undefined Term
The $L_{re}$ in Equation (17) refers to the reconstruction loss. This definition will be explicitly included in the revised manuscript.
---
We sincerely appreciate your constructive feedback, which has helped us improve the clarity and comprehensiveness of our work.
[1] Deal: An unsupervised domain adaptive framework for graph-level classification. ACM MM 2022.
[2] TO-UGDA: target-oriented unsupervised graph domain adaptation. Scientific Reports 2024.
[3] Nikolentzos, G., & Vazirgiannis, M. (2020). Random Walk Graph Neural Networks. NeurIPS 2020.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. I will increase the score to 3.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate your response. We will revise the article according to your suggestions, including incorporating new analytical experiments and enriching comparisons with relevant methods. Thank you again for your support and constructive feedback on this paper! | Summary: The paper presents SLOGAN, a novel approach for transferring knowledge from a labeled source domain to an unlabeled target domain on graph data. The key innovation of SLOGAN lies in its three-component framework: sparse causal discovery, generative intervention mechanisms that break local spurious couplings; and category-adaptive dynamic calibration for stable pseudo-label learning. The authors provide theoretical guarantees for the optimization error bound and demonstrate SLOGAN's effectiveness on benchmark datasets, showing consistent improvements over existing UGDA methods.
Claims And Evidence: Yes, all the claims made in the submission supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the methods and evaluation criteria make sense for the problem at hand.
Theoretical Claims: The paper presents theoretical claims regarding optimization error bounds. The theoretical claims are generally well-formulated and grounded in established principles. The proofs are provided and seem sound, though a more detailed analysis of the theoretical section would be beneficial to fully verify all mathematical derivations.
Experimental Designs Or Analyses: The authors evaluate SLOGAN on benchmark datasets covering diverse domains, with multiple source-target adaptation scenarios for each dataset. The ablation studies effectively isolate the contribution of each component, while the visualization experiments provide intuitive understanding of the feature disentanglement.
Supplementary Material: Yes, I have reviewed the supplementary material. The supplementary material includes a reproducibility statement, algorithm descriptions, mathematical proofs, additional experiments, and comprehensive dataset introductions. The authors provide sufficient information to understand the implementation details and theoretical foundations of their approach.
Relation To Broader Scientific Literature: The broader scientific literature on Graph classification, Unsupervised domain adaptation, Graph Domain Adaptation and Causal Discovery is well-established.
Essential References Not Discussed: The related papers are surveyed comprehensively.
Other Strengths And Weaknesses: Strengths:
1. The framework integrates causal principles with graph adaptation in a novel way, providing a theoretically grounded approach to domain adaptation.
2. The proposed SLOGAN offers a solution to the problem of feature entanglement between causal and spurious factors in graph domain adaptation.
3. The category-adaptive calibration strategy addresses a common challenge in pseudo-labeling approaches.
4. Comprehensive experimental evaluation with consistent performance improvements.
Weaknesses:
1. The symbols in Eq. 16 are not well-defined.
2. The disscussion on Unbiased Discriminative Learning is insufficient.
3. The authors could more clearly articulate their contributions and explain why the various modules form a unified whole.
4. The experimental setup details are inadequate. For reproducibility purposes, the authors should provide specific information about their computational resources.
5. While the authors provide theoretical guarantees and proofs, which is commendable, more discussion around these theoretical aspects is needed.
6. In Figure 4, the color used to indicate "Ours" should be adjusted to make it more visible and distinguishable from other methods.
Other Comments Or Suggestions: 1. Eq. 6 contains punctuation issues that should be corrected for final revision.
2. In Table 5, the method is inconsistently labeled as both "SLOGAN" and "Ours".
3. Algorithm 1 should be adjusted to appear on a single page with the section for readability.
4. The appendix lacks a detailed description of the MTDF baseline.
Questions For Authors: See above in Weaknesses.
I may change my score based on the authors' responses regarding weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thorough assessment of our work and the constructive feedback. Below, we address each of the concerns raised.
---
> 1. Symbols in Eq. 16 not well-defined
We apologize for the lack of clarity in Eq. 16. This equation describes our generative intervention mechanism where we swap spurious features between samples from different domains. Specifically, $z^c_i$ represents the causal features from sample $i$, $z^s_k$ represents the spurious features from sample $k$ (from a different domain), $G$ is our generative model, and $z^+_{i,k}$ is the newly generated composite representation. We will improve the definition of these symbols in the revised manuscript.
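The recombination described above ($z^+_{i,k} = G(z^c_i, z^s_k)$) can be sketched as follows. This is an illustrative sketch only: the stand-in `generate` below simply concatenates features, whereas the paper uses a learned generative model $G$; the function names are hypothetical.

```python
# Illustrative sketch of the cross-domain generative intervention in Eq. 16.
# `generate` is a placeholder for the learned generative model G; here it
# just concatenates causal and spurious feature vectors.

def generate(z_causal, z_spurious):
    return list(z_causal) + list(z_spurious)

def intervene(z_c, z_s, pairs):
    """Build composite representations z+_{i,k} = G(z^c_i, z^s_k),
    pairing causal features of sample i with spurious features of a
    cross-domain sample k."""
    return [generate(z_c[i], z_s[k]) for i, k in pairs]
```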
> 2. Insufficient discussion on Unbiased Discriminative Learning
We agree that this section deserves more explanation. In our approach, unbiased discriminative learning addresses two critical challenges:
1. Class imbalance: Our category-adaptive confidence thresholds (Eq. 11-12) dynamically adjust selection criteria based on class-specific confidence distributions, preventing majority class dominance.
2. Error propagation: By implementing cross-domain stability through both source supervision and target pseudo-labeling, we create a balanced optimization objective that mitigates the risk of error accumulation.
We will expand this discussion in the revised manuscript, further clarifying how these mechanisms ensure unbiased learning across domains.
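A generic sketch of class-adaptive thresholding in the spirit of point 1 above (this is not the paper's Eq. 11-12; the scaling rule and all names are illustrative assumptions): each class receives its own selection threshold derived from that class's confidence distribution, so a single global cutoff cannot let majority classes dominate pseudo-label selection.

```python
# Generic class-adaptive thresholding sketch (illustrative only; the paper's
# exact rule is given in Eq. 11-12). Classes the model is less confident
# about receive a lower threshold, so minority/hard classes still
# contribute pseudo-labels.

def class_adaptive_thresholds(confidences, preds, base=0.5):
    """confidences: per-sample max probability; preds: predicted class ids."""
    means = {}
    for c in set(preds):
        conf_c = [p for p, y in zip(confidences, preds) if y == c]
        means[c] = sum(conf_c) / len(conf_c)
    top = max(means.values())
    # scale the base threshold by each class's relative mean confidence
    return {c: base * m / top for c, m in means.items()}
```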
> 3. Articulation of contributions and module cohesion
We appreciate this feedback and will revise the manuscript to more clearly articulate our contributions. Specifically, we will emphasize how our three components form a unified framework:
1. Sparse causal discovery identifies stable causal patterns while isolating spurious correlations
2. Generative intervention breaks residual spurious couplings through cross-domain feature recombination
3. Category-adaptive calibration ensures stable pseudo-label learning
These components work together synergistically: causal discovery provides the foundation, generative intervention strengthens it by eliminating remaining spurious correlations, and adaptive calibration ensures robust knowledge transfer.
> 4. Experimental setup details
We agree that more detailed information would enhance reproducibility. In the revised manuscript, we will add a dedicated section specifying hardware (e.g., NVIDIA A100 GPU with 40GB memory) and training parameters (e.g., batch size, optimizer, learning rate, and training epochs).
> 5. Discussion of theoretical aspects
Our theoretical framework provides a principled foundation for SLOGAN through a probabilistic error bound directly connected to our three-component architecture. The bound shows that target domain error depends on three stability conditions: causal sufficiency (ensuring predictive information is preserved), spurious suppression (minimizing label-spurious correlations), and generative intervention fidelity (maintaining semantic consistency during feature recombination). The bound's dependence on $\sqrt{\epsilon_1}$ and $\sqrt{\epsilon_2}$ demonstrates why our unified approach outperforms single-strategy methods: optimal domain adaptation requires both preserving causal mechanisms and breaking spurious correlations simultaneously. This theoretical insight explains our empirical results, where each component contributes to reducing a specific term in the overall error bound.
> 6. Color visibility in Figure 4
We thank the reviewer for this practical suggestion. We will adjust the color scheme to improve visibility, specifically making the "Ours" indicator more distinct from other methods using a higher contrast color.
> 7. Minor issues
We will address all minor issues in the revised manuscript, including correcting punctuation in Eq. 6, ensuring consistent labeling (SLOGAN vs. Ours) in Table 5, reformatting Algorithm 1 to appear on a single page, and adding a detailed description of the MTDF baseline in the appendix.
---
We appreciate the reviewer's careful reading and thoughtful suggestions, which will significantly improve the quality of our final manuscript.
---
Rebuttal Comment 1.1:
Comment: Thanks, my concerns have been solved. I will raise my score.
---
Reply to Comment 1.1.1:
Comment: We are very pleased to hear that your concern has been resolved, and the score has been improved. We will carefully incorporate the content of the reply into the revised version.
Thank you!
The Authors. | Summary: This paper studies the unsupervised graph domain adaptation problem, which aims to transfer the knowledge learned on labelled data to the data in the target domain with significantly different distribution.
The paper is motivated by the observation that existing works cannot achieve satisfactory performance due to the entanglement of causal and spurious features and the failure of global alignment.
The proposed method, SLOGAN, aims to resolve this challenge with a sparse causal modelling technique, developed mainly around mutual information bottleneck constraints based on the constructed sparse causal graph. Besides, a generative intervention is also proposed to address residual spurious correlations. Finally, the error accumulation in target-domain pseudo-labels is addressed with a category-adaptive dynamic calibration method.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: All of the Appendix
Relation To Broader Scientific Literature: Domain adaptation of graph learning model will be useful for various application scenarios involving graph data.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. The adopted datasets are comprehensive, ranging from cheminformatics to social networks, therefore the applicability of the proposed method is evaluated over different scenarios.
2. The proposed strategy to remove spurious correlation and discover the causal relationship for boosting the domain generalization performance, is promising and reasonable. The proposed method also outperforms the baselines in most tasks.
Weakness:
1. The code of the method has been released, but model.py and main.py seem to only contain graph neural networks, while the location of the code for the proposed method is unclear.
Other Comments Or Suggestions: 1. I would recommend the authors to also highlight the second best methods in the tables. Besides, the year of the baselines would also be helpful for checking whether the baselines are up-to-date.
2. The graph domain generalization problem seems to be closely related to continual graph learning, which also aims to train a model over graphs with different distributions. I would recommend the authors to discuss the difference between these two research directions.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and constructive feedback. We appreciate the opportunity to clarify these points and improve our paper.
---
> Q1. The code of the method has been released, but the model.py and main.py seems only contain graph neural networks, while the location of the code for the proposed method is unclear.
We thank the reviewer for pointing out the lack of clarity in our code organization. The proposed method SLOGAN is indeed implemented in the provided code, but we acknowledge that the current structure and variable naming could be improved for better readability.
The core components of our method can be found in the main.py file:
- Causal Feature Extraction: Implemented in lines 28-34 with causal_loss function, which corresponds to our sparse causal modeling approach using contrastive objectives.
- Spurious Feature Suppression: Implemented in lines 36-42 with non_causal_loss function, which implements our variational information bottleneck for separating domain-specific correlations.
- Generative Intervention Mechanism: Implemented in lines 44-54 with the GraphGenerator class, which enables cross-domain feature recombination.
- Information Bottleneck Disentanglement: Lines 204-224 in the training loop implement our causal-spurious feature disentanglement using mutual information constraints.
- Generative Intervention Processing: Line 224, where augmented views are generated by recombining causal and shuffled spurious features, with covariance constraints implemented as an MSE loss.
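As a rough sketch of the mechanism described above (hypothetical names and shapes, not the released code), the cross-domain recombination and a covariance-style MSE constraint could look like:

```python
import numpy as np

def generative_intervention(causal, spurious, rng):
    """Recombine each sample's causal features with another
    sample's spurious features via a batch-level shuffle."""
    perm = rng.permutation(len(spurious))
    return np.concatenate([causal, spurious[perm]], axis=1)

def covariance_mse(feats_a, feats_b):
    """MSE between the feature covariance matrices of two views,
    a stand-in for the covariance constraint mentioned above."""
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    return float(np.mean((cov_a - cov_b) ** 2))

rng = np.random.default_rng(0)
causal = rng.normal(size=(8, 4))    # stable, label-relevant features
spurious = rng.normal(size=(8, 4))  # domain-specific features
original = np.concatenate([causal, spurious], axis=1)
augmented = generative_intervention(causal, spurious, rng)
loss = covariance_mse(original, augmented)
```

Because only the spurious half is shuffled, a model trained to reconstruct labels from `augmented` views is pushed to rely on the causal half.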
To improve clarity, we will:
1. Reorganize our code to better align with the paper's methodology sections
2. Rename variables to directly match terminology in the paper
3. Add comprehensive comments explaining the implementation of each component
4. Create separate modules for each key component (causal discovery, intervention mechanism, and confidence calibration)
We will update our repository with these improvements to facilitate understanding and reproducibility.
> Q2. I would recommend the authors to also highlight the second best methods in the tables. Besides, the year of the baselines would also be helpful for checking whether the baselines are up-to-date.
We appreciate this valuable suggestion. In the revised version, we will:
1. Highlight the second-best methods in all result tables (using underlined values or alternative formatting)
2. Add publication years for all baseline methods to provide context on the recency of comparisons
Regarding up-to-date baselines, we have already compared with recent state-of-the-art methods published in 2024, such as MTDF. In the revised version, we will also enrich the comparison with recent methods [1,2]. This will enhance table readability and allow readers to better assess our method's improvements relative to the most recent state-of-the-art approaches.
| Method | MR→MM | MM→MR | MR→FM | FM→MR | PTC Avg |
|---|---|---|---|---|---|
| DEAL | 64.5 | 63.4 | 73.2 | 59.9 | 63.3 |
| TO-UGDA | 66.2 | 64.2 | 73.8 | 61.5 | 65.0 |
| Ours | 71.1 | 65.7 | 74.6 | 66.8 | 67.8 |
[1] TO-UGDA: target-oriented unsupervised graph domain adaptation. Scientific Reports, 2024.
[2] Deal: An unsupervised domain adaptive framework for graph-level classification. ACM MM 2022.
> Q3. The graph domain generalization problem seems to be closely related to continual graph learning, which also aims to train a model over graphs with different distributions. I would recommend the authors to discuss the difference between these two research directions.
Thank you for highlighting this important connection. We agree that discussing the relationship between graph domain generalization and continual graph learning would strengthen our paper. We will add a dedicated paragraph with proper references in the related work section addressing this relationship:
"While graph domain generalization and continual graph learning both address distribution shifts in graph data, they differ in several key aspects. Continual graph learning focuses on sequential learning across multiple tasks without catastrophic forgetting, enabling models to adapt to new distributions while retaining performance on previously encountered ones. In contrast, graph domain generalization aims to learn domain-invariant representations that transfer directly to unseen target domains without adaptation. Our approach, SLOGAN, specifically addresses the latter by identifying stable causal mechanisms that generalize across domains rather than incrementally adapting to new distributions."
We believe this discussion will provide valuable context and clarify the positioning of our work within the broader landscape of graph learning research.
---
Thank you again for your constructive comments, which will help improve our paper.
---
Rebuttal Comment 1.1:
Comment: 1. The promise to update the repository is good.
2. The inclusion of methods from 2024 is good, but I would also recommend including some more methods instead of just one.
3. The discussion is good. I would recommend making the discussion more concrete with comparisons on different specific settings. For example, the paper 'CGLB: Benchmark Tasks for Continual Graph Learning' describes settings like task-incremental and class-incremental learning, while the paper 'Online Continual Graph Learning' describes something else. I think it would be more insightful if the comparison were detailed to specific settings.
Anyway, I don't have other major concerns, and will keep my rating
---
Reply to Comment 1.1.1:
Comment: We are pleased that our previous responses have addressed your main concerns. Thank you for your continued guidance, which will significantly improve our paper.
We will implement your valuable suggestions in our revision by:
1. Expanding our comparison to include multiple recent methods from 2024, not just MTDF and TO-UGDA, to provide a more comprehensive evaluation of our approach.
2. Enhancing our discussion section with concrete comparisons between our method and specific continual graph learning settings. We will explicitly address how our approach relates to both task-incremental and class-incremental paradigms [1] as well as the streaming data scenario [2].
Thanks again for your constructive feedback. These additions will provide a clearer context for our work within the broader graph learning literature.
[1] CGLB: Benchmark Tasks for Continual Graph Learning
[2] Online Continual Graph Learning | Summary: This paper proposes SLOGAN, a framework for Unsupervised Graph Domain Adaptation that addresses two key challenges: the entanglement of causal and spurious features, and the failure of global alignment strategies in graph data. SLOGAN constructs a sparse causal graph using mutual information bottleneck principles to disentangle stable causal features from spurious ones. It introduces a generative intervention mechanism to suppress residual spurious correlations via cross-domain feature recombination and employs a category-adaptive calibration strategy to improve pseudo-label reliability in the target domain.
Claims And Evidence: The paper provides both theoretical guarantees and empirical validation across six benchmark datasets. The improvements over strong baselines are consistent and often exceed 3%, with ablation studies demonstrating the necessity of each proposed component. However, one minor issue is that standard deviations are not consistently reported when improvements are marginal, which could better support claims of significance in those cases.
Methods And Evaluation Criteria: The use of sparse causal discovery and generative intervention directly targets the core challenges in UGDA making the methodology appropriate and novel for this setting. The evaluation is thorough and reflects real-world graph distribution shifts. Comparisons with a broad range of baseline methods, including graph neural networks, semi-supervised models, and domain adaptation techniques, further validate the relevance and robustness of the proposed framework.
Theoretical Claims: The paper provides a theoretical result in Theorem 3.1, which presents a probabilistic bound on the target domain error under three stability conditions: sufficient mutual information between causal features and labels, suppression of mutual information between spurious features and labels, and low reconstruction error via a generative model. However, the proof is not included in the main body and is deferred to Appendix C.
Experimental Designs Or Analyses: The authors conduct extensive evaluations across six benchmark datasets, using both cross-dataset and dataset-split settings to simulate realistic domain shifts.
Supplementary Material: The supplementary material referenced in the main paper is reviewed, including Appendix B, C, D, E, and F, as they are cited in discussions of the overall algorithm, theoretical proof, additional experiments, sensitivity analysis, and dataset details.
Relation To Broader Scientific Literature: While previous UDA methods have focused on Euclidean data using global domain alignment techniques, SLOGAN addresses the unique challenges of graph-structured data, which involve complex topologies and high-dimensional sparsity.
Essential References Not Discussed: While the paper thoroughly discusses prior work on UDA, GNNs, and causal representation learning, it overlooks several recent works that integrate causal inference with domain adaptation in structured data.
[1] Lu, Chaochao, et al. "Invariant causal representation learning for out-of-distribution generalization." International Conference on Learning Representations. 2021.
Other Strengths And Weaknesses: 1. Although the paper reports significant improvements, it does not consistently report standard deviations.
2. The paper’s writing occasionally suffers from dense technical jargon, which may hinder readability for a broader machine learning audience.
Other Comments Or Suggestions: NA
Questions For Authors: See weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank you for your valuable feedback and thoughtful comments. We address each point below:
---
> Q1. While the paper thoroughly discusses prior work on UDA, GNNs, and causal representation learning, it overlooks several recent works that integrate causal inference with domain adaptation in structured data.
Thank you for your constructive suggestion. We will enhance our literature review to include recent works integrating causal inference with domain adaptation in structured data. [1] presents important contributions to invariant causal representation learning with exponential family distributions. In our revised manuscript, we will provide a discussion and add proper references of how our approach relates to and advances beyond these existing methods. The revised paragraph is as follows:
```
While [1] makes significant contributions to invariant causal representation learning for general out-of-distribution generalization, our work extends these principles specifically for graph-structured data by introducing sparse causal discovery mechanisms that capture the unique interplay between node features and graph topology, enhancing transfer capabilities across heterogeneous graph domains.
```
[1] Invariant causal representation learning for out-of-distribution generalization. ICLR 2021.
> Q2. Although the paper reports significant improvements, it does not consistently report standard deviations.
Thanks for your constructive suggestion. We will add the standard deviation metrics for results. We have already conducted these measurements across 5 independent runs with different random seeds. The results for PTC, Letter-Med and NCI1 datasets are shown below.
| Method | PTC | Letter-Med | NCI1 |
|--------|-----|------------|------|
| CoCo | 63.8±0.8 | 71.0±1.0 | 67.7±0.8 |
| MTDF | 65.5±0.6 | 71.3±1.2 | 69.5±1.3 |
| Ours | 67.8±0.6 | 73.5±1.0 | 70.6±0.9 |
> Q3. The paper's writing occasionally suffers from dense technical jargon, which may hinder readability for a broader machine learning audience.
We appreciate your feedback. We will improve the manuscript's accessibility by focusing on two key areas:
1. Clarifying complex technical concepts with intuitive explanations
2. Adding illustrative examples for abstract mechanisms
For instance, in Section 3.4, we will revise the description of our generative intervention approach from:
"We design a generative model to reconstruct original graph representations with a cross-domain spurious feature exchange strategy. By perturbing local coupling of spurious features, this approach forces the model to rely solely on causal features for reconstruction, effectively suppressing spurious residuals."
To the more accessible:
```
Our method uses a targeted approach to ensure the model doesn't rely on misleading patterns. Consider the TWITTER dataset in our experiments: when classifying discussion topics in social networks, our method can distinguish between fundamental network structures (like community clusters and information flow patterns) and platform-specific features (like temporary trending hashtags or regional engagement patterns). It achieves this by deliberately exchanging these platform-specific features between different network samples while preserving the essential community structures, forcing the model to focus only on truly predictive patterns that work consistently across different social media environments.
```
These revisions will maintain the paper's technical rigor while making it more accessible to readers from various machine learning backgrounds.
---
Thank you again for your constructive comments, which will help us improve the quality of our paper. | null | null | null | null | null | null |
Puzzle: Distillation-Based NAS for Inference-Optimized LLMs | Accept (poster) | Summary: The paper introduces Puzzle, a distillation-based NAS approach for deriving inference-optimized LLMs from existing trained models such as Llama. The authors first introduce the search space for their NAS-based optimizations - including the different attention and FFN subblocks to use, followed by the number of combinations - and introduce a decoupled block distillation algorithm to reduce the total number of combinations to explore during search. Following this, they define their overall algorithm for Puzzle:
- Search constraints focused on real-time memory and throughput on H100 GPUs
- Scoring method for each subblock
- Mixed Integer Programming for selecting the best solution with their constraints
- Additional global training to improve overall model quality
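As an illustrative stand-in for the selection step above (toy scores and costs, exhaustive search instead of a real MIP solver), picking one sub-block variant per block under a cost budget might look like:

```python
from itertools import product

# Toy library: per block, candidate sub-blocks with
# (name, quality_score, runtime_cost). Values are made up.
library = [
    [("full_attn", 1.00, 3.0), ("pruned_attn", 0.95, 2.0), ("no_op", 0.70, 0.5)],
    [("full_ffn", 1.00, 4.0), ("narrow_ffn", 0.90, 2.5), ("no_op", 0.60, 0.5)],
    [("full_attn", 1.00, 3.0), ("pruned_attn", 0.97, 2.0), ("no_op", 0.75, 0.5)],
]

def select_architecture(library, budget):
    """Pick one candidate per block to maximize total score
    subject to a total-cost budget (brute force over combos)."""
    best, best_score = None, float("-inf")
    for combo in product(*library):
        cost = sum(c for _, _, c in combo)
        score = sum(s for _, s, _ in combo)
        if cost <= budget and score > best_score:
            best, best_score = combo, score
    return best, best_score

choice, score = select_architecture(library, budget=6.0)
```

A real MIP formulation would replace the brute-force loop with binary selection variables (one per candidate) and a solver, which is what makes the approach scale to many blocks.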
For results, the authors present the 51B model, which is distilled from Llama 70B, including evaluations on standard benchmarks, blind test comparisons, training a 49B model with similar memory requirements but longer context-length support, and a final smaller model distilled from the 8B model.
-----------
### After Rebuttal
Based on the provided rebuttal and the other reviews / addressed comments, I've decided to retain my score. All concerns have been addressed.
Claims And Evidence: Yes, the claims made in the submission have sufficient evidence for each aspect. Just highlighting some of them below:
- The authors present a series of ablations for aspects of the algorithm, for example, the scoring method for each sub-block (using KL vs cross entropy vs accuracy on downstream tasks), which datasets they used for their BLD, and how long to train each sub-block for during exploration. The ablations are empirically supported by results from reasonable downstream evaluations, measuring throughput, etc.
- For the mixed integer programming algorithm, they add a diversity constraint, and show graphs representing how different blocks have different sub-blocks selected (see Fig. 6 for example).
Methods And Evaluation Criteria: Yes, most of the methods and the equivalent evaluation criteria make sense for the problem the authors are solving.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes, I did check the design and associated analysis. Here are some follow up questions:
- For the throughput comparison of table 2, can the authors clarify what batch sizes they used for the numbers?
- For the same table, when using TP=1 for the 128/1024 scenario (entry 2) - the slow throughput for Llama-70B almost seems to be an issue of hitting the memory + optimization wall very quickly with an H100 80GB instance rather than the model not being "efficient" here. Can the authors comment on this? If so, comparing with say TP=2 might have been a better comparison point vs TP=4 here?
- For Figure 5, can the authors clarify if they used MMLU or MMLU Chat for the accuracy computations?
Supplementary Material: Yes, I read through the whole supplementary material, including the main MIP function; the ablations for different parts of the algorithm and additional detailed benchmarks that were for RULER evals and blind test evals.
Relation To Broader Scientific Literature: The paper presents a NAS+distillation based approach to obtain best-in-class LLMs for inference. Given the complexity of using NAS for LLMs, the authors present new approaches to reduce the complexity of NAS, especially dealing with sub-block optimizations that are initially greedy (optimizing for that block with inputs only from previous blocks). This is similar to approaches like LayerNAS [1], which have been previously explored for CV architectures.
[1] LayerNAS: https://arxiv.org/abs/2304.11517
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths**
- The paper does thorough ablations of each decision choice in the creation of the algorithm
- For the NAS objective to minimize, the authors consider real-world deployments over theoretical measures such as parameter count or FLOPs
- The approach identifies potential algorithm pieces that may introduce extra complexity (such as the large search space) and focuses on solving those through feedback from ablations.
**Weakness**
The paper discusses an approach to find smaller networks from an existing trained LLM using the suggested NAS-based approach for efficiency. This is similar to other work done previously to find smaller networks such as Minitron [1], but also other pruning approaches such as ShortGPT [2] and SlimGPT [3]. However, none of these approaches are explicitly compared in the paper, beyond trying to recover the accuracy of the original trained model.
[1] Minitron: https://arxiv.org/abs/2408.11796
[2] ShortGPT: https://arxiv.org/abs/2403.03853
[3] SlimGPT: https://arxiv.org/abs/2412.18110
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for your positive feedback!,
*"For the throughput comparison of table 2, can the authors clarify what batch sizes they used for the numbers?"*
This is a good question. For every model and hardware setting we selected the optimal batch size to get the best throughput per GPU. This was done automatically by the inference engine, selecting for each run the optimal batch size to use. For example, Puzzle-51B's optimal batch size for TP=1 was 256, and for Llama-3.1-70B at TP=4, it was 384. We will make sure to note this in our revision. Do you think it is beneficial to include the full list of batch sizes as well?
*"For the same table, when using TP=1 for the 128/1024 scenario (entry 2) - the slow throughput for Llama-70B almost seems to be an issue of hitting the memory + optimization wall very quickly with an H100 80GB instance rather than the model not being "efficient" here. Can the authors comment on this? If so, comparing with say TP=2 might have been a better comparison point vs TP=4 here?"*
You are right about TP=1. As described in the caption, for each model the optimal TP was chosen for speed measurements, which is why we selected TP=4 and not TP=2 for Llama-70B: TP=4 is more flattering to the parent's throughput for this scenario.
We also included the speed measurements on TP=1 to also present a "fair" comparison to Puzzle-51B for a single GPU setting. We will clarify this point explicitly in the revision.
*"For Figure 5, can the authors clarify if they used MMLU or MMLU Chat for the accuracy computations?"*
We used MMLU (and not MMLU Chat).
*"This is similar to other work done previously to find smaller networks such as Minitron [1], but also other pruning approaches such as ShortGPT [2] and SlimGPT [3]. However, none of these approaches are explicitly compared in the paper"*
You make a good point, and we are working on including these relevant comparisons in the revision.
In short, some of these methods share similar aspects to Puzzle, but their solutions remain a subset of Puzzle's search space. For example, Minitron only considers homogeneous solutions (i.e., any modified block is applied across all layers), whereas Puzzle allows for heterogeneous, layer-specific configurations.
ShortGPT considers layer removal (similar to how Puzzle allows "no-op" layers), using a cosine similarity score to identify redundant layers.
SlimGPT introduces an extension of the "Optimal Brain Surgeon" method, called "Batched Greedy Pruning", that could also be used by Puzzle to prune individual blocks. SlimGPT sets an "Incremental Pruning Ratio" heuristic that follows a fixed logarithmic curve. Puzzle, on the other hand, can consider the pruning ratio for each layer in a custom manner, which could also consider assigning higher pruning ratios to later layers like SlimGPT does.
Finally, thank you for mentioning LayerNAS, we will also include it as a reference to related literature in the revision.
---
Rebuttal Comment 1.1:
Comment: > "Do you think it is beneficial to include the full list of batch sizes as well?"
Yes, it will be good to have this included in the result section.
> We also included the speed measurements on TP=1 to also present a "fair" comparison to Puzzle-51B for a single GPU setting.
I do agree that while it is fair for the comparison, it is unfair that this setting where 70B is hitting other issues you are reporting ~5x improvement in throughput. Just wanted to note this, do not expect new results here.
----------
Based on the rebuttal provided, my questions and associated concerns have been addressed. I will retain my current score based on this.
---
Reply to Comment 1.1.1:
Comment: We thank reviewer Fna8 for their response,
*"I do agree that while it is fair for the comparison, it is unfair that this setting where 70B is hitting other issues you are reporting ~5x improvement in throughput."*
We included the single-GPU comparison to provide a helpful comparison for practitioners who may encounter such constraints. However, we agree that models should primarily be compared under their optimal settings. This is why, throughout the paper—including in the Abstract and in the "Throughput comparison" paragraph in Section 5—we consistently report a 2.17× speedup, rather than emphasizing the ~5× improvement seen in the TP=1 setting. We also agree with you that the issues with this comparison should have been made more explicit and will ensure they are clearly stated in the revision.
Additionally, we will include the batch sizes in the revised version, as discussed. | Summary: The paper introduces Puzzle, a hardware-aware framework that optimizes LLM inference efficiency using neural architecture search (NAS), blockwise local knowledge distillation (BLD), and mixed-integer programming. The authors demonstrate its effectiveness with Puzzle-51B, a 51B-parameter model derived from Llama-3.1-70B, achieving 2.17× inference speedup on a single H100 GPU while retaining 98.4% accuracy despite training on only 45B tokens.
Claims And Evidence: Yes
Methods And Evaluation Criteria: This paper compresses a 70B model to 51B, achieving a very limited compression rate. Although it attains a 2x inference speedup, the performance still falls short of the original level even after additional training. Furthermore, there are no compression results for a 7B model, significantly limiting its practical value.
Theoretical Claims: None
Experimental Designs Or Analyses: Although comparisons were conducted on some common benchmarks, the advantages of the proposed method are not clearly demonstrated. The paper lacks comparisons with widely-used compression techniques, such as SparseGPT[1] and Sheared LLaMA[2], which limits the ability to assess its relative effectiveness and innovation.
[1]Frantar, Elias, and Dan Alistarh. "Sparsegpt: Massive language models can be accurately pruned in one-shot." International Conference on Machine Learning. PMLR, 2023.
[2]Xia, Mengzhou, et al. "Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning." The Twelfth International Conference on Learning Representations.
Supplementary Material: No
Relation To Broader Scientific Literature: None
Essential References Not Discussed: [1]Frantar, Elias, and Dan Alistarh. "Sparsegpt: Massive language models can be accurately pruned in one-shot." International Conference on Machine Learning. PMLR, 2023.
[2]Xia, Mengzhou, et al. "Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning." The Twelfth International Conference on Learning Representations.
Other Strengths And Weaknesses: **Weaknesses**:
1. The distillation-based approach for model compression lacks innovation and new insights, as it relies on well-established techniques without introducing novel methodologies or deeper understanding.
2. The experiments only focus on compressing a 70B model to 51B, which has limited practical value. The additional training cost further diminishes the trade-off, making it less appealing for real-world applications.
3. The paper lacks discussion and comparisons with other compression methods, such as **low-rank approximation** and **unstructured/structured sparsity**, which are critical for evaluating the proposed method's competitiveness and effectiveness in the broader context of model compression.
Other Comments Or Suggestions: None
Questions For Authors: See Weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback,
*"no compression results for a 7B model"*
*"experiments only focus on compressing a 70B model to 51B"*
We demonstrate Puzzle's robustness by applying it 11 times with varied constraints, datasets, and budgets:
(1) 4 derivatives of Llama-3.1-70B (including Puzzle-51B).
(2) 6 derivatives of Llama-3.1-8B.
(3) A derivative of Llama-3.3-70B: "Puzzle-49B" (Sec. 5).
Moreover, post-submission, we applied Puzzle in two additional scenarios:
(4) A 253B derivative of Llama-3.1-405B, constrained for a single H100 node at 1.5X latency, retaining 99.5% of parent performance (benchmarks: MMLU, MT-Bench, MMLU-Pro, HumanEval, Arena Hard).
(5) A novel 50B+ Mamba-hybrid derivative, constrained for RTX 5090 with 1M context length, retaining 99.94% parent performance (benchmarks: MMLU, MMLU-Pro, GSM8K, HellaSwag).
We'll include these in the revision to highlight Puzzle's practical value.
*"Although it attains a 2x inference speedup, the performance still falls short of the original level even after additional training."*
We believe a 98% accuracy retention is impressive. Moreover, while Puzzle-49B already retains high accuracy, we show that a lightweight alignment phase further boosts its performance, leading it to outperform its parent model, Llama-3.3-70B, at 105.5% relative accuracy.
*"The additional training cost further diminishes the trade-off, making it less appealing for real-world applications."*
Indeed 45B tokens might be expensive for some users, even if it is much lower than the trillions of tokens necessary to train LLMs from scratch. However:
(1) GKD is meant to squeeze extra performance when budget allows. Puzzle derivatives remain competitive even without GKD (Table 14, Appendix F.3), retaining 96.5% or 90% parent performance without GKD.
(2) While we didn't mention it in the paper, even a *significantly* shorter GKD training may suffice for a substantial increase in performance. After 3.7B tokens, Puzzle-51B reached 98.8% parent performance on MMLU and MT-Bench (MMLU 79.38, MT-Bench 8.96). Puzzle-49B, after 8.68B tokens of GKD (pre long-context KD in Sec. 5), reached 99.63% parent performance (80.73 MMLU, MT-Bench 8.87). Even just 2.9B tokens of GKD recovered 98.47% for Puzzle-49B (MMLU 80.72, MT-Bench 8.675).
We agree that it is important to include these results in the revision to clarify how GKD length can be adjusted based on the available budget.
*"compresses a 70B model to 51B, achieving a very limited compression rate."*
As noted in the paper, we argue that categorizing models by parameter count alone (50B vs. 70B) is less meaningful. Real-world choices should depend on hardware, budget constraints, and usage profiles (sequence length, batch size). Hypothetically, a good and resource-efficient 80B model that is faster than an 8B model is preferable to it. We note that even for parameter reduction alone, our accuracy retention remains significant.
*"comparisons with other compression methods, such as low-rank approximation"*
The Puzzle framework is complementary to techniques such as structured sparsity and low-rank approximation, and can be incorporated within its search space. Thus, these techniques enhance rather than compete with Puzzle.
Still, we agree comparisons with methods like [1] and [2] would indeed benefit the paper. We've initiated such evaluations for the revision. Instead of [1], we've chosen Wanda [3], a newer structured sparsity method with good results, which we hope you find acceptable.
Below are preliminary results comparing Puzzle, Wanda, and low-rank approximation. Wanda pruned Llama-3.1-70B (2:4 structured sparsity) targeting similar speedups as Puzzle-51B. The low-rank approximation resembles [4], with subsequent distillation.
| Model | MMLU | MT-Bench | Average Accuracy | Accuracy Preserved |
|-------------------|-------|----------|------------------|--------------------:|
| Puzzle-51B | 80.20 | 8.99 | 85.05 | 99.49 |
| Wanda | 72.99 | 8.39 | 78.44 | 92.23 |
| Low-rank | 72.87 | 8.01 | 76.05 | 88.96 |
| Llama-3.1-70B | 81.66 | 8.93 | 85.48 | 100 |
For Wanda [3], which doesn't include additional training, distillation post-pruning yielded marginal gains (MMLU 73.69; MT-Bench unchanged).
We are also working to evaluate Sheared Llama [2] as you suggested.
Finally, we're exploring integrating these methods within Puzzle, demonstrating their complementary strength, which we'll aim to present clearly in the revision.
[3] Sun et al. A Simple and Effective Pruning Approach for Large Language Models, ICLR
[4] Khodak et al. Initialization and Regularization of Factorized Neural Layers, ICLR
*"The distillation-based approach...lacks innovation and new insights..."*
We respectfully disagree; limited space prevents elaboration but we are happy to clarify if needed. | Summary: The paper proposes a NAS pipeline for pruning a pre-trained large language model. The search space includes pruning the attention heads for the attention module and pruning FFN columns (intermediate size) for the FFN module. The pipeline includes three pieces: 1) blockwise local distillation: by training each pruned module with a distillation loss to recover the original module output, this part produces a library of pruned modules; 2) block scoring: the pruned modules are evaluated by their quality relative to the original modules; 3) searching for the best combination of pruning strategies: by formulating the pruning problem as a mixed-integer programming problem (the constraint is the desired efficiency after pruning), we can search for the best set of pruned modules based on their scores.
Afterwards, the pruned model is trained end-to-end with a distillation loss again to further heal the gap. The entire procedure uses 45B training tokens and is able to get a 51B model out of Llama-3.1-70B-Instruct with almost no performance loss.
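The search step described in the summary can be illustrated with a toy sketch (hypothetical scores and costs; brute-force enumeration stands in for a real MIP solver): choose one pruned variant per layer to maximize the total block score subject to a total cost budget.

```python
from itertools import product

# Hypothetical block library: for each layer, candidate (score, cost) variants.
# Scores mimic replace-1-block quality; costs mimic per-block latency.
library = [
    [(1.00, 4.0), (0.97, 2.5), (0.90, 1.5)],  # layer 0 variants
    [(1.00, 4.0), (0.95, 2.0), (0.85, 1.0)],  # layer 1 variants
    [(1.00, 4.0), (0.99, 3.0), (0.92, 2.0)],  # layer 2 variants
]
budget = 8.0  # total cost constraint (stand-in for a throughput target)

def search(library, budget):
    """Exhaustively solve the assignment: one variant per layer,
    maximizing summed score subject to summed cost <= budget."""
    best_choice, best_score = None, float("-inf")
    for choice in product(*[range(len(layer)) for layer in library]):
        score = sum(library[i][c][0] for i, c in enumerate(choice))
        cost = sum(library[i][c][1] for i, c in enumerate(choice))
        if cost <= budget and score > best_score:
            best_choice, best_score = choice, score
    return best_choice, best_score

choice, score = search(library, budget)
```

A real instance has far too many combinations for brute force, which is why a MIP solver is used; the objective and constraint have the same shape as in this sketch.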
Claims And Evidence: Yes, both the speedup and the accuracy of the model support the efficacy of the method.
Methods And Evaluation Criteria: The accuracy evaluation setup makes sense, but the authors only evaluate the method on one model (llama-3.1-70B), which raises concerns about its generalizability.
Theoretical Claims: Not applicable for this paper.
Experimental Designs Or Analyses: The main experimental design makes sense. There are not many ablations/analyses in the main paper though, which could be an area for further improvement.
Supplementary Material: I read all the ablation-related experiments.
Relation To Broader Scientific Literature: There are two branches of LLM pruning papers: structured pruning and unstructured pruning. This paper falls under the category of structured pruning and proposes a systematic NAS method to prune the model, which is not found in previous literature. The methodology (recovery error, distillation) isn't especially new, but in combination it shows promising performance. The main strength of this paper is its empirical success, where most unstructured pruning methods have either worse performance or an insufficient evaluation setup.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths
- Comprehensive evaluation and strong empirical results.
Weakness
- There are several metrics considered for the replace-1-block score, but I don't find where/if the author conducts ablation studies on them and which one is the best (there is only one analysis in the appendix which doesn't fully address this question).
- The global distillation phase requires a large amount of tokens. 45B tokens in total is not a small number and hinders the applicability of the method.
- The method is only tested on llama-3.1-70B, it is unclear if the methodology can transfer to other model sizes/families.
Other Comments Or Suggestions: TLDR: put more ablation results in the main text instead of leaving them in the appendix
With a lot of design choices and experiments covered in the paper, I think the paper lacks a clear outline of the core research question/methodology it is investigating and the takeaways from the experiments. As the paper proposes a new pruning methodology for a fairly well-known pipeline, the focus should be on the ablation of various method design choices. I saw the author had a good amount of ablation in the appendix and the author should select some of them to be put in the main text along with relevant discussion to provide the reader more insights on what is important for pruning the model.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your feedback and appreciation of the ablation studies,
*"There are several metrics considered for the replace-1-block score, but I don't find where/if the author conducts ablation studies on them and which one is the best"*
Appendix F.1.4. examines the impact of different replace-1-block scores. In short, for general use, it is best to use KL divergence as the metric. We'll make sure to state this conclusion more clearly in the revision.
We also found that if a particular downstream task is prioritized, using data similar to that task for block scoring will produce results that outperform the KL divergence *on this specific task*, but underperform KL Divergence solutions across different tasks. See the Half-MMLU experiment in Appendix F.1.4. for more details. Additionally, LM Loss always underperforms KL divergence as a block score (as shown in Figure 7).
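As a toy illustration of KL-based block scoring (hypothetical numpy logits, not the paper's actual scoring code), a candidate pruned block can be scored by the mean KL divergence between the parent's logits and the logits obtained with that one block replaced; lower divergence from the parent indicates a higher-quality replacement:

```python
import numpy as np

def mean_kl(p_logits, q_logits):
    """Mean KL(parent || replaced) over a batch of logit vectors."""
    def log_softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    p = np.exp(log_softmax(p_logits))
    return float((p * (log_softmax(p_logits) - log_softmax(q_logits))).sum(axis=-1).mean())

rng = np.random.default_rng(0)
parent = rng.normal(size=(64, 16))                       # parent logits on calibration tokens
light_prune = parent + 0.05 * rng.normal(size=(64, 16))  # mildly perturbed block output
heavy_prune = parent + 0.50 * rng.normal(size=(64, 16))  # heavily perturbed block output

score_light = mean_kl(parent, light_prune)
score_heavy = mean_kl(parent, heavy_prune)
# The block whose replacement stays closest to the parent gets the better score.
```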
*"the author only evaluates the method on one model (llama-3.1-70B), which raises concern on its generalizability."*
The paper shows the application of the Puzzle method in crafting 11 different models with various constraints, datasets and budgets to show the robustness of the method:
(1) 4 derivatives of Llama-3.1-70B:
a) Puzzle-51B,
b) 2 other 51B derivatives with a different BLD token budget (Appendix F.1.3., Table 9),
c) A Puzzle derivative from a different dataset (Gutenberg dataset, Appendix F.1.2., Table 8),
d) A limited search space variant (Appendix F.1.5., Table 11)
(2) 6 derivatives of Llama-3.3-8B:
a) A "coupled BLD" derivative (appearing both in the main paper at Table 5 and in Appendix F.1.4., Table 7),
b) 2 derivatives with LM loss block scoring (Figure 7 in Appendix F.1.4.),
c) 2 derivatives with decoupled BLD (Figure 7 in Appendix F.1.4.),
d) A derivative with a downstream, "Half-MMLU" block score (Table 10 in Appendix F.1.4.)
(3) A derivative of Llama-3.3-70B: "Puzzle-49B" (Section 5).
We agree that applying Puzzle to produce a large variety greatly strengthens the paper. That is why, after submission, we applied Puzzle in two additional scenarios with different parent models:
(4) Using Puzzle, we created a 253B derivative of Llama-3.1-405B with Puzzle constraints to fit a single H100 node at a 1.5X latency. The resulting model retains 99.5% of the parent model performance (averaged on MMLU, MT-Bench, MMLU-Pro, HumanEval and Arena Hard).
(5) Experimenting with a novel Mamba-hybrid model (consisting of more than 50B parameters) as a parent, we crafted a Puzzle derivative while constraining it to fit a single RTX 5090 with a 1M context length. The resulting model retains 99.94% of the parent's performance (averaged over MMLU, MMLU-Pro, GSM8K and HellaSwag).
We will make sure the revision includes these examples to emphasize Puzzle's robustness across a variety of constraints and models.
*"The global distillation phase requires a large amount of tokens. 45B tokens in total is not a small number and hinder the applicability of the method."*
We agree 45B tokens might not be an applicable budget for every user, even if it is much lower than the trillions of tokens necessary to train LLMs from scratch. However:
(1) GKD is meant to squeeze extra performance when budget allows. Puzzle derivatives remain surprisingly competitive even without GKD (see Table 14 in Appendix F.3., where Puzzle derivatives retain 96.5% or 90% of the parent's performance without applying GKD at all).
(2) While we didn't mention it in the paper, even a *significantly* shorter GKD training may suffice for a substantial increase in performance, and we will emphasize this in the revision. After only 3.7B tokens invested in GKD, Puzzle-51B had already recovered 98.8% of its parent's performance on MMLU and MT-Bench (MMLU 79.38, MT-Bench 8.96). Puzzle-49B underwent a GKD of only 8.68B tokens (prior to the long-context KD described in Section 5), at which stage it already recovered 99.63% of its parent's performance (80.73 MMLU, MT-Bench 8.87). Even after just 2.9B tokens for GKD, Puzzle-49B had already recovered 98.47% (MMLU 80.72, MT-Bench 8.675). We agree that it is important to include these results in the revision to clarify how GKD length can be adjusted based on the available budget.
*"the author had a good amount of ablation in the appendix and the author should select some of them to be put in the main text"*
Thank you for your positive feedback, the ablation studies were of utmost importance for us to objectively conclude the best configurations for using Puzzle in a robust way. We intend to move several ablations into the main body.
In particular, since our core question in the ablation studies was to find the best configuration for applying Puzzle, we believe the ablations presented in F.1.1., F.1.3. and F.1.4. are the most important for practitioners (with F.1.2. also contributing to data preparation, if space constraints in the main body allow us to add it as well). What is your opinion on this selection?
---
Rebuttal Comment 1.1:
Comment: I think the rebuttal provides further evidence of the generalizability of the method in terms of both the target model and training size. I will thereby increase my score to 3.
Regarding the ablation, I would love to have the BLD section (F.1.3 and F.1.4) be brought to the main text since the BLD seems to be one major methodology novelty of the paper and I think a clean presentation on these two sections can allow the reader better understand how it should be applied.
---
Reply to Comment 1.1.1:
Comment: We thank reviewer bqJm for their response and score increase,
*"Regarding the ablation, I would love to have the BLD section (F.1.3 and F.1.4) be brought to the main text since the BLD seems to be one major methodological novelty of the paper, and I think a clean presentation of these two sections can allow the reader to better understand how it should be applied."*
We agree with your suggestion and will move these sections into the main text in the revision. | Summary: This paper is concerned with model compression, which aims to compress the scales of LLMs. This paper proposes a NAS framework named Puzzle to conduct easy-to-achieve NAS. The Puzzle framework firstly trains decoupled blocks for each layer via block-wise local distillation, then searches best-fit plan for architecture, and finally uptrains the searched architecture for preserved performance. The experimental results show that the NAS-based architecture is more efficient than original architecture and is competitive with the original model in a wide range of tasks.
Claims And Evidence: The claims are supported by clear and convincing evidence. However, I still have several concerns:
1) The comparison is sub-optimal and lacks critical baselines concerning distillation, making the claims seem overclaimed.
Methods And Evaluation Criteria: The proposed methods are clear and the evaluation criteria is mostly adequate. However, I still have several concerns:
1) Key baselines are missing: one that uses distillation on a fully random architecture, and one that uses distillation on a random-from-block-library architecture.
Theoretical Claims: The proofs for theoretical claims are correctly justified.
Experimental Designs Or Analyses: The experimental designs and analyses are valid. However, I still have several concerns:
1) It would be much better to integrate the above-mentioned baselines as performance references.
Supplementary Material: N/A
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your feedback and suggestion,
*"Key baselines are missing, which 1) uses distillation on a fully random architecture 2) uses distillation on a random-from-block-library architecture."*
We conducted evaluations on the baselines you suggested, namely (1) a fully random architecture and (2) a random-from-block-library architecture. Both were constructed to adhere to the same speed constraints as Puzzle-51B.
Additionally, we extended (1) with an extra baseline: (3) using Llama-3.1-70B itself with randomized weights (an experiment we dub "Parent-Randomized"). This baseline explores whether increased capacity might have contributed to better performance.
To ensure fairness, we allocated the same 10B token budget for training each of these models:
| Model | MMLU | MT-Bench | Average Accuracy | Relative to Llama-70B |
|----------------------------|:-----:|:-------:|:---------------:|:--------------------:|
| Puzzle-51B (10B tokens) | 79.7 | 8.89 | 84.3 | 98.61% |
| Random-from-block-library | 66.02 | 8.2 | 74.01 | 86.58% |
| Fully Random | 23.13 | 0.89 | 16.015 | 18.73% |
| Parent-Randomized | 23.42 | 0.95 | 16.46 | 19.25% |
| Llama-3.1-70B | 81.66 | 8.93 | 85.48 | 100% |
We will include these baselines in the revision.
We believe the "Random-from-block-library" experiment is particularly informative, as it highlights the value of the MIP algorithm in selecting high-quality blocks from the block library. Additionally, our paper examines another baseline to the MIP algorithm in Appendix F.2.2., where we use a greedy algorithm to select blocks from the block library. | null | null | null | null | null | null |
Adapting to Evolving Adversaries with Regularized Continual Robust Training | Accept (poster) | Summary: Most robust training methods focus on specific attack types and struggle to maintain robustness when new attacks arise, making continual robust training (CRT) necessary. This paper proposes a logit-space regularization approach to preserve robustness across both previous and new attacks efficiently, demonstrating its effectiveness through theoretical analysis and extensive experiments on multiple datasets.
Claims And Evidence: The claims for CRT are clearly defined and thoroughly discussed.
Methods And Evaluation Criteria: The evaluations include baselines from both multi-attack robustness and unforeseen attack robustness. The experiment is comprehensive with many settings/attack types, as well as different regularization methods.
Theoretical Claims: Yes. The proof looks reasonable to me.
Experimental Designs Or Analyses: The experimental designs are mostly reasonable to me: covering many attack scenarios (as long as 4 types of attacks continually), and comparing many baselines as well as regularization methods.
Supplementary Material: Yes. The authors included the code as supplementary material and the experiments should be reproducible.
Relation To Broader Scientific Literature: In adversarial robustness for multi-norm robustness and unforeseen robustness field, this paper could be contributing to the intersection of the two by proposing a new CRT scenario, where the attacks are continually deployed.
Essential References Not Discussed: The work of [1] which is the newest work on multi-norm robustness could be compared and discussed in the paper.
[1] RAMP: Boosting Adversarial Robustness Against Multiple lp Perturbations for Universal Robustness.
Other Strengths And Weaknesses: The paper is well-written and the problem is presented clearly and motivated.
Other Comments Or Suggestions: - The ALR component seems to somewhat lack novelty, since it is very similar to TRADES. Also, I noticed there are two terms for regularization in Theorem 3.1; in the implementation, which term do the authors regularize?
- In real-world applications like autonomous driving, attacks may occur in real-time, yet CRT requires multiple rounds of fine-tuning. How can these challenges be addressed to ensure the practical deployment of CRT in such dynamic environments?
- I noticed that the tables do not include an $l_1$ attack. Does this omission have any specific significance?
Questions For Authors: - My primary concern is the significance of studying CRT, given that in practice, attacks may not arrive sequentially. Additionally, does the order in which different attacks occur impact the final results of the proposed methods?
- Another issue is the limited novelty of the method within the CRT setting. What potential future directions could further enhance ALR?
- Also, could authors discuss and compare this important related work [1] in their work?
[1] RAMP: Boosting Adversarial Robustness Against Multiple lp Perturbations for Universal Robustness.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful review.
> Discussion of RAMP
Thank you for pointing us to this interesting relevant work. This work looks at achieving robustness against multiple Lp norms and proposes a logit pairing loss which aims to minimize the KL divergence between the logits of predicting on 2 different Lp attacks. Additionally, they use gradient projection to integrate model updates between natural training and adversarial training for better clean accuracy-robustness tradeoff. In comparison, our work looks at robustness against sequences of attacks including non-Lp attacks. Our regularization term uses $\ell_2$ distance between clean and adversarial logits. We will add this discussion into Appendix A and add a citation to the RAMP paper into Section 5.
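As a toy illustration of the ALR term described above (a hedged sketch with a hypothetical linear model and an analytic gradient, not the paper's training code), one can approximately maximize the $\ell_2$ distance between clean and adversarial logits with a single signed-gradient, PGD-style step inside an $\ell_\infty$ ball:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 10))          # toy linear "logit" layer
x = rng.normal(size=10)               # one clean input
eps = 0.03                            # l_inf perturbation budget

def logit_gap_sq(delta):
    # squared l2 distance between clean and perturbed logits of the linear model
    return float(np.sum((W @ (x + delta) - W @ x) ** 2))

# Single-step PGD: random start inside half the box, then one signed-gradient
# step of size eps/2, so the iterate stays in the l_inf ball without clipping.
delta0 = rng.uniform(-eps / 2, eps / 2, size=10)
grad = 2 * W.T @ (W @ delta0)         # analytic gradient of logit_gap_sq at delta0
delta1 = delta0 + (eps / 2) * np.sign(grad)

gap_before, gap_after = logit_gap_sq(delta0), logit_gap_sq(delta1)
```

For this quadratic objective the single signed-gradient step never decreases the logit gap, which is the cheap inner maximization the regularizer relies on.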
> Significance
We believe that it is reasonable to model the defender’s knowledge of potential threats in a sequential manner. Firstly, it takes time for researchers and attackers alike to develop new attacks. For example, the UAR benchmark [1] was initially released in 2019 and was then expanded with new non-Lp attacks in 2023. When new attacks are discovered, the defender would want to quickly adapt their model for robustness. Additionally, in the case that multiple attacks are discovered simultaneously, our regularized CRT can be used with multiple attacks at a single timestep. For example, in initial training, we can use existing methods for multiattack training and add ALR with respect to all attacks, and in finetuning, we can use a finetuning strategy such as FT Croce + ALR which is able to take into account multiple attacks.
[1] Kaufmann et al. (2019). Testing robustness against unforeseen adversaries. arXiv preprint
> Real world attack setting
We discuss challenges of extending to real-time attacks in the “Extension to scenarios where defender has limited knowledge about the attack type” section of our response to Reviewer 9HKR.
> Does the order in which different attacks occur impact the final results of the proposed methods?
We provide results for another ordering of the 4 attacks (Linf->StAdv->Recolor->L2) in Appendix Table 5. Overall, we observe the same trends with ALR helping in reducing forgetting of robustness on previous attacks and improving robustness on held out attacks. The final model obtained after all rounds of finetuning for this sequence achieves higher union all accuracy (3.32% higher than the sequence in the main paper) so order may have some impact on the final model performance. This is also shown in the finetuning ablations in Fig 3 as the matrix is not symmetric.
> Which term is regularized in Theorem 3.1
For initial training with ALR, only the adversarial loss term corresponding to the initial known attack is regularized. In finetuning, the term regularized depends on the finetuning strategy. We use regularization with the attack that is selected to be computed on the batch by the finetuning strategy. Specifically, if we have $\mathcal{L}_1$ be the loss on the previous attack and $\mathcal{L}_2$ be the loss on the new attack used in finetuning, then for FT Single, only the second term is regularized since we only use the new attack in finetuning. For FT Croce, both terms are regularized across training since both attacks have a chance of being chosen during training (although we are only computing regularization with respect to a single attack per batch).
> Novelty and future directions
We acknowledge that there are similarities between our ALR regularizer and TRADES regularizer as they both maximize a distance in the logit space, but we highlight that ALR is theoretically motivated from the standpoint of improving generalization to new/unforeseen attacks and reducing forgetting of previous attacks while TRADES is motivated from the standpoint of balancing clean accuracy-robustness tradeoff. We also highlight that the study of obtaining robustness against sequences of different attack threat models is novel and we contribute extensive experiments across a variety of attack types and investigate the performance of random-noise based regularization as well.
For an experimental comparison to TRADES, please refer to the “comparisons with TRADES” portion of response to reviewer 1uU6. We observe that ALR works better for our task as it does not trade off robustness on the initial attack.
We discuss future directions for enhancing this line of work in Appendix B. These include ways of detecting attacks, further improving finetuning efficiency, studying the impact of model capacity, and theoretical analysis comparing loss under different attacks for the initial model after training and model obtained after finetuning.
> L1 attack
Please see our response to Reviewer JuEE portion on “L0 Attack” for explanation as to why we used the set of evaluation attacks used in the paper. If the reviewer thinks it is necessary, we can add a few experiments with L1 attack for the final version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response, which addresses my concerns. I will raise my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and for raising your score. We will add the RAMP discussion and explicit comparison with TRADES into the updated version of the paper. | Summary: The paper presents a regularization method for robust continual learning and evaluates it using extensive experiments.
Claims And Evidence: The authors claim ALR is essential for maintaining robust performance, yet the experiment section suggests that adding ALR is sometimes not optimal, with fine-tuning with MAX performing better.
Methods And Evaluation Criteria: I appreciate the extensive experimental results. However, I didn't find how many trials the authors ran for each set of parameters, and there are no standard deviations in the result tables.
Theoretical Claims: 1. Does Theorem 3.1 hold for any $h$ in the hypothesis set? If so, it can be very loose. Normally, in the non-robust setting, the literature studies continual learning by controlling both the generalization bound (the generalization performance of the final model in terms of the average model error over all tasks) and the forgetting bound (the average loss difference between the final model and the model right after learning each task). In this paper, the authors seem to combine these two metrics to provide a single bound.
2. Is it possible to generalize to subsequently multiple attacks instead of just two?
3. While I appreciate Definition 2.1, it does not seem to be used in any of the theorems.
Experimental Designs Or Analyses: 1. The attack method considers 10-step PGD with a 0.075 attack step size, which seems unable to search over the perturbation ball of radius 0.5 for CIFAR-10 under random initialization. The number of steps times the step size should be at least 2x the perturbation radius. Otherwise, the paper does not consider random initialization for PGD attacks.
2. Is there any reason why the specific perturbation budget was selected for each attack, or is it purely random? Are you assuming the attacks have roughly the same strength? What would happen if the perturbation budget for each attack were changed randomly?
3. From Table 2, it seems different algorithms have different regularization parameters. How, then, do we set the (optimal) regularization parameter? In Appendix H, the authors present many tables (Tables 10-17) indicating that regularization performs better, but it is still unclear how to choose the regularizer; from these tables, a larger regularization strength seems to give better results against unseen attacks, but obviously we cannot set the parameter to infinity.
4. For model selection, why not perform a separate validation set for model selection? Don't you observe robust overfitting phenomena if using training data for model selection?
5. It seems that using ALR as the regularizer won't always give optimal performance. Should we focus on average accuracy or union accuracy? Should we focus on (known) or (all)? For example, Tables 6-9 show that ALR is not optimal; instead, MAX or FT-MAX is.
Supplementary Material: I have read the appendix of the paper.
Relation To Broader Scientific Literature: The paper addresses the problem of maintaining robustness while transferring to different attack models, which is a rather important topic that has been studied in the literature.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: 1. Are there any comparisons with TRADES, given that the regularizers are similar except that one uses the L2 norm and the other KL divergence?
Other Comments Or Suggestions: 1. What is RCRT at the end of the paragraph? Regularized continual robust training? The abbreviation seems inconsistent, and as there are so many abbreviations in the paper, I feel it'd be helpful to restate the full names periodically for the sake of readability.
2. In general I feel the explanation of the experimental result can be more detailed and more clear, instead of simply listing all the tables and figures in the appendix. For example, what kind of attack do you think ALR helps in terms of robust transfer? As the theorem only considers two attacks, it might be helpful to also consider such a setting in the experiment instead of 4 attacks.
3. It's unclear if ALR regularization works better compared with other regularization. In Appendix H.1, the author tries to provide multiple tables, each dealing with one regularization. One better option would be to fix the attack order and change different regularizations, each with the same or different regularization parameters, and see which works better. I really appreciate the intensity of the experiment, but there's no need to consider that many sequences of attack if we do not even understand and analyze the results.
typo: start of page 5 left column, repetitively saying for l2 attacks. I imagine the second one should be L-infty attack.
Questions For Authors: 1. What do you mean by single-step optimization for ALR? I'm not sure how accurate it is compared with multiple steps.
2. What would happen if we switched the order of the attacks? The paper seems to consider only two attack orderings, both starting with an Lp attack. What would happen if we started with a non-Lp attack?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## General clarifications
*Goals in CAR*
We optimize three objectives (Def 2.1): (1) robustness to known attacks, (2) robustness to unforeseen attacks, and (3) update efficiency. (Known) metrics correspond to (1), (all) metrics to (2), and training time to (3). Thus, (known), (all), and time metrics are all key for comparing techniques. MAX and FT-MAX may achieve better accuracy but are less efficient.
*Finetuning Ablations*
We direct the reviewer to Fig 3 and discussions in Sec 4.4 which addresses a few of the reviewer’s questions. There, we study sequences of 2 attacks including non-Lp starting attacks. These experiments make it clear that in finetuning ALR is much better compared to random (Uniform and Gaussian) regularization which hurts Union accuracy across the 2 attacks.
## Theory
> Thm 3.1, generalization and forgetting bounds
Thm 3.1 holds for any hypothesis $h$, so any model trained with the regularizer in the RHS will have a reduced loss gap. We find that this bound correlates with robust loss gaps in practice (Appendix E). Refining the bound to account for both generalization and forgetting is an interesting future direction. Our analysis relies on distances between representations for inputs to a fixed model rather than changes in representations through training. The latter must take into account changes in the representation space induced by training on new attacks, which is not well-understood. With Thm 3.1 and Cor 3.2, we aim to strike a balance between robustness against individual attacks (both seen and unseen), union accuracy, and accuracy on clean samples.
> Generalizing theory to more than 2 attacks
Thm 3.1 and Cor 3.2 hold for any two attacks, whether or not they were seen during the course of training. For a larger set of attacks, the maximum loss gap between any attack pair is subject to the bound in Thm 3.1. Attacks can be defined to optimize over the union of multiple adversarial constraints, allowing us to extend our theoretical results to the union of multiple attacks.
## Experimental
> PGD step size
PGD is used only in adversarial training. Evaluations use AutoAttack, which adapts step size for accurate robustness assessment. Prior work (Gowal et al. 2020, Rice et al. 2020) also omits random initialization in L2 adversarial training.
> Selection of the perturbation budget
Comparing attack strengths, especially non-Lp, is challenging. We use default budgets from the original attack papers. Developing comparative metrics for attack strengths is an interesting direction.
> How to set the regularization strength?
See “How should we set the regularization strength parameter?” in our response to Reviewer 9HKR.
> Model selection
We select the epoch with the highest average validation accuracy on known attacks, effectively performing optimal early stopping to avoid robust overfitting.
> Average vs. Union Accuracy
The choice between them depends on application. Safety-critical settings prioritize union accuracy since it captures the worst case.
## Other
> TRADES comparison
TRADES is designed to improve the clean accuracy tradeoff, while ALR is designed to improve generalization across (seen and unforeseen) attacks. Since the TRADES regularizer also maximizes a distance (KL instead of L2) in the logit space, we expect it can also improve generalization across attacks, and we provide results below. Similar to the experiments with ALR, we regularize on top of PGD L2 and Linf adversarial training. Regularization strength in parentheses.
| Threat model | Reg. | Clean | L2 | Linf | StAdv | ReColor | Union |
| --- | --- | --- | --- | --- | --- | --- | --- |
| L2 | None | 91.17 | 69.7 | 28.41 | 2.08 | 44.94 | 1.24 |
| L2 | Trades (1) | 90.43 | 70.08 | 31.33 | 0.89 | 38.51 | 0.6 |
| L2 | Trades (3) | 88.93 | 70.05 | 33.81 | 9.04 | 58.25 | 6.74 |
| L2 | Trades (6) | 88.76 | 69.69 | 33.00 | 7.04 | 56.82 | 5.51 |
| L2 | ALR (1) | 89.43 | 69.84 | 34.00 | 48.23 | 65.46 | 31.27 |
| Linf | None | 85.93 | 59.48 | 51.44 | 14.87 | 62.48 | 11.9 |
| Linf | Trades (1) | 85.39 | 59.33 | 49.23 | 14.11 | 64.45 | 11.45 |
| Linf | Trades (3) | 83.97 | 58.54 | 47.00 | 20.51 | 69.33 | 16.34 |
| Linf | Trades (6) | 85.72 | 56.44 | 41.70 | 23.17 | 70.23 | 17.83 |
| Linf | ALR (0.5) | 83.18 | 58.15 | 51.49 | 34.78 | 58.15 | 29.87 |
Notably, increasing TRADES strength in Linf training trades off Linf performance, whereas ALR does not.
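For concreteness, here is a minimal numpy sketch of an ALR-style regularizer (worst-case L2 logit distance, approximated with a single projected gradient-ascent step) for a toy linear classifier. The model, dimensions, and step sizes are illustrative assumptions, not the paper's implementation, which applies PGD through a deep network:

```python
import numpy as np

rng = np.random.default_rng(0)

def alr_term(W, eps=0.5, alpha=0.25):
    """Approximate max_{||delta||_2 <= eps} ||W @ delta||_2 with one PGD step.

    For a linear classifier f(x) = W @ x, the logit difference
    f(x + delta) - f(x) = W @ delta does not depend on x; for a real
    network the gradient would instead be taken through the network at x.
    """
    # Small random start inside the ball (the gradient at delta = 0 is undefined).
    delta = rng.normal(size=W.shape[1])
    delta *= 0.01 * eps / np.linalg.norm(delta)
    # Gradient of g(delta) = ||W delta||_2 with respect to delta.
    grad = W.T @ (W @ delta) / np.linalg.norm(W @ delta)
    # One normalized gradient-ascent step.
    delta = delta + alpha * grad / np.linalg.norm(grad)
    # Project back onto the L2 ball of radius eps.
    norm = np.linalg.norm(delta)
    if norm > eps:
        delta *= eps / norm
    return np.linalg.norm(W @ delta)

W = rng.normal(size=(10, 32))  # 10 classes, 32-dimensional inputs
reg = alr_term(W)              # added (with some weight) to the training loss
```

A TRADES-style variant would replace the L2 logit distance with a KL divergence between the softmax outputs at the clean and perturbed inputs.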
> Appendix results
Tables in App. H.1 mirror Table 3’s ablations on initial training, showing consistency across datasets and attacks. These do not involve finetuning; for finetuning results, see Fig 3. The discussion of the Appendix figures currently appears before the figures themselves; we will fix this ordering in the camera-ready.
> Single step optimization in ALR
ALR optimizes a worst-case logit distance, which we compute with a single PGD step. This is less precise than using multiple steps but cheaper, aligning with our goal of improving efficiency. | Summary: The paper proposes an algorithm to robustly finetune a model against newly proposed attacks. Specifically, the paper applies a regularization term called ALR at both the pretraining and finetuning stages. The regularization term bounds the difference between the clean logits and the adversarial logits. The experimental results show that ALR significantly improves robustness to the new attack.
Claims And Evidence: The paper's claims are generally well supported by the experimental results. However, there are several points that I find lack support.
1. What is the function of the regularization term ALR? When using it at the pretraining stage, does it accelerate the finetuning stage or make the initial model more robust to unknown attacks? To prove this, it would be better to add an ablation study using AT and AT + ALR as initial models for the finetuning process.
2. The attack family lacks an $\ell_0$ attack.
Methods And Evaluation Criteria: The paper performs extensive experiments on different datasets using different kinds of attacks. The experimental results demonstrate the effectiveness of the ALR regularization, as it largely increases robust accuracy.
Theoretical Claims: The theoretical claims seem correct but I did not carefully check them.
Experimental Designs Or Analyses: Yes, I have checked the experimental designs and they are correct.
Supplementary Material: No.
Relation To Broader Scientific Literature: No
Essential References Not Discussed: None
Other Strengths And Weaknesses: 1. The presentation of the paper needs improvement. For example, in Definition 2.1, the paper introduces several concepts, such as $t$ and $\delta_{known}$, that are actually unnecessary for the paper. The authors could provide a more direct introduction to the method itself.
Other Comments Or Suggestions: None.
Questions For Authors: 1. In Table 1, the robust accuracy of Union(All) drops after the fine-tuning stage; does this mean the initial model is actually the best model against unknown adversarial attacks?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive appraisal of the paper and interesting questions.
> What is the function of the regularization term ALR? When using it at the pretraining stage, does it accelerate the finetuning stage or make the initial model more robust to unknown attacks? To prove this, it would be better to add an ablation study using AT and AT + ALR as initial models for the finetuning process.
ALR in the pretraining stage serves to improve the generalization to unforeseen attacks (discussed in the second paragraph of Section 4.2), which then provides a better starting point when finetuning the model to the new attack. In Figure 5 in the Appendix, we have provided a comparison between finetuning (without regularization) from an AT + ALR initial model and finetuning (without regularization) from an AT initial model.
When used in finetuning, ALR serves to reduce forgetting of robustness to previous attacks in the sequence, which we demonstrate in Table 1.
> L0 Attack
Because the goal of CRT is to quickly adapt the model to unforeseen attacks when they become known to the defender, we chose to use the same set of attacks (Linf, L2, StAdv, ReColor) used for evaluation in works on unforeseen robustness [1,2], and to incorporate attacks from a benchmark for unforeseen robustness (Gabor, Snow, Pixel, Kaleidoscope, Glitch, Elastic, JPEG, Wood) [3]. This is why we opted for the attack set used in our evaluation.
An L0 attack involves combinatorial optimization, and it is unclear whether it can be easily integrated into adversarial training, on which our framework is based. Additionally, [4] demonstrates that L0 attacks can be quite weak and need to use large per-pixel perturbations.
[1] Laidlaw et al. (2021). Perceptual Adversarial Robustness: Defense Against Unseen Threat Models. International Conference on Learning Representations (ICLR).
[2] Dai, S., Mahloujifar, S., & Mittal, P. (2022). Formulating robustness against unforeseen attacks. Advances in Neural Information Processing Systems, 35, 8647-8661.
[3] Kaufmann, M., Kang, D., Sun, Y., Basart, S., Yin, X., Mazeika, M., ... & Hendrycks, D. (2019). Testing robustness against unforeseen adversaries. arXiv preprint arXiv:1908.08016.
[4] Zuo, F., Yang, B., Li, X., & Zeng, Q. (2019). Exploiting the inherent limitation of l0 adversarial examples. In 22nd International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2019) (pp. 293-307).
> Definition 2.1
The variables $t$ and $\delta_{\text{known}}$ are important for rigorously defining the problem that we are studying (continual adaptive robustness) and the goals we hope a good algorithm for this setting will achieve, which we connect to our proposed CRT in Section 3.1. Specifically, we become aware of the existence of attacks sequentially over time, and at a specific point in time $t$ we want to be robust against attacks that we have known for a while, have some robustness to recently introduced attacks, and recover quickly from new attacks. To model this, we have three problem parameters: (1) $\delta_{\text{known}}$, which specifies the loss threshold that we can tolerate on attacks that we have known for a while; (2) $\delta_{\text{unknown}}$, which specifies the loss threshold that we can tolerate on recently introduced attacks; and (3) $\Delta t$, which specifies how long the model has to recover from new attacks. "Recovering from new attacks" means that the threshold for tolerated loss switches from $\delta_{\text{unknown}}$ to $\delta_{\text{known}}$, with $\delta_{\text{known}} < \delta_{\text{unknown}}$.
These three quantities also serve to motivate the metrics we measure in the experimental section: Union (known) and Avg (known) correspond to $\delta_{\text{known}}$, Union (all) and Avg (all) give a sense of $\delta_{\text{unknown}}$ and the training time corresponds to $\Delta t$. These connections are discussed within the results discussion in Section 4.2.
> In Table 1, the robust accuracy of Union(All) drops after the fine-tuning stage, does this mean the initial model is actually the best model against unknown adversarial attack?
This result suggests that the features used for classifying robustly under StAdv attack differ from those for Linf attack, so when we fine-tune the model on StAdv attack at time step 1, generalization to Linf attack gets worse, which in turn impacts the union accuracy. The fine-tuned model's performance also depends on the attack sequence, so it is hard to conclude whether the initial model will be better in terms of unforeseen robustness. For example, in Table 5 in the Appendix, union accuracy steadily increases after fine-tuning. In general, it is better to fine-tune on known attacks, because these are the attacks that the defender is confident will affect the model's performance.
Claims And Evidence: The claims are generally well-supported by both theoretical analysis and extensive empirical evidence. The theoretical bound connecting the robustness gap to logit distances (Theorem 3.1 and Corollary 3.2) is well-established and forms a sound basis for the proposed regularization technique. The empirical evaluation is comprehensive, covering multiple datasets, attack types, and regularization approaches, with clear performance metrics (Union accuracy, Average accuracy, and time overhead).
Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate for the problem. The paper properly defines continual adaptive robustness and establishes clear metrics for success (known and unforeseen robustness thresholds with grace periods). The evaluation considers both effectiveness (robustness across attacks) and efficiency (training time), which are both critical for practical deployment.
Theoretical Claims: I checked the proofs in Theorems 3.1 and Corollary 3.2 and found them to be mathematically sound.
Experimental Designs Or Analyses: The experimental design is sound, with a clear selection of datasets and attack types and appropriate baseline methods for comparison.
Supplementary Material: I have reviewed the related work, experiment result and theoretical result of supplementary material
Relation To Broader Scientific Literature: The paper properly positions its contributions on robustness for models under continual shift in relation to Dai et al., 2023; Kaufmann et al., 2019. The authors should also discuss the literature on gradual domain adaptation, including papers like [1].
[1] Kumar, A., Ma, T., & Liang, P. Understanding self-training for gradual domain adaptation. In International Conference on Machine Learning, pp. 5468–5479. PMLR, 2020.
Essential References Not Discussed: Not aware.
Other Strengths And Weaknesses: - The regularization approaches considered (VR, ALR, UR, GR) all operate on model outputs (logits or later features) rather than exploring regularization of earlier representational layers, which might provide complementary benefits.
- While the paper examines the effect of regularization on the tradeoff between robustness and clean accuracy, there's limited discussion of the optimal balance for different applications.
- The evaluation focuses on image classification tasks; applicability to other domains remains unexplored.
- The computational overhead of ALR, while modest, might still be a concern for very large models.
Other Comments Or Suggestions: - The paper would benefit from more visual examples of different attack types to help readers understand their qualitative differences.
- Some discussion of potential applications and use cases where CAR would be particularly valuable would strengthen motivation.
Questions For Authors: - The paper focuses on regularization at the logit level. Have you explored regularizing intermediate representations in the network, and if so, how does this compare to logit-level regularization?
- For practical deployment, how would you recommend balancing the tradeoff between robustness and clean accuracy that regularization introduces? Are there guidelines for selecting the regularization strength λ based on the specific application needs?
- Your results in Table 1 show that fine-tuning with only the new attack (FT Single) leads to significant forgetting of previous attacks. Have you explored methods from the continual learning literature (like replay buffers or elastic weight consolidation) that specifically target catastrophic forgetting, and if so, how do they compare to your regularization approach?
- How might your approach extend to scenarios where the defender has limited knowledge about the attack type but can only observe the adversarial examples? This would be closer to real-world security scenarios where attackers don't reveal their methods.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful review and positive appraisal of our paper.
> Discussion of gradual domain adaptation
Thank you for pointing us to this line of work. The work referenced studies shifts in *data distribution over time* and proposes gradual self-training to adapt the source model without access to labels. Meanwhile, we propose regularized CRT as a solution to the *expanding space of attacks over time*: the data distribution itself remains the same, and the defender has access to attacks and labels. We will add a discussion of this related direction to Appendix A.
> Regularization on intermediate representations
We provide experiments with regularization on the features at the layer before the logits (Results in Appendix Table 4 rows labelled “+ALR feature”). Overall, we observe similar results compared to logit level regularization. We also provide theoretical results for regularization at the layer before the logits in Appendix C.3.
> Comparing to methods from continual learning
Thank you for this suggestion. We experimented with using EWC when finetuning for StAdv robustness from an L2-robust model with the FT Single approach. We provide results for three different strengths (in parentheses) of EWC, compared to unregularized FT Single and FT Single + ALR. Overall, we find that ALR's improvement in robustness on known and unforeseen attacks is significant compared to EWC. EWC's improvement over FT Single is similar to that of FT Croce in Table 1 (time step 1), which uses replay of previous attacks during finetuning. We will add these comparisons to the Appendix.
| Method | Clean | L2 | StAdv | Linf | ReColor | Avg Known | Union Known | Avg all| Union all |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FT Single | 80.89 | 45.45 | 54.5 | 6.09 | 41.98 | 49.98 | 41.05 | 37.0 | 5.87 |
| FT Single + EWC (0.5) | 83.98 | 58.85 | 51.15 | 15.44 | 51.55 | 55.00 | 46.25 | 44.25 | 14.54 |
| FT Single + EWC (1) | 85.20 | 57.69 | 56.18 | 13.07 | 50.99 | 56.93 | 49.42 | 44.48 | 12.69 |
| FT Single + EWC (2) | 85.10 | 57.96 | 55.14 | 13.54 | 51.23 | 56.55 | 48.9 | 44.47 | 12.99 |
| FT Single + ALR | 87.24 | 62.22 | 61.5 | 21.4 | 70.87 | 61.86 | 55.04 | 54.0 | 21.14 |
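For reference, a minimal numpy sketch of the EWC penalty used in the comparison above; the parameter values and the diagonal Fisher estimate are illustrative placeholders, not values from our experiments:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam):
    """EWC penalty lam/2 * sum_i F_i * (theta_i - theta*_i)^2: parameters that
    were important for the previous attack (large Fisher values) are anchored
    near their old values during finetuning."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

theta_star = np.array([1.0, -2.0, 0.5])  # weights after training on the previous attack
fisher = np.array([10.0, 0.1, 1.0])      # diagonal Fisher estimate (importance)
theta = np.array([1.1, -1.0, 0.5])       # current weights during finetuning

# Moving the important first parameter is penalized far more than the second.
penalty = ewc_penalty(theta, theta_star, fisher, lam=1.0)  # -> 0.1
```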
> Extension to scenarios where defender has limited knowledge about the attack type
This is an interesting direction. In our work, we focus on changes in the defender’s knowledge of attacks over time which is useful in cases such as a research or security team discovering a new attack type. A real-time attack setting poses new challenges:
(1) *No access to the threat model*: the defender does not know the threat model and cannot generate adversarial examples; they only have access to the perturbed data generated by the adversary.
(2) *Missing true labels and clean inputs*: the defender also does not have the corresponding true labels or the original unperturbed inputs for use in training.
(3) *Few-shot updates*: it becomes critical that the model can be made robust with only a few examples of successful attacks; otherwise, the adversary will have been exploiting the vulnerabilities of the model for a long time.
Defending in this setting is outside the scope of this paper, but using generative models to learn the perturbation set [1] used by the adversary could help bridge the gap from points (1) and (2), allowing the defender to apply the attack to their own dataset and finetune with our proposed CRT + ALR. If the generative model can learn to model perturbations from only a few adversarial examples, this can also address (3). We will add this discussion to Appendix B's discussion of future directions.
[1] Wong et al. 2020 Learning perturbation sets for robust machine learning. ICLR
> Visual examples of different attack types
Thank you for this suggestion, we will add visual examples of each attack into the Appendix.
> Discussion of potential applications
Solving CAR is of interest in any safety-critical domain where an attacker is motivated to evade a ML model. A good example is automated content moderation, where malicious actors try to post content that violates policies by uploading obfuscated images [2]. Strategies naturally evolve over time for motivated attackers who can also use open-source methods proposed in the literature. As ML models will continue to be used in sensitive domains such as finance, cyber-physical systems and medicine, model deployers need methods to update their models to evolving threats. We will add this discussion to the updated paper.
[2] Stimberg et al. (2023) "Benchmarking robustness to adversarial image obfuscations." NeurIPS 2023
> How should we set the regularization strength parameter?
We recommend selecting regularization strength based on how much tradeoff in clean accuracy (and starting attack accuracy in the case of uniform and gaussian regularization) that the model deployer can tolerate for the application. | null | null | null | null | null | null |
Multi-objective Linear Reinforcement Learning with Lexicographic Rewards | Accept (poster) | Summary: This work focuses on the development of an algorithmic framework with theoretical performance guarantees for multi-objective RL, where the underlying Multi-Objective Markov Decision Process (MO-MDP) is assumed to be linear. The algorithmic strategy optimizes for lexicographic rewards, which are hierarchically ordered. To this end, Lexicographic Linear RL (LLRL) is proposed in the finite-horizon episodic learning setup. The method refines the agent's policy over time through backward-pass updates on the model parameters, careful management of the exploration-exploitation trade-off, and handling of lexicographic rewards via multi-stage action refinement with a dedicated action-elimination routine. The paper also surfaces key challenges in MORL and connects them to its key algorithmic innovations. Finally, a mathematical analysis of LLRL's performance, measured as regret against optimal rewards, is presented.
Claims And Evidence: The authors' central claim is around developing an efficient strategy for MOLRL with lexicographic rewards settings. To this end, LLRL is presented, and the evidence of LLRL's performance is captured in theoretical regret analysis (Theorem 1, 2). To the best of my knowledge, an empirical performance evaluation is missing for LLRL.
Methods And Evaluation Criteria: The LLRL method is proposed for solving an MO-MDP in the lexicographic-rewards setting, and it is benchmarked by a worst-case regret bound against an optimal policy. To the best of my understanding, both the method and the evaluation criteria are relevant to the underlying problem setup.
Theoretical Claims: The key theoretical contributions are stated in Theorem 1,2 providing LLRL's regret bounds. These results are then proven using intermediate supporting results through Appendix A-J. In my understanding, the results are derived systematically using well-known statistical concentration inequalities, and linear algebra results.
Experimental Designs Or Analyses: A thorough experimental analysis of LLRL is missing.
Supplementary Material: The supplementary material through Appendix A-J includes detailed proofs of the lemmas and key regret theorems presented in the paper.
Relation To Broader Scientific Literature: In my understanding, the paper discusses relevant prior works on linear RL as well as MORL and identifies a concrete gap for MO-MDPs with a lexicographic reward structure. The proposed LLRL method addresses this problem and is supported by Theorems 1 and 2, where regret bounds are derived for the finite-horizon setup. However, an empirical validation of LLRL is missing.
Essential References Not Discussed: I think the paper provides a good overview of the relevant literature.
Other Strengths And Weaknesses: Strengths:
1. The LLRL algorithm is well-described with the key techniques and challenges being clearly highlighted.
2. The proofs are rigorous and detailed.
3. The results in the misspecified setting are interesting and valuable.
Weaknesses:
1. The lack of experimental results is a significant limitation.
2. The assumption that the transition kernel and reward function are linear may limit the applicability of the algorithm in practice.
Other Comments Or Suggestions: 1. While the paper is well-written overall, some of the notation can be dense and difficult to follow. Consider simplifying the notation where possible and providing more intuitive explanations of the key concepts. A diagrammatic workflow could be helpful in this regard.
2. Investigate alternative assumptions for managing inter-objective trade-offs and explore whether comparable regret bounds can be derived without Assumption 1.
Questions For Authors: 1. How scalable will LLRL be? In other words, what are the computational complexities of the LLRL algorithm and the LAE procedure?
2. Can we have a mathematical sketch of how the performance would look in the absence of Assumption 1? Also, is there any other way, based on practical considerations, to measure these objective trade-offs?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive feedback. We have carefully considered your concerns (including Weaknesses and Questions raised), and our responses are provided below.
---
*W1. The lack of experimental results is a significant limitation.*
Thank you for raising this issue. The absence of empirical validation in our work aligns with the foundational single-objective linear MDP studies (Jin et al. 2020; Zanette et al. 2020; He et al. 2023), which similarly omit experiments due to challenges in constructing valid linear MDP benchmarks (e.g., enforcing low-rank dynamics and linear payoff structures). We plan to address this limitation in two phases: **i)** Synthetic experiments will be added to illustrate key theoretical properties. **ii)** Comprehensive empirical comparisons against heuristic baselines and ablations will be conducted in follow-up work.
---
*W2. The assumption that the transition kernel and reward function are linear may limit the applicability of the algorithm in practice.*
We acknowledge that the linear assumptions may not hold universally in all practical scenarios, particularly in environments with highly complex dynamics or non-linear relationships. However, linearity allows us to leverage well-established mathematical tools from linear algebra and convex optimization, which are essential for deriving rigorous regret bounds. Similar assumptions are common in foundational RL theory (e.g., Jin et al. (2020), Zanette et al. (2020), and He et al. (2023)) to balance generality and analyzability.
Moreover, in finite state-action spaces, any nonlinear system can be represented as a linear MDP by encoding each state-action pair $(x, a)$ as a one-hot feature vector in $\mathbb{R}^d$, where $d=|\mathcal{S}| \times |\mathcal{A}|$. The transition kernel $\mathbb{P}(x' \mid x, a)$ and reward function $r(x, a)$ can be expressed as inner products between the feature vector of $(x, a)$ and learnable parameters.
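This tabular-to-linear reduction can be verified in a few lines of numpy; the state/action sizes and the randomly drawn tabular MDP below are arbitrary illustrations:

```python
import numpy as np

S, A = 4, 3   # finite state and action spaces
d = S * A     # feature dimension

def phi(x, a):
    """One-hot feature vector for the state-action pair (x, a)."""
    v = np.zeros(d)
    v[x * A + a] = 1.0
    return v

rng = np.random.default_rng(1)
# An arbitrary tabular MDP: rewards r(x, a) and transition kernel P(x' | x, a).
r_tab = rng.random((S, A))
P_tab = rng.random((S, A, S))
P_tab /= P_tab.sum(axis=-1, keepdims=True)

# Linear-MDP parameters: theta in R^d for rewards and one column of mu
# per next state x' for transitions (row-major reshape matches x * A + a).
theta = r_tab.reshape(d)
mu = P_tab.reshape(d, S)

x, a = 2, 1
reward = phi(x, a) @ theta   # recovers r_tab[x, a] as an inner product
next_dist = phi(x, a) @ mu   # recovers the row P_tab[x, a] of the kernel
```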
In the future, we will try to adopt techniques from generalized linear bandits or Lipschitz bandits to extend the model to generalized linear or Lipschitz settings.
---
*Q1. How scalable will LLRL be? In other words, what are the computational complexities of the LLRL algorithm and the LAE procedure?*
The computational cost of LLRL mainly lies on LAE and policy update (Steps 19-24). Below, we provide a detailed complexity analysis of Algorithm 1:
1. Step 7: The complexity is $O(d^2|\mathcal{A}|)$.
2. Step 8: LAE requires $O(md|\mathcal{A}|)$ computations.
3. Step 20: The complexity is $O(d^2)$, as $U_h$ can be updated incrementally.
4. Step 21: The complexity is $O(mk)$ for updating $m \cdot k$ values.
5. Step 22: The complexity is $O(mkd+md^2)$, dominated by inverting $U_h$ ($O(d^2)$) and computing $m$ linear regressions ($O(mkd+md^2)$).
6. Step 23: No additional resources are required, as Q-values can be updated directly from retained $\{\hat{w}_h^i\}_{i\in[m]}$.
- Summing across all $H$ MDP layers, the complexity of $k$-th round is $O(Hmd|\mathcal{A}|+Hd^2|\mathcal{A}|+Hmkd+Hmd^2)$.
- Summing over $K$ rounds, the **overall computational complexity of LLRL** is $O(KHd|\mathcal{A}|(m+d)+KHmd(K+d))$.
We will incorporate this complexity analysis in the revised paper to clarify scalability.
---
*Q2. Can we have a mathematical sketch of how the performance would look in the absence of Assumption 1? Also, is there any other way, based on practical considerations, to measure these objective trade-offs?*
In the context of lexicographic bandit problems, Huyuk and Tekin [1] establish regret bounds without relying on assumptions analogous to Assumption 1 in our work. Extending their analysis to linear MDPs, we hypothesize that a similar regret bound of order $O((d^2H^4K)^{\frac{2}{3}})$ may hold, though a formal proof remains an open question for future investigation.
Regarding practical trade-off quantification, domain-specific expertise often provides empirically grounded ratios for prioritizing objectives. For example, environmental policy frameworks frequently employ comparative metrics where 1 tonne of SO2 emissions is considered $\leq10$ times as harmful as 1 tonne of CO emissions (Podinovski [2], 1999), which directly corresponds to $\lambda=10$.
*[1] Huyuk, A. and Tekin, C. Multi-objective multi-armed bandit with lexicographically ordered and satisficing objectives. Machine Learning, 110(6):1233–1266, 2021.*
*[2] Podinovski, V. V. A dss for multiple criteria decision analysis with imprecisely specified trade-offs. European Journal of Operational Research, 113(2):261–270, 1999.*
---
**Other Comments Or Suggestions:** *Notations and Alternative Assumptions.*
Thank you for your positive feedback and constructive suggestions. In the revised paper, we will carefully revise all notation and compile it into a summary table for clarity. Meanwhile, we will try to establish comparable regret bounds without Assumption 1 in future work.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for providing the clarifications and revision plan for some of the comments. It appears that authors have acknowledged some of the current highlighted limitations which deserves careful considerations, for instance, related to experiments, and assumptions. Therefore, I would like to maintain my original score. | Summary: This paper studies multi-objective RL (MORL) with lexicographic rewards in linear MDPs, where rewards comprise hierarchically ordered objectives. A key challenge in MORL is the failure of Bellman optimality. They propose the LLRL algorithm and establish the first regret bound for MORL under a certain assumption (Assumption 1).
Claims And Evidence: Yes.
Methods And Evaluation Criteria: N/A
Theoretical Claims: My major concern is on Assumption 1. I am unable to find the definition of "$a_2$ lexicographically dominates $a_1$" in the main text (please point it out if there is any). Please refer to the Questions section for follow-up questions.
Experimental Designs Or Analyses: N/A
Supplementary Material: N/A
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: No.
Other Strengths And Weaknesses: **Strength:** This paper establishes the first theoretical regret bound for MORL.
**Weakness:** See Questions.
Other Comments Or Suggestions: Typo: The inequality under Assumption 1: $\text{LHS} \le \lambda \cdot \max_{j \in [i - 1]} \\{ r_h^j(x, a_2) - r_h^j (x, a_1) \\}$
Questions For Authors: 1. What is the definition of "$a_2$ lexicographically dominates $a_1$" in Assumption 1? Does it mean the reward vector $[r_h^1(x,a_2), \cdots, r_h^m(x,a_2)]$ lexicographically dominates vector $[r_h^1(x,a_1), \cdots, r_h^m(x,a_1)]$?
2. Following Q1, I have one question regarding the proof of Lemma 3 in Appendix E. Specifically, why would Assumption 1 result in the argument in lines 797-800?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Many thanks for your constructive reviews. We have carefully considered your concerns and our responses are provided as follows.
---
*Q1. What is the definition of $a_2$ lexicographically dominates $a_1$ in Assumption 1? Does it mean the reward vector $[r_h^1(x, a_2),\cdots,r_h^m(x, a_2)]$ lexicographically dominates vector $[r_h^1(x, a_1),\cdots,r_h^m(x, a_1)]$?*
Yes, the definition aligns with your interpretation: $a_2$ lexicographically dominates $a_1$ if and only if the reward vector $[r_h^1(x, a_2),\cdots,r_h^m(x, a_2)]$ lexicographically dominates vector $[r_h^1(x, a_1),\cdots,r_h^m(x, a_1)]$.
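A minimal sketch of this dominance check (strict improvement at the first, i.e. highest-priority, objective where the reward vectors differ):

```python
def lex_dominates(r2, r1):
    """True iff reward vector r2 lexicographically dominates r1, i.e. r2 is
    strictly larger at the first (highest-priority) objective where they
    differ; identical vectors do not strictly dominate each other."""
    for a, b in zip(r2, r1):
        if a > b:
            return True
        if a < b:
            return False
    return False
```

For example, `lex_dominates([0.5, 0.9], [0.5, 0.1])` is True (tie on the first objective, strictly better on the second), while `lex_dominates([0.4, 1.0], [0.5, 0.0])` is False (the highest-priority objective already decides).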
---
*Q2. Following Q1, I have one question regarding the proof of Lemma 3 in Appendix E. Specifically, why would Assumption 1 result in the argument in lines 797-800?*
We appreciate this critical observation. Upon re-examination, we recognize that Assumption 1 alone does not suffice to support the argument in lines 797-800. This oversight has led us to propose a revised assumption, detailed below.
**Key Revision Rationale:** Initially, we presumed that if individual rewards satisfied Assumption 1 (lexicographic dominance on immediate rewards), their weighted aggregation $\bar{Q}\_{k,h}^i$ would inherently preserve this property. However, this reasoning neglected the temporal dynamics of MDPs: actions at step $h$ influence not only immediate rewards but also future state distributions (via $\mathbb{P}\_h$). The original assumption only bounded trade-offs in immediate rewards ($r\_h^i$), failing to account for long-term value interactions in $\bar{Q}\_{k,h}^i$.
**Revised Assumption:** Let $\tilde{Q}^i\_h(x,a)=r\_{h}^i(x,a)+\[\mathbb{P}\_h \tilde{V}^i\_{h+1}\](x,a)$ for any $i\in[m]$ and $(x,a,h)\in S\times A\times [H]$. Let $\pi_*(x,h)$ denote the action chosen by the lexicographically optimal policy at $(x,h)$. We assume the trade-off among objectives is governed by $\lambda\geq0$, such that for all $h\in[H]$ and $i\in[m]$,
$$
\tilde{Q}^i\_{h}(x,a)-\tilde{Q}^i\_{h}(x,\pi_*(x,h))\leq \lambda \cdot \max\_{j\in[i-1]}\left\\{\tilde{Q}^j\_{h}(x,\pi\_*(x,h))-\tilde{Q}^j\_{h}(x,a)\right\\}.
$$
Here, $\tilde{V}^i_h(x)=\langle w(x), \mathbf{r}^i_{h:H}\rangle$, where $w(x)\in\mathbb{R}^{H-h+1}$ is a shared weighting vector across all objectives, and $\mathbf{r}^i_{h:H}=[r^i_{h}(\cdot,\cdot),r^i_{h+1}(\cdot,\cdot),\cdots, r^i_{H}(\cdot,\cdot)]$.
**Lines 797-800:** Under the revised assumption, we can demonstrate that Lines 797-800 hold because the action-value function $\bar{Q}^i_h(x,a)$ decomposes as $r_{h}^i(x,a) + \[\mathbb{P}\_h \hat{V}^i_{k,h+1}\](x,a)$, where $\hat{V}^i_{k,h+1}$ represents a weighted sum of rewards from step $h+1$ to $H$. Crucially, the weighting parameter $w(x) = \phi(x, \pi_k(x,h))^\top U_h^{-1}F_h$ is shared across all objectives, ensuring consistency in multi-objective optimization. Here, $F_h \in \mathbb{R}^{d\times k}$ denotes the feature matrix comprising historical state-action pairs, defined as $F_h = [\phi(x_{1,h},a_{1,h}), \ldots, \phi(x_{k,h},a_{k,h})]$.
**Comparison with Assumption 1:**
- *Advantage.* Assumption 1 requires that for any actions $a_1,a_2\in A$, if $a_2$ lexicographically dominates $a_1$, then their rewards satisfy the trade-off with value $\lambda$. The revised assumption fixes one of the actions as $\pi_*(x,h)$, relaxing the trade-off among actions.
- *Limitation.* The revised assumption introduces a shared weight $w(x)$ across objectives, ensuring **consistent reward processing** via $\tilde{V}^i_{h+1}(x)$. While this is natural in tabular MDPs (where $w(x)$ represents visit-count normalization), it imposes stricter conditions in linear MDPs due to the non-fixed nature of $w(x)$.
---
We thank the reviewer for prompting this clarification, which strengthens our theoretical framework. We are happy to answer more questions.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed explanation. This addresses my problem with Lemma 3. However, under the revised assumption, it seems that the lexicographically optimal policy can be directly solved by standard single-objective RL, where the rewards are given by $R_h(x,a) := \sum_{i=1}^m \lambda^{t(i-1)} \cdot r_h^i(x, a)$ for some integer $t := t(\lambda)$. This suggests the revision might be too restrictive. Below, I analyze the state-less case (but I think it can be generalized directly to the episodic setting).
Consider that $a^\star$ lexicographically dominates $a$. We assume $r^1(a^\star) > r^1(a)$ WLOG. It can be shown that $q(a^\star) \ge q(a)$, where $q(a) := \sum_{i=1}^m \lambda^{t(i-1)} \cdot r^i(a)$, for some integer $t$. First, we select $i_1 := \arg\max_{j \in [m]} r^j(a^\star) - r^j(a)$ and the "tail summation" over $j = i_1, i_1 + 1, \cdots, m$ satisfies $$\sum_{j=i_1}^m \lambda^{t(j-1)} \cdot (r^j(a^\star) - r^j(a)) \ge (1 - \sum_{j=i_1+1}^m \lambda^{t(j-i_1)+1}) \cdot \lambda^{t(i_1 - 1)}(r^{i_1}(a^\star) - r^{i_1}(a)).$$ Choose $t$ such that $\lambda^{t+1} \le 1 - \lambda^t$, and the above difference is positive. Next, we select $i_2 := \arg\max_{j \in [i_1 - 1] } r^j(a^\star) - r^j(a)$ and analyze the weighted summation over $j = i_2, i_2 + 1, \cdots, i_1 - 1$. Repeat the above procedure until $i_n=1$, and the argument is proved.
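The scalarization argument above is easy to sanity-check numerically. Below is a small state-less sketch with illustrative numbers of our own choosing (the rewards, $\lambda$, and $t$ are not taken from the paper): the trade-off condition is verified for a dominated action, and the scalarized value $q$ then ranks the dominant action first.

```python
# Toy check of the scalarization claim: under the trade-off assumption with
# parameter lam, the weighted sum q(a) = sum_i lam^(t*(i-1)) * r^i(a) prefers
# the lexicographically dominant action, for t with lam^(t+1) <= 1 - lam^t.
lam, t = 0.5, 1                      # lam^(t+1) = 0.25 <= 1 - lam^t = 0.5
assert lam ** (t + 1) <= 1 - lam ** t

r_star = [1.0, 0.2, 0.9]             # rewards of the dominant action a*
r_a    = [0.8, 0.3, 0.95]            # rewards of a dominated action a

# Verify the trade-off assumption: any gain of a over a* on objective i is
# bounded by lam times the largest loss on a higher-priority objective.
for i in range(1, len(r_a)):
    gain = r_a[i] - r_star[i]
    worst_loss = max(r_star[j] - r_a[j] for j in range(i))
    assert gain <= lam * worst_loss + 1e-12

def q(r):
    return sum(lam ** (t * i) * ri for i, ri in enumerate(r))

assert q(r_star) > q(r_a)            # a* wins: ~1.325 > ~1.1875
```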
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. I am unclear about your strategy for arm selection. Do you propose selecting arm $a_t$ as follows?
$$
a_t = \text{argmax}\_{a \in \mathcal{A}} \sum_{i=1}^m \lambda^{m-i} \hat{Q}_h^i(x, a)
$$
If this is your basic idea, I identify two potential concerns:
1. **Case when $\lambda = 0$:** In this case, this strategy becomes invalid because $\sum_{i=1}^m \lambda^{m-i} \hat{Q}_h^i(x, a) = \hat{Q}_h^m(x, a) $ for any arm $a \in \mathcal{A}$.
2. **Case when $\lambda > 0$:** In this case, this strategy introduces additional regret due to the weighted aggregation of rewards. We provide a detailed analysis below.
Since rewards are aggregated via a weighted sum, we must analyze the regret of this weighted sum across objectives:
$$
\lambda^{m-1} R^1(K) + \lambda^{m-2} R^2(K) + \ldots + R^m(K) = \sum_{k=1}^K \sum_{i=1}^m \lambda^{m-i} \left( V_{\pi_*,1}^i(x_{k,1}) - V_{\pi_k,1}^i(x_{k,1}) \right)= \sum_{k=1}^K \left[ \sum_{i=1}^m \lambda^{m-i} V_{\pi_*,1}^i(x_{k,1}) \right] - \left[ \sum_{i=1}^m \lambda^{m-i} V_{\pi_k,1}^i(x_{k,1}) \right].
$$
Following an analysis similar to that of the single-objective linear MDP (Jin et al., 2020), the regret bound for the weighted-sum method is:
$$
\lambda^{m-1} R^1(K) + \lambda^{m-2} R^2(K) + \ldots + R^m(K) \leq \sum_{i=1}^m \lambda^{m-i} \widetilde{O}\left( \sqrt{d^2 H^4 K} \right).
$$
This result shows that when using a weighted sum of rewards, **the regret bounds of all objectives are scaled with $m$**. For instance, the regret bound for the most important objective $(i = 1)$ becomes:
$$
R^1(K) \leq \sum_{i=1}^m \lambda^{1-i} \widetilde{O}\left( \sqrt{d^2 H^4 K} \right),
$$
which depends on both $\lambda$ and the number of objectives $m$. In contrast, our algorithm achieves a regret bound of $\widetilde{O}(\sqrt{d^2 H^4 K})$ for the first objective, **independent of $m$**.
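The $m$-dependence noted above can be made concrete with a quick numeric illustration (the value $\lambda = 0.5$ is our own illustrative choice): the factor $\sum_{i=1}^m \lambda^{1-i}$ multiplying the shared $\widetilde{O}(\sqrt{d^2H^4K})$ term grows geometrically in $m$.

```python
# Illustrative only: the weight lam^(1-i) attached to objective i in the
# weighted-sum regret bound grows geometrically in the number of objectives
# m when lam < 1, so the bound on R^1(K) picks up an m-dependent factor.
lam = 0.5
for m in (2, 5, 10):
    blowup = sum(lam ** (1 - i) for i in range(1, m + 1))
    print(m, blowup)   # 2 -> 3.0, 5 -> 31.0, 10 -> 1023.0
```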
---
Thank you again for your response. We are happy to answer more questions. | Summary: This paper studies linear Markov Decision Processes (where the transition function and reward function can be expressed using a known linear kernel and two unknown vectors). The paper introduces a novel algorithm for finding policies according to a lexicographic objective with bounded regret. While prior work has studied multi-objective optimisation in linear MDPs and lexicographic objectives in finite MDPs, no prior work has considered lexicographic objectives in linear lexicographic MDPs (which generalise finite MDPs). The paper also proves a PAC regret bound (in terms of each objective separately) for their algorithm, which may be the first regret bound for lexicographic MDPs. This bound assumes unknown (& linear) transition dynamics and reward. The paper also bounds regret when an MDP can be approximately expressed as a linear MDP.
## Update after rebuttal
I'm grateful to the authors for including additional information about, e.g., time/space complexity. I think this information significantly improves the paper. Given the complexity of the algorithms presented and the absence of a real implementation, I don't feel I can increase my score to a 5, although I still believe the paper should be accepted.
Claims And Evidence: All claims in the submission are made as formal statements and are supported by proofs in the appendix (see below).
Methods And Evaluation Criteria: The only method, Algorithm 1, is justified in terms only of the regret bound. The regret bound is an appropriate criterion if the work is viewed as foundational and is meant to lead to future algorithms that could be applied.
The paper does not state a specific application (a few potential applications of RL in general are mentioned); however, evaluation of the algorithm’s suitability for even toy applications is not given. For example, there is no analysis of the space or time complexity of the algorithm nor empirical application to a toy lexicographic linear MDP (see weaknesses below).
Theoretical Claims: I reviewed the major results presented in the appendix, and, to the best of my understanding, there are no problems with the correctness of the proofs.
Experimental Designs Or Analyses: There are no empirical experiments in this paper.
Supplementary Material: Yes, I reviewed the major results in the appendix.
Relation To Broader Scientific Literature: The results are well-connected to regret-bounded algorithms in single-objective RL. As far as I know, this is the first result bounding the regret of an algorithm for lexicographic RL.
A number of other papers propose lexicographic RL algorithms without regret bounds. Therefore, a full understanding of the key contributions of this paper may require an empirical comparison.
Essential References Not Discussed: Although there are a number of other papers exploring lexicographic multi-objective RL (e.g., https://arxiv.org/abs/2408.13493), I don’t know of any other results that attempt to prove regret bounds.
Other Strengths And Weaknesses: Strengths:
1. The paper is mostly presented with excellent clarity despite the technicality of its content.
2. The regret bound is, to the best of my knowledge, novel and requires no additional assumptions when compared with prior work (see weakness 1).
Weaknesses:
1. To aid with comparison to existing work investigating lexicographic objectives (not-necessarily-linear) MDPs (such as Skalse et al. 2022), a more explicit discussion of the relationship between MDPs and linear MDPs would be beneficial. For example, it could be clearly stated that finite MDPs can always be represented as linear MDPs with d=|S|\times|A|, as is shown in Example 2.1 of Jin et al. 2020.
2. The space and time complexity of Algorithm 1 is not discussed. I am uncertain, but it is possible that the complexity of this algorithm would make it intractable for real applications. For example, computing lines 20-23 of Algorithm 1 appears to require storing H * K state-action pairs in memory and then computing a least-squares estimate over K values. Although this could be competitive with similar algorithms with regret bounds (in the single-objective case), it may not be scalable to real environments. An explicit complexity analysis would help readers understand any (potential) limitations to application.
3. Compounding on the above, the paper does not implement the proposed algorithm and empirically demonstrate how well it performs in comparison to prior literature (which do conduct empirical analysis).
Other Comments Or Suggestions: I found very few typos in the paper. Example 1 says “arm” (which is not used elsewhere) instead of “action”, but in some contexts, these words are synonymous, so this did not impede clarity.
In the statement of theorems 1 and 2, it is not immediately clear whether the events that each of the m objective regrets satisfy the bound (with probability 1-2\delta) are co-occurring.
Questions For Authors: No further questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your thorough and constructive feedback. We have carefully considered each weakness and present our responses below, which will be incorporated into the revised paper.
---
*W1. To aid with comparison to existing work investigating lexicographic objectives (not-necessarily-linear) MDPs (such as Skalse et al. 2022), a more explicit discussion of the relationship between MDPs and linear MDPs would be beneficial.*
Thank you for highlighting this important connection. We have clarified the relationship between general and linear MDPs as follows:
1. Any finite MDP with state space $\mathcal{S}$ and action space $\mathcal{A}$ can always be represented as a linear MDP by encoding each state-action pair $(x, a)$ as a one-hot feature vector in $\mathbb{R}^d$, where $d=|\mathcal{S}| \times |\mathcal{A}|$. The transition kernel $\mathbb{P}(x'\mid x, a)$ and reward function $r(x, a)$ can be expressed as inner products between the feature vector of $(x, a)$ and learnable parameters.
2. While linear MDPs impose structure that enables tractable theoretical analysis (e.g., regret bounds in Jin et al. 2020), general MDPs with lexicographic objectives (as in Skalse et al. 2022) may not always adhere to this linearity. Our results for linear MDPs directly apply to finite MDPs, but lexicographic objectives in non-linear MDPs may require different algorithms.
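To make point 1 concrete, here is a minimal sketch (with illustrative tables of our own, not code from the paper) of the one-hot embedding of a finite MDP into a linear MDP with $d=|\mathcal{S}||\mathcal{A}|$, verifying that inner products recover the tabular rewards and transitions exactly.

```python
# Sketch: a finite MDP embeds into a linear MDP with d = |S|*|A| one-hot
# features, so rewards and transitions become inner products with fixed
# parameter vectors (cf. Example 2.1 of Jin et al. 2020).
S, A = 3, 2
d = S * A
idx = lambda x, a: x * A + a

# Arbitrary tabular reward and (uniform) transition tables, for illustration.
r = {(x, a): 0.1 * (x + 1) * (a + 1) for x in range(S) for a in range(A)}
P = {(x, a): [1.0 / S] * S for x in range(S) for a in range(A)}

def phi(x, a):                     # one-hot feature vector in R^d
    e = [0.0] * d
    e[idx(x, a)] = 1.0
    return e

theta = [r[(x, a)] for x in range(S) for a in range(A)]   # reward parameter
mu = {xp: [P[(x, a)][xp] for x in range(S) for a in range(A)]
      for xp in range(S)}                                 # measure per x'

dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
for x in range(S):
    for a in range(A):
        assert dot(phi(x, a), theta) == r[(x, a)]          # reward recovered
        for xp in range(S):
            assert dot(phi(x, a), mu[xp]) == P[(x, a)][xp] # transition recovered
```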
---
*W2. The space and time complexity of Algorithm 1 is not discussed $\cdots$ An explicit complexity analysis would help readers understand any (potential) limitations to application.*
We thank the reviewer for the constructive feedback. Below, we provide a detailed complexity analysis of Algorithm 1 and discuss its limitations:
1. Step 7: Computational complexity is $O(d^2|\mathcal{A}|)$, while memory complexity is $O(|\mathcal{S}||\mathcal{A}|)$ for storing state-action pairs.
2. Step 8: LAE requires $O(md|\mathcal{A}|)$ computations and $O(|\mathcal{S}||\mathcal{A}|)$ memory.
3. Step 20: Both computational and memory complexity are $O(d^2)$, as $U_h$ can be updated incrementally.
4. Step 21: Computational complexity is $O(mk)$ for updating $m \cdot k$ values. Memory complexity is $O(mk+md)$ to store $\\{\hat{r}\_{\tau,h}^i, (x\_{\tau,h+1}, a\_{\tau,h+1})\\}\_{\tau\in[k]}^{i\in[m]}$ and $\\{\hat{w}\_h^i\\}\_{i\in[m]}$.
5. Step 22: Computational complexity is $O(mkd+md^2)$, dominated by inverting $U_h$ ($O(d^2)$) and computing $m$ linear regressions ($O(mkd+md^2)$). Memory complexity is $O(md)$ for storing weight vectors $\\{\hat{w}\_h^i\\}\_{i\in[m]}$.
6. Step 23: No additional resources are required, as Q-values can be updated directly from retained $\{\hat{w}_h^i\}_{i\in[m]}$.
Summing across all $H$ MDP layers:
- The computational complexity is $O(Hmd|\mathcal{A}|+Hd^2|\mathcal{A}|+Hmkd+Hmd^2)$.
- The memory is $O(Hd^2+Hmk+Hmd+|\mathcal{S}||\mathcal{A}|)$.
Summing over $K$ rounds:
- The **total computational complexity** is $O(KHd|\mathcal{A}|(m+d)+K^2Hmd+KHmd^2)$ .
- The memory of our algorithm is $O(Hd^2+HmK+Hmd+|\mathcal{S}||\mathcal{A}|)$.
The $O(K^2)$ computational complexity and $O(K)$ memory are much more expensive than standard bandit algorithms, which typically achieve $O(K)$ computation and $O(1)$ memory. However, our approach remains competitive with existing methods in single-objective MDPs (Jin et al. 2020; Zanette et al. 2020; He et al. 2023). In the revised paper, we will explicitly discuss the computational and memory complexities of our method to clarify its practical applicability.
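As an aside on Step 20, the incremental update of $U_h^{-1}$ mentioned above is the standard rank-one (Sherman-Morrison) update, which costs $O(d^2)$ instead of an $O(d^3)$ re-inversion. The $2\times 2$ pure-Python sketch below (our own illustration, not the paper's code) checks the identity against direct re-inversion.

```python
# Sherman-Morrison for a symmetric rank-one update U' = U + phi phi^T:
#   U'^{-1} = U^{-1} - (U^{-1} phi)(U^{-1} phi)^T / (1 + phi^T U^{-1} phi)
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def inv2(M):                                 # direct 2x2 inverse for checking
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

U = [[2.0, 0.5], [0.5, 3.0]]                 # symmetric positive definite
phi = [1.0, 2.0]

Uinv = inv2(U)
u = matvec(Uinv, phi)                        # U^{-1} phi
denom = 1.0 + sum(p * ui for p, ui in zip(phi, u))
sm = [[Uinv[i][j] - u[i] * u[j] / denom for j in range(2)] for i in range(2)]

U_new = [[U[i][j] + phi[i] * phi[j] for j in range(2)] for i in range(2)]
direct = inv2(U_new)
assert all(abs(sm[i][j] - direct[i][j]) < 1e-12 for i in range(2) for j in range(2))
```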
---
*W3. Compounding on the above, the paper does not implement the proposed algorithm and empirically demonstrate how well it performs in comparison to prior literature (which do conduct empirical analysis).*
Thank you for raising this issue. The absence of empirical validation in our work aligns with the foundational single-objective linear MDP studies (Jin et al. 2020; Zanette et al. 2020; He et al. 2023), which similarly omit experiments due to challenges in constructing valid linear MDP benchmarks (e.g., enforcing low-rank dynamics and linear payoff structures). We plan to address this limitation in two phases: i) Synthetic experiments will be added to illustrate key theoretical properties. ii) Comprehensive empirical comparisons against heuristic baselines and ablations will be conducted in follow-up work. We appreciate your feedback and welcome suggestions for specific experimental protocols or baseline implementations.
---
*W4. **Other Comments Or Suggestions:** In the statement of theorems 1 and 2, it is not immediately clear whether the events that each of the m objective regrets satisfy the bound (with probability 1-2\delta) are co-occurring.*
Many thanks for your detailed review. The events that each of the $m$ objective regrets satisfy the bound are co-occurring. We have polished the paper to avoid typos and improve clarity in the revised version.
## update after rebuttal
I found the work to be making a substantial contribution. I am glad to see the authors considering the removal of contribution 4 and to tone down the primacy claims. I think the paper makes a significant contribution without over-claiming or exaggerating. I think this paper would make a nice contribution to the conference.
Claims And Evidence: The paper explores how techniques in the finite horizon linear MDP setting can be applied to the space of the lexicographically ordered objectives. The paper claims four contributions: A MORL algorithm, a regret bound for the algorithm, a regret bound for the algorithm in the misspecified MORL setting, and a claim of being the first MORL algorithm with a regret bound.
Contributions 1-3 are well supported and insightful. However, the finite-horizon assumption shouldn't be glossed over as just part of the formalization of the policy 3 pages in, but should be stated much earlier in regards to contextualizing the contributions.
Contribution 4 seems disingenuous. Since the authors’ work is limited to linear MDPs, I don't know what it means to be the "first theoretical regret bound for MORL". In a pedantic sense, it's not true: consider MDPs with a single state and action, all algorithms have zero regret, so there’s a theoretical bound. What about lexicographic ordering in bandits (e.g., Huyuk et al., 2019; Xu & Klabjan, 2023; Xue et al., 2024; all just from a cursory Google search)? Isn't that just a restricted class of MDPs like the authors' restriction? I would strongly recommend dropping this bullet point, and just roll the observation of the sqrt(K) term into one of the other stated contributions (which is the main defensible statement in the contribution).
Methods And Evaluation Criteria: Yes. However this is marred by what seems like overclaiming and exaggeration of the results.
For example, "All of the aforementioned MORL studies primarily rely on empirical evaluations, with limited attention given to theoretical guarantees. This absence of formal analysis has impeded the development of principled algorithms with provable performance bounds." This feels needlessly pejorative. It is true that these past works did not report a regret bound on linear MDPs, but many did some form of formal analysis. Not providing regret bounds does not mean they are not theoretically grounded. For example, Gabor et al. (1998) provided convergence results. How is that not a theoretical guarantee? The second statement is nonsensical and more pejorative; essentially it’s saying that the lack of algorithms with provable performance bounds has impeded the development of algorithms with provable performance bounds.
Table 1 I find to also be disingenuous. Why is Xu (2020) a row in the table when it introduced an algorithm for finding the Pareto-optimal frontier, which is an entirely different problem? Even Skalse (2022) seems odd as it makes no linear MDP assumptions. These rows seem to be needlessly propping up the authors' work by critiquing others that had different goals.
The paper makes a valuable contribution without this need to oversell their own results and minimize the results of others. What seems a more accurate description of what's going on is that the authors are using recent techniques for the only-recently explored space of linear MDPs and establishing how they extend into the space of the lexicographic ordering objective. This is interesting research and deserving to be disseminated without any need to exaggerate the contribution.
Theoretical Claims: I mostly followed the definition of the algorithm and the approach all seems plausible, but I did not verify any of the proofs (all in the supplementary material).
Experimental Designs Or Analyses: NA.
Supplementary Material: No.
Relation To Broader Scientific Literature: This is one of the weaknesses of the paper.
According to the introduction, MORL started in 2013, but no interesting advance happened until this decade.
Yet the cited 2013 paper is, in fact, a survey with over a hundred citations of work in this area going back decades. Lexicographic ordering in RL goes back at least to Gabor et al. (1998), which (to be fair) is discussed eventually on page 3, but long after the introduction seems to disregard this history.
I would say this is doing a poor job of placing the contributions in the broader scientific literature.
Essential References Not Discussed: I don't think there's an essential reference missing, but I hope the authors consider being more careful in discussing the historical line of work.
Other Strengths And Weaknesses: I really like section 6. I find it helpful to understand what's going on. However, I feel like Section 6 is out of order. Shouldn't this come before Section 4, as it motivates the underlying techniques employed in the algorithm? Or at least 6.1 should come first?
Some additional discussion of where assumption 1 comes from would be nice. This assumption seems super restrictive! Might it actually allow the objective to satisfy the Continuity axiom of von Neumann and Morganstern? This would then admit a single scalar reward signal that would allow maximizing its expectation reducing the entire problem to traditional RL. For finite states and actions, there always exists such a lambda, and so this assumption just creates a constant to use in the bound. However, what if the action space allows for continuous probabilities such as admitting stochastic policies, this seems to rule out certain discontinuities that would naturally arise with lexicographic orderings and are what makes them challenging to begin with (e.g., see Gabor et al., 1998; Bowling et al., 2023).
I think the introduction of linear RL in Section 2 should include some discussion of what "linearity in both the reward function and transition probabilities" means, and a definition of “misspecified”. The current related work section has no real value. It's merely a list of recent papers and the resulting regret bound. The form of the problem addressed (e.g., exactly what is linear and how) would seem way more important than the regret bound form itself. After all, your main deviation is in the form of the problem to explore a lexicographic objective.
Other Comments Or Suggestions: * Line 189: "while still maintaining the feasibility of optimizing": the word choice of "feasibility" seems odd here. Maybe a better wording would be "while still allowing some optimization of".
* Line 301: "is a \epsilon-approximate" shouldn't it be "is an \epsilon-approximate"?
* Line 371: "Next, the agent proceeds to eliminate arms based on the second objective." First time the word "arms" is used in the paper. I understand, but this is adding confusion.
* Line 373: "which is disappointing because a_3 is awful for the third objective." Isn't it disappointing because a_3 is not the optimal action (since 4 < 5)?
* The term "the lexicographically optimal policy..." is used in several places. The policy is likely not unique, so maybe this should say "A lexicographically optimal policy..."
* Why is lexicographic ordering defined as "dominates"? Dominance I would expect to be reserved for a partial order, not the total order of lexicographic ordering.
* I believe there are typos in the definitions of the action value functions. Throughout the paper, x is used to denote state however in the value function definitions, reward is written as a function of s.
* There is also a typo in Assumption 1 that is critical to fix. I believe the superscript of the reward terms should be a j. As stated, the inequality does not make sense.
* Brackets around the transition-kernel value-function product before/in Equation (1) is confusing given the meaning of brackets as a set.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the constructive feedback and have carefully considered the raised concerns. Our point-by-point responses follow below.
---
*Q1. **Claims And Evidence:** Contributions 1-3 are well supported and insightful. However, the finite-horizon assumption shouldn't be glossed over $\cdots$.*
We appreciate the reviewer’s positive feedback on Contributions 1-3 and their helpful suggestion. We agree that introducing the finite-horizon assumption earlier in the paper will better clarify our work’s scope. In the revised version, we will highlight this in the Introduction.
---
*Q2. **Claims And Evidence:** Contribution 4 seems disingenuous $\cdots$ Isn't that just a restricted class of MDPs like the authors' restriction?*
We agree that calling our work the "first theoretical regret bound for MORL" was inaccurate, especially given prior work on lexicographic bandits (e.g., Huyuk et al., 2019), which addresses the special case of MDPs with $H=1$. We will remove Contribution 4 and instead integrate the discussion of the $\sqrt{K}$ regret term into our other contributions. Meanwhile, we will add comparisons to bandit-based MORL frameworks (Huyuk et al., 2019) in the related work section.
---
*Q3. **Methods And Evaluation Criteria:** Why is Xu (2020) a row in the table $\cdots$ Even Skalse (2022) seems odd as it makes no linear MDP assumptions.*
To the best of our knowledge, no prior work specifically tackles multi-objective linear MDPs, so we refer to general multi-objective MDP (MOMDP) frameworks. Xu et al. (2020) focuses on finding Pareto-optimal frontiers in MOMDPs under Pareto ordering. Skalse et al. (2022) studies lexicographic ordering in MOMDPs without assuming linearity. We will update the table by removing Xu et al. (2020) and Skalse et al. (2022), retaining only works directly relevant to **linear MDPs** to avoid confusion.
---
*Q4. **Relation To Broader Scientific Literature:** Lexicographic ordering in RL goes back at least to Gabor et al. (1998), which (to be fair) is discussed eventually on page 3, but long after the introduction seems to disregard this history.*
We thank the reviewer for their helpful comment on the history of lexicographic ordering in RL. We will restructure the introduction to foreground the seminal work of Gabor et al. (1998) as the conceptual origin of lexicographic RL so as to strengthen the paper’s scholarly context.
---
*Q5. **Other Strengths And Weaknesses:** Some additional discussion of where assumption 1 comes from would be nice.*
Assumption 1 addresses the Optimal Action Preservation Dilemma (**Section 6**). In Example 1, there are three Q-value vectors: $[5,5,5], [1,5,5]$ and $[4,10,1]$ for actions $a_1, a_2$ and $a_3$, where $\lambda=\frac{10-5}{5-4}=5$. $a_1$ is lexicographically optimal. When eliminating actions based on the first objective, $a_2$ is eliminated since $1$ is far from $5$, but $a_3$ is kept as $4$ is close to $5$, leaving $\\{a_1,a_3\\}$. Next, elimination considers the second objective. Although $10$ (from $a_3$) is much bigger than $5$ (from $a_1$), the confidence term $\beta_k\cdot C$ is scaled by $2+4\lambda$ (Step 4 of Algorithm 2), ensuring $a_1$ stays in $A_s^2$.
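The elimination dynamics of this example can be replayed in a few lines. The confidence width $C$ below is a hypothetical stand-in for $\beta_k\cdot C$ (Algorithm 2 is not reproduced here); the point is only that a naive width loses the lexicographic optimum on the second objective, while the $(2+4\lambda)$-inflated width retains it.

```python
# Toy replay of Example 1; the width C is hypothetical, for illustration only.
Q = {"a1": [5, 5, 5], "a2": [1, 5, 5], "a3": [4, 10, 1]}
lam = 5.0                          # trade-off value from the example
C = 2.0                            # hypothetical confidence width

# Objective 1: keep actions within C of the best first-objective value.
best1 = max(q[0] for q in Q.values())
kept = {a for a, q in Q.items() if best1 - q[0] <= C}
assert kept == {"a1", "a3"}        # a2 eliminated, near-optimal a3 survives

# Objective 2 with the naive width: the lexicographic optimum a1 is lost...
best2 = max(Q[a][1] for a in kept)
naive = {a for a in kept if best2 - Q[a][1] <= C}
assert "a1" not in naive

# ...but with the width inflated by (2 + 4*lam), a1 stays in the active set.
inflated = {a for a in kept if best2 - Q[a][1] <= (2 + 4 * lam) * C}
assert "a1" in inflated
```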
---
*Q6. **Other Strengths And Weaknesses:** Might it actually allow the objective to satisfy the Continuity axiom of von Neumann and Morganstern?*
The continuity axiom says that for three options where $A \succ B \succ C$, some mix of $A$ and $C$ should be exactly as good as $B$. This means no outcome is infinitely better or worse than another. But in lexicographic optimization, higher-priority goals (like safety) are infinitely more important than lower ones (like cost). Thus, the continuity axiom may not apply, since no trade-off can make goals of different priorities equivalent.
---
*Q7. **Other Strengths And Weaknesses:** I think the introduction of linear RL in Section 2 should include some discussion of what "linearity in both the reward function and transition probabilities" means, and a definition of “misspecified”.*
We will revise Section 2 to clarify the concepts of "linearity in both the reward function and transition probabilities" and discuss "misspecified." Specifically, we will explain that in linear RL the reward function is $r_h(x,a) = \phi(x,a)^\top \theta_h$, where $\phi(x,a)$ is a known feature vector and $\theta_h$ is unknown. The transition probabilities follow $\mathbb{P}_h(x'|x,a)=\phi(x,a)^\top \mu_h(x')$, where $\mu_h(x')$ is an unknown measure. Additionally, we will clarify that "misspecified" refers to settings where the true environment deviates from the assumed linear class (e.g., due to approximation errors in rewards or transitions).
---
*Q8. **Other Comments Or Suggestions.***
We sincerely appreciate the reviewer’s detailed feedback, which has significantly contributed to improving the quality of our paper. All suggested revisions have been carefully considered and will be incorporated into the final version of the paper. | null | null | null | null | null | null |
Residual Matrix Transformers: Scaling the Size of the Residual Stream | Accept (poster) | Summary: The Residual Matrix Transformer (RMT) replaces the residual stream in transformers with an outer product memory matrix, allowing independent scaling of the residual stream size. This results in improved training efficiency and better performance on downstream tasks.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: None
Experimental Designs Or Analyses: Yes
Supplementary Material: No
Relation To Broader Scientific Literature: None
Essential References Not Discussed: None
Other Strengths And Weaknesses: 1. Runtime efficiency concerns: The authors acknowledge that runtime is currently the biggest limitation of their model, with RMT being 4% slower than the transformer despite being more FLOP-efficient. This suggests potential implementation inefficiencies that could limit practical adoption.
2. Limited model size exploration: Due to resource constraints, the authors couldn't explore how efficiency trends continue at larger model sizes beyond 405M parameters, leaving questions about scalability to truly large models.
3. Baseline Comparisons: The transformer variants in §4.3 (Dou et al., 2018; Xu et al., 2024) are not state-of-the-art (e.g., recent works like Mamba or RWKV). Including stronger baselines would better contextualize RMT’s advancements.
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank Reviewer ciWT for their comments and provide responses to their concerns about runtime efficiency and baseline comparisons.
For concerns about runtime, we will copy and paste a relevant snippet of our response to reviewer J7dP. We encourage Reviewer ciWT to read our discussion with Reviewer J7dP if more context is needed.
We do not think runtime is an insurmountable obstacle for our model for two reasons. The first is that, without any hardware optimizations, the RMT only takes 4% more time to achieve the same train loss as the Transformer. This shows that the performance gain needed is within reasonable reach. The second reason is that, while the residual stream size is much larger in the RMT, many of the parameter matrices are much smaller. One can use this fact to devise more efficient kernels for the RMT. For example, many of the “key” parameters are so small that they can fit entirely into SMEM. One can then imagine a GEMM kernel that only has to reload one operand matrix into SMEM, significantly decreasing the data transfer overhead.
We further assert that the main focus of our paper is to explore residual stream size as a new scaling axis. While we do discuss runtime in the interest of full transparency, we consider the discussion of specific hardware optimizations to be out of the scope of this work.
Next, we address concerns about baseline comparisons.
> The transformer variants in §4.3 (Dou et al., 2018; Xu et al., 2024) are not state-of-the-art (e.g., recent works like Mamba or RWKV). Including stronger baselines would better contextualize RMT’s advancements.
We consider works like Mamba and RWKV to be orthogonal to our work because they primarily change the attention layer of the Transformer. Because these architectures use the residual stream in the same way the Transformer does, our method can be extended to be integrated with them. We agree with the reviewer that some clarifying discussion should be added to our paper. | Summary: The paper introduces Residual Matrix Transformers (RMT), which increases the size of the residual stream in a transformer without incurring significant compute over memory overhead by using an outer product memory matrix. In training GPT-2 language models, RMT achieves better loss per unit of compute or parameters.
Claims And Evidence: The GPT-2 experiments support the claim that RMT outperforms the standard transformer, but experiments on other datasets (e.g. images) would be helpful for assessing the generalizability of the finding.
Methods And Evaluation Criteria: The GPT-2 experiment on OpenWebText is a reasonable benchmark. But it would be useful to understand if applying RMT still improves performance when applied on top of the modern transformer architecture actually used in practice such as the one used in llama models (with rotary embedding, SwiGLU activation, RMSNorm instead of LayerNorm and no biases).
Theoretical Claims: N/A
Experimental Designs Or Analyses: The use of µP for estimating optimal learning rates for different models is not necessarily sound, given that the training iterations are not constant across model sizes, which is an assumption in µP. In addition, I'm concerned that the larger learning rate used for RMT may unfairly favor it over standard transformers. Overall I think a more careful learning rate sweep is necessary to demonstrate the superiority of RMT convincingly.
Supplementary Material: I have read the full supplementary material.
Relation To Broader Scientific Literature: Finding new axes worth scaling beyond the usual ones, like compute and parameters, is an important research direction. The finding that scaling the residual stream size alone leads to considerable performance gains is interesting and relevant to the community. On the other hand, as the authors discussed, this approach increases the memory overhead and data transfer, which can translate to slower runtimes. While RMT performs better than standard transformers when controlling for FLOPs, ultimately, we care about performance per unit of time, and using fewer FLOPs is not relevant if the hardware utilization is compromised. Indeed, operations like attention are often memory-bound rather than compute-bound, and techniques that reduce the data transfer can significantly speed up the runtimes even while increasing the FLOPs, as exemplified by FlashAttention [1].
[1] Dao, Tri, et al. "Flashattention: Fast and memory-efficient exact attention with io-awareness." Advances in neural information processing systems 35 (2022): 16344-16359.
Essential References Not Discussed: None that I'm aware of.
Other Strengths And Weaknesses: Strength: Scaling the residual stream size is a novel idea and appears to be a promising research direction.
Weakness: RMT makes multiple modifications to the transformer architecture. These modifications do not represent the unique way of scaling the residual stream size and deserve to be motivated better or be ablated to illustrate which components actually matter for its performance.
Other Comments Or Suggestions: While the authors do not address the problem of the slower runtime of RMT, I believe it is an important question and it is unclear whether the runtime overhead of RMT can be addressed even in principle due to the additional data transfer. I suggest that the authors analyze and discuss whether the runtime overhead of RMT can, in fact, be addressed and by what kind of approaches. For example, if most of the overhead is in the additional data transfer involving the larger residual stream matrix, that overhead may in fact be irreducible without changing the hardware.
Questions For Authors: 1. When using µP for estimating optimal learning rates, do you properly scale the found learning rate per layer from smaller to larger models?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank reviewer J7dP for their thoughtful reading of the paper and appreciate their comment that this work presents an important research direction. We provide responses to many of their concerns grouped by subject.
We first address the concerns related to our application of µP transfer.
> The use of µP for estimating optimal learning rates for different models is not necessarily sound, given that the training iterations are not constant across model sizes, which is an assumption in µP.
We argue that using µP to transfer hyperparameters across training iterations is reasonable given the empirical evidence provided in the µP paper. The authors empirically showed that their method works across sequence length, depth, batch size, and training time (see Table 1 of their paper). In fact, the only hyperparameter that the paper theoretically proved transferability across was model width. In their GPT-3 experiment (the closest setting to ours), the authors successfully transferred hyperparameters from a proxy model trained with significantly fewer training iterations. Given the success of their experiment, we felt that it was reasonable to take the same approach that they did.
> In addition, I'm concerned that the larger learning rate used for RMT may unfairly favor it over standard transformers. Overall I think a more careful learning rate sweep is necessary to demonstrate the superiority of RMT convincingly.
In our main comparison between the RMT and the Transformer (§4.2), we included learning rate as one of the hyperparameters in our μP search and used the best performing learning rates of both the RMT and transformer models.
> When using µP for estimating optimal learning rates, do you properly scale the found learning rate per layer from smaller to larger models?
We scale learning rate in accordance with Table 8 and Appendix B.1 of the µP paper (i.e. by $\frac{1}{\textrm{width ratio}}$ for hidden parameters).
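As a minimal sketch of this scaling rule (the helper name and the widths are illustrative, not from the paper), the per-layer transfer for hidden parameters reduces to dividing the tuned base learning rate by the width ratio:

```python
def mup_hidden_lr(base_lr: float, base_width: int, target_width: int) -> float:
    """Scale a tuned base learning rate for hidden (matrix-like) parameters
    by 1 / width ratio, following Table 8 / Appendix B.1 of the muP paper."""
    width_ratio = target_width / base_width
    return base_lr / width_ratio

# A learning rate tuned on a width-256 proxy model, transferred to width 1024:
lr = mup_hidden_lr(3e-3, base_width=256, target_width=1024)  # 0.00075
```

Non-hidden parameters (e.g. embeddings, biases) follow different rules in µP; the sketch covers only the hidden-parameter case mentioned above.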
Here we give a response to the reviewer’s concerns about the runtime of our model.
We do not think runtime is an insurmountable obstacle for our model for two reasons. The first is that, without any hardware optimizations, the RMT only takes 4% more time to achieve the same train loss as the transformer. This shows that the performance gain needed is within reasonable reach. The second reason is that, while the residual stream size is much larger in the RMT, many of the parameter matrices are much smaller. One can use this fact to devise more efficient kernels for the RMT. For example, many of the “key” parameters are so small that they can fit entirely into SMEM. One can then imagine a GEMM kernel that only has to reload one operand matrix into SMEM, significantly decreasing the data transfer overhead.
We further assert that the main focus of our paper is to explore residual stream size as a new scaling axis. While we do discuss runtime in the interest of full transparency, we consider the discussion of specific hardware optimizations to be out of the scope of this work.
Finally, we respond to the following concern:
> RMT makes multiple modifications to the transformer architecture. These modifications do not represent the unique way of scaling the residual stream size and deserve to be motivated better or be ablated to illustrate which components actually matter for its performance.
We acknowledge the concern that RMT introduces several modifications to the transformer architecture that may seem arbitrary without proper justification. However, we would emphasize that these changes represent the minimal adjustments necessary to support the outer product memory matrix structure of its residual stream. Each modification is integral to the model's coherent functioning - removing any single element would create architectural inconsistencies. While we certainly don't claim RMT represents the only possible approach to scaling residual stream size, our work demonstrates one viable method for doing so, allowing us to investigate the effects of scaling along this axis. | Summary: To achieve more data-efficient and compute-efficient models, this paper introduces a new transformer-variant called the Residual Matrix Transformer (RMT), which replaces the traditional residual stream with an outer product memory matrix. The authors present theory showing that the RMT exhibits efficient scaling of the residual stream and improved variance propagation properties in some cases.
Experimental results demonstrate that when using an RMT with a larger residual stream size, it is more efficient than traditional transformer models in terms of data, FLOPS, and parameters. Additionally, it proves to be superior in performance compared to various transformer variants. Ultimately, the paper shows that increasing the residual stream size leads to better model performance.
Claims And Evidence: Most of the claims made in the submission are supported by clear and compelling evidence, but some of them also need further justification (see Experimental Designs Or Analyses).
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for the problem and application at hand.
Theoretical Claims: I checked the proofs for the theoretical claims, and they were correct and well-justified.
Experimental Designs Or Analyses: - The experiments may lack some ablation studies to demonstrate the contributions of different factors. For instance, the authors need to demonstrate whether the superior performance of RMT over the Transformer stems from the outer product memory matrix structure or is simply a result of the increased residual stream size. In the main comparison, RMT outperforms the Transformer, but its residual stream is 2.5 to 4 times larger, which could also be a contributing factor to the observed performance gains. Beyond the analysis provided in §3.1 of the paper, thorough ablation experiments are crucial to validate these claims.
- The paper states in lines 81-83 that "RMT has improved variance propagation." However, Table 2 reveals that RMT underperforms the Transformer in the Attention retrieval case. These "bad cases" need further analysis to provide readers with a comprehensive understanding of the proposed method's performance and its potential application scenarios.
Supplementary Material: Yes, I reviewed the supplementary material, and I found it to be satisfactory.
Relation To Broader Scientific Literature: The key contributions of the paper align well with existing literature, building on prior findings and enhancing our understanding of the topic.
Essential References Not Discussed: Yes, there are essential references that could enhance understanding.
Other Strengths And Weaknesses: No
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer fZLC’s thoughtful feedback and positive comments. We will address each of this reviewer’s concerns in the order they appear.
> The experiments may lack some ablation studies to demonstrate the contributions of different factors. For instance, the authors need to demonstrate whether the superior performance of RMT over the Transformer stems from the outer product memory matrix structure or is simply a result of the increased residual stream size.
This is a valid point that we will make more clear in the paper. In our experiments, we find that the performance gains are not attributable to the outer product memory matrix structure itself, but rather to the expanded residual stream that this structure enables. One of the RMT models in §4.4 has the same residual stream size as the Transformer in §4.3, and these models are trained with the exact same experimental settings. The final train loss that the RMT achieves is 3.42 while the final train loss that the Transformer achieves is 3.43, showing that when the residual stream size is the same, the observed performance is about the same. These results suggest that the observed performance gains of the RMT over the Transformer in §4.1 are due to the expanded residual stream.
> In the main comparison, RMT outperforms the Transformer, but its residual stream is 2.5 to 4 times larger, which could also be a contributing factor to the observed performance gains. Beyond the analysis provided in §3.1 of the paper, thorough ablation experiments are crucial to validate these claims.
As mentioned above, the observed performance gains are most likely due to the residual stream being 2.5 to 4 times larger. We would like to clarify, however, that the ability to expand the residual stream to this size is made possible by the RMT’s outer product memory matrix structure. Expanding the residual stream of the Transformer would substantially increase the model’s parameter count and per-example compute cost. As such, an ablation where the Transformer’s residual stream size is expanded but the parameter count, FLOP count, and tokens consumed is fixed is not possible.
> The paper states in lines 81-83 that "RMT has improved variance propagation." However, Table 2 reveals that RMT underperforms the Transformer in the Attention retrieval case. These "bad cases" need further analysis to provide readers with a comprehensive understanding of the proposed method's performance and its potential application scenarios.
This is also a valid point; however, we consider performing a full analysis of the end-to-end signal propagation through the RMT to be out of the scope of this paper (for example, an entire paper was dedicated to this for the Transformer (Kedia et al., 2024)). We maintain that improving the signal propagation of 3 out of the 4 replaced components roughly shows that our model has superior signal propagation properties compared to the Transformer. We also want to note that we found an error in the submitted manuscript. In Table 2, the attention storage and retrieval numbers are swapped, i.e. the RMT outperforms the Transformer in the Attention retrieval case and underperforms the Transformer in the Attention storage case. A correction to the table is provided below.
|Layer|Operation|Model|$\frac{\sigma^{2}_{x_{out}}}{\sigma^{2}_{x_{in}}}$|$\frac{\sigma^{2}_{g_{in}}}{\sigma^{2}_{g_{out}}}$|
|---|---|---|---|---|
|Attn|Storage|RMT|0.4|1.6|
|Attn|Storage|Transformer|1|1|
|Attn|Retrieval|RMT|1.14|0.86|
|Attn|Retrieval|Transformer|0.5|1.5|
> Yes, there are essential references that could enhance understanding.
We have included all the references that we find enhance the understanding of our work, and we welcome any specific suggestions for additional sources that would strengthen the manuscript. | null | null | null | null | null | null | null | null |
From Debate to Equilibrium: Belief‑Driven Multi‑Agent LLM Reasoning via Bayesian Nash Equilibrium | Accept (poster) | Summary: The paper introduces ECON, a hierarchical reinforcement learning framework designed to optimize multi-agent reasoning in Large Language Models (LLMs) by leveraging Bayesian Nash Equilibrium (BNE). ECON replaces inter-agent communication with a belief mechanism to save communication costs and make it easier to scale up the population of agents. LLMs integrate distributed reasoning and achieve centralized commitment by a Coordinator. Experiments show the proposed method surpasses the existing methods.
Claims And Evidence: 1. The paper claims that "they conceptually formalize BNE in Multi-LLM systems and instantiate it through a hierarchical optimization framework to improve collaborative reasoning", but the paper does not seem to clearly describe how BNE is implemented in practice in Section 2.3 (Framework of ECON).
2. One of the main claims of the paper is the scalability of the proposed method, while in Figure 4, the performance does not improve while increasing the number of Execution LLMs.
Methods And Evaluation Criteria: 1. I agree that inter-agent communication is one of the key concerns in LLM agents, but it does not mean it is better to remove such communication entirely. The belief mechanism proposed in the paper seems to follow the design in MARL, which lacks interpretability and generalizability. I believe the direction of this area should be to improve the efficiency of inter-agent communication instead of giving it up.
2. Benchmarks (GSM8K, MATH, TravelPlanner) are standard for reasoning tasks. Metrics (accuracy, token usage) align with the goals of efficiency and performance.
Theoretical Claims: I did not find errors here.
Experimental Designs Or Analyses: 1. The ablation study does not analyze the effectiveness of the different modules of the mix network. It may not be clear why the mix network should be designed in this way.
2. The paper mentions the method "outperforms single-LLM approaches by 10.9% and surpasses existing multi-LLM methods by 11.2%". Why are the multi-LLM methods even worse than single ones?
Supplementary Material: Yes, mainly the parts C, D.
Relation To Broader Scientific Literature: 1. The paper incorporates the belief mechanism from MARL into LLM agents.
2. The paper further improves multi-agent debate method to executor-coordinator architecture.
3. The paper introduces Nash Equilibrium from game theory to LLM agents.
Essential References Not Discussed: 1. Belief mechanisms in MARL are not discussed, e.g. [1].
2. Multi-LLM-agent collaboration methods beyond debates are not included, e.g. [2][3].
[1] Wang, Yuanfei, Jing Xu, and Yizhou Wang. "ToM2C: Target-oriented Multi-agent Communication and Cooperation with Theory of Mind." International Conference on Learning Representations.
[2] Li, Guohao, et al. "Camel: Communicative agents for" mind" exploration of large language model society." Advances in Neural Information Processing Systems 36 (2023): 51991-52008.
[3] Guo, Xudong, et al. "Embodied LLM Agents Learn to Cooperate in Organized Teams." Language Gamification-NeurIPS 2024 Workshop.
Other Strengths And Weaknesses: Strengths:
1. The paper explores a novel method to optimize the multi-agent reasoning with LLM.
Weaknesses:
1. The definition and effectiveness of belief in such tasks are not clear.
2. In Figure 2, the components of the framework are not clear enough. Which parts are based on LLM?
Other Comments Or Suggestions: 1. A space is missing on Page 1 " challenge(Liu et al., 2024b)."
2. On page 6 "Figure4"
3. On page 37 "Figure 14: case study of math"
Questions For Authors: Refer to the previous parts.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer t4Dv, thanks for the feedback and suggestions, we will add clarification where needed as space permits.
### Q1:Explain the Implementation of BNE in Sec 2.3
**Regarding the definition of belief**: a belief represents each Execution LLM's policy derived from partial observations and local history, and is updated by a belief network. To enable coherent joint behavior, a belief encoder aggregates individual belief states into a group-level representation. The mix network then uses this integrated belief information to produce a global Q-value, and loss and reward are computed after the commitment is made. This architecture aligns with our formal definition of BNE in Sec 2.2, where agent policies are optimized through belief network updates until reaching a strategy profile in which no agent can unilaterally improve its payoff given its beliefs.
**Regarding the effectiveness** of belief-based coordination, we conducted additional experiments to address this concern. **A performance comparison with/without achieving BNE demonstrated an average performance improvement of 14%**; details can be found in the **response to reviewer 7mra, Q1**.
### Q2: Explain the Result in Figure 4
We would like to clarify that Figure 4 in Sec 3.4 intentionally demonstrates that **simply increasing the number of execution LLMs without appropriate coordination mechanisms can actually decrease performance.** We attribute this to the challenge faced by the Coordinator LLM in managing an excessive number of Execution LLMs in Sec 3.4.
**To address this scalability challenge, we propose a global Nash equilibrium through local Nash equilibria by introducing additional coordinators in our manuscript (Sec 3.4).** This setup ensures that each Coordinator handles a reasonable amount of data, as demonstrated by the improvements reported in Figure 5. We provide the corresponding pseudocode and detailed explanation for scaling up ECON in Appendix A.4.
### Q3: Regarding The Inter-agent Communication in ECON
> I believe the direction of this area should improve the efficiency of inter-agent communication instead of giving it up.
We would like to clarify that ECON does not eliminate inter-agent communication entirely, but rather **adopts an incomplete-information perspective that minimizes communication** (as recognized by Reviewers sVjj and hmqe). ECON's optimization objective focuses on achieving consensus and constructing implicit communication among execution LLMs.
**Although ECON uses implicit inter-agent communication, it does not conflict with explicit inter-agent communication.** We conducted an additional experiment to demonstrate this by incorporating explicit interaction into ECON (turning it into a complete-information formulation). Performance improves by 1.1% while token consumption increases by 42.4% on average, demonstrating the potential to achieve stronger performance with a sufficient token budget.
|Dataset|LLaMA3.1|Complete Info (%)|ECON (%)|Token Consumption|
|---|---|---|---|---|
|GSM8K|8B|81.4|80.3|+35.6%|
||70B|96.1|96.7|+42.7%|
|GSM-Hard|8B|30.2|29.9|+62.3%|
||70B|53.6|51.4|+40.9%|
|MATH|8B|59.6|60.4|+33.8%|
||70B|83.1|81.5|+39.4%|
### Q4: The Effectiveness of Different Modules of the Mix Network
Theoretically, the design of our mixing network optimizes local policies to improve the global objective; the monotonicity guarantee is demonstrated in Appendix A.5. **We conducted additional ablation experiments to validate this point by removing the concatenation of $e$ and by removing the belief encoder**, showing that each part of ECON is essential, as the experiments demonstrate. The baseline result is ECON.
|Dataset|LLaMA3.1|No concat(%)|No encoder(%)|
|---|---|---|---|
|GSM8K|8B|85.9(-1.8)|84.1(-3.4)|
||70B|93.6(-3.1)|91.1(-5.6)|
|GSM-Hard|8B|24.9(-5.0)|21.7(-8.2)|
||70B|47.2(-4.2)|42.6(-8.8)|
|MATH|8B|55.6(-4.8)|52.3(-7.1)|
||70B|77.0(-4.4)|75.3(-6.2)|
### Q5: Why Are Multi-LLM Methods Even Worse Than Single Ones?
Our abstract needs clarification: **the 10.9% for MA-LLM is averaged across six datasets, while the 11.2% for Single is averaged across five datasets (excluding TravelPlanner)**. Since the single-agent approach with open-source models has a 0% pass rate on TravelPlanner, ECON's improvement ratio cannot be calculated there. With TravelPlanner excluded, ECON's comparative results on five datasets are reported in Sec 3.2, which indicate that single-LLM methods are worse than multi-LLM methods.
### Q6: The Missing References Need to Discuss
We agree that it's necessary to incorporate discussions of these works. We will update our related work section in the revised version.
### Q7: Clarification about Figure 2
Figure 2 illustrates both the inference phase and optimization phase of ECON. The inference phase (left part) is primarily based on LLMs, where the Coordinator LLM and Execution LLMs work together to generate solutions. For a more detailed understanding of this process, we provide a comprehensive case study of the inference process in Appendix D.1. | Summary: This paper proposes a Multi-agent LLM framework (ECON) to improve the communication efficiency and consensus. It formulates multi-agent LLM as Decentralized Partially Observable Markov Decision Process. It reduces the token consumption from incomplete-information perspective, and optimizes towards Bayesian Nash Equilibrium to improve the consensus. The proposed method has a lower regret bound, making it possible to scale up effectively. Experimental results on six reasoning tasks show that ECON surpasses single-agent solutions and outperforms existing multi-agent approaches with a lower token consumption.
Claims And Evidence: This paper claims the proposed ECON can ensure the Multi-agent LLM system can converge towards Bayesian Nash Equilibrium, thus enhancing the degree of consensus and performance.
Methods And Evaluation Criteria: ECON uses a Coordinator-Executor architecture. Each Execution LLM keeps a belief network that updates its belief state with the local trajectory and the current observation. A shared belief encoder aggregates the belief states from all agents to model coherent joint behavior.
However, Section 2 mainly focuses on the optimization phase, and it is not clear how the inference phase works in ECON. What each agent observes, what strategy the coordinator provides, how the Coordinator aggregates the answers, etc., are not mentioned. This makes the method part confusing.
The evaluation metric is accuracy on four mathematical reasoning datasets and the common sense reasoning dataset. On TravelPlanner, the metric is the final pass rates.
Theoretical Claims: I didn't check the correctness of the proof
Experimental Designs Or Analyses: The experimental section can be divided into 4 parts:
(1) performance against baseline models across 5 reasoning tasks with 3 llama models and 1 mistral model, and complex planning on Travelplanner with GPT-4 turbo
(2) strong coordinator with weak execution & weak coordinator with strong execution to evaluate the heterogeneous Execution
(3) the result of scale-up agent numbers.
(4) ablation study.
These experiments and analyses are comprehensive and reasonable, demonstrating the effectiveness of ECON in different settings.
Supplementary Material: No supplementary material is provided
Relation To Broader Scientific Literature: It is related to the multi-agent reinforcement learning and methods in game theory
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
The idea is interesting and novel.
Theoretical proofs are provided to guarantee the effectiveness of ECON.
The experiments are comprehensive and the findings are insightful.
Weaknesses: While Section 2 provides the formulation and optimization of ECON, it is not clear how ECON works during the inference phase. The meaning of notations such as O_i and e_i and their instantiation in a concrete task should be introduced (e.g. text or hidden representation)
Other Comments Or Suggestions: Figure 2, there is no red gradient flow.
Figure 3, it is better to group the bins by LLMs instead of the baseline methods because it should highlight the difference between different baselines rather than different LLMs.
Questions For Authors: the prompt embedding e_i = [T_i, p_i], where T_i is the temperature and p_i is the threshold for repetition. How do T_i and p_i construct the token embedding for the prompt?
How does ECON scale up to more agents? Does it require additional training?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer hmqe:
Thanks for your constructive review, we provide some response regarding your question:
### Q1: About the inference phase of ECON
> Section 2 mainly focuses on the optimization phase and it is not clear **how the inference phase works in ECON**. What is the observation of each agent, what is the strategy provided by the coordinator, how Coordinator aggregates the answers, etc are not mentioned.
We provide a more intuitive and detailed explanation of our framework below:
1. Intuitive Framework Explanation
During the **inference phase** of ECON, a Coordinator LLM generates an informative strategy and a format based on the input question. These are then disseminated to the Execution LLMs, which independently produce their respective answers. Finally, the Coordinator LLM aggregates these answers to form a final commitment. A detailed case study demonstrating this inference process is provided in Appendix D.2.
In the **optimization phase** of the ECON framework, we update each Execution LLM's belief state through its belief network, aggregate these belief states via the belief encoder to form group-level information, then the mixing network outputs the final Q_tot. After the current answer is completed, we calculate loss and reward based on the commitment, which are then used to update the parameters of each Execution LLM's belief network and the belief encoder. We describe our specific DEC-POMDP components setting as follows:
2. Detailed DEC-POMDP Components
**Partial Observation:**
In our framework, each agent receives **only three types of information: the question it needs to answer, the format and strategy provided by the coordinator, and the final commitment** (aggregated response) after the coordinator combines all LLM answers. This setting prevents Execution LLMs from directly accessing the outputs of others.
**Local History:**
Local history includes its actions and received observations. This local history enables the agent to learn from past behaviors and environmental feedback, refining its situational understanding and updating its belief state accordingly.
**Action: Prompt embedding**
The **action space** is represented by the **prompt embedding**, which encodes the agent's action output. The prompt embeddings directly influence the LLM's generation process, adjusting the balance between exploration and exploitation.
**State Transition:**
The state transition in our framework is essentially the update of the agent's **belief state**. The belief network updates the belief state based on the agent's local history and current observation, influencing future actions.
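The components above can be summarized as a small sketch (all dimensions, the linear belief-update map, and the mean-pooling encoder here are illustrative assumptions, not ECON's actual networks): each agent updates its belief from its observation and local history, a belief encoder aggregates the beliefs, and a mixing step combines per-agent Q-values into a global value with non-negative weights to preserve monotonicity.

```python
import numpy as np

rng = np.random.default_rng(0)

def belief_update(belief, obs, history, W):
    # Hypothetical belief-network step: next belief from the current belief,
    # the agent's partial observation, and local-history features.
    x = np.concatenate([belief, obs, history])
    return np.tanh(W @ x)

n_agents, d_belief, d_obs, d_hist = 3, 4, 4, 4
W = rng.standard_normal((d_belief, d_belief + d_obs + d_hist))

# Per-agent belief states from partial observations and local histories.
beliefs = [belief_update(np.zeros(d_belief),
                         rng.standard_normal(d_obs),
                         rng.standard_normal(d_hist), W)
           for _ in range(n_agents)]

# Belief encoder (stand-in): aggregate individual beliefs into a
# group-level representation.
group = np.mean(beliefs, axis=0)

# Mixing step: combine per-agent Q-values into Q_tot; non-negative
# weights keep dQ_tot/dQ_i >= 0 (in ECON the weights are conditioned
# on the group-level belief representation).
q_agents = rng.standard_normal(n_agents)
mix_w = np.abs(rng.standard_normal(n_agents))
q_tot = float(mix_w @ q_agents)
```

This is only the forward pass; in ECON the loss and reward computed after the commitment would then update the belief-network and encoder parameters.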
### Q2: Clarification about prompt embedding:
>the prompt embedding e_i = [T_i, p_i], where T_i is the temperature and p_i is the threshold for repetition. How do T_i and p_i construct the token embedding for the prompt?
Thank you for your insightful comments. We agree that our description of the prompt embedding requires clarification.
- The prompt embedding is actually the action generated from the belief state; we use this expression because it embeds control parameters that directly influence the Execution LLM's generation process. We will rename it in the revised manuscript to avoid confusion with the embedding of the answer output.
### Q3: About the scalability of ECON:
> How does ECON scale up to more agents? Does it require additional training?
We would like to clarify how we make our framework scalable and yes, scalable ECON require additional training:
- Our solution enhances scalability by forming a **global Nash equilibrium through local Nash equilibria**, introducing additional coordinators. Simply increasing the number of Execution LLMs would cause performance degradation since coordinator LLMs cannot handle excessive information (especially for weaker models).
- As shown in **Figure 4**, merely increasing Execution LLMs causes performance decline as coordinator LLMs struggle to manage numerous agents, particularly with weaker models.
- Results in **Figure 5** demonstrate that our scaled-up system (9 Execution LLMs, 3 coordinators, 1 central LLM) achieved an **18.1% improvement** over the baseline system (3 Execution LLMs, 1 coordinator), indicating potential for further scaling.
- More details about ECON's scalability can be found in **Section 3.4**, with detailed explanation and pseudocode in **Appendix A.4**.
## Q4: Figure Presentation Issues
> In Figure 2, the red gradient flow is missing. In Figure 3, it would be more effective to group the bins by LLMs rather than baseline methods, as this would better highlight differences between baselines rather than between different LLMs.
**Response:**
We appreciate your constructive feedback. We will address these visualization issues in the revised manuscript by adding the missing red gradient flow to Figure 2 and reorganizing Figure 3 to group bins by LLMs, which will indeed better emphasize the comparative performance.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' detailed rebuttal, which has solved most of my concerns.
Overall, I like the idea of this work but suggest rewriting the method section to include these necessary descriptions and make it easier to understand. Besides, the additional training required for scaling up makes the proposed method less flexible and adaptive, which is one of the main limitations of ECON. A discussion of the training cost for convergence with different numbers of agents would provide more insight into this aspect.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer hmqe,
We sincerely appreciate your thoughtful review and positive assessment of our work. In response to your valuable feedback, we will restructure and revise the method section (Section 3.1) to provide clearer definitions and more comprehensive descriptions. The updated section will follow a more logical flow, enhancing the accessibility and readability of the final manuscript.
Regarding the discussion about the training cost for convergence in terms of different numbers of agents, we have conducted an additional comparison between ECON (1 coordinator, 3 execution LLMs) and scaled ECON (1 central model, 3 coordinators, 9 execution LLMs) using LLaMA 3.1 70B on the MATH dataset. The scaling increases the number of trainable parameters from 1.7M to 8.9M (5.2×), requiring 1.7× more convergence episodes while achieving an 18.1% overall performance improvement. We will expand our conclusion section to include a more thorough discussion of this trade-off by providing detailed analysis on the correlation between training costs and the number of agents, as well as examining their convergence patterns under different scaling configurations.
Thank you once again for your constructive feedback, which has been invaluable in improving the quality and impact of our work.
| Parameter | ECON | Scaled ECON | Change |
| --- | --- | --- | --- |
| **Training Configuration** | | | |
| Episodes | 150 | 250 | ×1.67 |
| Buffer Size | 32 | 64 | ×2.0 |
| Batch Size | 16 | 24 | ×1.5 |
| Update Interval | 8 | 12 | ×1.5 |
| **Network Architecture** | | | |
| Entity Dimension | 256 | 384 | ×1.5 |
| Belief State Dimension | 128 | 192 | ×1.5 |
| Attention Heads | 4 | 8 | ×2.0 |
| Transformer Blocks | 2 | 3 | ×1.5 |
| Feed-forward Size | 1024 | 2048 | ×2.0 |
| **Model Complexity** | | | |
| Trainable Parameters | 1.7M | 8.9M | ×5.2 |
| Convergence Episodes | 99 | 164 | ×1.7 |
Summary: The paper introduces ECON, a hierarchical reinforcement learning framework that optimizes multi-agent reasoning in Large Language Models (LLMs) by leveraging Bayesian Nash Equilibrium (BNE) under incomplete information. By modeling collaboration as a Decentralized Partially Observable Markov Decision Process (DEC-POMDP), ECON ensures each LLM agent independently generates solutions based on local beliefs and a shared coordinator strategy, minimizing inter-agent communication. The framework employs a Coordinator-Executor architecture: Execution LLMs produce answers using belief networks that model probabilistic expectations of others’ behaviors, while a Coordinator LLM aggregates responses into a global commitment. Theoretical contributions include proving BNE existence and achieving a sublinear regret bound, outperforming linear regret in non-BNE methods. Key innovations include belief networks for reducing token overhead, dynamic reward mechanisms balancing task performance and collaboration, and hierarchical Nash coordination for scalability.
Claims And Evidence: **Sublinear Regret**
The existence of Bayesian Nash Equilibrium (BNE) is rigorously proven using Glicksberg’s Fixed Point Theorem (Appendix A.1), with assumptions like strategy space compactness and quasi-concave payoffs explicitly stated. The sublinear regret bound is derived under standard RL assumptions (bounded rewards, concentrability), supported by a detailed decomposition of Q-value differences (Appendix B.2).
Critical assumptions like concentrability (Assumption A.8) and posterior alignment (Assumption A.4) are central to the regret analysis but not empirically validated.
**Empirical Results**
Performance gains over single-LLM (10.9%) and multi-agent baselines (11.2%) are validated across six benchmarks (Tables 1–3, Figures 3–5), with detailed task setups (Appendix B.5) and hyperparameters (Appendix B.6). Token efficiency (21.4% reduction vs. Multi-Agent Debate) is demonstrated via token usage tables (Table 3), though prompts and strategies are standardized (Appendix D).
While performance improvements are reported, statistical significance tests (e.g., confidence intervals, p-values) are absent.
Methods And Evaluation Criteria: **Benchmarks**:
Tasks like GSM8K, MATH, and TravelPlanner are standard for evaluating reasoning and planning in LLMs, ensuring comparability with prior work. Including heterogeneous model experiments (Table 2) tests robustness, though deeper analysis of weaker models’ contributions would strengthen validity.
**Metrics**:
Accuracy and token consumption directly measure performance and efficiency, addressing the paper’s core claims. Scalability tests (up to 9 LLMs) validate the framework’s practical utility for large ensembles.
Theoretical Claims: I have checked the proof, but I think the proof is hard to understand. For example, in Appendix B.2, the proof is an outline rather than a rigorous proof. I think this part should be rewritten because it covers one of the key theoretical contributions: the sublinear convergence rate.
Experimental Designs Or Analyses: I have checked its main experiments (Sec 3.2), weaker or stronger execution LLMs (Sec 3.3), scaling up to multiple execution LLMs by hierarchical coordination (Sec 3.4), and the ablation study on different components of the model (Sec 3.5). I think the results are promising.
Supplementary Material: I have reviewed the theory part and Appendix D.
Relation To Broader Scientific Literature: - Introduces Bayesian Nash Equilibrium (BNE) to formalize consensus in multi-agent LLMs,
- Proposes a Coordinator-Executor architecture where a Coordinator LLM guides Execution LLMs via strategies, combining CTDE’s centralized coordination with the flexibility of LLM-based reasoning. This extends hierarchical RL to static, non-trainable LLM agents.
- Bridges game-theoretic equilibria with modern LLM ensembles, offering a blueprint for principled multi-agent reasoning.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: No.
Other Comments Or Suggestions: In the Appendix, the section numbering is incorrect. For example, "Together API Integration for ECON" should be section C I think.
Questions For Authors: - Do the results of ECON in Table 3 contain the token usage of the coordinator models?
- Table 4 is confusing to me: why does R3 not appear in the column names, and where is S3 in the table?
- I suggest the authors explain what the local history $\tau$ and partial observation $O$ consist of.
- What is U in the definition in Sec 2.2? Can you write its math form? What does $\mathcal{R}$ consist of? I think $\mathcal{R}=\{ r_{coordinator}, r_1,\cdots, r_n \}$. I also notice that the notation $R$ is used in both the reward design and the regret $R(T)$; please avoid reusing notation names.
- Is $\theta$ in Sec 2.1 related to $\theta^B$ in Sec 2.2? It seems that $\theta^B$ denotes the model's parameters, but $\theta$ is something like the concatenation of $b_i$ and $\tau_i$. Can you write the math definition of $\theta$? I think it should be $(\tau_i, O_i)$, i.e., the past history and current observation.
- The equation in Sec 2.3 is a little confusing: $B_i$ outputs $Q_i$ and $e_i$.
- Can $e_i$ be called a prompt embedding? Based on your writing, I think $T_i$ and $p_i$ are scalars and $e_i$ is a two-dimensional vector, so I wonder why this can be called an "embedding". Can you explain why you concatenate it with ${\bf E}_i$? As stated in the paper, $T_i$ and $P_i$ control the sampling, so why do you connect it with the belief encoding?
- $Q_i$ is the output of $B_i$; therefore, $\phi$ should be part of $\theta^B$, I think? But the current writing does not reflect this relation, and it is confusing where $\phi$ comes from.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Dear reviewer sVjj, we'd like to thank you for your careful readings and valuable comments, we provide point to point response as follow:
### Q1: The proof of sublinear convergence rate (Appendix B.2)
We acknowledge that the proof in Appendix B.2 would benefit from a more rigorous presentation, and we enhance it with the following improvements in the revised manuscript. While we cannot include the full rewritten proof in this rebuttal due to space limitations, we outline the procedure as follows:
- Formally justify the $O(t^{-1/2})$ convergence rates for both Q-function estimation errors and policy suboptimality through stochastic approximation theory and convex optimization results.
- Explicitly demonstrate how our learning rate schedule $\eta_t = \eta_0/\sqrt{t}$ ensures these convergence properties under stated assumptions.
- Include the complete mathematical derivation from regret definition to final bound, showing:
- Precise Q-value decomposition into error terms
- Rigorous bounding of each error component
- Explicit summation across agents and time steps
- Formal harmonic sum bound: $\sum_{t=1}^T 1/t^{1/2} \leq 2\sqrt{T}$
- We present the complete derivation of the final $O(N\sqrt{T}/(1-\gamma))$ bound with all intermediate mathematical steps included.
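As a side note, the harmonic-type bound invoked in the derivation above, $\sum_{t=1}^T t^{-1/2} \leq 2\sqrt{T}$, can be checked numerically. The snippet below is an illustrative sanity check we add here for the reviewer's convenience, not part of the formal proof:

```python
import math

def partial_sum(T: int) -> float:
    """Compute sum_{t=1}^{T} 1/sqrt(t)."""
    return sum(1.0 / math.sqrt(t) for t in range(1, T + 1))

# The bound follows from comparing the sum with the integral of 1/sqrt(s),
# since 1/sqrt(t) <= integral over [t-1, t] of 1/sqrt(s) ds; here we
# simply verify it numerically for a few horizons T.
for T in (1, 10, 100, 10_000):
    assert partial_sum(T) <= 2.0 * math.sqrt(T)
```

The bound is what turns the per-step $O(t^{-1/2})$ error rates into the final $O(\sqrt{T})$ regret after summing over time steps.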
### Q2: The token consumption of ECON
Yes, the token usage reported in Table 3 includes the coordinator LLM's output strategy and formatting instructions, the execution LLMs' answers, and the coordinator LLM's aggregated answers. We provide a case study of the ECON inference process in Appendix D.1 and a token usage breakdown for strategy formulation and formatting in Appendix D.2.
### Q3: Regarding the confusion caused by Table 4
We apologize for the presentation of Table 4. You are correct that R3 and S3 are missing from the table, which makes it difficult to interpret. We have provided a revised Table 4 as follow:
|$R_1$|$R_2$|$R_3$|ECON|
|-|-|-|-|
|✓|✗|✓|77.55|
|✓|✗|✗|74.32|
|✓|✓|✗|76.21|
|Random|||62.71|
|$S_1$|$S_2$|$S_3$|ECON|
|-|-|-|-|
|✓|✗|✗|71.35|
|✗|✓|✗|72.31|
|✗|✗|✓|81.47|
### Q4: Explanation of notations
> I suggest the author to explain what the local history and partial observation consist of.
>
We provide a more detailed explanation of the local history and partial observation; due to space limitations, **please refer to the response to reviewer 7mra Q2**.
> What is U in the definition of Sec 2.2, can you write the math form? What R consists of?
>
**Utility Function**: In our framework, the utility function $U_i$ represents the expected cumulative reward for agent i, calculated as:
$$U_i(\pi_i(\theta_i), \pi_{-i}(\theta_{-i}), \theta_i, \theta_{-i}) = \mathbb{E} \left[\sum_{t=0}^{\infty} \gamma^t r_i^t \mid \pi_i, \pi_{-i}, \theta_i, \theta_{-i} \right],$$
where $r_i^t$ is the reward at time step $t$; the components of this reward function are detailed in Sec 2.3 under "Reward Design." And $\mathcal{R}$ consists of $\{r_i\}$, i.e., $r_1, r_2, \ldots, r_n$.
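As a toy numerical illustration of this discounted-utility definition (the per-step reward values below are made up for the example and are not from the paper):

```python
def discounted_utility(rewards, gamma=0.9):
    """Finite-horizon version of U_i: sum over t of gamma^t * r_t."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# With gamma = 0.5 and per-step rewards [1, 0, 1]:
# 0.5^0 * 1 + 0.5^1 * 0 + 0.5^2 * 1 = 1.25
u = discounted_utility([1.0, 0.0, 1.0], gamma=0.5)
print(u)  # 1.25
```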
> Can you write the math definition of θ?
>
Yes, you are right: $\theta$ in Sec 2.1 is defined as the local history $\mathbf{\tau}_i^t$ and observation $O_i^t$.
### Q5: Clarification of Sec 2.3
> I wonder why this can be called "embedding", and can you explain why you concatenate it with Ei
It is confusing where ϕ comes from.
>
#### **Regarding Prompt Embeddings**
- The prompt embedding is actually the action generated by the belief state; we use this expression because it embeds control parameters that directly influence the LLM's generation process. We will rename this in the revised manuscript to avoid confusion with the embedding of the answer output.
- Our approach encodes the action $e$ and the global state $E_t$ to jointly optimize the global $Q$-function $Q_{tot}$. The former focuses on local action decisions, enhancing diversity and aiding cooperative exploration, while the latter captures comprehensive group context to foster more efficient inter-agent coordination. We conducted additional ablation experiments to validate this point by removing the concatenation of $e$ and removing the belief encoder; **please refer to the response to reviewer t4DV Q4**.
#### **Regarding φ**: You are correct that there is an unclear relationship between $\phi$ and $\theta_i^B$. In our revised manuscript, we will explicitly denote $\phi_i \subset \theta_i^B$ to indicate that Q-value function parameters are a subset of the belief network parameters.
### Q6: Deeper analysis of weaker models
> Though deeper analysis of weaker models' contributions would strengthen validity.
We have conducted additional experiments with smaller language models and found that our framework still provides significant improvements over baseline approaches; the comparison is made with few-shot CoT.
|Dataset|Model|Few-shot CoT (%)|ECON (%)|
|---|---|---|---|
|GSM8K|QWEN2.5 3B|79.1|84.9|
||LLaMA3.1 8B|84.5|87.7|
|GSM-Hard|QWEN2.5 3B|19.7|21.3|
||LLaMA3.1 8B|27.6|29.9|
|MATH|QWEN2.5 3B|42.6|49.7|
||LLaMA3.1 8B|51.9|60.4|
Summary: This paper introduces ECON, a multi-agent framework designed to enhance the reasoning capabilities of LLMs. ECON models the multi-LLM setup as a DEC-POMDP with incomplete information, employing a Bayesian Nash Equilibrium perspective. Specifically, multiple “Execution LLMs” reason in parallel, each maintaining its own belief network and generating local solutions under partial information. A separate “Coordinator LLM” orchestrates consensus by aggregating and evaluating these local solutions, issuing guidance to all agents. This structure aims to achieve a BNE, in which no single LLM agent can unilaterally improve its outcome, given its beliefs about the other agents.
Claims And Evidence: Yes. In general, the claims are supported by (1) regret bounds derived from RL theory and DEC-POMDP frameworks, and (2) empirical results across diverse tasks.
Methods And Evaluation Criteria: Yes. The hierarchical reinforcement learning approach and the evaluation criteria (both theoretical and empirical) are appropriate.
Theoretical Claims: I have skimmed through the proof of the theoretical results and believe they are intuitively correct.
Experimental Designs Or Analyses: The authors test on well-established datasets, including math reasoning, commonsense QA set, and planning, using models with different sizes.
Supplementary Material: I have briefly checked the theoretical parts in the appendix while did not review the experimental part, e.g., hyperparameters and prompt templates.
Relation To Broader Scientific Literature: In general, this extends prior “multi-agent debate” by proposing incomplete-information modeling and rigorous regret analysis, bridging a gap between purely heuristic debate approaches and formal MARL frameworks.
Essential References Not Discussed: This work did a good job in reviewing and discussing related references.
Other Strengths And Weaknesses: I have a mixed feeling about this work, stated in the following:
Strength:
- I strongly believe that incorporating a more principled approach into multi-agent LLM collaboration (e.g., a game-theoretical one) is highly valuable. This work makes advances in this direction and can inspire future works.
- This work is a nice combination of theory and practice. In particular, both theoretical regret analyses and empirical test results are provided.
Weakness:
- The presentation could be largely improved, in particular by providing more clarity and intuition about the framework. For example, the mapping into a DEC-POMDP is a key step, yet it is presented in a very abstract way, without explanation of the related concepts (e.g., how is the partial observation received, what is the state transition, why is the action "generating prompt embeddings"). Without these, the connection is very hard to understand.
- Also, the learning target should be explained better. In general, we wish the system to perform well in an aggregated fashion (i.e., we only care about the final output from the aggregator), while this work focuses on BNE. A better connection should be drawn.
- The scalability of the proposed training method should also be discussed. The previous works, while less principled, can be flexibly extended without training. This work, however, requires certain training steps to enable collaboration. It is unclear whether such training is valuable in terms of flexibility and efficiency.
Other Comments Or Suggestions: NA
Questions For Authors: My main concerns are listed in "Other Strengths And Weaknesses". It would be much appreciated if the authors could discuss these aspects.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Dear Reviewer 7mra:
Thanks for your constructive review, we provide point to point response regarding your question:
### Q1: The Clear Definition of DEC-POMDP and the ECON Framework
**Reply:** We provide a more intuitive and detailed explanation of our framework below:
----
### Detailed DEC-POMDP Component
**Partial Observation:**
In our framework, each agent receives **only three types of information: the question it needs to answer, the format and strategy provided by the coordinator, and the final commitment** (aggregated response) after the coordinator combines all LLM answers. This setting prevents Execution LLMs from directly accessing the outputs of others.
**Local History:**
Local history includes its actions and received observations. This local history enables the agent to learn from past behaviors and environmental feedback, refining its situational understanding and updating its belief state accordingly.
**Action: Prompt embedding**
The **action space** is represented by the **prompt embedding**, which encodes the agent's action output. The prompt embeddings directly influence the LLM's generation process, adjusting the balance between exploration and exploitation.
**State Transition:**
The state transition in our framework is essentially the update of the agent's **belief state**. The belief network updates the belief state based on the agent's local history and current observation, influencing future actions.
----
### Intuitive Framework Explanation
During the **inference phase** of ECON, a Coordinator LLM generates an informative strategy and a format based on the input question. These are then disseminated to the Execution LLMs, which independently produce their respective answers. Finally, the Coordinator LLM aggregates these answers to form a final commitment. A detailed case study demonstrating this inference process is provided in Appendix D.1.
In the **optimization phase** of the ECON framework, we update each Execution LLM's belief state through its belief network, aggregate these belief states via the belief encoder to form group-level information, then the mixing network outputs the final Q value. After the current answer is completed, we calculate loss and reward based on the commitment, which are then used to update the parameters of each Execution LLM's belief network and the belief encoder.
---
### Q2: Explanation of the Learning Target of ECON Framework
**Reply:** We clarify how setting BNE as the learning target leads to better aggregated answers as follows:
As mentioned in MAD, the key improvement in Multi-Agent Debate lies in **agreement intensity**: the degree to which agents agree with each other can provide significant performance gains. This principle underlies our approach, where **we set BNE as the optimization target to establish consensus among the agents.** We then analyze the total Bayesian regret of the joint policy (i.e., the aggregated answer output by the MA-LLM system) based on learning towards BNE in **Lemma 2.2** and **Appendices B.2–B.3**.
To further validate that learning towards BNE can lead to better aggregated answers, we provide additional experiments showing the actual performance differences of the ECON framework before and after achieving BNE as follow:
| Dataset | LLaMA3.1| Without BNE (%) | With BNE (%) |
|-|-|-|-|
| GSM8K |8B | 74.38 | 80.33 |
| |70B | 82.12 | 96.61 |
| | 405B | 92.36 | 99.17 |
| GSM-Hard | 8B | 21.73 | 30.71 |
| | 70B | 43.58 | 60.26 |
| | 405B | 51.54 | 65.91 |
| MATH | 8B | 55.92 | 71.45 |
| | 70B | 74.47 | 87.31 |
| | 405B | 82.31 | 94.78 |
----------
### Q3: Discuss the Issue of ECON Scalability
**Reply:** We would like to clarify how we make our framework scalable:
- Our solution enhances scalability by introducing additional coordinators to form a **global Nash equilibrium through local Nash equilibria**. Simply increasing the number of Execution LLMs would cause performance degradation, since coordinator LLMs cannot handle excessive information (especially for weaker models).
- As shown in **Figure 4**, merely increasing Execution LLMs causes performance decline as coordinator LLMs struggle to manage numerous agents, particularly with weaker models.
- Results in **Figure 5** demonstrate that our scaled-up system (9 Execution LLMs, 3 coordinators, 1 central LLM) achieved an **18.1% improvement** over the baseline system (3 Execution LLMs, 1 coordinator), indicating potential for further scaling.
- More details about ECON's scalability can be found in **Section 3.4**, with detailed explanation and pseudocode in **Appendix A.4**. | null | null | null | null | null | null |
A Market for Accuracy: Classification Under Competition | Accept (poster)
Summary: This paper examines a machine learning (ML) model market for classification tasks. Specifically, the authors assume that multiple ML model providers compete for market share. Each user in the market randomly selects an ML provider that delivers an accurate prediction, and the market share of each provider is defined as the expected number of users choosing its model. The authors analyze the fundamental properties of best-response classification in simplified 2 × 2 accuracy markets and markets with threshold classifiers. They show that best-response classification can benefit both ML providers and consumers. Additionally, they propose an approach to effectively solve best-response classification using real finite samples and predictions from other ML providers. Synthetic and real-world experiments validate the theoretical findings.
---
My score remains unchanged after the rebuttal.
Claims And Evidence: **(Pro)** The theoretical results are generally sound and are supported by both theoretical analysis and experiments.
**(Con)** Best-response classification may be impractical in real-world settings since ML providers typically lack access to competitors’ predictions. While Equation (10) only requires information on the number of accurate predictions for each sample, obtaining this data in practice remains challenging. It would be helpful for the authors to discuss the implications of cases where this information is unavailable or biased.
**(Con)** The theoretical results are primarily based on the $N=2$ setting. However, in real markets, the number of ML model providers is often much larger. Extending the theoretical analysis to a general $N$-provider setting would strengthen the paper.
Methods And Evaluation Criteria: **(Pro)** The proposed method is theoretically sound and validated through experiments.
Theoretical Claims: I did not verify the details of the theoretical claims, but the results appear to be generally sound.
Experimental Designs Or Analyses: **(Pro)** The experiments are well-designed to validate both the theoretical findings and the proposed method. They also explore more general settings beyond those covered in theory, which is valuable.
Supplementary Material: I briefly reviewed the proofs.
Relation To Broader Scientific Literature: Compared to Ben-Porat & Tennenholtz (2019), this work focuses specifically on classification problems and provides a more detailed analysis of best-response dynamics.
Essential References Not Discussed: No essential references appear to be missing.
Other Strengths And Weaknesses: No additional strengths or weaknesses were identified.
Other Comments Or Suggestions: **(Pro)** The paper is generally well-written.
Questions For Authors: See the concerns listed above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Thank you for the insightful feedback! We are pleased that you appreciated the soundness of our results and found the empirical results to complement them nicely. We would like to address the questions laid forth in your review, and hopefully alleviate some concerns:
**”Best-response classification may be impractical in real-world settings since ML providers typically lack access to competitors’ predictions”**
Indeed, our analysis relies on the simplifying assumption that players have complete knowledge of $\kappa_{-i}$, the number of other correct firms per $x$. This, however, need not be taken to imply that players share such information. Instead, we view this as simplifying a process in which each player has access to (some) information regarding competition. For example, there are markets where firms are likely to have a good sense of which user groups the other firms excel on or target. Another example is cases where a firm performs market research and obtains exact information on a subset of users. A third example is firms having coarse information, such as knowing whether users subscribed to a different provider.
To investigate this idea, **we have added an additional experiment on partial and coarse information**, in which firms have inexact estimates $\hat{\kappa}_{-i}$, and therefore learn with inexact weights. We consider two settings:
- **Coarse:** For each user, a player has knowledge only of whether there is at least one other accurate player, i.e., $\hat{\kappa}_{-i}$ is either 0 or rounded down to 1.
- **Partial:** Players know the true $\kappa_i$ for a random subset of users, and use this to make inference about the remaining $\hat{\kappa}_j$ (we use kNN).
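To make the *partial* setting concrete, the kNN extrapolation can be sketched as follows. This is a minimal, self-contained illustration on synthetic data — the feature construction, the plain-NumPy nearest-neighbor estimator, and all names are assumptions for the example, not our actual experimental code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic users: 2-D features, and for each user the (unknown to the
# player) number of other providers that predict it correctly, in {0, 1, 2}.
X = rng.normal(size=(500, 2))
kappa = (X[:, 0] + X[:, 1] > 0).astype(int) + (X[:, 0] > 0.5).astype(int)

# Market research reveals kappa only for a random subset of users.
known = rng.choice(len(X), size=100, replace=False)
unknown = np.setdiff1d(np.arange(len(X)), known)

def knn_estimate(x, X_known, kappa_known, k=5):
    """Estimate kappa for user x as the mean over its k nearest researched users."""
    dists = np.linalg.norm(X_known - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return kappa_known[nearest].mean()

est = np.array([knn_estimate(X[i], X[known], kappa[known]) for i in unknown])
print("mean absolute estimation error:", np.abs(est - kappa[unknown]).mean())
```

The estimates for the uninformed users then serve as the weights $\hat{\kappa}_j$ used during best-response learning.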
Our results suggest that despite partial or coarse information, overall trends are preserved. However, the cost of misinformation is that social welfare is lower (and hence also total market share). For *coarse*, this reduction is 37% for $n=3$, and up to 45% for higher $n$. For *partial*, we see for $n>3$ that market research on just 100 users results in a welfare gain that is >70% of that under full information, suggesting that extrapolation can work well in our setting.
Additionally, this gain increases with the amount of user information. We found that providers are able to learn near-optimal best responses even when market research has been done on only 20% of the population. The gain in welfare is summarized in the following table for the COMPAS-arrest dataset:
| # providers / # research points | 100 (2.7%) | 200 (5.4%) | 400 (10.8%) | 800 (21%) | Full information |
|:---:|:---:|:---:|:---:|:---:|:---:|
| $n=2$ | +17% | +21% | +23% | +26% | +29% |
| $n=3$ | +25% | +33% | +35% | +37% | +42% |
| $n=4$ | +34% | +38% | +42% | +44% | +46% |
| $n=5$ | +38% | +42% | +45% | +47% | +48% |
| $n=6$ | +41% | +44% | +46% | +48% | +48% |
These findings show promise to the approach of extrapolating to the uninformative user sectors, and suggest that even in real-world settings our approach can be worthwhile.
We will be sure to present these new results using the extra page of the final version, and provide full details of the experimental setup and complete results in the Appendix.
**”Extending the theoretical analysis to a general N-provider setting would strengthen the paper”**
Further analysis for multiple players is certainly interesting, and it is indeed our hope that this work inspires future work in multi-player settings. We will note though that this is not without challenges. Take, for example, our notion of partial discrepancy ($\delta$), which offers a simple expression for utility in 2-player settings. As $n$ grows, the number of partial discrepancies between any 2 sets of players grows exponentially, and so extending our results to general $n$ becomes complex. We would also like to point out that the works of Ben-Porat & Tennenholtz (upon which we base our setting) began with a work on 2 players (2017), and later released an entirely separate work to discuss the $n>2$ player setting (2019).
Thanks again for the helpful feedback. We are happy to discuss further any follow-ups regarding the new experiments and topics discussed above. If you found these to your satisfaction, we would greatly appreciate it if you would consider raising your score.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I will keep my score unchanged.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive review and enlightening feedback!
Summary: This paper defines a simplified economic model to consider how the dynamics of an “accuracy market” between multiple firms would play out. In this model, each firm has a model that makes predictions about a user, and users are assumed to choose a provider with the highest accuracy. The paper lays out several theoretical and empirical results. First, it shows that firms must consider both their own accuracy and the accuracy of other players in the market, adopting a weighted objective approach. Second, it shows that adopting this strategy benefits both the model providers and the users, as defined through a welfare metric. It then considers several datasets and experiments, including both with synthetic data and simple real world datasets.
Claims And Evidence: I believe the claims made in the paper regarding the behavior of these idealized markets are supported both by the proofs and empirical experiments. My primary reservations with the work are regarding the assumptions and relevance to ICML, as discussed in the “Other Strengths and Weaknesses” section below.
Methods And Evaluation Criteria: As far as I can tell, the experimental setups make sense for the idealized market setting proposed in the paper.
Theoretical Claims: I did not check the details of the proofs as they are outside my area of expertise.
Experimental Designs Or Analyses: My overall impression of the experimental designs is that they are sound, but I did not have the opportunity to dive deeply into any code or data to verify further.
Supplementary Material: No, I did not review the supplementary material.
Relation To Broader Scientific Literature: The work seems to be primarily related to other work on algorithmic markets. In particular, the work seems to build on the 2019 Ben-Porat and Tennenholtz paper on regression.
Essential References Not Discussed: I am not aware of any essential references that were not discussed.
Other Strengths And Weaknesses: The strengths of this paper are its clarity and novelty. The paper does a good job of showing why, in this idealized setup, firms need to consider their competitors. Based on my limited search, it also seems as though this is the first paper to consider a market-based approach in the classification regime (although they do cite work that does similar treatment for regression models). Several results are formally proven, and I do think it is likely to inspire discussion among the right set of readers.
The primary weaknesses of this work are its relevance to ICML and its practicality. First, the work seems like it would be more appropriate for an economics venue than a machine learning one. While it does introduce an economic formalism to a machine learning topic, it feels much heavier on the economics side. I acknowledge that this may be due to my own lack of knowledge regarding economic approaches to ML, but I had a hard time following several parts of the paper despite being an experienced ML practitioner. I also note that the primary work that the paper builds on, Ben-Porat and Tennenholtz, was published at the ACM Conference on Economics and Computation.
Second, the setting seems quite idealized for how ML actually happens in practice. For example, I noted right away in equation (1) that the choice to have a model making one prediction per user seems odd - if there are firms competing over users (for example, in social media), they would likely be making several predictions about a user’s preferences, and those predictions in aggregate would influence the user’s choice of firm. There also does not seem to be much consideration of the effect that the number of users has on the accuracy itself. Accuracy is not a quantity that can be purchased as in a traditional market - instead, the main way to get more accuracy is to have more users and thus get more data. I may have missed it, but I do not think that the model accounts for this relationship. I suspect that if it were taken into account, the market timing results would be different - a first mover would be more likely to be able to improve their accuracy more quickly because they have more data than competitors.
Because of its simplifying assumptions and shaky relevance to ICML, I would recommend that this paper be submitted to a different venue.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Why assume that there is only one prediction per user in equation 1 (and 7)? Are there any settings where this would be true?
2. Is it possible to summarize the results in table 2 in a more visual way? At the moment it is very hard to understand what is going on or easily spot the trends that are described in section 6.2.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review and efforts. We have addressed your concerns in our response below. As for the issue of whether our paper is a good fit for ICML – which seems to be a main concern – we hope that our response, combined with the other reviews for our paper, help in establishing its relevance and appropriateness. Given this, **we are hopeful that you will be willing to reconsider your position regarding acceptance**, and will gladly answer any further questions you may have.
**Relevance to ICML:**
ICML includes a large sub-community of researchers that study questions at the interface of learning and economics, with `game theory` being one of the conference’s stated topics of interest (see this ICML’s call for papers). Many of the papers we cite and share connections with have been published in leading machine learning venues (see references). In fact, we view our work as lying *more on the machine learning side* than typical works in this space. As for Ben-Porat & Tennenholtz (2019), please note that it is in fact a follow-up to Ben-Porat & Tennenholtz (2017) – which was published in NeurIPS.
**”The setting seems quite idealized for how ML actually happens in practice.”**
Studying learning in economic settings requires making simplifying assumptions. This is especially true when aiming to establish a theoretical understanding – which is one of our goals. This also allows us to build on earlier results (e.g., that best-response dynamics reach equilibrium) that rely on similar assumptions. Despite these limitations, one of our key points is that naively optimizing accuracy can be a bad strategy. If anything, this portrays conventional learning frameworks as “idealized” when confronted with an economic reality. Our hope is that our work serves as a step towards further exploration of learning in practical market settings.
**”Why assume that there is only one prediction per user..?”**
While not applicable to all scenarios, this modeling choice is appropriate for many real-world settings where each user makes only one decision and hence is in search of a single prediction. For instance, in housing or car sales, a user typically sells a single house or car, and therefore looks for a single accurate prediction about the worth of the entity (for example from Zillow as a real estate valuation predictor). This leads to a well-defined $(x, y)$ pair per user.
Other examples are cases where users interact with multiple items (e.g., recommending multiple products in social media sites), but the outcome of interest often remains singular at the user level. For example, a user may receive multiple predictions $(x, z)$, but only one true outcome $y$ (e.g. user satisfaction). This aligns with our setting where the goal is to model a single, well-defined outcome per user.
We agree that there are also settings in which users are associated with multiple labels. One idea is to model user decisions based on the probability of correct prediction $P(\hat{y} = y)$ (e.g., quantal response). However, equilibrium results for such a model remain an open question and are beyond our current scope. We believe our framework provides a solid starting point and can be extended to incorporate such user dynamics in future research.
**”The main way to get more accuracy is to have more users and thus get more data”**
This is a good point. In our work sample size comes in indirectly through competition: for player $i$, if other players are correct on many points, then these will get small weights in $i$’s objective, and therefore its *effective* data size (see [1]) will be small. Our setup abstracts away the direct effect of competition on data size – this is an intentional design choice, intended to enable tractable analysis of our main phenomena. One justification is that accuracy gains from increased sample size typically exhibit diminishing returns; i.e., once a reasonable number of samples are obtained, adding more samples has only a mild effect on accuracy (see e.g. [2]). In other words, sample size effects are important mostly in the small-data regime, which is not our focus.
A final note is that in our setup, the goal of learning is *not* maximizing accuracy, but rather, market share. Our results show that aiming for maximal accuracy can be a poor strategy, and that it is often better to focus efforts on a small subset of the population than to seek better accuracy on as many users as possible.
**”Is it possible to summarize the results in table 2 in a more visual way?”**
To complement Table 2, we have already illustrated the full best-response dynamics in Figure 5 and provided a detailed discussion of such in Appendix D.2. If you have specific suggestions for further improving clarity, we are happy to consider them and will gladly add them to the final version if applicable.
[1] Pustejovsky (2019). Effective sample size aggregation.
[2] Kaplan, Jared, et al. (2020) Scaling laws for neural language models.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal and the clarification on the suitability of this topic for ICML. In light of the response and the other reviewers' reviews, I have updated my score to a 3.
---
Reply to Comment 1.1.1:
Comment: Thank you for revisiting your score and for your thoughtful feedback on our work. We really appreciate it! | Summary: This paper studies equilibria in marketplaces when multiple firms are competing with each other for consumers. As compared to the prior literature in this space, this paper focuses on the dynamics of learning an equilibrium between two firms, as well as the impact on consumers and markets. Interestingly, this paper shows that firms can converge to a stable equilibrium relatively quickly, shows that firms can implicitly coordinate which markets they enter, and that increased competition helps consumers. It concludes with experiments on synthetic and real data. The paper is pretty clearly written and addresses related literature well.
Claims And Evidence: Yes - this paper is clearly written and makes a compelling contribution to the study of how firms can converge to (anti)-competitive behavior.
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: N/A
Supplementary Material: N/A
Relation To Broader Scientific Literature: One high-level takeaway from the paper is that firms effectively coordinate by independently deciding to compete in different markets. This occurs even though firms move sequentially and don’t coordinate explicitly. Given this result, it would be helpful if authors address how their work connects with work on algorithmic collusion, e.g. https://arxiv.org/pdf/2409.03956.
Essential References Not Discussed: See above
Other Strengths And Weaknesses: It would have been interesting to have deeper discussion of the effect of capacity on competition (discussed at the end of section 6).
Similarly, it also could have been helpful to have deeper discussion of how the main results of the paper would change given a wider model of classifiers (e.g. where MLR isn't satisfied).
Other Comments Or Suggestions: N/A
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Many thanks for your positive review! We are pleased that you found that the paper was clearly written and that you enjoyed our result showing how competition can induce anti-competitive behavior. In line with this, we would like to address some of your feedback below:
**”It would be helpful if authors address how their work connects with work on algorithmic collusion”**
Thank you for raising our attention to this work! It is interesting to see another example of cooperative behavior arising implicitly in a setting of competition. It seems that they show that under certain families of stateful online algorithms, multiple competitors can actually retain monopolistic prices, which would benefit all of the providers. While our focus here is how machine learning prediction can be optimized under competition, their positive result in the area of **pricing optimization** is certainly interesting to see, and gives thought to how we can incorporate pricing into future work. More specifically, it would be a neat extension to define a price parameter for each provider, and as such the user choice can be according to which provider gives the best price-to-accuracy tradeoff.
With that said, there are some fundamental differences between their work and ours which bear consideration:
- As mentioned above, our focus is entirely on how **machine learning predictors** can implicitly coordinate under competition (through the *weighted-accuracy* objective laid out in Section 5). The work on Algorithmic Collusion does not consider the effect that product relevance has on the user choices, and therefore there is no consideration for how predictors play a role in the strategies of the players.
- In our setting we assume best-response dynamics, where the players respond optimally to the **current** strategies of the competitors. Indeed in the pricing model of their work, if players best-responded locally at each timestep, then the dynamics would converge to expected competitive behavior.
- As stipulated in the performativity paragraph at the end of Section 5, our setting is **stateless** (at least for $n=2$), whereas the Algorithmic Collusion work is **stateful** in that it takes into consideration the pricing in previous timesteps. This may also explain the positive result under the family of no-regret algorithms, which are different from best-response dynamics.
**“It would have been interesting to have deeper discussion of the effect of capacity on competition”**
We appreciate your interest in how the capacity of data representation affects the overall welfare. It is indeed a counter-intuitive result, and one reasoned about at length in the work of Jagadeesan et al. (mentioned in the paper), albeit in a different (though related) setting. Our intention here was to show how the result of “lower data representation resulting in higher welfare” shows itself in our setting as well, as it seems to be a central characteristic of competition in machine learning. We would also like to note that the main paper shows this result for $n=2$ players, and results for $n=2,3,4$ players are shown in Appendix D.3 (Figure 7).
**“How [would] the main results of the paper change given a wider model of classifiers?”**
Thank you for this feedback. To make sure we fully understand your concern, is your intention regarding the theoretical results for $n=2$ players? Regarding our results on equilibrium, market share dynamics, and welfare, indeed when extending to general model classes and for multi-dimensional data (Section 4.3 in the paper), the convergence results may not hold. Our empirical results, however, show that in general model settings, convergence still occurs almost immediately, within two rounds. The market share and welfare results hold strongly as well, suggesting, as you may have alluded to here, that there is opportunity for future work to extend our results to more general settings, as we did in the 1-dimensional case. | Summary: The paper studies a market where multiple model providers compete to provide accurate predictions to as many users (points) as possible. The theoretical analysis reveals several interesting insights. The main insight is that naively maximizing for accuracy is not optimal for either player. For example, if there are two players and both use the optimal classifier that is accurate on say 80% of the points. Since they are accurate on the same points their payoffs are divided into half. They can increase their payoffs by deviating from the optimal classifier and being accurate on subsets of points (subpopulations) exclusively. This dynamics of the market improves the payoffs of the players and maximizes the welfare of users. Thus overall the market benefits all parties despite the competition. These insights are also illustrated through experiments on synthetic and real datasets.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The methods and evaluation criteria are reasonable.
Theoretical Claims: I checked a few (Proposition 1 and Theorem 1). I do not see any major issues in other claims.
Experimental Designs Or Analyses: Both synthetic and real data experiments are designed to evaluate the theoretical claims. Overall, there are no issues in the design of experiments, and they support the theory. One issue is the absence of error bars in the numerical results.
Supplementary Material: Proofs of proposition 1 and Theorem 1.
Relation To Broader Scientific Literature: The paper provides novel insights into the accuracy markets.
Essential References Not Discussed: This is fine.
Other Strengths And Weaknesses: S1. The paper provides an interesting analysis of accuracy market dynamics. Naive intuition suggests each player’s best interest is in deploying the model with the highest accuracy. The analysis shows it is not true and all parties can maximize their utility by deploying models that are accurate on exclusive sets of points.
S2. The paper is well-written with a clear problem setup and analysis. I appreciate the analysis in a 2-player setting and with 1-d threshold-based classifiers. The synthetic experiment results in Figure 1 help in getting to the core of the market dynamics and why the claims made in the paper make sense.
------
W1. The biggest weakness is the assumption that the players know the predictions of other players on all points and they are learning models on the same data points (eq. 11). In practice such information is private to the players and they may not share the same data points.
W2. While the analysis in 2-player setting is good as a first step and easier to understand. Analysis in multi-player settings would make the paper stronger.
W3. I like the paper for the insights but I am not sure about the practical relevance as the setting and assumptions seem too unrealistic. Grounding in a specific application could help here.
Other Comments Or Suggestions: 1. Introduction is easier to follow after reading the setup and some of the empirical results. Otherwise, the setting and the dynamics of the game are not clear which makes it difficult to follow the introduction. I’d suggest making the introduction crisper and also introducing the setting and some examples (e.g. Figure 1) earlier.
2. In section 4, it would help to list the main claims clearly. It is likely that readers can get lost in a series of propositions and theorems.
Questions For Authors: 1. In practice precise information of other players may not be known. In such cases how will the analysis and implementation unfold? Are there applications where the assumptions made in the paper are realistic?
2. Can the empirical results be provided for realistic applications?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your feedback! We are encouraged that you found the paper well-written and that you appreciate our analysis and experiments.
**”The biggest weakness is the assumption that the players know the predictions of other players…”**
Indeed, our analysis relies on the simplifying assumption that players know the number of other correct classifiers. This, however, need not be taken to imply that players share such information. Instead, we view this as simplifying a process in which each player has access to (some) information regarding competition. For example, there are markets where firms are likely to have a good sense of which user groups the other firms excel on or target. Another example are cases where a firm performs market research and obtains exact information on a subset of users. A third example is firms having coarse information, such as knowing whether users subscribed to a different provider.
To investigate this idea, **we have added an additional experiment on partial and coarse information**, in which firms have inexact estimates $\hat{\kappa}_{-i}$, and therefore learn with inexact weights. We consider two settings:
- **Coarse:** For each user, a player has knowledge only of whether there is at least one other accurate player, i.e., $\hat{\kappa}_{-i}$ is either 0 or rounded down to 1.
- **Partial:** Players know the true $\kappa_i$ for a random subset of users, and use this to make inference about the remaining $\hat{\kappa}_j$ (we use kNN).
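To make the *partial* setting concrete, here is a minimal illustrative sketch (our own simplification, not the experiment's actual code) of kNN-based estimation of the unknown correctness counts from a surveyed subset. The function name, the choice of 1-D user features, and the plain averaging rule are all our assumptions:

```python
def knn_estimate_kappa(known_x, known_kappa, query_x, k=3):
    """Estimate each unsurveyed user's kappa from its k nearest surveyed users."""
    estimates = []
    for q in query_x:
        # the k surveyed users closest to the query user (1-D features for simplicity)
        neighbours = sorted(zip(known_x, known_kappa),
                            key=lambda p: abs(p[0] - q))[:k]
        # average the neighbours' known counts and round to an integer estimate
        estimates.append(round(sum(kap for _, kap in neighbours) / k))
    return estimates
```

Under this sketch, the estimated $\hat{\kappa}$ values would stand in for the true counts when forming the weights of the learning objective.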
Our results suggest that despite partial or coarse information, overall trends are preserved. However, the cost of misinformation is that social welfare is lower (and hence also total market share). For *coarse*, this reduction is 37% for $n=3$, and up to 45% for higher $n$. For *partial*, we see for $n>3$ that market research on just 100 users results in a welfare gain that is >70% of that under full information, suggesting that extrapolation can work well in our setting. Please see the table attached in our response to reviewer GVZQ for a more detailed comparison.
We will add these new results to the Appendix.
**”In practice … players may not share the same data points”**
This is a fair point (and one we mentioned in our broader impact section). From a learning perspective, it is certainly possible to define a market where different firms have differing datasets, and in this setting our method would be valid and well-defined. However, from a game theoretic perspective, once exact datasets differ, then the market is no longer a congestion game, and its analysis becomes much more involved. We are familiar with only one work that aims to extend congestion games to settings where resources are not shared [1], but this requires making strong structural assumptions and the results are limited. Nonetheless, we agree that exploring a setting with differing datasets is of practical value and can be interesting as future work.
**”Grounding in a specific application could help here”**
Thank you for this recommendation. Consider a media platform, such as Netflix, whose value lies (in large part) in their ability to recommend relevant content to their users. Given their knowledge of the potential user base, they would like to create a recommendation system that maximizes subscriptions. Using their knowledge of the performance of the competitors (Disney Plus, etc.), whether through general, coarse, or partial information, they can use our method to give larger weight to the less-targeted users and in doing so optimize for market share.
**”Analysis in multi-player settings would make the paper stronger”**
Further analysis for multiple players is certainly interesting, and it is indeed our hope that this work inspires future work in multi-player settings. We will note though that this is not without challenges. Please see our response to reviewer GVZQ on this matter for more detailed reasoning regarding the challenges involved in extending our results to $n>2$ players.
**”Can the empirical results be provided for realistic applications?”**
Our empirical results focused primarily on the effectiveness of our method, establishing its robustness and efficiency. That said, we believe our framework lays a strong foundation for future empirical studies, and if applicable, we can address more potential directions for empirical validation in the final version.
**”Absence of error bars in the numerical results”**
Please note that we report standard errors in the Appendix (Table 5). These were omitted from the main paper for clarity, but if you feel these would be helpful there, we can certainly incorporate them back in.
Once again, thank you for the detailed feedback. If you found the above clarifications and improvements to your satisfaction, we would greatly appreciate it if you would consider raising your score.
[1] Milchtaich, I. (1996). Congestion games with player-specific payoff functions.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal and additional experiment on the imperfect information setting. I like the conceptual contributions of the paper and maintain my recommendation for acceptance. It would have been much stronger if the market setting were more realistic (multiple players, imperfect information, players with different data points, etc.). However, this would require more comprehensive treatment that could be deferred to future work.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback and for acknowledging the conceptual contributions. We greatly appreciate your comments and agree that exploring more realistic market settings is an important direction for future work. | null | null | null | null | null | null |
Neural Representational Consistency Emerges from Probabilistic Neural-Behavioral Representation Alignment | Accept (poster) | Summary: The authors have introduced a new framework PNBA for aligning neural and behavioural distributional representations. Their approach allows learned and generalisable embeddings across subjects. Their framework uses a multimodal VAE architecture with a constrastive loss term (probabilistic matching term) to provide an alignment pressure. It provides a non-linear generalisation to existing alignment tools in neuroscience such as CCA or linear factor models.
Claims And Evidence: The authors have provided a variety of compelling examples on real neural data, showing cross-trial, cross-session and cross-subject correlation analyses in monkey centre-out reaching tasks. They also show a different neural modality (calcium imaging). Overall, the empirical evidence is quite strong.
The authors also provide a proof that the generative constraints should prevent mode collapse. These appear to be quite standard Lagrangian arguments, but could do with some more detail.
The authors state that the neural representations exhibit minimal correlation with non-corresponding pairs of neural data and behavioural data in figure 3b. But this doesn’t seem so? The correlations between all trials are somewhere between 0.8 and 0.95, including unmatched neural and behavioural data (from my understanding of the plot). This is not convincing.
I don’t see how the method would be ‘present significant potential for advancing calibration-free neural decoding systems’ (line 464). There would still need to be trials from a new animal or session to internally learn the alignment. This would be a form of calibration.
Methods And Evaluation Criteria: The contrastive probabilistic matching loss seems well motivated given failures of point-wise alignment approaches and the trial-to-trial variability common in neural data. However, the downside is that the full trial must be input to match with behaviour, hindering the online decoding that is important in BCI applications.
However, neural data is inherently dynamic, and the VAE approach introduced here does not explicitly model dynamics. While a number of methods likewise learn embeddings whilst ignoring the dynamics, doing so prevents forecasting.
Theoretical Claims: The authors provide short ‘sketch’ proofs for theorems 3.1 and B1. I couldn’t find any glaring errors in the proofs, and they seem relatively consistent with constrained VAEs or beta-VAEs.
Experimental Designs Or Analyses: I was not entirely sure about how the correlations are being performed. I assume this is on the latent space z embeddings but I couldn't explicitly find this. This makes it difficult for me to evaluate the soundness of the experimental results/claims.
Supplementary Material: Section A – provides a short comparison of CLIP contrastive losses to the probabilistic matching used by the authors – just highlighting the degeneracy of point-to-point contrastive losses vs distributional approaches.
Section B – some further theory on their proposed method and the ELBO derivation.
Relation To Broader Scientific Literature: Cross-subject neural alignment is an open neuroscience problem. Recent work has used linear factor models like CCA or even posthoc alignment. This method provides a clear improvement over these tools, taking advantage of non-linear generative models.
Essential References Not Discussed: A recent paper published just this month in Nature Methods seems to have some relevance for aligning neural representations using geometric deep learning and would not have been available upon submission of this article to ICML: Gosztolai et al. MARBLE.
Other Strengths And Weaknesses: I wonder whether there is some combined approach with sequential methods like LFADS or GPFA that could be proposed? Since at the moment it does not explicitly model temporal dynamics.
Other Comments Or Suggestions: None.
Questions For Authors: It was unclear to me what the normalised representation values are? Are these the latent space activations? Unit vector normalised? I had to dig into the SI too much to understand what was being correlated in the empirical experiments.
Recent work has shown that there are different ‘solutions’ for solving the same task, i.e., different neural representations. This puts into question a lot of work that uses trial averaging. I wonder how the authors would approach such a problem with PNBA? Since in such a case, I would imagine that the low-dimensional neural representations wouldn’t align?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer's thoughtful feedback and the positive assessment. We address each point as follows:
```Q1. The Lagrangian arguments could do with more detail.```
**A1**: Thanks for this suggestion. In our revision, we will add a more comprehensive derivation of the Lagrangian multiplier method, explicitly clarify how constraints are transformed into optimization objectives, and provide mathematical explanations for key steps in the derivation process.
```Q2. The authors state that the neural representations exhibit minimal correlation with non-corresponding pairs of neural data and behavioural data in figure 3b. But this doesn’t seem so? The correlations between all trials are somewhere between 0.8 and 0.95, including unmatched neural and behavioural data (from my understanding of the plot).```
**A2**: Thank you for this observation. Correlations in Figure 3b are indeed generally high due to our compact latent space. Our intention was to emphasize trial discriminability, with diagonal elements forming distinguishable patterns in most cases. In our revision, we will better characterize successful cases and emphasize trial-level discriminability.
```Q3.I don’t see how the method would be ‘present significant potential for advancing calibration-free neural decoding systems’ (line 464). There would still need to be trials from a new animal or session to internally learn the alignment.```
**A3**: PNBA extracts preserved neural representations in zero-shot individuals (Sections 4.3-4.4), allowing us to train both PNBA models and BCI decoders on known individuals and directly apply them to new individuals without any fine-tuning or calibration (Section 4.5). Thus, we believe this demonstrates potential for calibration-free BCI applications.
```Q4. The downside is that the full trial must be input to match with behaviour, hindering online decoding that is important in BCI applications.```
**A4**: In experiments from Section 4.5, we can adopt a sliding window approach with most BCI decoders, especially Linear and MLP, supporting online decoding. While not our primary focus, this would address the concern. We will add this discussion in our revision.
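As an illustration only (our hypothetical sketch, not the paper's released code), the sliding-window approach would pass each fixed-length window of neural activity to a per-window decoder as new samples arrive:

```python
def sliding_windows(signal, window, step=1):
    """Yield fixed-length windows over a neural time series for per-step decoding."""
    for start in range(0, len(signal) - window + 1, step):
        yield signal[start:start + window]

# Each window would be fed to a trained decoder (e.g. the Linear or MLP
# decoders mentioned above); here we just materialize the windows.
windows = list(sliding_windows(list(range(10)), window=4, step=2))
```

The window length and step would trade off decoding latency against the amount of context available to the decoder.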
```Q5.Unclear how correlations are performed, are they on latent space embeddings?```
**A5**: All correlation calculations are indeed performed on the low-dimensional latent representations z, as they have the same spatial dimensions. We will clarify this explicitly in our revision.
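For concreteness, a self-contained sketch (ours, under the assumption that plain Pearson correlation is used) of how two trials' latent representations $z$ could be correlated:

```python
import math

def pearson(z1, z2):
    """Pearson correlation between two latent trial representations z1, z2."""
    n = len(z1)
    m1, m2 = sum(z1) / n, sum(z2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(z1, z2))
    s1 = math.sqrt(sum((a - m1) ** 2 for a in z1))
    s2 = math.sqrt(sum((b - m2) ** 2 for b in z2))
    return cov / (s1 * s2)
```

Computing this for every pair of trial embeddings would yield a correlation matrix of the kind shown in Figure 3b.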
```Q6. Recently published Gosztolai et al. MARBLE (2025.3) paper seems relevant.```
**A6**: Thank you for this recommendation. While both MARBLE and our PNBA aim to reduce dimensionality of neural activity, there are fundamental differences. MARBLE is a single-modal method focusing on geometric constraints and temporal dynamics on neural activity, while PNBA is a multimodal strategy directly incorporating behavioral data as constraints in latent space.
We'll add this related work in our revision.
```Q7. I wonder whether there is some combined approach with sequential methods like LFADS or GPFA that could be proposed? Since at the moment it does not explicitly model temporal dynamics.```
**A7**: We agree and believe such a combination is entirely feasible and would address the current limitation in modeling temporal dynamics. PNBA provides multimodal alignment constraints while methods like LFADS/GPFA offer complementary temporal modeling capabilities. These represent independent constraints that could be directly combined in future work.
We'll include this as limitation and state this future work in our revision.
```Q8. It was unclear to me what the normalized representation values are? Are these the latent space activations? Unit vector normalized?```
**A8**: 'Normalized representation values' refers to z-score normalization applied independently to each trial's representation, making different modalities comparable by eliminating scale differences. We'll clarify this in our revision.
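A minimal sketch (ours, not the authors' pipeline) of per-trial z-score normalization, assuming the population standard deviation is used:

```python
def zscore(trial):
    """Z-score one trial's representation: zero mean, unit (population) std."""
    n = len(trial)
    mean = sum(trial) / n
    std = (sum((v - mean) ** 2 for v in trial) / n) ** 0.5
    return [(v - mean) / std for v in trial]
```

Applying this independently to each trial removes per-trial offset and scale, so representations from different modalities become directly comparable.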
```Q9. I wonder how the authors would approach such 'multiple solutions' with PNBA? Since in such a case, I would imagine that the low-dimensional neural representations wouldn’t align?```
**A9**: PNBA directly addresses the "multiple solutions" challenge. Rather than forcing identical representations for the same behavior, our approach:
1. Avoids assuming one-to-one correspondence between neural activity and behavior, instead employing distributional matching that preserves neural diversity.
2. Ensures that PNBA must generate *different* neural representations for the same behavior (Theorem 1, Property 1), otherwise it would result in representation collapse.
3. Establishes **a lower bound on representational similarity that ensures differences** and an upper bound to promote feasible similarity (Theorem 1, Property 3).
Therefore, these multiple considerations make PNBA particularly suited for studying the many-to-one mapping between neural activity and behavior. | Summary: Modeling shared variability across multiple animals is critical to understand universal principles of neural computation. Still, we like probabilistic tools to capture them into a singular representation. This work introduces a new probabilistic method to represent neural and behavioral variability across animals while allowing for individual variability. The authors validate their model in two different neural datasets, and compared the performance to relevant alternative models showing the applicability of their model.
Claims And Evidence: The work is clearly motivated and supported by the theoretical proof and the presented results. The competitive performance results in comparison to alternative models further cement the relevance of the presented model. Moreover, they tested the model performance across recording modalities and species, illustrating the broader applicability of their method. The authors claim relevance to BCI applications; still, this requires additional constraints, including exploration of inference times, computational costs and data demands.
Methods And Evaluation Criteria: The methods are adequate, and evaluation as a function of correlation between trajectories captures the shared variability across neural representations. The authors also showed minimal decoding performance on unseen observations. Still, since the emphasis of the work is to align to behavioral tasks, decoding with respect to behavior would provide a better sense of how much behavioral information is captured. To provide additional evidence on the use of the method for BCI, beyond just neuroscience discovery, the authors should discuss limitations on real-time inference and computational demands.
Theoretical Claims: The theoretical proofs are clearly presented with enough detail and are correct.
Experimental Designs Or Analyses: The authors evaluate their method on two neural datasets which amply shows the applicability of the work. Still, using simulation could further highlight uses and limitations of the model. For example, can the model generalize across missing behavioral conditions? Can the model recover ground truth parameters? How robust is the model to the dimensionality of the latent space, missing observations, trial misalignment, or individual variability? Moreover, adding behavioral decoding results across all the experiments and datasets would further show the ability of the model to extract those representations.
Supplementary Material: The supplementary material is informative, extending the methods, parameter choices and results.
Relation To Broader Scientific Literature: The authors correctly frame their work around the relevant literature, they compared their model to alternative solutions. Still, the authors missed prior work introducing a probabilistic method for across-animal task-informed neural alignment (Herrero-Vidal et al. NeurIPS 2021). Additionally, the author could contrast their results with other simpler alignment methods based on Procrustes alignment (Williams et al NeurIPS 2021, Safaie et al. Nature 2023).
Essential References Not Discussed: The authors should include a reference, and potential comparison, to prior work introducing probabilistic method for across-animal task-informed neural alignment (Herrero-Vidal et al. NeurIPS 2021).
Other Strengths And Weaknesses: While the work is clearly presented, adding a section to discuss the limitations is needed to fully assess the impact of the contribution.
Other Comments Or Suggestions: The authors could include comparisons to linear alignment methods and compare decoding performance, data demands, and computational costs to understand the direct applicability to BCI technology.
Questions For Authors: How much data is needed to train the initial model? The author mention that a linear head is needed to compensate for inconsistencies between the number of recorded neurons, how much of the alignment happens in this transformation? How much data is used for this pre-alignment?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the positive assessment and constructive feedback. We address each point as follows:
```Q1. Decoding behavior would better demonstrate captured behavioral information. Authors should discuss limitations on real-time inference and computational demands for BCI.```
**A1**: Behavior decoding results are as follows: M1, R²=0.89; PMd, R²=0.83 (training subjects); M1, R²=0.78; PMd, R²=0.71 (testing subjects), indicating capture of behavioral information. These results are expected, as PNBA aligns neural-behavioral representations. To avoid circular reasoning, Section 4.5 presents movement decoding using V1 data, where the neural encoder was trained only with stimulus in PNBA, excluding movement data.
For BCI discussions, training takes ~1 hour (M1/PMd) and ~8 hours (V1), detailed in SI Line 1075, with inference requiring 0.3ms and 1.4ms, respectively (averaged over 1000 runs on an A100 GPU), showing potential for BCI.
We will add these discussions in our revision.
```Q2. Simulations could reveal model limitations. Can the model generalize across missing behavioral conditions or recover ground truth parameters? How robust is it to latent space dimensionality, trial misalignment, or individual variability?```
**A2**: Thanks for this suggestion. We agree with the usefulness of simulation. However, building simulations for neural-behavioral modeling with ground truth representation constraints is inherently difficult due to the complex, unknown nature of true neural encoding mechanisms. We therefore validated PNBA across three diverse real datasets.
Regarding specific questions:
a. No, PNBA cannot generalize to behavioral conditions missing from the training set. This is a current limitation, as discussed in Section 6.
b. PNBA doesn't aim to recover ground truth parameters, as these are difficult to define in real neural data. PNBA is data-driven, learning effective representation alignment instead, similar to CLIP.
c. We show robustness through theoretical guarantees (Theorem 3) and empirical validation. Figure 4b shows the results of varying latent space dimensions. Figures 3b,6c show effective handling of trial-to-trial variability in unseen subjects.
```Q3. The authors missed a reference, Herrero-Vidal et al. NeurIPS 2021.```
**A3**: Thanks for suggesting this work.
Herrero-Vidal et al. assumes similar neural trajectories across individuals responding to identical stimuli so as to align **single-modal** neural activity through individual-specific optimization. In contrast, our **multimodal** neural-behavioral alignment doesn't presuppose neural encoding similarity, and can be tested across individuals. This methodological difference allows us to validate preserved neural representations in unseen subjects.
We believe our findings provide empirical support for assumptions in Herrero-Vidal et al., supporting shared neural representations under the same behavior.
We will add these discussions in our revision.
```Q4. Compare with Procrustes alignment methods (Williams et al. 2021, Safaie et al. 2023).```
**A4**: Thanks for this suggestion. We didn't directly compare with Williams et al. and Safaie et al. for several reasons:
a. Our work proposes cross-modal (neural-behavioral) alignment, while these methods focus on unimodal (neural-neural) alignment, making direct comparison potentially unfair.
b. **Our method requires no fine-tuning on unseen individuals, whereas these methods require individual-specific optimization**.
c. Different foundational assumptions (**A3**).
We will include these discussions in our revision.
```Q5. Add a section discussing limitations.```
**A5**: Limitations are currently discussed in Section 6, paragraph 1, noting PNBA cannot train with neural activity alone. We will add temporal dynamics discussion based on Reviewer ```DJVs```'s suggestion.
```Q6. Compare with linear alignment methods regarding decoding performance, data demands, and computational costs for BCI applications.```
**A6**: For computational costs and linear alignment comparisons, please see **A1** and **A4**. Regarding data requirements, PNBA is currently trained with 2 monkeys (M1/PMd) or 8 mice (V1), indicating moderate demands. While BCI applications aren't our current focus, our results show potential, and we plan dedicated BCI research in future work.
We will add these discussions in our revision.
```Q7. How much data is needed for initial training? How does the transformation for handling different neuron counts work, and how much data is used for pre-alignment?```
**A7**: Our model requires no pre-training, but trains directly on data from multiple individuals in the training set and directly tests on new individuals without fine-tuning.
To handle varying neuron counts, we use a **shared** convolutional projection head followed by pooling to only unify dimensions in our network. This requires no separate pre-alignment data or process, as it's learned directly during end-to-end training.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed feedback. While the assumptions differ between models, they are still relevant comparisons and would further show that the assumptions of this work's model are more adequate for the underlying data statistics. Still, I agree with the additional points of discussion and I adjusted the score accordingly. Please add the relevant comparisons discussed here to the final manuscript.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's insightful suggestion! We have now included all suggested baselines. As shown in the following table, FA+Procrustes† and PCA+CCA† achieve moderate performance despite being optimized specifically for each trial of each individual subject. The results suggest that linear alignment approaches struggle to effectively handle complex, high-dimensional neural activity and behavioral variables, further highlighting the importance of non-linear algorithms in this domain.
| Cortical Area | Method | Training Subjects | New Subjects |
|---------------|--------|-------------------|--------------|
| Motor Cortex (M1) | VAE§ | 0.0197 | 0.0016 |
| | FA+Procrustes† | 0.3334 | 0.2009 |
| | PCA+CCA† | 0.3520 | 0.2160 |
| | FA+amLDS† | 0.5807 | 0.3627 |
| | Neuroformer* | 0.5214 | -- |
| | MEME | 0.7756 | 0.7060 |
| | **PNBA (Ours)** | **0.9465** | **0.9302** |
| Motor Cortex (PMd) | VAE§ | 0.0063 | 0.0028 |
| | FA+Procrustes† | 0.3605 | 0.2877 |
| | PCA+CCA† | 0.3916 | 0.3397 |
| | FA+amLDS† | 0.4733 | 0.4366 |
| | Neuroformer* | 0.3283 | -- |
| | MEME | 0.5279 | 0.5255 |
| | **PNBA (Ours)** | **0.9248** | **0.9176** |
| Visual Cortex (V1) | VAE§ | 0.0029 | -0.0009 |
| | FA+Procrustes† | 0.1221 | 0.1207 |
| | PCA+CCA† | 0.1210 | 0.1209 |
| | FA+amLDS† | 0.1509 | 0.1501 |
| | Neuroformer* | 0.4116 | -- |
| | MEME | 0.6357 | 0.5980 |
| | **PNBA (Ours)** | **0.8830** | **0.8705** |
§: Independent modality training without cross-modal alignment
*: Requires session-specific/subject-specific training
†: Requires per-trial optimization for each individual subject
Note: We utilized Factor Analysis (FA) to standardize dimensionality when adapting the amLDS algorithm (Herrero-Vidal et al. 2021) for our neural-behavioral representation alignment task.
We hope these additional results address the reviewer's suggestions. We also note that these results would satisfy the comparison (**Q2**) suggested by Reviewer ```a8zJ```. We will incorporate all these comparisons into the final manuscript as suggested. | Summary: This work proposed a probabilistic representation alignment framework PNBA that can be used to align neural activities and animal behaviors. The method is applied across brain regions, neural data modalities, and animal species. Authors provided extensive experimental evidence across multiple datasets, validating the robustness of the proposed method.
Claims And Evidence: Line 382-384: "The observation of preserved neural representations across both motor and visual cortices, despite their distinct functional roles and varying correlation strengths, suggests a broader preservation of neural coding structure." This claim is confusing and seems like an over-claim: How is the calcium imaging encoder trained? Are the zero-shot experiments still the cross-subject zero-shot experiments? Is the behavioral encoder frozen? If both encoders are optimized based on the defined loss given by the authors, the experimental evidence does not suggest broader preservation of neural coding structure.
Methods And Evaluation Criteria: 1. As stated in Section 3.1, the proposed matching objective between neural activities and behaviors is only a necessary condition for distributional alignment. It seems to me that one sufficient condition for alignment is that f and g both need to be reversible. Is this true? Yet the provided theoretical guarantees seem too loose to provide sufficient alignment.
2. What is the underlying assumption of using the Pearson correlation coefficient to evaluate alignment quality? (1) Pearson correlation is a method to evaluate the distributional similarity between two variables. In your case, how is it applied on single-trial data? Is it applied on each coordinate of a latent representation z? Are the latent vectors centered (i.e., are you measuring cosine similarity)? (2) Have the authors tried other possible evaluation methods for alignment, e.g., L1 distance? Are all baseline methods optimized based on the given evaluation metrics?
Theoretical Claims: I did not check carefully, as the theoretical bounds seem to be loose.
Experimental Designs Or Analyses: This paper provides experimental results on three different datasets involving different modalities and brain regions. The experimental design is extensive, seem to be complete, and are of high-quality.
However, the authors provided limited experimental details in the main text, which makes certain parts difficult to evaluate. For example, one common issue in spike neural data analysis is the difference between amount of neurons across sessions, and how to transfer encoder given different input sizes. How did the author address this issue? Are encoders re-trained when initialized on the data from a new animal? How did the authors deal with unseen neurons in new animals/sessions?
Supplementary Material: I did not review the supp materials.
Relation To Broader Scientific Literature: The problem studied is a very important problem to the neuroscience community.
Essential References Not Discussed: Liu, Ran, Mehdi Azabou, Max Dabagia, Chi-Heng Lin, Mohammad Gheshlaghi Azar, Keith Hengen, Michal Valko, and Eva Dyer. "Drop, swap, and generate: A self-supervised approach for generating neural activity." Advances in neural information processing systems 34 (2021): 10587-10599.
The proposed idea is actually very similar to the work cited above. This work uses different encoders to encode neural activities of different animals, uses a behavior-guided latent space, and uses a generative loss to prevent latent collapse.
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the constructive feedback. We address each point as follows:
```Q1. Line 382-384's claim is confusing. How is the calcium imaging encoder trained? Are the zero-shot experiments still the cross-subject zero-shot experiments? Is the behavioral encoder frozen? If both encoders are optimized, the evidence does not suggest broader preservation.```
**A1**: We appreciate the reviewer's careful examination.
a. Our conclusion summarizes preservation properties observed in three independent experiments (M1, PMd, V1) during zero-shot cross-subject conditions. We will clarify this in our revision.
b. The calcium imaging encoder follows the same PNBA framework but with visual stimulus encoders. The zero-shot evaluations are also performed on unseen subjects without any fine-tuning or further alignment process.
c. During training, all encoders are optimized.
d. We understand the concern about preservation claims. However, our conclusion is supported by (1) consistent preservation observed in functionally different brain regions (M1, PMd, V1); (2) such validation are performed on unseen subjects without further finetuning or alignment. We believe this rigorous validation provides evidence for the preservation property.
```Q2. The proposed matching objective in Section 3.1 is only a necessary condition. Is reversibility of encoders a sufficient condition for alignment? The theoretical guarantees seem too loose for sufficient alignment.```
**A2**: We agree with the reviewer's insight. While reversible encoders would provide an optimal sufficient condition for alignment, this becomes impractical for neural data requiring dimensionality reduction. Our generative constraint approach offers a practical sufficient condition that balances theoretical guarantees with implementation feasibility, achieving effective representation alignment on multiple datasets without requiring strict encoder reversibility.
```Q3. What assumptions underlie using Pearson correlation for alignment quality? (1) How is it applied to single-trial data? Is it per coordinate? Are vectors centered? (2) Did you try other metrics like L1 distance? Are all baseline methods optimized based on the given evaluation metrics?```
**A3**: The core assumption behind using Pearson correlation is that aligned neural and behavioral representations should exhibit similar distributional structures in latent space.
(1) We compute correlation between complete neural-behavioral latent vectors for each trial, not coordinate-wise. Pearson correlation centers the vectors and normalizes by standard deviation, distinguishing it from cosine similarity.
(2) We considered alternatives (L1, Euclidean) but selected Pearson correlation because it's established in neuroscience for representational similarity analysis (e.g., Safaie et al., Nature 2023) and effectively captures structural similarities in high-dimensional spaces.
All methods (baselines and our PNBA) were optimized using their original objective functions, not the evaluation metric.
We will include these details in our revision.
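To make the distinction above concrete, here is a minimal numpy sketch (toy shapes, not the authors' code) of Pearson correlation between complete flattened latent vectors versus plain cosine similarity: Pearson centers each vector before normalizing, so it is invariant to a constant offset, while cosine similarity is not.

```python
import numpy as np

def pearson_r(z_neural, z_behavior):
    # Center each flattened latent vector, then normalize by its norm:
    # centering is what distinguishes Pearson correlation from cosine similarity.
    a = z_neural.ravel() - z_neural.mean()
    b = z_behavior.ravel() - z_behavior.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cosine_sim(z_neural, z_behavior):
    a, b = z_neural.ravel(), z_behavior.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
z_n = rng.normal(size=(32, 16))   # toy d x T latent for one trial
z_b = z_n + 5.0                   # shifted copy: same structure, offset mean
print(pearson_r(z_n, z_b))        # ~1.0: Pearson is shift-invariant
print(cosine_sim(z_n, z_b))       # well below 1: cosine is not
```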
```Q4. Limited details are provided in the main text regarding how to handle different neuron counts across sessions and how to transfer encoders given different input sizes. Are encoders re-trained when initialized on the data from a new animal? How do you deal with unseen neurons in new animals/sessions? ```
**A4**: Thank you for highlighting this important issue. We addressed varying neuron counts through a neuron-adaptive encoder architecture incorporating projection layers with pooling layer (SI Line 1091-1093) to get unified dimensional latent space. This enables direct application to new animals with different input sizes.
No, encoders are not re-trained when applied to new animals.
Our approach handles unseen neurons effectively because: (1) the neural encoder unifies the latent space dimensionality, and (2) based on PNBA, training across multiple subjects with diverse neural populations encourages extraction of behaviorally-relevant features rather than memorizing individual neuron characteristics, enabling generalization to completely new neural populations.
We will incorporate these details into the main text.
```Q5. Suggested SwapVAE```
**A5**: Thank you for suggesting this reference. While both approaches address neural representations, fundamental differences exist between SwapVAE and our PNBA framework. SwapVAE operates within a single modality (neural spikes), using data augmentation and swap operations predicated on within-trial similarity, without directly incorporating behavioral constraints. In contrast, PNBA draws inspiration from CLIP's multi-modal paradigm, explicitly aligning neural and behavioral representations through generative constraints. This cross-modal approach directly establishes neural-behavioral associations, enabling zero-shot generalization to unseen subjects, as detailed in **A4**. We will add this to our related work.
Claims And Evidence: The paper claims to obtain robust neural-behavioral representation alignment within multiple cortical regions and from different species. This claim is convincingly supported by the Pearson correlation coefficients between neural and behavioral representations throughout the paper.
The second claim the paper made is on preserved neural representations through zero-shot validation with practical applications in zero-shot behavior decoding. However, it was not clear how this zero-shot generalization was achieved, given that each session/subject has varying number of neurons, making the application of the same encoder on the unseen session/subject without any finetuning impossible.
Methods And Evaluation Criteria: The choice of datasets (Safaie et al., 202, Turishcheva et al., 2024) and evaluation metrics made sense for this problem.
Theoretical Claims: I have not carefully checked the correctness of proofs for Theorem 3.1 which was in the Appendix.
Experimental Designs Or Analyses: The experimental designs and analyses look good generally, although some details are missing (see Questions for Authors)
Supplementary Material: I reviewed the supplementary material but not in great detail.
Relation To Broader Scientific Literature: The paper made contributions toward representation learning in neuroscience, tackling the problem of identifying robust cross-modality representations across sessions and animals performing the same behavior tasks. The method has implications for calibration-free brain-computer interfaces with potential for motor function restoration.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
* The paper tackles an important problem in neuroscience, which is finding a robust representation amidst neural variability across trials, sessions and subjects.
* Claims are supported with good experimental results.
* The paper is well written and easy to follow.
Weaknesses:
* It is unclear how the model handles varying neural population sizes for zero-shot behavior decoding in unseen sessions/subjects.
* The paper provides results of behavior decoding on V1 dataset but not M1 and PMd datasets.
* The related alignment method in Safaie et al, 2023 was mentioned (Figure 1) but was not compared with the proposed method in Table 1.
* Some details are missing that make it difficult to evaluate the soundness of the methods (see Questions for Authors)
Other Comments Or Suggestions: Minor typo on line 381 and Figure 1c: "trails" instead of "trials"
Questions For Authors: 1. Figure 3a: Was the histogram constructed using aggregated samples from two mice? I'm wondering what the histograms on each individual mouse look like.
2. Figure 3a: What are the samples used to construct the histogram? Are the samples Nx1 vector at each timestep or NxT matrix of a trial/session? How are the normalized representation values computed from these vectors/matrices? How are trials of varying lengths and population sizes handled?
3. Figure 3b: Why is the correlation matrix not symmetric?
4. Line 247: How are the 4 pairs chosen from the 8 mice? Shouldn't there be 8 choose 2 total number of pairs?
5. Figure 5a: Is mean R calculated across all possible pairs of trials regardless of which behavior conditions that trial belongs to? How to handle the problem that each trial can be of different lengths? What is the standard deviation calculated over?
6. Figure 5b and 5c: Similar question to the above. How are the mean R and standard deviation calculated for across-session and across-subject?
7. Section 4.5: How can the model be applied zero-shot for behavior decoding, given that the input to the model should be of fixed size while the number of neurons and time steps can vary across subjects? Could the authors provide the detailed step-by-step procedure of training and inference of the model?
8. Section 4.5: Could the authors also provide results of behavior decoding on the primate datasets?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's detailed feedback! We address each point as follows:
```Q1. How does zero-shot generalization work with varying neuron counts across subjects?```
**A1**: Our model uses convolutional and pooling layers to **standardize input activities of varying neuron counts into a fixed-size latent space (detailed in SI Line 1088-1094)**. Based on **PNBA**, **training on multiple subjects** further forces the model to learn behavior-relevant neural representations rather than individual-specific neuron characteristics, enabling cross-individual generalization without any fine-tuning on the zero-shot subject.
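As a toy illustration of this idea (a numpy stand-in, not the actual shared convolutional projection head), pooling over the neuron dimension maps populations of any size to the same fixed dimensionality:

```python
import numpy as np

def to_fixed_latent(activity, d_latent=32):
    """Map an (n_neurons x T) activity matrix to a (d_latent x T) feature by
    mean-pooling groups of neurons -- a hypothetical stand-in for the shared
    projection head with pooling described in the rebuttal."""
    n, _ = activity.shape
    # Split neurons into d_latent roughly equal groups and average each group.
    groups = np.array_split(np.arange(n), d_latent)
    return np.stack([activity[g].mean(axis=0) for g in groups])

z_a = to_fixed_latent(np.random.randn(87, 50))   # subject A: 87 neurons
z_b = to_fixed_latent(np.random.randn(143, 50))  # subject B: 143 neurons
print(z_a.shape, z_b.shape)                      # both (32, 50)
```

Because the output dimensionality no longer depends on the neuron count, the same downstream network applies to new subjects without re-training.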
```Q2. Safaie et al, 2023 was not compared in Table 1.```
**A2**: Table 1 excludes Safaie et al. because they focus on neural representation alignment requiring individual-specific optimization, whereas we evaluate cross-modal correlations on unseen subjects without fine-tuning. A direct comparison would be unfair to theirs.
```Q3. Figure 3a: Was the histogram from aggregated data of two mice? What do individual mouse histograms look like?```
**A3**: Yes, the histogram aggregates data from 2 mice that viewed identical stimuli. Individual mouse histograms show the same patterns for stimulus representation, with subtle peak variations in neural representation due to individual differences.
```Q4. Figure 3a: What samples construct the histogram? How are normalized values computed, and how are varying trial lengths/population sizes handled?```
**A4**: The histogram uses latent representations z (d×T) from each trial, reshaped into one-dimensional vectors and z-score normalized. Neuron count variations are addressed by pooling neural features into a fixed-dimensional latent space (SI Line 1088-1094). Variable trial lengths are handled through sliding window (t=16) processing (SI Line 1067). All vectors across trials are then concatenated into a one-dimensional array, whose frequency distribution forms the histogram.
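The described construction can be sketched in a few lines of numpy (toy data and shapes; not the authors' pipeline): each per-trial latent z of shape (d, T) is flattened, z-score normalized, and all trials are concatenated into one 1-D array whose frequency distribution forms the histogram.

```python
import numpy as np

rng = np.random.default_rng(1)
# Ten toy trials of latent representations z, each of shape (d=16, T=16).
trials = [rng.normal(loc=2.0, scale=3.0, size=(16, 16)) for _ in range(10)]

normalized = []
for z in trials:
    v = z.ravel()                                # reshape (d, T) -> 1-D
    normalized.append((v - v.mean()) / v.std())  # z-score per trial
values = np.concatenate(normalized)              # one array across trials

counts, edges = np.histogram(values, bins=50)    # frequency distribution
print(values.mean(), values.std())               # ~0 and ~1 after z-scoring
```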
```Q5. Figure 3b: Why is the correlation matrix not symmetric?```
**A5**: The correlation matrix is asymmetric due to trial asymmetry. Specifically, corr_matrix[i,j] measures correlation between neural activity of trial i and visual stimulus of trial j, noted as pair (i,j), while corr_matrix[j,i] measures pair (j,i), a different combination, naturally yielding different values. Similarly in Figure 6c, pairs (i,j) and (j,i) represent different trial combinations.
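The asymmetry is easy to see in a toy sketch (random latents, not the paper's data): entry (i, j) correlates the neural latent of trial i with the stimulus latent of trial j, so swapping indices pairs different trials across the two modalities.

```python
import numpy as np

def cross_modal_corr_matrix(Z_neural, Z_stim):
    """corr_matrix[i, j] = Pearson r between neural latent of trial i and
    stimulus latent of trial j; asymmetric because (i, j) and (j, i) are
    different trial combinations. (Toy sketch.)"""
    n = len(Z_neural)
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            a = Z_neural[i].ravel(); a = a - a.mean()
            b = Z_stim[j].ravel();   b = b - b.mean()
            M[i, j] = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return M

rng = np.random.default_rng(2)
Zn = rng.normal(size=(5, 16, 8))   # 5 trials of neural latents
Zs = rng.normal(size=(5, 16, 8))   # 5 trials of stimulus latents
M = cross_modal_corr_matrix(Zn, Zs)
print(np.allclose(M, M.T))         # False: the matrix is not symmetric
```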
```Q6. Line 247: How are the 4 pairs chosen from the 8 mice? Shouldn't there be 8 choose 2 total number of pairs?```
**A6**: We apologize for the confusion. Our experiment included 5 pairs of mice (10 mice total), with each pair viewing identical stimuli. 4 pairs (8 mice) were used for training, while the remaining pair (2 mice) served as an independent test set to evaluate zero-shot performance (see SI Table 4). We will refine this in the revised version.
```Q7. Figure 5a: Is mean R calculated across all trial pairs regardless of behavior condition? How are different trial lengths handled? What is the standard deviation calculated over?```
**A7**: No, we calculate mean R only between trials with matching behaviors, following Safaie et al, 2023. For trials with different lengths (d×T1 vs d×T2), e.g. T2>T1, we downsample T2 by uniformly discarding timepoints. This length handling only applies to monkey data, as all V1 trials have consistent lengths.
Standard deviation is calculated across all possible same-behavior trial pairs.
We will add these clarifications in our revision.
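A minimal numpy sketch of the length handling described above (a hypothetical implementation, not the authors' code): timepoints are uniformly discarded from the longer (d x T) latent so both trials share the shorter length.

```python
import numpy as np

def match_lengths(z1, z2):
    """Uniformly downsample the longer of two (d x T) latents so both share
    the shorter trial length, by keeping evenly spaced timepoints."""
    T = min(z1.shape[1], z2.shape[1])
    def downsample(z):
        idx = np.linspace(0, z.shape[1] - 1, T).round().astype(int)
        return z[:, idx]
    return downsample(z1), downsample(z2)

# Toy trials with matching latent dimension d=16 but different lengths.
a, b = match_lengths(np.random.randn(16, 40), np.random.randn(16, 25))
print(a.shape, b.shape)  # both (16, 25)
```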
```Q8. Figure 5b and 5c: Similar question to the above. How are the mean R and standard deviation calculated for across-session and across-subject?```
**A8**: We use identical strategy as **A7**, but comparing trials from different sessions or different subjects while maintaining matched behavioral conditions.
```Q9. Section 4.5: How can the model handle zero-shot behavior decoding with varying neuron counts and time steps? What's the detailed training and inference procedure?```
**A9**: Neural representations have the same spatial dimensions across subjects (**A1**). V1 trials have consistent lengths, but we can use sliding windows to handle varying lengths (SI Line 1067).
**The full procedure:(1) train PNBA on training subjects (2) generate representations via the frozen neural encoder, validate preservation (3) train V1 BCI decoders using training subjects' representations and behavior (4) apply frozen neural encoder and BCI decoder to unseen subjects.**
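A linear caricature of these four steps (toy matrices stand in for the frozen PNBA encoder and the BCI decoder; the column slicing is a crude substitute for the adaptive projection head, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(3)
# Step (1) is PNBA training (omitted); assume W_enc is its frozen result.
W_enc = rng.normal(size=(32, 100))

def encode(activity):
    # Crude stand-in for applying the frozen encoder to any neuron count.
    return W_enc[:, :activity.shape[0]] @ activity   # (32, T) latent

# Steps (2)-(3): generate representations and fit a linear decoder on
# training subjects' representations and behavior.
X_train = rng.normal(size=(100, 200))   # 100 neurons x 200 timepoints
y_train = rng.normal(size=(2, 200))     # 2 behavior variables
W_dec, *_ = np.linalg.lstsq(encode(X_train).T, y_train.T, rcond=None)

# Step (4): frozen encoder + frozen decoder on an unseen subject (80 neurons).
X_new = rng.normal(size=(80, 50))
y_pred = (encode(X_new).T @ W_dec).T
print(y_pred.shape)  # (2, 50): behavior predicted without any fine-tuning
```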
```Q10. Section 4.5: Could the authors also provide results of behavior decoding on the primate datasets?```
**A10**: Our primate decoding results on unseen animals: M1 achieved R²=0.78; PMd achieved R²=0.71. We note that these results are expected as PNBA aligns neural-behavioral representations for these datasets. To avoid circular reasoning, we showed independent movement decoding using V1 data in Section 4.5, where the neural encoder was only trained with stimulus in PNBA.
**Others**: We have fixed all typos. | null | null | null | null | null | null |
PINNsAgent: Automated PDE Surrogation with Large Language Models | Accept (poster) | Summary: The paper introduces a framework that utilizes LLMs to design and optimize PINNs to solve PDEs. It facilitates solving PDEs with PINNs more efficiently without tuning parameters and choosing architectures manually. The paper demonstrated its effectiveness on dataset PINNacle.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited to the problem at hand. However, I believe the paper could be improved by clarifying the following two aspects of the proposed method:
1. What is the unique role of the LLM in your framework? Could a smaller language model be used instead? If the focus is solely on configuration generation, is an LLM even necessary?
2. Could you provide more details on the pipeline for PDEs that are not already in the code bank? Perhaps an additional section would help clarify this process.
Theoretical Claims: There's no theoretical claims in the paper.
Experimental Designs Or Analyses: The experimental design in the paper is solid. However, the study was conducted on only one dataset, which is not a commonly used benchmark. I recommend that the authors conduct more extensive experiments to evaluate the framework’s performance on widely used benchmark datasets, such as PDEBench or PDEArena. This would better demonstrate the framework’s real-world applicability, particularly by comparing its performance with other PDE solvers and showing whether it produces comparable or superior results.
Supplementary Material: Appendix B on the dataset looks good to me.
Relation To Broader Scientific Literature: The scientific computing community can benefit from the proposed framework, as it automates the workflow for solving PDEs using PINNs. While PINNs are effective, they have a significant drawback—the process of tuning and designing the model structure requires substantial human effort. This framework helps bridge that gap.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: I believe the paper presents an innovative idea, and the problem it aims to solve is well-motivated with broad applications.
However, the primary weakness, in my view, is the lack of comprehensive experimental results.
Other Comments Or Suggestions: NA
Questions For Authors: Please see above comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Reply to Reviewer mBxJ
We sincerely thank the reviewer for their thoughtful comments and constructive feedback. We address each concern below and explain how we will improve the manuscript accordingly.
## The Unique Role of LLM in Our Framework
The LLM serves several critical and unique functions in PINNsAgent that would be difficult to achieve with smaller language models. To illustrate this, we conducted additional experiments comparing GPT-3.5 and GPT-4 (see table below). The results show that while both models improve over traditional methods, GPT-4 achieves not only better average performance but also significantly lower variance (±8.47E-02) compared to GPT-3.5 (±2.03E-01), demonstrating more consistent and reliable reasoning capabilities. These unique functions include:
| **Method** | **Average MSE** |
|------------|-----------------|
| Random Search | 6.13E-01 ± 1.49E-01 |
| Bayesian Search | 5.88E-01 ± 1.86E-01 |
| PINNsAgent (GPT-3.5) | 3.89E-01 ± 2.03E-01 |
| PINNsAgent (GPT-4) | 3.52E-01 ± 8.47E-02 |
1. **Reasoning across physics and deep learning domains**: The LLM planner must simultaneously understand PDE characteristics (equation type, dimensionality, boundary conditions) and relate them to appropriate neural architectures. This cross-domain reasoning requires the sophisticated knowledge integration capabilities of large language models.
2. **Memory Tree Reasoning Strategy (MTRS) implementation**: As detailed in Section 3.4, our MTRS approach requires the LLM to function as a policy model π_pl(a|s) that guides the exploration of the hyperparameter search space. The LLM's probabilistic outputs are used directly in the UCT formula to balance exploration and exploitation.
3. **Adaptive hyperparameter generation**: The LLM analyzes feedback from previous iterations f^(t-1) and adjusts hyperparameters accordingly. This requires understanding complex relationships between hyperparameters (e.g., how learning rate interacts with optimizer choice) and PDE characteristics.
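As a concrete illustration of how the LLM's output probabilities can enter a UCT-style selection rule, here is a minimal PUCT-flavored sketch. The `Node` structure, the constant `c`, and the prior table are our own illustrative assumptions, not the paper's implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Node:
    visits: int = 0
    total_reward: float = 0.0

def uct_score(parent: Node, child: Node, prior: float, c: float = 1.4) -> float:
    """PUCT-style score: mean reward plus a prior-weighted exploration bonus,
    where `prior` stands in for the probability the LLM policy assigns to
    this hyperparameter choice (illustrative, not the paper's exact formula)."""
    exploit = child.total_reward / child.visits if child.visits else 0.0
    explore = c * prior * math.sqrt(parent.visits) / (1 + child.visits)
    return exploit + explore

# Selection: pick the action maximizing the score.
parent = Node(visits=10)
children = {"adam": Node(3, 2.1), "lbfgs": Node(2, 1.8)}
priors = {"adam": 0.3, "lbfgs": 0.7}   # illustrative LLM output probabilities
best = max(children, key=lambda a: uct_score(parent, children[a], priors[a]))
```

Note how a higher LLM prior boosts the exploration term, so an action the LLM considers promising is tried even when its observed mean reward is not yet the highest.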
## Pipeline for PDEs Not Already in the Code Bank
For PDEs not in the Code Bank, PINNsAgent operates in the Config Generation mode, as the Code Bank contains base code that can be applied to any unseen PDE with the appropriate configuration. The complete pipeline is:
1. **PDE encoding**: The system encodes the mathematical and physical properties of the new PDE using the comprehensive set of labels described in Section 3.3.
2. **Physics-Guided Knowledge Replay (PGKR)**: PGKR computes weighted cosine similarity between the encoded target PDE and all PDEs in the database. The top-K most similar PDEs are retrieved along with their best-performing configurations.
3. **Configuration generation**: The planner uses these retrieved configurations as starting points and generates YAML configuration files for the new PDE.
4. **Base code application**: The system applies the generated configuration to the appropriate base code from the Code Bank. This base code is designed to be flexible and can handle any PDE when provided with the correct configuration.
5. **Memory Tree-guided exploration**: Following the MTRS approach described in Section 3.4, the system iteratively refines the hyperparameter configuration.
6. **Database update**: Successful configurations are added to the database to benefit future queries.
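Step 2 of the pipeline above amounts to a weighted nearest-neighbour lookup over encoded PDE features. A minimal sketch follows; the feature encoding, the weights, and the database entries are illustrative placeholders rather than PGKR's actual scheme.

```python
import math

def weighted_cosine(a, b, w):
    """Cosine similarity after scaling each feature dimension by its weight."""
    aw = [x * wi for x, wi in zip(a, w)]
    bw = [x * wi for x, wi in zip(b, w)]
    dot = sum(x * y for x, y in zip(aw, bw))
    na = math.sqrt(sum(x * x for x in aw))
    nb = math.sqrt(sum(x * x for x in bw))
    return dot / (na * nb + 1e-12)

def retrieve_top_k(query, database, weights, k=3):
    """Return the k most similar PDE entries with their stored best configs."""
    scored = [(weighted_cosine(query, feat, weights), name, cfg)
              for name, (feat, cfg) in database.items()]
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:k]

# Illustrative encodings: [nonlinearity, time-dependence, scaled dimension].
db = {
    "heat_2d":    ([0.0, 1.0, 0.66], {"opt": "lbfgs", "act": "tanh"}),
    "burgers_1d": ([1.0, 1.0, 0.33], {"opt": "adam", "act": "swish"}),
    "poisson_2d": ([0.0, 0.0, 0.66], {"opt": "multiadam", "act": "sin"}),
}
query = [0.0, 1.0, 1.0]   # e.g. a new time-dependent diffusion problem
w = [2.0, 2.0, 1.0]       # assumed per-feature weights
top = retrieve_top_k(query, db, w, k=2)
```

The retrieved configurations (`top`) would then seed the planner's initial YAML generation in step 3.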
## Additional Benchmark Datasets
We appreciate the suggestion to evaluate on additional benchmark datasets. As shown in the table below, we conducted experiments on two representative PDEs from the PDEBench dataset: Reaction-Diffusion 1D and Darcy Flow 2D. PINNsAgent consistently outperforms both Random Search and NAS-PINNs on these additional PDEs, demonstrating its broader applicability beyond the PINNacle benchmark.
| **PDE** | **PINNsAgent** | **Random Search** |
|---------|---------------|-------------------|
| Reaction-Diffusion 1D | 3.75E-08 ± 1.36E-08 | 4.45E-05 ± 2.84E-05 |
| Darcy Flow 2D | 5.31E-06 ± 6.60E-08 | 9.22E-06 ± 2.31E-07 |
These additional experiments provide more comprehensive evidence of PINNsAgent's effectiveness and generalizability across different PDE types and benchmarks. | Summary: This paper introduces PINNsAgent, a framework that uses large language models (LLMs) to automate the development and optimization of Physics-Informed Neural Networks (PINNs) for solving partial differential equations (PDEs). The key components are:
1. Physics-Guided Knowledge Replay (PGKR) – encodes PDE characteristics and associated PINN configurations into a structured format to enable knowledge transfer between similar PDEs.
2. Memory Tree Reasoning Strategy (MTRS) – abstracts the hyperparameter optimization process as MCTS.
The framework is evaluated on 14 benchmark PDEs and demonstrates strong performance compared to random search and Bayesian optimization.
Claims And Evidence: Convincing evidence is provided for the benefits of PINNsAGents for PINN hyperparameter search:
* Performance improvements are demonstrated through comprehensive experiments on 14 diverse PDEs. PINNsAgent consistently outperforms random search and Bayesian optimization on most benchmark problems. Results are also averaged over 10 runs to account for randomness. It also outperforms the best PINNacle MSEs on a number of tasks, with especially promising results on Heat-ND.
* Ablation studies validate the contributions of both PGKR and MTRS components.
Methods And Evaluation Criteria: The methods and evaluation approach appear mostly sound, though more details about the evaluation setup and baselines would be helpful (see questions below):
* The benchmark set (PINNacle) includes a diverse range of PDEs with varying characteristics. Furthermore, this is a standard benchmark used in the PINN literature.
* Performance is measured using a reasonable metric (MSE), although it would also be helpful to report relative error metrics.
* Baselines include both simple (random search) and sophisticated (Bayesian optimization) approaches. However, details about the baseline methods, including compute budget vs. performance, seem to be missing.
Theoretical Claims: N/A, no theoretical claims made.
Experimental Designs Or Analyses: The experimental analyses seem sound, though details about the baselines are missing. See questions below.
Supplementary Material: Yes. The supplementary material contains details about 1. the encoding method and pseudocode for PGKR, 2. descriptions of the PDE benchmark, 3. prompts for each part of the PINNsAgent pipeline.
Relation To Broader Scientific Literature: The paper builds on several research directions:
* Physics-informed neural networks (PINNs) for solving PDEs
* LLM-based automation of machine learning pipelines
* Neural architecture search / AutoML for scientific computing
Prior work finds that the performance of PINNs depends crucially on the choice of architecture (e.g. activation function) and optimizer. This positions hyperparameter optimization for PINNs as an important problem for improving the performance of ML methods on PDEs.
Essential References Not Discussed: The discussion of related work is generally thorough. A couple missing references regarding the intersection of LLMs and Bayesian optimization or Neural Architecture Search are:
* Large Language Models to Enhance Bayesian Optimization. Liu et al, ICLR 2024.
* EvoPrompting: Language Models for Code-Level Neural Architecture Search. Chen et al, NeurIPS 2023.
* LLMatic: Neural Architecture Search via Large Language Models and Quality Diversity Optimization. Nasir et al, GECCO 2024.
Other Strengths And Weaknesses: n/a
Other Comments Or Suggestions: n/a
Questions For Authors: 1. Standard hyperparameter optimization papers show the tradeoffs between performance and time/compute/number of HPO iterations. However, these evaluations seem to be missing, and the “random search” and “Bayesian search” baselines seem to only report one point along this performance-cost tradeoff curve. Could the authors describe the baselines in more detail, including number of iterations or compute cost? This is crucial for understanding the main results (Table 2).
* Furthermore, could the authors clarify the total computational cost of running PINNsAgent compared to baselines? This is important for understanding practical trade-offs.
2. How much do the optimal hyperparameters vary across the different PDEs, and how much do they depend on all of the PDE features encoded within the PGKR scheme? This information would clarify whether an HPO method like PINNsAgent is necessary for different PDEs, vs. if existing PINN architectures/hyperparameters are simply undertuned.
* Intuitively, I might expect the optimal hyperparameters should depend mostly on a few features, e.g. equation type and time-dependence. Did the authors conduct any investigation about which features are the most important?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # Reply to Reviewer gzrR
We sincerely thank the reviewer for their thoughtful assessment and insightful questions. We address each point below.
## Baseline Details and Computational Costs
We ran all experiments on 8 NVIDIA V100 (32GB) GPUs, providing sufficient computational resources to complete all benchmark PDEs in the PINNacle dataset.
Compared to traditional methods, the additional computational overhead in PINNsAgent comes primarily from LLM inference and PGKR retrieval processes. We also observed that the LLM tends to recommend slightly larger (though still reasonable) network architectures, which marginally increases training time. To address the reviewer's concern about computational costs, we conducted additional experiments comparing all methods (Random Search, Bayesian Search, and PINNsAgent) with 5 iterations each. As shown in the table below, PINNsAgent introduces only modest computational overhead (approximately 8.2% compared to Random Search) while delivering substantially better performance.
| **Method** | **Average Computation Time (s)** |
|------------|-----------------------------------|
| Random Search | 3462.24 ± 2631.55 |
| Bayesian Search | 3598.47 ± 2792.83 |
| PINNsAgent | 3747.78 ± 2965.62 |
## PDE-Dependent Hyperparameter Sensitivity
Regarding the reviewer's question about hyperparameter variation across PDEs, we found that PINNs exhibit significant hyperparameter sensitivity compared to neural operator methods, even with identical architectures. We found clear patterns in optimal hyperparameters across different PDE types:
Different PDE types consistently favor certain optimizers. For time-dependent diffusion equations (Heat series: HeatND, Heat2D_Multiscale, Heat2D_VaryingCoef), the LBFGS optimizer significantly outperforms other choices. For example, on HeatND, LBFGS (5.64E-08) outperforms MultiAdam (7.93E-08) by approximately 29%. This advantage persists even with increasing problem dimensionality, suggesting that the diffusion mechanism, rather than dimensionality, drives optimizer selection. For static problems (Poisson class), MultiAdam performs better on complex geometries.
The second most important hyperparameter is the activation function. Problems with smooth solutions (Heat, Poisson) benefit most from tanh and sin functions, while problems with sharp gradient changes (Burgers, NS) perform better with gaussian and swish activations. On NS2D_LidDriven, for instance, gaussian (9.89E-06) outperforms sin (1.27E-05).
Network size only needs to be reasonable; excessively large networks do not significantly improve PINN performance but substantially increase computational burden. We also found that advanced architectures like LAAF and GAAF (Jagtap et al., 2020) do not consistently deliver the best performance--the original PINN architecture often proves more robust across different PDEs. These findings validate our approach's ability to identify nuanced relationships through the PGKR framework.
## Additional References
We thank the reviewer for suggesting the additional references. We will incorporate these papers in our discussion of related work. | Summary: The paper introduces PINNsAgent, an automated framework using LLM to design and optimize PINNs for solving PDEs. It addresses the limitations of manual hyperparameter tuning by incorporating two novel methods: Physics-Guided Knowledge Replay for efficient knowledge transfer from past experiments, and the Memory Tree Reasoning Strategy for systematic hyperparameter optimization. Experiments on various PDEs demonstrate that PINNsAgent outperforms traditional approaches.
## update after rebuttal
After reviewing the other reviewers' comments and the corresponding responses, the reviewer keeps the current rating.
Claims And Evidence: The authors' experiments partially provide empirical evidence supporting their claims. However, there are several limitations:
1. The experiments can provide empirical evidence for the claims to some extent, but the results do not conclusively show that the proposed approach successfully learns or transfers domain-specific knowledge. It remains unclear whether the observed improvements come from learning genuine transferable knowledge or merely from exhaustive hyperparameter search.
2. There is insufficient theoretical analysis and empirical evidence to substantiate that the information encoded by PGKR is both effective and transferable across different PDEs.
3. Additionally, the authors have not provided adequate theoretical justification or empirical validation for the effectiveness of representing the hyperparameter tuning problem explicitly as a tree-structured search.
Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense for the problem. However, it is worth pointing out that a large portion of the proposed method appears to be a combination of existing methods, while acknowledging that some new aspects are added to the existing work.
Theoretical Claims: There is no theoretical claim in this paper.
Experimental Designs Or Analyses: Although the authors have compared their proposed methods with two baselines and conducted the ablation studies, the experiments are insufficient.
First, the paper lacks detailed analysis or discussion regarding the computational cost associated with implementing and running the proposed framework, especially given the iterative nature of the Memory Tree method. It could be extremely time consuming when the search space for hyperparameters is large.
Second, there is no experiments to verify that the PINNs actually can learns or transfers domain-specific knowledge from PINNsAgent.
Third, there is insufficient theoretical analysis and empirical evidence to substantiate that the information encoded by PGKR is both effective and transferable across different PDEs.
Fourth, the authors have not provided adequate theoretical justification or empirical validation for the effectiveness of representing the hyperparameter tuning problem explicitly as a tree-structured search.
Supplementary Material: Yes, PDE encoding, datasets, pseudocodes, and prompt design parts.
Relation To Broader Scientific Literature: As it primarily addresses hyperparameter tuning specifically for PINNs, there seem to be some contributions to the scientific community interested in using PINNs. However, it is less clear whether the proposed method has been thoroughly studied (e.g., computational costs; accessibility -- the current version uses GPT-4; variants of PINN architectures) and whether it could be generally applicable to other relevant methods (such as neural operators).
Essential References Not Discussed: No. Although there are some PINNs-related papers that could be discussed here in this paper, the main focus of the paper is not the PINN itself, but hyper-parameter tuning. Regarding the hyper-parameter tuning, not extensive, but essential papers seem to be included.
Other Strengths And Weaknesses: S1: The authors propose their own framework to reduce manual effort and reliance on expert knowledge for solving PDE.
S2: The authors conduct some experiments with the proposed methods.
W1: This paper's technical contributions do not seem to reach the bar.
W2: The scope of this paper is rather limited as it is only designed for hyperparameter tuning for PINNs (only the vanilla PINNs architecture, which is known to suffer from many technical issues e.g., spectral bias in PINNs failure mode by Krishnapriyan et al, NeurIPS 2021).
W3: This paper lacks theoretical evidence, and the experiments are insufficient to verify the effectiveness of their proposed method.
W4: Different components lack motivations. For instance, the authors should explain why they formulate the searching for hyperparameters as a tree structure, and why MCTS process is a potential optimal choice.
Other Comments Or Suggestions: Some minor issues:
- The authors should provide the definition of L_BC in the Preliminaries.
- In line 310 “Figure ??” should be “Figure 2”.
Questions For Authors: The major questions are relevant to the points raised in "Experimental Designs Or Analyses". Including those, other points raised in the above sections (weaknesses, etc) would be the questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: # Reply to Reviewer znat
We appreciate the reviewer's thorough assessment of our paper. Below, we address the key concerns raised:
## Methods and Evaluation Criteria
1. **Knowledge Transfer Evidence**: The reviewer questions whether improvements come from learning genuine transferable knowledge or merely from exhaustive hyperparameter search. As shown in our experimental setup (Section 4.1), while baseline methods (Bayesian optimization and random search) required 10 iterations to reach their best performance, PINNsAgent achieved superior results in just 5 iterations across most PDEs. This 50% reduction in required iterations demonstrates efficient knowledge transfer rather than exhaustive search.
2. **PGKR's Effectiveness**: Our ablation study in Table 2 provides clear empirical evidence of PGKR's effectiveness. When PGKR is removed ("w/o PGKR"), performance degrades significantly across most PDEs. This direct comparison isolates PGKR's contribution to PINNsAgent's performance, demonstrating that the knowledge encoded and retrieved by PGKR substantially improves PDE solving capabilities.
While our approach builds on existing techniques, its novelty lies in the unique integration and adaptation of these methods specifically for PINNs optimization. Our multi-agent LLM framework represents the first comprehensive system to fully automate PINNs development without expert intervention.
## Experimental Designs and Analyses
We appreciate the reviewer's feedback on our experimental design and have conducted additional analyses to address these concerns:
### Computational Cost Analysis
| **Method** | **Average Computation Time (s)** |
|------------|--------------------------------|
| Random Search | 3462.24 ± 2631.55 |
| Bayesian Search | 3598.47 ± 2792.83 |
| PINNsAgent | 3747.78 ± 2965.62 |
Regarding computational cost concerns, we conducted additional analysis across all 14 benchmark PDEs, with results shown in the table. The total computation time for PINNsAgent is only about 8.2% higher than random search and 4.1% higher than Bayesian optimization. This additional cost primarily comes from LLM inference and PGKR retrieval processes.
### Knowledge Transfer Verification
To demonstrate broader applicability, we also evaluated PINNsAgent on the PDEBench dataset. As shown in the table below, PINNsAgent consistently outperforms both baseline methods on these additional PDEs.
| **PDE** | **PINNsAgent** | **Random Search** |
|---------|---------------|-------------------|
| Reaction-Diffusion 1D | 3.75E-08 ± 1.36E-08 | 4.45E-05 ± 2.84E-05 |
| Darcy Flow 2D | 5.31E-06 ± 6.60E-08 | 9.22E-06 ± 2.31E-07 |
**Relation To Broader Scientific Literature**
We appreciate the reviewer's insights. Our focus on PINNs is deliberate as these models are particularly sensitive to hyperparameter choices -- far more than standard neural networks or even neural operators. Small configuration changes in PINNs can lead to order-of-magnitude differences in accuracy, making automated tuning especially valuable for this domain, an observation also confirmed by Wang et al. 2024 in their NAS-PINN work.
* [1] Wang, Yifan, and Linlin Zhong. "NAS-PINN: Neural architecture search-guided physics-informed neural network for solving PDEs." *Journal of Computational Physics* 496 (2024): 112603.
## Essential References Not Discussed
We agree with the reviewer's assessment that our paper's primary focus is on LLM-enabled AutoML for hyperparameter tuning rather than PINNs methodology itself. As noted, we have already included the essential references related to this focus in Section 2.3. We will enhance the literature review by incorporating additional relevant references on hyperparameter tuning approaches to provide a more comprehensive context for our work.
## Other issues
1. **Definition of L_BC:** We agree that L_BC should be formally defined in the Preliminaries section. We will add the boundary condition loss definition alongside the other loss components to ensure completeness.
$$ L_{BC} = \frac{1}{N_{BC}} \sum_{i=1}^{N_{BC}} \left| u_{\theta}(\mathbf{x}_i^{BC}) - u_{BC}(\mathbf{x}_i^{BC}) \right|^2 $$
2. **Figure reference:** Thank you for catching this error. We will correct "Figure ??" to "Figure 2" in line 310.
3. **Scope and PINNs architecture:** Our framework is designed to be flexible regarding model architecture and training strategies. As mentioned in Section 4.2, our configuration files allow users to specify various PINN variants and training techniques. This includes addressing known issues like spectral bias (Krishnapriyan et al., NeurIPS 2021) through techniques such as curriculum learning, adaptive weighting, and alternative network architectures. The LLM agents can select and configure these options based on the specific PDE characteristics. We will clarify this flexibility more explicitly in the revised manuscript to address this concern. | Summary: In this work, the authors introduce PINNsAgent, a surrogation framework that leverages large language models (LLMs) enabling efficient knowledge transfer from solved PDEs to similar problems. By leveraging LLMs and exploration strategies, PINNsAgent enhances the automation and efficiency of PINNs-based solutions. PINNsAgent is evaluated on 14 benchmark PDEs, demonstrating its effectiveness in automating the surrogation process.
Claims And Evidence: 1. PINNsAgent is an agent that enhances the process of automatically searching for the best hyperparameter settings for a PDE, which may be useful for non-expert users of PINNs.
2. This work is mainly engineering-oriented, lacking the depth suitable for ICML. The PGKR module is simple, and MTRS is a straightforward application of the conventional Monte Carlo Tree Search technique.
3. This work utilizes the LLM's output hyperparameters to determine the action. Are the hyperparameters output by the LLM reliable? Only the prompt is given in the appendix. It is recommended to give examples or experiments showing the effectiveness of the LLM's output. What happens if we input some rare PDE configurations to the LLM? Also, only GPT-4 is used in the experiments; an ablation study should be conducted to show the effect of using different LLMs.
4. In the MTRS module, there is eq. 7 to select the best action for a state, and "The planner, serving as the policy model πpl(a|s), uses the distribution of the LLM's output to determine the following action to take". These seem confusing. Please describe them in detail, including how the best action is selected and how the distribution of the LLM's output is used. Is the LLM used for exploration?
5. For the training of MCTS, what is the initial tree? Does the policy model need update?
Methods And Evaluation Criteria: Standard benchmark PDEs are used in experiments.
Theoretical Claims: no theoretical part.
Experimental Designs Or Analyses: Only GPT-4 is used in experiments, ablation study should be conducted to show the effect of using different LLMs.
Supplementary Material: All parts.
Relation To Broader Scientific Literature: The proposed LLM-based agent to automatically search for suitable hyperparameters for PINN training seems to be new.
Essential References Not Discussed: no
Other Strengths And Weaknesses: no
Other Comments Or Suggestions: typo: line 310, Figure ??
Questions For Authors: no
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: # Reply to Reviewer Yfa3
We sincerely thank the reviewer for their detailed assessment of our paper. We address each concern below:
## Novelty and Technical Depth
Our multi-agent LLM framework is the first comprehensive system to fully automate PINNs development without expert intervention. In our pipeline, the Physics-Guided Knowledge Replay mechanism introduces a novel physics-informed approach to knowledge transfer across different PDEs, while our Memory Tree Reasoning Strategy offers a structured exploration method that captures the hierarchical dependencies unique to PINNs design.
## LLM Output Reliability and Ablation Studies
Regarding the reliability of LLM-generated hyperparameters, our extensive experiments in Tables 2 and 3 demonstrate that our approach significantly outperforms traditional methods like Random Search and Bayesian Optimization across diverse PDE types. This empirical evidence confirms that our LLM-based reasoning framework adds substantial value to the hyperparameter optimization process for PINNs.
To address the reviewer's concern about different LLM performance, we conducted additional experiments comparing GPT-3.5 and GPT-4. The results show that while both models improve over traditional methods, GPT-4 achieves not only better average performance but also significantly lower variance (±8.47E-02) compared to GPT-3.5 (±2.03E-01), demonstrating more consistent and reliable reasoning capabilities.
**Performance comparison with different LLMs (Average MSE across 12 PDEs)**
| **Method** | **Average MSE** |
| -------------------- | -------------------- |
| Random Search | 6.13E-01 ± 1.49E-01 |
| Bayesian Search | 5.88E-01 ± 1.86E-01 |
| PINNsAgent (GPT-3.5) | 3.89E-01 ± 2.03E-01 |
| PINNsAgent (GPT-4) | 3.52E-01 ± 8.47E-02 |
## Handling Rare PDE Configurations
For rare PDE configurations, PINNsAgent leverages both the PGKR mechanism and the LLM's reasoning abilities. The PINNacle benchmark used in our evaluation includes a diverse range of PDEs with varying characteristics, including some with complex boundary conditions and multi-scale phenomena. Our consistent performance across this diverse set demonstrates the framework's robustness to different PDE types.
When encountering a completely new PDE type, the PGKR component retrieves the most similar (though not identical) PDEs from the database, and the LLM uses these as starting points for reasoning about appropriate hyperparameters. The iterative refinement process through MTRS then allows the system to adapt these initial configurations to the specific requirements of the new PDE.
## MTRS Implementation Details
We appreciate the request for clarification regarding Equation 7 and the MTRS implementation. In our framework:
1. The initial tree is constructed based on the LLM's knowledge and the most similar PDEs retrieved by PGKR. For each state $s$ (representing a partial hyperparameter configuration), the LLM generates a probability distribution over possible actions $a$ (hyperparameter choices).
2. The policy model $\pi_{pl}(a|s)$ is implemented as the LLM itself. It uses both its pre-trained knowledge and the retrieved similar PDE configurations to generate probabilities for different hyperparameter choices. These probabilities are then used in the UCT formula (Equation 7) to balance exploration and exploitation.
3. The tree is expanded using the standard MCTS process: selection, expansion, simulation, and backpropagation. The key difference is that our expansion and simulation steps are guided by the LLM's output probabilities rather than random sampling or a separately trained policy network.
4. The policy model does not require separate updating since the LLM adapts its recommendations based on the feedback from previous iterations, effectively implementing an adaptive policy.
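A toy version of this control flow is sketched below, with `llm_prior` and `train_pinn` as stubs standing in for the LLM query and an actual PINN training run; the real selection uses the UCT formula and a full tree, whereas this sketch is greedy and flat for brevity.

```python
def llm_prior(state, feedback):
    """Stub for the LLM policy pi_pl(a|s): a distribution over the next
    hyperparameter choice. The real system queries the LLM with the state
    and last-iteration feedback; the prompt, not any weight update, carries
    the adaptation. This lookup table is purely illustrative."""
    if feedback is not None and feedback > 0.1:   # previous MSE was poor
        return {"lbfgs": 0.7, "adam": 0.3}
    return {"adam": 0.5, "lbfgs": 0.5}

def train_pinn(config):
    """Stub simulation step: pretend LBFGS reaches lower MSE on this PDE."""
    return 0.05 if config["optimizer"] == "lbfgs" else 0.2

def mcts_round(state, feedback, history):
    probs = llm_prior(state, feedback)       # expansion guided by the prior
    action = max(probs, key=probs.get)
    config = dict(state, optimizer=action)
    mse = train_pinn(config)                 # simulation
    history.append((action, mse))            # backpropagation (flattened)
    return mse

history, mse = [], None
for _ in range(2):
    mse = mcts_round({}, mse, history)
```

The point of the sketch is item 4 above: the policy "improves" between rounds purely because the new feedback changes the distribution the stub returns, with no parameter updates anywhere.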
## Other Issues
Thank you for pointing out the typo in line 310. We will correct "Figure ??" to "Figure 2" in the revised manuscript.
We believe these clarifications address the reviewer's concerns and highlight the technical novelty and depth of our approach. The empirical results in Tables 2 and 3, along with the additional LLM comparison in Table 4, provide strong evidence of the effectiveness of our framework. | null | null | null | null | null | null |
Universal Length Generalization with Turing Programs | Accept (poster) | Summary: This work proposes Turing Program, which is a CoT strategy that decomposes an algorithmic task into steps mimicking the computation of a Turing Machine. The work showed that by using Turing Programs, they obtain robust length generalization on a range of algorithmic tasks: addition, multiplication and in-context SGD.
Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods make sense for the problem or application at hand.
Theoretical Claims: There are no theoretical claims in the work.
Experimental Designs Or Analyses: I have checked the experimental designs or analyses.
Supplementary Material: I reviewed the supplementary materials, including the Appendix.
Relation To Broader Scientific Literature: This work proposes a special data format for length generalization/extrapolation.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Weakness:
* **The core weakness is the methodology of the work**. In this work, the authors propose a special data format for algorithmic tasks to improve length generalization performance. Though the performance is promising, the theoretical analysis may be missing.
* **More detailed analysis is needed to support why such a data format works for Transformer length extrapolation**. After reading the work, I am not sure why such a special data format is related to length extrapolation. Is there any particular explanation?
* **Why could the method be used for language modeling**? The paper claims that the method could be used for any algorithmic task, but we are curious whether the Turing Program could be used for language modeling.
Therefore, though the method works well for length extrapolation, it still needs more reasoning to explain why it works.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback. We respond to the main points raised by the reviewer below.
**Theoretical analysis may be missed:** If the reviewer can further explain what is lacking in terms of the theoretical analysis, we would be glad to elaborate and add it to the revision. From our perspective, Theorem 4.2 is the main theoretical result: we show that a Transformer of some fixed size can simulate Turing Programs up to a length that is exponential in its size, which suggests that "small" Transformers can execute very long programs. This expressivity result is connected to length generalization because it is a necessary (but not sufficient) condition for length generalization to happen with Turing Programs (not sufficient because gradient descent might not learn this constructed solution). Therefore, we feel it provides theoretical backing for the empirical success we see on algorithmic tasks.
**Why special data format works:** We should have made it clearer why the Turing Program data format is useful for length generalization. When we write out a task in this format, the task gets broken down into two subtasks: 1. modifying the tape content at a single position and 2. copying the tape content. The modification only requires the token at the head position and the positions where the Turing machine state is located, so it is relatively independent of the overall length of the input. Copying is length dependent, but a past work (https://arxiv.org/abs/2402.01032) already showed that Hard-ALiBi could achieve length generalization on copying. Thus, we expect Turing Programs combined with Hard-ALiBi to deliver length generalization results.
We explained this understanding in Section 2.2 of the paper (quoted below for ease of reading), but can add further explanation in the revision.
“We begin with the observation that the scratchpad technique can be realized as an iterative sequence of copying operations, where at each iteration the input is slightly modified. Building on previous works showing that with the right positional encoding, transformers can achieve length generalization on the copying operation, we hypothesize that combining the scratchpad technique with a favorable positional encoding can unlock length generalization capabilities.”
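To make the copy-then-modify view concrete, the sketch below serializes the trace of a toy Turing machine: each chain-of-thought block is a near-verbatim copy of the previous tape with a single cell edited at the head. The delimiter and state-token format here is our own illustration, not the paper's exact tokenization.

```python
def serialize(tape, head, state):
    """One CoT block: the full tape with the machine state marker inserted
    at the head position -- each step copies the tape and edits one cell."""
    return "".join(tape[:head]) + f"[{state}]" + "".join(tape[head:])

def step(tape, head, state, rules):
    """Apply one transition: (state, symbol) -> (new state, write, move)."""
    new_state, write, move = rules[(state, tape[head])]
    tape = tape[:head] + [write] + tape[head + 1:]
    return tape, head + move, new_state

# Toy machine: scan right, turning 'a' into 'b', halting at the end marker '$'.
rules = {("q0", "a"): ("q0", "b", +1), ("q0", "$"): ("halt", "$", 0)}
tape, head, state = list("aaa$"), 0, "q0"
trace = [serialize(tape, head, state)]
while state != "halt":
    tape, head, state = step(tape, head, state, rules)
    trace.append(serialize(tape, head, state))
```

Printing `trace` makes the decomposition visible: successive blocks such as `[q0]aaa$` and `b[q0]aa$` differ in exactly one cell plus the moved state marker, which is the length-independent modification; everything else is the length-dependent copy that Hard-ALiBi handles.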
**Could be used for language modeling?:** Our method currently is most suited for algorithmic tasks, i.e. problems that can be decomposed into step-by-step solutions. Our work focuses on algorithmic problems that have closed and known solutions (e.g. arithmetic problems), similarly to many existing works on length generalization (e.g. https://arxiv.org/abs/2402.09371). However, we believe that this method can be adapted to a broader set of natural language problems that are of algorithmic nature, such as mathematical reasoning problems, and leave this study to future work.
We thank you again for the helpful comments on our paper. We are happy to provide further clarification, and would appreciate it if you would raise your score if you believe that your concerns were addressed.
---
Rebuttal Comment 1.1:
Comment: As the current work is not validated on language modeling and is designed specifically for algorithmic tasks, I am afraid that the evaluation is not adequate. Moreover, the authors mention *Copying is length dependent, but a past work (https://arxiv.org/abs/2402.01032) already showed that Hard-ALiBi could achieve length-generalization on copying*. However, we have to note that ALiBi/SWA (Sliding Window Attention) can still achieve a "cheating" form of length extrapolation: they effectively abandon long-distance tokens in favor of local tokens, and thereby cheat their way to length extrapolation ability.
**To improve the score, the following is necessary**
* **Choice 1: Try on real benchmark but not simulation, whatever the real benchmark is**
* **Choice 2: Use LongPPL [1] to evaluate the PPL on the addition, multiplication, and in-context SGD tasks**
Reference:
[1] Fang, L., Wang, Y., Liu, Z., Zhang, C., Jegelka, S., Gao, J., ... & Wang, Y. (2024). What is Wrong with Perplexity for Long-context Language Modeling?. arXiv preprint arXiv:2410.23771.
---
Reply to Comment 1.1.1:
Comment: We will work on adding length generalization experiments on real mathematical reasoning benchmarks to the final version of the paper. However, given the short timeframe, we will not have results before the discussion period ends; we politely ask the reviewer to take this into consideration if possible.
Our plan is to use datasets like gsm8k. Each block of the Turing Program CoT consists of a copy of the original question and a line of calculation leading to the final answer. We will evaluate this approach on whether the model can generalize to problems that require more steps of calculation than those it has seen in training. | Summary: The paper proposes a new method for designing chain-of-thought supervision for algorithmic tasks, termed "Turing Programs". Essentially, the state of a Turing machine (including the tape, head position, and internal state) before and after each transition are serialized and represented in a chain of thought. The authors explore how training with this trace supervision encoded in chain of thoughts improves length generalization on several tasks: addition, multiplication, and SGD on linear regression. This trace supervision combined with HAlibi positional encodings is shown to exhibit significant (although imperfect) length generalization on these tasks.
The paper also includes a theorem that relates to a constructive demonstration (via RASP) that a Transformer can emulate a Turing machine using the proposed encoding (with several simplifying assumptions).
## Update after rebuttal
I think clarifying the main claims of the paper would be an improvement. I also think that if the main claims relate to the empirical performance of the proposed chain-of-thought format, stronger CoT baselines such as the ones proposed would be useful to support the claim.
I like the general idea of the paper, and I think if the pledged changes were implemented well and the empirical results still support the main claims, I would likely update my score to a 3, but it is difficult to verify this given that the pledged changes are somewhat significant, and affect the clarity of and support for the main claims. Therefore I think the paper would benefit from resubmission to a future conference or workshop with the proposed changes. However, I don't want to block acceptance if the other reviewers have a different opinion.
Claims And Evidence: The key claims were a bit unclear to me.
There are specific empirical claims related to training with trace supervision in the form "Turing Programs" improving length generalization over training without such supervision, e.g. for addition and multiplication, which appear to be well supported (although could be strengthened by improving the baselines, e.g. considering other chain-of-thought formats).
However, the title of the paper primes the reader to expect a definition of "Universal Length Generalization", and some related result for "Turing Programs", but the definition and its connection to "Turing Programs" were unclear to me. Maybe these are a bit nitpicky, but:
1. The paper does not seem to establish new expressivity results for Transformers. "Turing Programs" are a specific convention for chain-of-thought sequences, and therefore do not formally extend the expressivity of Transformers. The expressivity of Transformers with chain-of-thought has been studied by prior work (e.g. the cited https://arxiv.org/abs/2310.07923, but also https://arxiv.org/abs/2406.14197 and https://arxiv.org/abs/2402.12875). A key point of complexity in these results is indexing and attending to evolving register states, as well as issues around finite precision. The proposed RASP program seems to avoid this issue through the non-repeated n-gram restriction, and does not restrict tokens to bounded integers, if I understood correctly. It's therefore not clear how this extends our understanding of Transformer expressivity. If this is a key contribution, it would be good to discuss the result in the context of prior work.
2. The paper does not seem to establish a broad class of new learnability results for Transformers. It is already previously known that training with additional chain-of-thought supervision can improve generalization, especially when such chain-of-thoughts effectively "unroll" some dynamic loop (e.g. https://arxiv.org/abs/2310.16028 and https://arxiv.org/abs/2404.15758). "Turing Programs" are simply one specific convention for representing this information, so it's not clear that the result is categorically novel, even though the authors show that it is empirically effective for several tasks. The "universal" claim seems to relate to the fact that any algorithmic task can in theory be encoded as a Turing machine, however the theoretical results limit the set of Turing machines that can be emulated (e.g. the non-repeated n-gram restriction). Additionally, there is no automatic conversion from a given algorithmic task to a Turing Program. Therefore, it's unclear formally what the claim to "universality" is, and why "Turing Programs" have this property in a way that other schemes for encoding chain-of-thoughts do not. For example, chain-of-thoughts in natural language are also "universal" in the sense that they can be used for any task.
In summary: I think the key claims could be clarified for the reader. If the main claims are simply empirical results for the tasks studied, that is fine but should be clearer. If there is some qualitative property of "Turing Programs" that other chain-of-though conventions lack, this property should be more clearly formalized. If there is some new expressivity result, the difference from prior work should be emphasized.
Methods And Evaluation Criteria: The tasks seem reasonable, although the baselines could be improved. The authors could compare against other conventions for "unrolling" the underlying computation and representing it in a serialized chain-of-thought.
Theoretical Claims: See above.
Experimental Designs Or Analyses: See above concern related to baselines.
Supplementary Material: No, I did not review the supplement in detail.
Relation To Broader Scientific Literature: I think the theoretical claims could be better contextualized in prior work, per above comments.
Essential References Not Discussed: Papers with theoretical results on Transformer decoder expressibility: https://arxiv.org/abs/2406.14197 and https://arxiv.org/abs/2402.12875
This paper discusses emulating Turing machines in a Transformer (with external memory): https://arxiv.org/abs/2301.04589
Other Strengths And Weaknesses: Strengths:
* The paper presents a new scheme for representing unrolled computation traces in a serialized chain-of-thought, inspired by Turing machines.
* The paper shows how training with such traces enables strong length generalization for several tasks, including those where training without trace supervision exhibits minimal length generalization.
* The paper gives a constructive result (via RASP) for how Transformer decoders can emulate a subset of Turing machines.
Weaknesses:
* See confusion around key claims and relation to prior work above.
Other Comments Or Suggestions: nits: Should use \citet in several places.
Questions For Authors: What is the size of `n` for the "non-repeated `n`-gram" constraint of Theorem 4.2? It would be good to formalize this a bit more clearly. Can `n` be chosen, i.e. we just require that there exists some `n` such that there are no repeated `n`-grams? Maybe this is clearer from inspecting the RASP code in the appendix, but would be helpful to clarify in the actual theorem statement.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback. We respond to the main points raised by the reviewer below.
**Key claims of the paper:** we want to emphasize that the key claim of our paper is that Transformers can achieve *length generalization* (generalization to problems longer than the ones observed in the training data) for a large class of problems. That is, we do not argue that we establish novel expressivity results—as you pointed out, the fact that language models can express a large class of functions using chain-of-thought has already been shown in various prior works. Rather, we argue, using a combination of empirical experiments and theoretical observations, that Transformers can extrapolate to longer sequence lengths when provided with chain-of-thought/scratchpad data of a particular format which tracks the step-by-step operation of a “Turing Machine” (the Turing Programs). This connection between the chain-of-thought format and length generalization goes far beyond what has been shown in prior works (including the works that you mentioned), which focus on a narrow set of algorithmic tasks. Instead, our results demonstrate the potential of “universal length generalization” — length generalization for any algorithmic tasks. While we do not establish this formally (due to limitations of our theoretical analysis), we believe that the combination of our extensive experimental results and theoretical insights suggests that Transformers can length-generalize on a far larger class of problems than previously acknowledged, a result that we see as the main novel contribution of our work. Therefore, we believe that our theoretical result should be viewed in the broader scope of the paper, coupled with the experiments on length generalization, and not as an independent expressivity result. Thank you for pointing this out, and we will clarify this in the final version of the manuscript.
**Comparison to other baselines:** throughout the paper, we compare our method to training without chain-of-thought, which displays poor length generalization performance. However, we agree that adding a baseline where we train with another chain-of-thought format can help establish our claims, and we plan to run additional baseline experiments and add them to the final version of the paper. Specifically, we will compare our scratchpad technique to more minimal chain-of-thought, for example one that tracks only the current number and the carry digit in the case of multi-digit addition.
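To make the planned baseline concrete, a minimal carry-tracking trace for addition could look like this (a hypothetical sketch; the exact trace format we will use may differ):

```python
def carry_cot(a, b):
    """Hypothetical minimal chain-of-thought for multi-digit addition that
    tracks only the current digit sum and the carry, least-significant first."""
    xs, ys = str(a)[::-1], str(b)[::-1]
    carry, steps, digits = 0, [], []
    for i in range(max(len(xs), len(ys))):
        da = int(xs[i]) if i < len(xs) else 0
        db = int(ys[i]) if i < len(ys) else 0
        s = da + db + carry
        digits.append(s % 10)
        # Each step records incoming digits/carry and the outgoing digit/carry.
        steps.append(f"{da}+{db}+c{carry}={s % 10},c{s // 10}")
        carry = s // 10
    if carry:
        digits.append(carry)
    result = int("".join(map(str, digits))[::-1])
    return steps, result

steps, result = carry_cot(57, 68)
```

Unlike a Turing Program, this trace never re-copies the operands, so each step stays short but carries no explicit copy structure.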
**The choice of n in non-repeated n-grams:** we would like to clarify that Theorem 4.2 holds *for any choice of n.* I.e., for any $n \in \mathbb{N}$ there exists a RASP program whose size (number of lines) grows linearly with the chosen *n* and that satisfies the conditions of the theorem. We will clarify this in the final version of the paper.
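Operationally, the condition says no length-n window of the sequence occurs twice; a minimal checker (illustrative only):

```python
def has_repeated_ngram(tokens, n):
    """Return True iff some contiguous n-gram occurs more than once."""
    seen = set()
    for i in range(len(tokens) - n + 1):
        gram = tuple(tokens[i:i + n])
        if gram in seen:
            return True
        seen.add(gram)
    return False

# "abab" repeats the bigram "ab" but contains no repeated trigram,
# illustrating how enlarging n weakens the restriction.
bigram_repeat = has_repeated_ngram("abab", 2)
```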
**Citations:** we will fix the citations format in the paper.
We thank you again for the helpful comments on our paper. We believe that we answered the main drawbacks raised in the review (in particular, about the key claim of the paper), and would appreciate it if you would raise your score if you believe that your concerns were indeed addressed. | Summary: The paper tackles the challenge of length generalization in transformer models—the ability to extrapolate from short training sequences to test sequences longer. The main contribution is Turing Programs, a novel scratchpad strategy inspired by Turing machine computations. In this framework, an algorithmic task is decomposed into a series of intermediate “tape” states, where each step is a slightly modified copy of the previous one. Combined with the Hard-ALiBi positional encoding, this approach enables robust length generalization on several algorithmic tasks, including multi-digit addition, multiplication (with both 1-digit and 3-digit operands), and an in-context simulation of SGD for linear regression. The paper also provides theoretical evidence by showing that transformers can implement Turing Programs via construction in the RASP programming language, thereby establishing a formal connection between the proposed method and Turing machine computations.
## update after rebuttal
The authors' response explains the limitations of their work on real data and clarifies their focus on studying position encoding. I find their explanation sufficiently convincing.
Claims And Evidence: - Claims: The paper claims that using Turing Programs enables transformers to generalize to longer sequences on a variety of algorithmic tasks, achieving near-perfect performance (e.g., 98% accuracy on addition when generalizing from 50-digit to 100-digit numbers) and that transformers can theoretically implement these programs.
- Evidence: The experimental results on addition, multiplication, and SGD, along with detailed comparisons of different positional encoding strategies, support these claims—the provided theoretical construction (Theorem 4.2) further bolsters the claim of universality.
Methods And Evaluation Criteria: The chosen methods and evaluation criteria are well-aligned with the problem of length generalization in algorithmic tasks.
Theoretical Claims: The theoretical claim is solid under the assumptions.
Experimental Designs Or Analyses: The experimental design is sound and thorough.
Supplementary Material: I reviewed the extension experimental results and the code.
Relation To Broader Scientific Literature: - This paper extends ideas from previous studies on Hard-ALiBi and related positional encoding strategies and situates its contributions in the context of research on transformer expressiveness and Turing completeness.
- By linking empirical improvements to theoretical constructs (RASP programs), the paper offers a meaningful contribution that advances our understanding of length generalization—a long-standing issue in the literature.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
- The approach is novel, drawing inspiration from Turing machines to design a universal method.
- Empirical results across multiple tasks are strong and convincingly demonstrate improved length generalization.
- The theoretical construction adds depth and rigor to the contributions.
Weaknesses:
- The claim of universality might be overextended given that experiments focus on a limited set of algorithmic tasks.
- Some of the theoretical assumptions (e.g., non-repetition of n-grams) may not hold in more complex or noisy real-world scenarios.
Other Comments Or Suggestions: - Gap Between the Assumptions and Real-World Data:
In practice, natural language often contains repeated patterns and more complex structures, which could affect the copying mechanism critical to the proposed Turing Program approach. While the authors acknowledge this gap, a deeper empirical or analytical exploration of how these assumptions might limit performance on actual data would be valuable.
- Focus on Algorithmic Tasks vs. Real Language Tasks:
Real-world language tasks could greatly benefit from these insights. It would be interesting to see future work that adapts the Turing Programs framework to more complex, natural language applications, thereby testing whether the benefits observed in algorithmic tasks translate to these richer, less structured domains.
- Role of Positional Encoding in Length Generalization:
While the experiments clearly demonstrate that the choice of positional encoding (notably Hard-ALiBi) is crucial for enabling robust length generalization, it is important to recognize that other components of the transformer architecture also play significant roles. For instance, the attention mechanism, model depth, and training protocols can influence how well the model generalizes to longer sequences.
Questions For Authors: See list above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback. We respond to the main points raised by the reviewer below.
**Universality:** We understand that the word “universal” may not accurately capture the nature of our results, and we are open to removing it from the title if the reviewer thinks this will be adequate.
**N-gram repetition**: We agree that relying on the absence of repeated n-grams may be a significant restriction of our theoretical construction, but note that for “random enough” inputs, repeated n-grams become very unlikely for large enough n. Additionally, we suspect that transformers in practice may in fact be utilizing an n-gram matching mechanism (as observed in prior works, e.g. https://arxiv.org/pdf/2402.01032), which means that this limitation reflects a true limitation of transformers and not just a problem in our theoretical construction. In that sense, we believe that our theoretical construction truly captures the nature of the solutions learned by transformers, including their potential limitations.
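The n-gram matching mechanism we refer to can be sketched as a simple lookup (a hypothetical illustration of induction-head-style copying, not a claim about the exact learned circuit):

```python
def ngram_copy_next(tokens, n):
    """Predict the next token during copying by matching the last n tokens
    against an earlier occurrence and emitting the token that followed it.
    Returns None when the n-gram has no earlier occurrence."""
    key = tuple(tokens[-n:])
    for i in range(len(tokens) - n):
        if tuple(tokens[i:i + n]) == key:
            return tokens[i + n]
    return None

# Copying "abcde" after a separator: the last trigram "abc" previously
# continued with "d", so "d" is predicted next.
nxt = ngram_copy_next(list("abcde|abc"), 3)
```

This lookup is exactly what breaks when an n-gram repeats with different continuations, which is why the theoretical restriction may mirror a real limitation.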
**Gap with real-world tasks and data:** This is a legitimate concern. We want to make two points here:
- For this work, our goal is to do a deep evaluation of how length generalization can arise in language models trained on next-token prediction. Following a large body of prior work (see the many papers on addition, such as https://arxiv.org/abs/2402.09371), we conduct these experiments on synthetic tasks. Therefore, we hope this won’t be considered as a fatal weakness for the paper.
- There may be applications to math problems that have good algorithmic solutions. Consider the game of 24 (analyzed in https://arxiv.org/abs/2404.03683): you win by manipulating 4 numbers to reach 24. It can be solved by DFS, which can be encoded into the Turing Program format. It would be interesting to see if including Turing Programs of various math problems into the training data mix can improve length generalization performance. We leave this study to future work.
**Role of Positional Encoding in Length Generalization:** This is a good point. We agree that many factors contribute to length generalization performance, but we chose to focus on the choice of positional encoding as it was pointed out to be a key factor in prior works studying length generalization. Controlled experiments on how other variables affect length generalization should be done in future research.
We thank you again for the helpful comments on our paper. We believe that we answered the main drawbacks raised in the review, and would appreciate it if you would raise your score if you believe that your concerns were indeed addressed. | Summary: This paper introduces Turing Programs, a novel CoT strategy that improves length generalization on a range of algorithmic tasks. By structuring algorithmic tasks as step-by-step computations resembling a Turing Machine, this method achieves robust generalization across tasks like addition, multiplication, and in-context SGD. The authors also provide theoretical proof that transformers can implement Turing Programs.
Claims And Evidence: Yes. The authors conduct experiments on three algorithmic tasks to demonstrate that transformers can achieve length generalization on random Turing Programs.
Methods And Evaluation Criteria: Yes
Theoretical Claims: I did not find any major issues.
Experimental Designs Or Analyses: Yes.
Supplementary Material: No Supplementary Material submitted beyond Appendix.
Relation To Broader Scientific Literature: The paper builds on prior work in length generalization, scratchpad, and other CoT prompting methods.
Essential References Not Discussed: No
Other Strengths And Weaknesses: This paper presents the first results showing non-trivial length generalization on multiplication. The experimental design is generally sound and supports the claim that Turing Programs achieve robust length generalization on three arithmetic tasks. However, since arithmetic problems can be effectively solved by deterministic algorithms or Program-of-Thoughts methods, this may limit the method's generalization to more complex tasks and weaken its practical applicability.
Other Comments Or Suggestions: The abstract and introduction mention that length generalization is a challenge for current LLMs. Since the transformer used in this paper is relatively small (150M), it would be helpful to briefly discuss the potential application of Turing Programs to larger LMs.
Questions For Authors: Can the proposed Turing Programs method provide non-trivial performance improvements on real-world QA tasks, particularly in mathematical reasoning (e.g., MATH benchmark)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback. We respond to the main points raised by the reviewer below.
**Only algorithmic problems:** The reviewer is right to point out that currently the result is not immediately practical. For this work, our goal is to do a deep evaluation of how length generalization can arise in language models trained on next-token prediction. Following a large body of prior work (see the many papers on addition, such as https://arxiv.org/abs/2402.09371), we conduct these experiments on synthetic tasks. Therefore, we hope this won’t be considered as a fatal weakness for the paper.
**Larger LMs:** We expect no reason why the same generalization won’t hold when our technique is applied to larger models. We want to make two points:
- As model size grows, we expect the model to perform complex Turing programs better. (e.g. harder arithmetic tasks). It would be an interesting direction of future research to quantify and see how model size and task difficulty (maybe captured by the RASP program size as observed in https://arxiv.org/pdf/2310.16028) affect length generalization results.
- Turing Programs may be a way to construct CoT for certain math problems (see the next section), which can be used to train large models.
**Improvement in math reasoning:** Although it is unclear how general QA can be improved by the current iteration of Turing Program, there is certainly application to math problems that have good algorithmic solutions. Consider the game of 24 (analyzed in https://arxiv.org/abs/2404.03683): you win by manipulating 4 numbers to reach 24. It can be solved by DFS, which can be encoded into the Turing Program format. It would be interesting to see if including Turing Programs of various math problems into the training data mix can improve length generalization performance. We leave the study of how to use Turing Programs for more realistic math problems to future work.
We thank you again for the helpful comments on our paper. We believe that we answered the main drawbacks raised in the review, and would appreciate it if you would raise your score if you believe that your concerns were indeed addressed. | null | null | null | null | null | null |
H-Tuning: Toward Low-Cost and Efficient ECG-based Cardiovascular Disease Detection with Pre-Trained Models | Accept (poster) | Summary: This paper proposes H-Tuning, a novel framework that reduces the computational cost of fine-tuning large pre-trained models for ECG-based cardiovascular disease detection by integrating mix-order optimization, low-rank adaptation, and layer-dependent tuning. Additionally, it employs knowledge distillation to transfer knowledge to smaller models, significantly reducing inference costs and enabling efficient deployment on low-resource devices while maintaining high diagnostic performance.
## update after rebuttal
I'd keep the current score.
Claims And Evidence: The claims made in the submission are well-supported by clear and convincing evidence. The paper provides quantitative experiments on four publicly available ECG datasets, demonstrating that H-Tuning significantly reduces GPU memory consumption (by 6.34×), inference latency (by 19.8×), and the number of model parameters (by 194.2×) while maintaining comparable performance to standard fine-tuning methods. It also includes comparisons against multiple baseline methods, such as Full Fine-Tuning (Full FT), LoRA, MeZO, and Addax, showing superior efficiency and performance. Furthermore, ablation studies confirm the contribution of different components (e.g., mix-order optimization, gradient refinement, and knowledge distillation), while sensitivity analyses demonstrate the robustness of the approach across different hyperparameter settings. Overall, the empirical results strongly support the paper’s core claims regarding computational efficiency and diagnostic performance.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem of ECG-based cardiovascular disease (CVD) detection using pre-trained models.
Theoretical Claims: This paper does not appear to present formal mathematical proofs.
Experimental Designs Or Analyses: The experimental design is methodologically strong, using diverse datasets, meaningful baselines, and rigorous efficiency metrics.
Supplementary Material: No.
Relation To Broader Scientific Literature: Although this paper focuses on pre-trained ECG models for CVD diseases, the proposed approach has a strong potential to be generalized and be applied to many other fields. Thus it’s worth being discussed by the general audience of ICML.
Essential References Not Discussed: Nothing particular.
Other Strengths And Weaknesses: Nothing particular.
Other Comments Or Suggestions: Nothing particular.
Questions For Authors: Nothing particular.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are grateful for your insightful comments on our work. In this rebuttal round, we provide an external validation on a wearable ECG dataset, a more thorough ablation study to support the claims of our study. At the same time, experiments on different backbones strengthen the flexibility of the proposed method (Please refer to our responses to other reviewers). | Summary: This paper aims to detect cardiovascular disease by fine-tuning large-scale pre-trained models using ECG signals. It focuses on low-cost and efficient fine-tuning through a mix-order optimization with low-rank adaptation and a novel layer-dependent model update scheme. Then, a knowledge distillation technique is introduced for smart devices. Experiments on the G12EC, PTB-XL, Ningbo, and Chapman datasets show the proposed method achieves comparable or even better performance with lower time cost and reduced memory consumption.
## update after rebuttal
I would like to keep the score.
Claims And Evidence: 1. The proposed method is based on the ECG setting. But is there any reason this framework has to be specific to ECG rather than other signals like EEG? It would be better if the authors expanded the experiments to other signals to strengthen the core AI/ML contribution.
Methods And Evaluation Criteria: 1. They propose a framework (H-Tuning) developed to integrate mix-order optimization with low-rank adaptation and a novel layer-dependent model update scheme, enhancing both computational efficiency and robustness.
2. The mix-order optimization provides a low-cost solution and does not require text data.
3. The study is set in the context of smart devices. However, as I understand it, the data collected by smart devices include a significant amount of noise. Is there any preprocessing applied to remove different types of noise?
4. Unclear writing:
- In line 023 of the abstract, what is meant by “a joint framework” and “a holistic method”? Do they mean a framework that includes both fine-tuning and downsampling tasks? What is being joined, and what subtasks should a non-holistic method focus on?
- In Table 1 and Table 2, what is the unit for Memory? MB? GB?
Theoretical Claims: The authors claim the effectiveness of the zero-th order optimization and the mixed-order (zero-th order and first order) optimization.
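For context, the zeroth-order side of such mix-order optimization is typically an SPSA-style estimator (as in MeZO); a generic sketch, not the paper's implementation:

```python
import random

def spsa_grad(loss, theta, eps=1e-3, seed=0):
    """SPSA/MeZO-style zeroth-order gradient estimate: two forward passes
    with one shared Gaussian perturbation, no backpropagation required."""
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in theta]
    plus = loss([t + eps * zi for t, zi in zip(theta, z)])
    minus = loss([t - eps * zi for t, zi in zip(theta, z)])
    scale = (plus - minus) / (2 * eps)
    return [scale * zi for zi in z]

# Quadratic toy loss f(t) = sum(t_i^2); its true gradient is 2*t.
theta = [1.0, -2.0]
g = spsa_grad(lambda t: sum(x * x for x in t), theta)
```

Because only forward passes are needed, optimizer memory stays near inference-level, which is the usual motivation for mixing zeroth-order with first-order updates.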
Experimental Designs Or Analyses: - The authors investigate the performance of the student model under various lead configurations, considering that most mobile ECG devices have only 1–3 leads.
- The authors claim in the abstract that the computational costs for fine-tuning are unaffordable; however, the backbone in the proposed method only consists of 50 million parameters, which is easy to train from scratch. It would be better if the authors conducted experiments using a larger model; otherwise, the results cannot support this motivation.
- The authors claim that for signal preprocessing, a band-pass filter (1–47 Hz) is applied to remove potential noise from the raw ECG recordings, such as power-line interference and motion artifacts (Section 3.1). It would be better if the authors had conducted additional experiments to show that the motion artifacts are removed.
- The authors claim that the student model meets the requirements for mobile cardiac healthcare. Are any of the datasets used collected by mobile devices? If yes, which dataset? If not, how can it be proven that this method has the potential to be embedded into mobile phones?
- It would strengthen the authors' argument if the authors included more ablation studies to demonstrate the effectiveness of the Low-Rank Adaptation module and the Model Update Scheme. A comparison between the current results and those obtained by removing the fine-tuning block would support their claims. Additionally, comparing the results with those from a standard layer (not the shallow and deeper layers) could provide further insights.
- In Table 1, MeZO and MeZO+LoRA only achieved about 0.5 AUROC, which is close to random guessing. This is unusual and worth explaining.
- In Table 4, regarding Time/Iter, the efficiency of H-Tuning does not appear to be significantly improved.
Supplementary Material: NA - no additional information. No code is provided.
Relation To Broader Scientific Literature: It would be better if the authors could provide the reason for selecting this particular backbone. Additionally, it would be useful to know whether the proposed framework can also work with other backbones.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: Some notions are unclear.
- In line 156, Equation (2), section 2.1, what does “η” represent?
- In line 159, Section 2.1, what does “E” represent?
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for all your questions and suggestions.
- **Claims**:
The reasons for choosing the ECG setting: (1) The ECG community offers many open-access datasets for model training and evaluation. (2) Developing low-cost methods for accurate and mobile cardiac healthcare is an important topic in AI/ML for health. We apologize for not being able to extend the experiments to other signals within this timeframe, but we will include this as future work in the next version of the manuscript.
- **Methods**:
**Q3**: A band-pass filter (1-47 Hz) was used to filter out the noise, an approach also adopted by [2].
[2] P. Nejedly et al., “Classification of ECG using ensemble of residual CNNs with attention mechanism,” in Proc. Comput. Cardiol., 2021, pp. 1–4.
**Q4**: An ideal joint framework should not only reduce the computational costs associated with fine-tuning and deploying pre-trained models but also maintain the performance of the fine-tuned models on downstream datasets. Therefore, under the constraint of achieving comparable fine-tuning performance to full fine-tuning, the joint framework proposed in our study includes two subtasks: (1) reducing the fine-tuning cost and (2) reducing the inference cost.
The unit for Memory is GB.
- **Experimental Designs:**
**Q2**: Yes, a backbone with 50 million parameters can be trained or fine-tuned with high-end GPUs (>8 GB memory). **However, we want to point out that the costs for fine-tuning and inference become prohibitive on low-end devices (with 2-4 GB memory), which are affordable and typically deployed in clinics or home settings. Therefore, the motivation of our research is to maintain model performance under limited resources.** We validated that H-Tuning achieves similar performance to Full FT while reducing GPU memory costs by 6.34 times, fulfilling our research objective. Additionally, we utilized a knowledge distillation technique to reduce the inference costs of the fine-tuned models, enabling them to be deployed on mobile devices (<20 MB of memory).
**Q3**: Motion artifacts often behave as baseline wander. Theoretically, a bandpass filter (1-47Hz) can filter out most of the baseline wander noises (generally below 1Hz). We will add a comparison between the raw and the filtered signals in the next version of the manuscript.
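For readers unfamiliar with why a bandpass can remove baseline wander, here is a minimal illustration (our sketch, not the pipeline of [2]; we use a crude FFT mask rather than a proper filter design, and numpy is assumed): a sub-1 Hz wander component is separated from an in-band component by zeroing out-of-band frequency bins.

```python
import numpy as np

def fft_bandpass(x, fs, lo=1.0, hi=47.0):
    # Zero out spectral bins outside [lo, hi] Hz and invert the FFT.
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

fs = 500                                     # sampling rate in Hz (assumed)
t = np.arange(5000) / fs                     # 10 s of samples
wander = 0.5 * np.sin(2 * np.pi * 0.2 * t)   # baseline wander, below 1 Hz
inband = np.sin(2 * np.pi * 10.0 * t)        # an in-band (ECG-like) component
filtered = fft_bandpass(wander + inband, fs)
```

Since both tones fall exactly on FFT bins here, the wander is removed while the in-band component passes unchanged; a real pipeline would use a proper filter with controlled phase response instead of a hard spectral mask.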
**Q4**: In the previous manuscript, no dataset was collected by wearable devices. To support our claims, we conducted an external validation of the final classifiers generated by H-Tuning on a wearable 12-lead ECG dataset. Please refer to our response to Reviewer ChMh (Table R1).
**Q5**: There is no 'standard layer' in the proposed model update scheme. All layers are trainable. Without this scheme, all layers can be optimized with first-order optimization, such as Full FT or LoRA. However, we agree that we should include an ablation study on the low-rank adaptation. The results are shown in Table R3.
**Table R3**
||Time/iter (s)|Memory (GB)|Macro $F_{\beta=2}$|
|-|-|-|-|
|H-Tuning w/o low-rank adaptation|0.358|2.002|0.571|
|H-Tuning|0.408|1.453|0.600|
|Full FT|0.401|9.212|0.605|
**Q6**: The success of MeZO-based methods in fine-tuning pre-trained models relies on prompts [3]. However, prompts are not feasible in ECG analysis because there are no text inputs, so MeZO cannot find a stable optimization path. The MeZO paper also reported a performance collapse (a 30.3%-47.2% drop) in the absence of prompts.
[3] Malladi, S. et al. (2023). Fine-tuning language models with just forward passes. Advances in Neural Information Processing Systems, 36, 53038-53075.
**Q7**: H-Tuning integrates low-rank adaptation to reduce the GPU memory consumption and decrease the variance for gradient estimation, which introduces extra time for the forward process. As shown in Table R3, training time is significantly reduced without this module at the expense of increased memory costs.
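For context, the low-rank adaptation idea referenced here can be sketched in a few lines (a generic illustration with invented sizes, not H-Tuning's implementation): the pre-trained weight $W$ stays frozen while only the small factors $A$ and $B$ are trained, and zero-initializing $B$ makes the adapted model start exactly at the pre-trained one.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 32, 4                 # illustrative sizes, not the paper's

W = rng.normal(size=(d_out, d_in))         # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) * 0.01      # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-init

x = rng.normal(size=d_in)
y = W @ x + B @ (A @ x)                    # adapted forward pass
```

Only $A$ and $B$ (384 parameters here) are updated, versus 2048 in $W$, which is why only the small factors incur optimizer and gradient-memory cost.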
- **Supplementary Material**: We will release our code after the publication of our manuscript.
- **Relation To Literature**: The architecture of the backbone used in our study is provided by [4], which validated its effectiveness on various datasets. We also verified that the performance of H-Tuning is comparable to Full FT on another backbone provided by [2].
**Table R4**
|Backbone|Params|Macro AUC|Macro $F_{\beta=2}$|
|-|-|-|-|
|**H-Tuning with backbone**|-|-|-|
|in [2]|15.8M|0.902|0.576|
|in [4]|50.4M|0.913|0.600|
|**Full FT with backbone**|-|-|-|
|in [2]|15.8M|0.908|0.590|
|in [4]|50.4M|0.918|0.605|
[4] Zhou, R. et al. Computation-efficient semisupervised learning for ECG-based cardiovascular diseases detection. arXiv:2406.14377, 2024.
- **Other Comments:**
$\eta$ is the learning rate. $\mathbb{E}$ denotes the expectation of the zeroth-order gradients over the random vector $z$.
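As background for this notation, a generic SPSA-style zeroth-order step (two forward passes per random direction $z$, no backpropagation) can be sketched as follows; this is our illustration, not MeZO's exact implementation:

```python
import random

def spsa_grad(loss, theta, z, eps=1e-3):
    # Two forward passes estimate the gradient along a random direction z:
    # g = ((L(theta + eps*z) - L(theta - eps*z)) / (2*eps)) * z,
    # which equals (grad(L) . z) * z up to O(eps^2) and is unbiased over z.
    lp = loss([t + eps * zi for t, zi in zip(theta, z)])
    lm = loss([t - eps * zi for t, zi in zip(theta, z)])
    scale = (lp - lm) / (2.0 * eps)
    return [scale * zi for zi in z]

def sgd_step(theta, grad, eta=0.1):
    # theta <- theta - eta * grad, with eta the learning rate.
    return [t - eta * g for t, g in zip(theta, grad)]

rng = random.Random(0)
loss = lambda th: sum(t * t for t in th)   # toy quadratic, true grad = 2*theta
theta = [1.0, 2.0]
g = spsa_grad(loss, theta, [rng.gauss(0, 1) for _ in theta])
```

A single estimate is noisy but almost surely a descent direction; averaging over many directions $z$ recovers the true gradient in expectation, which is what the $\mathbb{E}$ above refers to.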
- **Note:**
All evaluation metrics are averaged across 4 downstream datasets and 4 seeds.
---
Rebuttal Comment 1.1:
Comment: Additional comments for Q2: given that 50M is relatively small for modern pre-trained models, how does the method generalize to larger backbones? To demonstrate the effectiveness of H-Tuning and knowledge distillation in reducing fine-tuning and inference costs, comprehensive and rigorous experiments should be conducted across models of varying sizes, particularly larger-scale backbones. This would strengthen the generalizability of the proposed method and better support the motivation related to computational affordability.
Additional comments for Q3: Some experimental justification is needed; as mentioned in the papers below, motion artifacts cannot be removed by simple filtering because their frequency content overlaps that of the ECG.
1. Lee S Y, Su P H, Hung Y W, et al. Motion artifact reduction algorithm for wearable electrocardiogram monitoring systems. IEEE Transactions on Consumer Electronics, 2023, 69(3): 533-547.
2. Pholpoke B, Songthawornpong T, Wattanapanitch W. A micropower motion artifact estimator for input dynamic range reduction in wearable ECG acquisition systems. IEEE Transactions on Biomedical Circuits and Systems, 2019, 13(5): 1021-1035.
---
Reply to Comment 1.1.1:
Comment: First of all, we are grateful for your time and effort in reviewing our paper.
**Additional comments for Q2:**
We conducted additional experiments to compare the performance of H-Tuning and the other fine-tuning methods using a larger backbone provided by [1]. The large backbone has 113.49M parameters, demonstrating a complexity comparable to RoB-base (125M) and DeBERTaV3 (184M), both of which are commonly used in evaluating different fine-tuning methods for natural language processing tasks [2, 3]. The results shown in Table R5 demonstrate that H-Tuning achieves similar performance to Full FT and LoRA while using significantly less GPU memory (by 6.04x). Additionally, H-Tuning demonstrated better fine-tuning performance than Addax and LoHO, which are SOTA in memory-efficient fine-tuning. In conclusion, these results provide direct evidence to the generalizability of H-Tuning in larger backbones.
**Table R5**
|Method|Params (M)|Memory (GB)|MAP|Macro $F_{\beta=2}$|
|-|-|-|-|-|
|Full FT|113.49|13.78|0.541|0.593|
|LoRA|3.20|12.94|0.545|0.599|
|Addax|113.49|3.488|0.501|0.563|
|LoHO|113.49|3.487|0.498|0.557|
|H-Tuning|3.20|2.28|0.540|0.615|
We also investigated how the backbone size influences the knowledge distillation process. Specifically, we fine-tune different backbones using the proposed H-Tuning to generate the teacher models. Subsequently, the knowledge distillation method is applied to generate the corresponding student models, which contain only 0.26M parameters. Table R6 shows the performance of the student models, which indicates that an increase in teacher size can improve student performance.
**Table R6**
|Teacher Model|Student's MAP|Student's Macro $F_{\beta=2}$|
|-|-|-|
|[4] (15.8 M)|0.539|0.602|
|[1] (50 M)|0.544|0.603|
|[1] (113.49 M)|0.551|0.618|
Note: All metrics are averaged across 4 downstream datasets and 4 seeds.
**Additional comments for Q3:**
There are two primary types of motion artifacts in wearable ECG signals. The most common type is baseline wander, which has no frequency content overlapping with ECG and can be filtered out by a bandpass/highpass filter, as demonstrated in previous studies [4, 5]. In our research, we follow the pre-processing pipeline (bandpass filter) provided by the Physionet 2021 challenge winner [4]. The ECG classification model using this pre-processing pipeline ranked first among all models, justifying our choice of the pipeline in our study.
On the other hand, there is another type of motion artifact, which is more complex and has frequency content overlapping with ECG [6, 7]. We agree that simple filtering cannot remove it from ECG. According to [8], three types of methods have the potential to remove it:
(1) Adaptive filtering, such as Least Mean Square (LMS);
(2) Wavelet Transform;
(3) Blind Source Separation (BSS).
According to [6, 7], LMS-based methods require a reference channel to remove the artifact, such as the electrode-tissue impedance. However, such reference is absent in the public ECG datasets we used. According to [8], the BSS-based method is computationally expensive and cannot meet our requirement for low-cost inference. Therefore, we choose the discrete wavelet transform (DWT) to pre-process the wearable ECG dataset [5] used in our study and compare it with the bandpass-based pipeline [4]. In Table R7, we present the performance of the H-Tuning with two pre-processing pipelines on the wearable ECG dataset. The results indicate that the DWT performs similarly to bandpass on CVDs detection using wearable ECG.
**Table R7**
|H-Tuning with|Teacher's AUC|Teacher's Macro $F_{\beta=2}$|Student's AUC|Student's Macro $F_{\beta=2}$|
|-|-|-|-|-|
|DWT|0.862|0.562|0.876|0.541|
|Bandpass (1-47 Hz)|0.866|0.567|0.880|0.551|
[1] Zhou, et al. Computation-efficient semisupervised learning for ecg-based cardiovascular diseases detection. arXiv:2406.14377, 2024.
[2] Hu, et al. "LoRA: Low-rank adaptation of large language models." ICLR, 2022.
[3] Zhang, et al. "Adalora: Adaptive budget allocation for parameter-efficient fine-tuning." arXiv preprint arXiv:2303.10512 (2023).
[4] P. Nejedly et al., “Classification of ECG using ensemble of residual CNNs with attention mechanism,” in Proc. Comput. Cardiol., 2021, pp. 1–4.
[5] Lai, J., et al. Practical intelligent diagnostic algorithm for wearable 12-lead ECG via self-supervised learning on large-scale dataset. Nature Communications, 14(1), 3741.
[6] Lee S Y, et al. Motion artifact reduction algorithm for wearable electrocardiogram monitoring systems. IEEE Transactions on Consumer Electronics, 2023, 69(3): 533-547.
[7] Pholpoke B, et al. A micropower motion artifact estimator for input dynamic range reduction in wearable ECG acquisition systems. IEEE Transactions on Biomedical Circuits and Systems, 2019, 13(5): 1021-1035.
[8] Berwal, Deepak, et al. "Motion artifact removal in ambulatory ECG signal for heart rate variability analysis." IEEE Sensors Journal 19.24 (2019): 12432-12442. | Summary: The authors propose H-tuning, a model pipeline for efficiently fine-tuning pre-trained models for ECG classification to enable cardiac diagnosis under limited computation resources.
They combine zeroth-order optimization, low-rank adaptation, and knowledge distillation to reduce computation time and memory requirements at fine-tuning and inference time. They demonstrate superior runtime, memory footprint, and classification performance compared with numerous related methods on four public datasets. Design choices are justified with thorough ablation studies.
## update after rebuttal: I am happy with the authors responses and believe this submission to be worthy of publication. I am happy to change my score to reflect this.
Claims And Evidence: The claims are convincing, as the study introduces a thorough experimental design and evaluation. The proposed method has been evaluated on multiple datasets with convincing performance impact and compared with multiple related state-of-the-art techniques. Finally, the proposed design choices are supported by various ablation studies.
Methods And Evaluation Criteria: The proposed method has not been assessed on external databases, and therefore it is impossible to assess the generalisability of the final classifiers, and that was one of the key challenges in the 2020 PhysioNet Challenge.
Moreover, it would have been interesting to compare the proposed method with the approaches of the winning challenge team, to benchmark against a strong baseline.
Theoretical Claims: No
Experimental Designs Or Analyses: The generalizability of the learned representation has not been assessed; testing the classifier on an external database would have been interesting.
Comparing their approach with other state-of-the-art techniques (2021 PhysioNet Challenge entries) would have been informative.
Supplementary Material: no
Relation To Broader Scientific Literature: It would have been interesting to compare the results with baseline approaches on the downstream tasks, and not only assess how the proposed technique compares with other SSL techniques.
It would have been also interesting to compare the performance of the proposed technique with CE-SSL (Zhou, R., Liu, Z., Clifton, L., Clifton, D. A., Chan, K. W., Zhang, Y.-T., and Dong, Y. Computation-efficient semisupervised learning for ecg-based cardiovascular diseases detection. arXiv preprint arXiv:2406.14377, 2024.), which has a very similar study design.
Essential References Not Discussed: The authors have omitted the literature of the past 2020 and 2021 PhysioNet Challenges on rhythm classification.
Other Strengths And Weaknesses: The novelty of the proposed work is the combination of existing state-of-the-art model fine-tuning techniques.
Other Comments Or Suggestions: Font size of Figure 2 could be increased
Typo in L224? “Additionally, we tune the deep layers using the proposed mix-order optimization method…” -> shallow layers?
Table 3: Teacher “None” might be confusing before reading the manuscript; add a short description in the table caption.
Questions For Authors: Could the authors highlight the differences between the proposed mix-order optimization and Addax and LoHO?
Could the authors also discuss more in depth the difference and added value of H-tuning compared to CE-SSL (Zhou, R., Liu, Z., Clifton, L., Clifton, D. A., Chan, K. W., Zhang, Y.-T., and Dong, Y. Computation-efficient semisupervised learning for ecg-based cardiovascular diseases detection. arXiv preprint arXiv:2406.14377, 2024.)
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for all your questions and suggestions.
- **Methods And Evaluation Criteria:** To assess the generalizability of our classifiers in mobile cardiac healthcare, an external validation set consisting of 7000 wearable 12-lead ECG signals, provided by [1], is used for testing. We first utilized the G12EC, PTB-XL, Ningbo, and Chapman datasets for downstream training, where only 10% of labeled ECG signals are used to fine-tune the backbone. We employ two fine-tuning methods (Full FT and LoRA) and H-Tuning to train three teacher models, followed by knowledge distillation to create three corresponding student models. The CVD detection performance of the six classifiers on the external dataset is shown in Table R1.
**Table R1 External validation on a wearable ECG dataset**
|Methods|Macro AUC|Macro $F_{\beta=2}$|
|-|-|-|
|**Teacher Models**|
|Full FT|0.870|0.570|
|LoRA|0.879|0.579|
|H-Tuning |0.866|0.567|
|**Student Models**|
|Full FT|0.867|0.534|
|LoRA|0.874|0.543|
|H-Tuning|0.880|0.551|
The results demonstrate that the teacher generated by H-Tuning achieves comparable performance to Full FT and LoRA, but with significantly less GPU memory consumption, as shown in our manuscript. Additionally, our student model performs better than the compared methods.
We cannot compare our classifiers with the winning challenge team on the Physionet 2020/2021 test datasets, which are not publicly available. However, we can report the performance of H-Tuning using the model designed by the winning team (See our response to Reviewer m5NF, Table R4, reference [2]).
[1] Lai, J., et al. Practical intelligent diagnostic algorithm for wearable 12-lead ECG via self-supervised learning on large-scale dataset. Nature Communications, 14(1), 3741.
- **Experimental Designs Or Analyses:** Please refer to the above section.
- **Relation To Broader Scientific Literature:** (1) We need to clarify that the proposed H-Tuning and the compared SOTA methods in our manuscript are all fine-tuning methods, which do not utilize unlabeled data for semi-supervised or self-supervised learning. In addition, the experiments presented in our manuscript are all conducted on the downstream datasets. External validation was performed on a wearable 12-lead ECG dataset (Table R1). (2) Comparisons between H-Tuning and CE-SSL. Please see *Questions For Authors:*.
- **Essential References Not Discussed:** We thank the reviewer for this comment. The downstream datasets used in our study were also included in the PhysioNet Challenge. We will add this reference in the next version of our manuscript.
- **Other Comments Or Suggestions**:
We are grateful for your valuable suggestions. We will correct the issues in the next version of our manuscript.
- **Questions For Authors:**
(1) LoHO applies first-order and zeroth-order optimization to different sets of trainable parameters; it does not explore how to combine their advantages more flexibly to fully optimize all parameters. Addax and the proposed mix-order optimization both utilize first-order gradients to refine the direction of the zeroth-order gradients. However, our mix-order optimization introduces a gradient normalization technique to regulate the norm of the zeroth-order gradients, which Addax has not explored. Our ablation studies in Section 3.4 demonstrate that this technique has a clear positive impact on fine-tuning performance, and Table 1 in our manuscript demonstrates the superior performance of H-Tuning compared with Addax and LoHO.
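The manuscript's exact normalization is not reproduced here, but the idea of regulating the zeroth-order gradient norm can be illustrated as follows (our sketch; matching the ZO norm to a first-order reference norm is one simple instantiation, not necessarily the authors'):

```python
import math

def normalize_zo(g_zo, g_fo):
    # Rescale the zeroth-order estimate so its norm matches the
    # (lower-variance) first-order reference gradient's norm, keeping
    # the ZO direction while damping its step-size variance.
    n_zo = math.sqrt(sum(g * g for g in g_zo))
    n_fo = math.sqrt(sum(g * g for g in g_fo))
    if n_zo == 0.0:
        return list(g_zo)
    return [g * (n_fo / n_zo) for g in g_zo]

g = normalize_zo([3.0, 4.0], [0.6, 0.8])   # rescaled to norm 1.0
```

The direction of the zeroth-order estimate is preserved; only its magnitude is regulated.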
(2) The differences between CE-SSL and H-Tuning can be listed as:
a. CE-SSL is a semi-supervised method, which utilizes unlabeled data to achieve robust CVDs classification performance. In contrast, H-Tuning is a supervised method focusing on low-cost and efficient fine-tuning and inference.
b. CE-SSL utilizes first-order optimization for model training, which needs extensive activation outputs for gradient backpropagation. H-Tuning avoids this drawback by designing a mix-order optimization method, which greatly reduces the GPU memory costs.
c. In our study, we integrated H-Tuning with a knowledge distillation technique to reduce the model inference costs on wearable devices, which was not explored by CE-SSL.
d. To ensure a fair performance comparison between CE-SSL and H-Tuning, we remove the semi-supervised learning module from the CE-SSL while preserving the other modules, which makes CE-SSL a supervised fine-tuning method. We report their average performance across four downstream datasets (following the data split method and the backbone used in our manuscript).
**Table R2**
|Methods|Memory (GB)|MAP|Macro $F_{\beta=2}$|
|-|-|-|-|
| CE-SSL (without unlabeled data)|9.024|0.562|0.616|
| H-Tuning|1.453|0.535|0.600|
The results show that H-Tuning performs similarly to CE-SSL with much less GPU memory costs.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their responses.
I hope that external validation will be included in the revised manuscript.
---
Reply to Comment 1.1.1:
Comment: Many thanks for taking the time to review our responses. The external validation (Table R1) will be included in the revised manuscript along with more evaluation metrics (Coverage, Ranking Loss, MAP, and Macro $G_{\beta=2}$ score).
**Table R1 External validation on the wearable 12-lead ECG dataset.**
| Methods | Ranking Loss $\downarrow$ | Coverage $\downarrow$ | Macro AUC $\uparrow$ | MAP $\uparrow$ | Macro $G_{\beta=2}$ $\uparrow$ | Macro $F_{\beta=2}$ $\uparrow$ |
|-|-|-|-|-|-|-|
|**Teacher Models**|||||||
| Full FT |0.137|5.595|0.870|**0.600** |0.314|0.570|
| LoRA |0.134|5.440|0.879|0.598|**0.319**|**0.579**|
| H-Tuning |0.141|5.484|0.866|0.575|0.312|0.567|
|**Student Models**|||||||
| Full FT |0.135|5.462|0.867|0.566|0.287|0.534|
| LoRA |0.129| 5.384| 0.874|0.582|0.302|0.543|
| H-Tuning |**0.127**|**5.297**|**0.880**|0.598|0.311|0.551| | null | null | null | null | null | null | null | null |
Probabilistic Factorial Experimental Design for Combinatorial Interventions | Accept (spotlight poster) | Summary: This paper studies the combinatorial intervention problem. The authors propose a probabilistic factorial experimental design, where each unit independently receives a random combination of treatments according to specified dosages. They derive a closed-form solution for the near-optimal design in the passive setting and a numerically optimizable solution for the near-optimal design in the active setting. Simulation results are provided to validate their findings.
Claims And Evidence: They are generally well-supported to me. The simulation results clearly align with the theories.
Methods And Evaluation Criteria: The simulations are carefully designed to validate the theoretical results. For example, the stated near-optimality of setting the dosage to 1/2 for each treatment in the passive setting is clearly demonstrated in Figures 1 and 2. Additionally, the simulations for the active setting illustrate the effectiveness of adaptively choosing the dosages in accordance with the proposed theory.
Theoretical Claims: The proofs appear sound to me.
Experimental Designs Or Analyses: The experimental designs are generally sound. However, the paper could be strengthened by including a sensitivity analysis that varies $k$.
Supplementary Material: I reviewed most of them, with a particular focus on Section B.
Relation To Broader Scientific Literature: The proposed probabilistic factorial design includes both full and fractional factorial designs in the literature as special cases. It serves as a flexible realization of a factorial design.
Essential References Not Discussed: I don't see any obvious missing references.
Other Strengths And Weaknesses: Strengths: The paper is generally well written and smooth to follow. The extensions are enlightening.
Weaknesses: Simulation results on real-world dataset are missing.
Other Comments Or Suggestions: The x-axis in Figure 2 seems to be the dosage value rather than $||\mathbf{d}-\frac{1}{2}||_\infty$.
Questions For Authors: According to Figures 1 and 2, $\mathbf{d} = (1 / 2, \ldots, 1 / 2)$ appears to be exactly optimal, rather than merely near-optimal. Do the authors conjecture that this is indeed optimal for general $k$? Similarly, in the general constrained case, is the uniform dosage of $L / p$ conjectured to be optimal?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for finding our extensions enlightening. We find the reviewer's suggestions insightful and accordingly lay out additional experiments and their results.
> Simulation results on real-world dataset are missing.
We thank the reviewer for sharing this concern. Our paper is concerned with how an experimenter could optimally construct a dataset through choice of dosage, and therefore, experiments with real-world data necessitate close collaboration with individuals engaged in experimentation. However, we could have simulated the performance of various dosages using a real-world dataset, if this dataset included samples with all combinations. For example, for each combination generated by a given dosage, we could draw a corresponding point from the dataset to construct a new dataset consistent with the dosage. Unfortunately, we are not aware of such complete datasets. We plan to collaborate with biological experimenters in the future to create datasets based on our results.
While existing real-world datasets are not feasible for experimentation, we added a semi-synthetic simulation in the following way: we use a real-world Boolean function with $p=5$, a full-degree reliability function originally presented in Quality and Reliability Engineering International [1]. We create datasets based on this function, where each sample corresponds to a combination (according to the dosage) and the corresponding value of the function, with noise added. We conduct experiments using this function, results of which can be found in tabular format below. We note that $p=5$ is relatively low, but we were unable to find a completely-defined Boolean function of higher dimension.
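The model fitting behind such semi-synthetic experiments (regressing outcomes on all interaction terms of order at most $k$) can be sketched as follows; this is our illustration using the $\{0,1\}$ monomial basis and ordinary least squares, not the authors' code:

```python
import itertools
import numpy as np

def interaction_features(X, k):
    # One column per monomial x_S = prod_{i in S} x_i with |S| <= k
    # (the empty set S gives the intercept column).
    n, p = X.shape
    cols = []
    for deg in range(k + 1):
        for S in itertools.combinations(range(p), deg):
            cols.append(np.prod(X[:, list(S)], axis=1) if S else np.ones(n))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
p, k, n = 5, 2, 400
X = rng.integers(0, 2, size=(n, p)).astype(float)   # half-dosage design
Phi = interaction_features(X, k)                    # 1 + 5 + 10 = 16 columns
theta_true = rng.normal(size=Phi.shape[1])
y = Phi @ theta_true + 0.01 * rng.normal(size=n)    # noisy outcomes
theta_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

With a well-spread design (here, half dosage) and low noise, the degree-$\le k$ coefficients are recovered accurately; fitting with a misspecified $k$ amounts to dropping or adding columns of `Phi`.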
> However, the paper could be strengthened by including a sensitivity analysis that varies k.
We thank the reviewer for this suggestion. We have accordingly conducted the following experiment: we use a real-world Boolean function with $p=5$ and of full degree (described above). We replicate the first experiment in our paper, where we investigate the effect of $||\mathbf{d}-\frac{1}{2}||_\infty$. We investigate values of $k$ ranging from $2$ through $4$. We display the average loss ($\mathbb{E}_x\left[||f(x) - \hat{f}(x)||_2^2\right]$) over $200$ dosages at each distance (where we perform $20$ trials with each dosage), to be displayed in graphical format in our paper. Here we use $n=200$ samples.
|| $0$ | $.02$ | $.04$ | $.06$ | $.08$ | $.1$ | $.12$ | $.14$ | $.16$ | $.18$ |
| -- | -- | -- |--| -- | -- | -- |--| -- | -- | -- |
|$k=2$ |$3.49$|$3.50$|$3.53$|$3.58$|$3.64$|$3.71$|$3.81$|$3.98$|$4.12$|$4.27$|
| $k=3$ |$.59$|$.59$|$.61$|$.62$|$.64$|$.67$|$.74$|$.78$|$.87$|$1.11$|
| $k=4$ |$.58$|$.58$|$.72$|$.65$|$.95$|$1.17$|$1.39$|$1.73$|$1.90$|$2.30$|
Even when the model is misspecified, we see that the half dosage appears optimal and observe the loss increase as we move further away from the half dosage.
> The x-axis in Figure 2 seems to be the dosage value rather than $||d-\frac{1}{2}||_\infty$
We thank the reviewer for catching this, which we will fix accordingly in the paper.
> According to Figures 1 and 2, $\bf{d}=(\frac{1}{2}, \ldots , \frac{1}{2})$ appears to be exactly optimal, rather than merely near-optimal. Do the authors conjecture that this is indeed optimal for general k? Similarly, in the general constrained case, is the uniform dosage of $L/p$ conjectured to be optimal?
Based on our experiments and the general heuristic that in linear regression, one would like features which are "spread out," we conjecture that the half dosage is optimal and that the uniform dosage in the constrained case is also optimal for general $k$. It is difficult to prove exact optimality, as we must compare the quantity $\mathbb{E}_{\mathcal{X}}\left[\sum\_{i=1}^K \frac{1}{\lambda_i(\mathcal{X}^T\mathcal{X})}\right]$ across different dosages. While we were able to show that the inner quantity concentrates as the number of samples grows, it is not clear to us how to compute the mean for a fixed number of samples.
---
Reference:
$[1]$ Montgomery, Douglas C. Design and analysis of experiments. John Wiley & Sons, 2017. | Summary: This paper introduces probabilistic factorial experimental design for combinatorial interventions, where each treatment is assigned a dosage between 0 and 1, and units randomly receive treatments based on these probabilities.
This framework generalizes both full and fractional factorial designs by allowing random assignment of treatment combinations rather than deterministic selection.
The authors model outcomes using Boolean functions with Fourier expansions to capture bounded-order interactions.
They prove that uniform half-dosage allocation ($d_i = 0.5$) is near-optimal in single-round experiments, with optimality up to a factor of $1 + O(\frac{\operatorname{ln}(n)}{n})$. For multi-round experiments (i.e., the active learning setting), they develop an acquisition strategy that adapts dosages based on previous observations. The work also addresses practical constraints like limited treatment supply. Experimental results on simulated datasets demonstrate that the proposed strategies outperform random dosage selection.
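For concreteness, the assignment mechanism summarized above can be sketched in a few lines (our illustration): each unit independently receives treatment $i$ with probability $d_i$.

```python
import random

def sample_combinations(d, n, rng):
    # Unit j receives treatment i with probability d[i], independently
    # across both treatments and units.
    return [[1 if rng.random() < di else 0 for di in d] for _ in range(n)]

rng = random.Random(0)
dosage = [0.5] * 5                      # the near-optimal half dosage
X = sample_combinations(dosage, 2000, rng)
```

Setting every $d_i$ to 0 or 1 recovers a deterministic assignment, while intermediate dosages place positive probability on many of the $2^p$ combinations, which is how the design generalizes full and fractional factorial designs.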
Claims And Evidence: The paper's claims are generally well-supported by theoretical analysis and empirical evidence.
Methods And Evaluation Criteria: The methods and evaluation criteria in this paper are appropriate for the problem of optimal experimental design for combinatorial interventions. My only concern is that all the evaluations are conducted on synthetic data. It would be great if the authors could conduct some experiments on semi-synthetic or real-world datasets.
Theoretical Claims: I checked the proof of Theorem 4.2, which establishes the near-optimality of half-dosage allocation. The proof appears sound, using concentration inequalities and eigenvalue properties to bound the estimation error.
Experimental Designs Or Analyses: I checked the experimental designs in Section 6, both the passive setting and active setting simulations. The authors appropriately test the theoretical claims by comparing estimation errors across different dosage strategies. I have some minor concerns:
1. The simulations use synthetic data generated from the same model class assumed in the theory, but this might not reflect the robustness of the proposed framework to model misspecification.
2. In the active setting, the authors could include more existing active learning methods as baselines beyond random and half-dosage strategies for a more comprehensive evaluation.
Supplementary Material: I checked the supplementary material, particularly Section B.
Relation To Broader Scientific Literature: This paper extends classical factorial design literature by introducing a probabilistic framework that addresses scalability issues in traditional full and fractional factorial designs. The active learning component relates to Bayesian experimental design and sequential experimental design, though with acquisition functions specific to the probabilistic factorial framework. The work also complements recent advances in causal inference for combinatorial interventions and provides theoretical foundations for experimental practices used in biological perturbation experiments.
Essential References Not Discussed: All the essential related works are discussed.
Other Strengths And Weaknesses: Strengths: The theoretical analysis in this paper is rigorous and well-organized, making it easy to understand the theoretical results and their practical implications.
Weaknesses:
1. Please see my comments in the previous parts.
2. The computational complexity of the active learning approach is not thoroughly discussed.
3. The empirical results suggest that the optimal acquisition strategy only outperforms the half strategy slightly in the active setting. This raises questions about whether the half strategy might be preferable in practice since it requires no computation or learning procedure. A more thorough discussion of this trade-off between computational complexity and performance gain would strengthen the paper.
Other Comments Or Suggestions: Please see Other Strengths And Weaknesses.
Questions For Authors: please see Other Strengths And Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating our theoretical analysis, as well as for their many valuable suggestions. Below, we address the reviewer's concerns and lay out modifications we will make according to the reviewer's suggestions.
> My only concern is that all the evaluations are conducted on synthetic data. It would be great if the authors could conduct some experiments on semi-synthetic or real-world datasets.
Due to the character limit, we refer the reviewer to our response to Reviewer jf24's first point under "Simulation results on real-world dataset are missing".
> The simulations use synthetic data generated from the same model class assumed in the theory, but this might not reflect the robustness of the proposed framework to model misspecification.
We thank the reviewer for pointing this out. While Boolean functions are universal approximators, our low-degree assumption can cause misspecification, as recognized by the reviewer. In many applications the low-degree assumption holds, especially in biology, but degree misspecification may still exist. To address the reviewer's concern, we have conducted an additional experiment where the model is misspecified. Please see the response to Reviewer jf24, under the comment about "sensitivity analysis." Here, we use a real-world full-degree Boolean function ($k=p$), and fit assuming lesser values of $k$.
> In the active setting, the authors could include more existing active learning methods as baselines beyond random and half-dosage strategies for a more comprehensive evaluation.
We appreciate the reviewer's suggestion. For the comparison with passive baselines, we have added an additional baseline based on partial factorial design. Results are shown in the table below. For the comparison with active strategies, since multiple combinatorial interventions are drawn in each round (administered by a selection of dosage), we are not aware of existing methods that can be easily adapted to this setting. However, we would be happy to include additional baselines if the reviewer has specific suggestions.
Here we compare a Resolution $V$ $2^{5-1}$ fractional design versus our optimal strategy and half dosages. Each round has $16$ samples, with $p=5$ and $k=1$.
| | Round 1 | Round 2 |Round 3| Round 4| Round 5|
| -------- | -------- | -------- | -------- | -------- | -------- |
| Optimal dosage | $.54$| $\bf{.16}$|$\bf{.10}$|$\bf{.08}$|$\bf{.07}$|
| Half dosage |$.52$|$.23$|$.14$|$.09$|$.08$|
|Fractional factorial design|$\bf{.28}$|$.21$|$.15$|$.12$|$.09$|
Bolded entries show the lowest loss among each round, where we see that our optimal dosage strategy outperforms the other strategies after the first round.
> The computational complexity of the active learning approach is not thoroughly discussed.
The number of iterations for the optimizer to converge is roughly $O(p^3)$, and the complexity of each iteration is $O(nK^2 + K^3)$ (where the first term comes from the matrix multiplication $\mathcal{X}^T\mathcal{X}$ and the second term comes from computing the eigenvalues of $\Sigma(\mathbf{d})$). Recall the definition of $K$ as the number of interactions under consideration, i.e. $K = \sum_{i=0}^k {p\choose i} = O(p^k)$ for small $k$. Multiplying the per-iteration cost by the $O(p^3)$ iterations, the overall complexity is $O(np^{2k+3}+p^{3k+3})$ for small $k$. In practice, we recommend using a proxy, which only involves the inverse of the minimum eigenvalue: $\bf{d}_{T} = \text{argmin}\_{\bf{d}\in[0,1]^p}\frac{1}{\lambda\_{\min}\left(\Sigma(\bf{d})+\frac{1}{n}\sum\_{t=1}^{T-1}{\mathcal{X}_t^\top\mathcal{X}_t}\right)}$. We found that numerically optimizing this proxy was significantly faster and that the solver was consistently accurate. While the asymptotic complexity of this approach is the same as computed above, in practice it takes many fewer iterations to converge.
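A small sketch of how this proxy can be evaluated (ours, for illustration; we assume, consistently with $\mathbf{y}(\mathbf{d})\_S = \prod\_{i\in S}(2d\_i-1)$, that $\Sigma(\mathbf{d})$ has entries $\prod_{i\in S\triangle T}(2d_i-1)$ under independent per-treatment dosages, which may differ from the paper's exact definition, and we drop the accumulated-data term):

```python
import itertools
import numpy as np

def sigma(d, k):
    """Assumed form Sigma(d)[S, T] = E[chi_S(x) chi_T(x)]
    = prod over i in (S xor T) of (2 d_i - 1), under independent
    Bernoulli(d_i) treatments."""
    subsets = [frozenset(S) for r in range(k + 1)
               for S in itertools.combinations(range(len(d)), r)]
    M = np.empty((len(subsets), len(subsets)))
    for a, S in enumerate(subsets):
        for b, T in enumerate(subsets):
            M[a, b] = np.prod([2.0 * d[i] - 1.0 for i in S ^ T])  # empty product = 1
    return M

def proxy(d, k):
    """Proxy objective: inverse of the minimum eigenvalue of Sigma(d)."""
    return 1.0 / np.linalg.eigvalsh(sigma(d, k))[0]

p, k = 5, 2
half = proxy(np.full(p, 0.5), k)   # Sigma(1/2) is the identity, so proxy = 1
other = proxy(np.full(p, 0.7), k)  # any non-half uniform dosage does worse
```

At the half dosage all off-diagonal entries vanish, so $\Sigma$ is the identity and the proxy attains its minimum value of 1, matching the half-dosage optimality discussed in the paper.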
> The empirical results suggest ... A more thorough discussion of this trade-off between computational complexity and performance gain would strengthen the paper.
We thank the reviewer for this suggestion and will include a discussion of this trade-off in our paper. In the case where there are not many samples (compared to features) per round, we find that the optimal acquisition strategy more clearly outperforms the half strategy. This is because when we have a smaller number of samples, we will need to "correct" as the distribution of combinations will be more lopsided and further away from the uniform distribution. Therefore, in scenarios where each round has few samples, we think it is worth computing the optimal acquisition dosage. When we have a large $n$ relative to $p$, the half strategy and optimal strategy perform very similarly. While the computational complexity of finding the optimal strategy can quickly scale, in practice it only takes a matter of seconds to compute.
---
Reference:
$[1]$ Montgomery, Douglas C. Design and analysis of experiments. John Wiley & Sons, 2017. | Summary: This paper is concerned with the problem of experimental design in the high dimensional factorial setting where users may be administered combinations of treatments, and the aim is to administer a subset of treatments such that all combinations are recovered. The authors frame this problem in terms of the Fourier transform of boolean functions and assuming that the treatment status can be relaxed to probabilities of treatment. After this transformation the authors use tools from optimal experimental design for the selection mechanism. Extensions are provided to subsets and heteroskedastic settings. Empirical results show strong performance.
Claims And Evidence: All theoretical claims made are well supported by theory provided in the paper.
Methods And Evaluation Criteria: Yes, the method is quite sensible (and interesting), evaluation criterion is appropriate.
Theoretical Claims: Yes, I reviewed all proofs and they are sound to my reading.
Experimental Designs Or Analyses: Yes. The experiments are sound, though I would have like to seen a more complete comparison to partial factorial experiments.
Supplementary Material: Yes, I reviewed all supplementary material.
Relation To Broader Scientific Literature: This paper addresses an interesting and highly relevant problem of factorial experimental design. While the problem itself dates back to Fisher, the authors provide a nice contribution to the literature.
Essential References Not Discussed: The authors should have a broader literature review of the partial factorial design literature.
Other Strengths And Weaknesses: Overall, I think this paper is a creative approach to the problem of design of factorial experiments.
My main complaint, as I mention above, is that the experimental evaluation here is severely limited.
Other Comments Or Suggestions: N/A
Questions For Authors: I am curious how this approach (specifically the active learning setting) interacts with adjustment using user covariates. Does this change the design considerations?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating our method and the thoughtful suggestions. We would like to address the concerns and questions of the reviewer as below.
> The authors should have a broader literature review of the partial factorial design literature.
We thank the reviewer for this suggestion. We will add the following paragraph to Section 2 to expand our discussion of the partial factorial design literature. In addition, we are happy to include any specific references the reviewer believes would further strengthen our coverage of related work.
"A $2^{-m}$ fractional design is one where $2^{p-m}$ samples are used, each with a different combination [1]. These combinations are carefully selected to minimize aliasing. Aliasing occurs when, for the combinations selected, the interactions are linearly dependent [2][3]. In a full factorial design, there is linear independence, so there is no confounding when the model is fit. In a fractional design, some aliasing will always occur in a full-degree model; however, methods proposed in the literature select combinations such that the aliasing of important effects (i.e. degree-1 terms) does not occur [2]. With a low-degree assumption, aliasing can be avoided entirely. Fractional designs can be classified by their *resolution* (denoted by $R$), which determines which interactions can be potentially confounded. For example, a Resolution V fractional design eliminates any confounding among interactions of degree lower than 3, making it appropriate for degree-2 functions [4]. Of particular interest in the literature are *minimum aberration designs*, which minimize the number of degree-$l$ terms aliased with degree-$R-l$ terms [5][6]."
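The quoted construction can be illustrated with a standard Resolution V $2^{5-1}$ design built from the textbook defining relation $I = ABCDE$ [4] (this sketch is ours and only illustrative, not the design used in the experiments):

```python
import itertools
import numpy as np

# Full factorial on factors A-D in {-1, +1} coding (16 runs).
base = np.array(list(itertools.product([-1, 1], repeat=4)), dtype=float)
# Generator E = ABCD, i.e. defining relation I = ABCDE.
design = np.hstack([base, base.prod(axis=1, keepdims=True)])   # 16 x 5

# Model matrix: intercept, 5 main effects, 10 two-factor interactions.
cols = [np.ones(16)]
cols += [design[:, i] for i in range(5)]
cols += [design[:, i] * design[:, j]
         for i, j in itertools.combinations(range(5), 2)]
M = np.column_stack(cols)                                      # 16 x 16

# Resolution V: no aliasing among effects of degree <= 2, so these 16
# columns are mutually orthogonal and all such effects are estimable
# from only 16 of the 32 possible runs.
```

Orthogonality of the 16 model columns over the 16 runs is exactly the "no aliasing of important effects" property described in the paragraph above.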
> I would have like to seen a more complete comparison to partial factorial experiments.
We thank the reviewer for this suggestion. We have conducted an additional experiment, where we compare the half dosage versus a partial factorial design in the passive setting. Here, we generate a degree-$1$ Boolean function with $p=8$. We use a $2^{8-2}$ Resolution $V$ design with $64$ samples for each approach. Results are shown below, averaged over $300$ trials and with $\pm 1$ std.
| Fractional design |Half dosage|
| -------- | -------- |
|$.14\pm .062$| $.16\pm.078$|
With fewer samples, the careful selection of combinations makes a difference, so the fractional design can outperform the half dosage. But in many cases, especially in biological applications, careful selection of combinations is not possible, which is why the much more flexible dosage design is preferable, as it enables the administration of an exponential number of combinations by choosing a linear number of dosages.
However, in the active setting, the optimal dosage can outperform a fractional design. Please see the experiment in response to Reviewer CWrH, under "... the authors could include more existing active learning methods as baselines".
> I am curious how this approach (specifically the active learning setting) interacts with adjustment using user covariates.
We thank the reviewer for this question. We could assume the following setup in the passive setting: there are $m$ users with known covariates $\mathbf{c}\_i\in \mathbb{R}^l$, each of which receives the $n$ combinations determined by the dosage (so that we have a total of $mn$ samples). Assuming the covariates have a linear relationship with the outcome, i.e. $y\_i = \beta\_u\mathbf{c}\_i+f(\mathbf{x})$, then the optimal dosage in the passive setting is $\mathbf{d}\_u^* = \text{argmin}\_{\mathbf{d}\in [0,1]^p} \sum\_{i = 1}^{l+K}\frac{1}{\lambda\_i(A)}$ where $A=\begin{bmatrix}
\sum\_{i=1}^m\mathbf{c}\_i\mathbf{c}\_i^T&\sum\_{i=1}^m \mathbf{c}\_i\mathbf{y}(\mathbf{d})^T\\\\
\sum\_{i=1}^m \mathbf{y}(\mathbf{d})\mathbf{c}\_i^T&\Sigma(\mathbf{d})
\end{bmatrix}$, with $\mathbf{y}(\mathbf{d})\_S = \prod\_{i\in S} (2d\_i-1)\in \mathbb{R}^{K}$ and $\Sigma(\mathbf{d})$ is as defined in the paper. We conjecture that $\mathbf{d}_u^*$ is still the half dosage. To extend to the active setting, the same objective is used as in the paper except $\Sigma(\mathbf{d})$ is replaced with $A$. We are happy to consider alternative models of user covariates if the reviewer has any specific suggestions.
---
References:
$[1]$ Box, George EP, William H. Hunter, and Stuart Hunter. Statistics for experimenters. Vol. 664. New York: John Wiley and sons, 1978.
$[2]$ Gunst, Richard F., and Robert L. Mason. "Fractional factorial design." Wiley Interdisciplinary Reviews: Computational Statistics 1.2 (2009): 234-244.
$[3]$ Mukerjee, Rahul, and CF Jeff Wu. A modern theory of factorial design. Springer Science & Business Media, 2007.
$[4]$ Montgomery, Douglas C. Design and analysis of experiments. John Wiley & Sons, 2017.
$[5]$ Fries, Arthur, and William G. Hunter. "Minimum aberration $2^{k–p}$ designs." Technometrics 22.4 (1980): 601-608.
$[6]$ Cheng, Ching-Shui. Theory of factorial design. Boca Raton, FL, USA: Chapman and Hall/CRC, 2016. | Summary: The paper introduces a probabilistic factorial experimental design to address the optimal experimental design problem for combinatorial interventions.
The contribution of the paper:
1. The paper introduces a probabilistic factorial experimental design for a given choice of dosage vector.
2. The paper provides a closed-form solution for the near-optimal design for passive and active settings.
3. The authors explore extending the design framework to incorporate constraints and noisy scenarios.
Claims And Evidence: Yes, the paper provides the theoretical proofs and empirical evidence to support the claims.
Methods And Evaluation Criteria: The authors validated the proposed approach using a simulated dataset for both passive and active settings.
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes
Relation To Broader Scientific Literature: Building on previous work, this paper utilizes Boolean functions and Fourier transforms to establish the theoretical foundation of its approach.
Essential References Not Discussed: No
Other Strengths And Weaknesses: **Strengths:**
1. The paper addresses a significant gap in scalability challenges in factorial design with combinatorial interventions.
2. The theoretical framework is robust, with clear assumptions and derivations.
**Weaknesses:**
1. The use of Boolean functions and Fourier transforms is not new, as similar approaches have been explored in prior work, such as *Agarwal, A., Agarwal, A., and Vijaykumar, S.Synthetic Combinations: A Causal Inference Framework for Combinatorial Interventions*.
Other Comments Or Suggestions: No
Questions For Authors: 1. The proposed design strategies appear to depend on the choice of dosage (may require prior knowledge from experimenters), which is a subset of the full factorial design. As a result, the outcomes of the proposed approach may be suboptimal. Could the authors elaborate more on this and discuss how to address it?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for appreciating our theoretical framework and scalability challenges it addressed. Below, we address the concerns and questions brought up by the reviewer.
> The use of Boolean functions and Fourier transforms is not new, as similar approaches have been explored in prior work, such as Agarwal, A., Agarwal, A., and Vijaykumar, S.Synthetic Combinations: A Causal Inference Framework for Combinatorial Interventions.
We thank the reviewer for this comment. As noted in Section 2 (last paragraph), we used Boolean functions to model combinatorial interventions, as they can easily model the scenario where the outcome is mainly driven by low-order effects in the absence of higher-order interactions -- a common assumption in fractional factorial design. In addition, they capture generalized surface models (see Section 3.1). However, the main contribution of the paper lies not in the use of Boolean functions, but in the proposal of the novel probabilistic experimental framework and the accompanying theoretical analysis of the dosage choice (described in detail in Section 1), which, to our knowledge, has not been explored in prior work.
> The proposed design strategies appear to depend on the choice of dosage (may require prior knowledge from experimenters), which is a subset of the full factorial design. As a result, the outcomes of the proposed approach may be suboptimal. Could the authors elaborate more on this and discuss how to address it?
A full factorial design can be formulated within our framework. In particular, there would be $2^p$ rounds, each with $1$ sample. In order to fix the sample, the dosage would be chosen to be deterministic, i.e. $\mathbf{d}\in \\{0,1\\}^p$. In addition, when the number of samples is large enough for a full factorial design to be implemented, the half dosage is closely related to this design as the half dosage induces a uniform distribution over combinations. Therefore in such cases, the two approaches perform similarly. Per our theoretical results, we suggest the experimenter uses the half dosage, which requires no prior knowledge. | null | null | null | null | null | null |
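The claim that the half dosage induces the uniform distribution over combinations can be checked directly (a small sketch of ours, assuming each treatment $i$ is administered independently with probability $d_i$):

```python
import itertools
import numpy as np

def combo_probs(d):
    """Probability of each combination x in {0,1}^p when treatment i is
    administered independently with probability d_i (assumed model)."""
    d = np.asarray(d, dtype=float)
    return {x: float(np.prod(np.where(np.array(x) == 1, d, 1.0 - d)))
            for x in itertools.product([0, 1], repeat=len(d))}

p = 5
probs = combo_probs(np.full(p, 0.5))  # half dosage
# Every one of the 2^p combinations receives probability 2^-p, i.e. the
# uniform distribution: the half dosage reaches exponentially many
# combinations while only p dosage values need to be chosen.
```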
Locally Differentially Private Graph Clustering via the Power Iteration Method | Reject | Summary: The authors study the problem of spectral clustering in local differential privacy (LDP) in the edge-level DP model. Prior work on LDP in the model used the standard randomized response method.
The work is based on the interesting insight that, while one wants to compute the second eigenvector, the largest component of the iterate comes from the first eigenvector, and this component dominates the noise addition.
The authors propose a technique to eliminate the largest component terms from the noise computation resulting in an algorithm that can compute spectral clustering for graphs with degrees of order sqrt(n) for constant epsilon.
This algorithm has n log n communication complexity, in contrast to prior spectral clustering work with DP, which requires n^2 communication. The authors also evaluated the method on real-world data.
Technically the authors work under a series of simplifying assumptions (sec 2.5) on the degrees being large and on the structure of the Eigenvectors of the transitions matrix of the standard random walk. The assumption on the degree > sqrt n is quite restrictive but understandable given the use of LDP.
The authors' algorithm is based on the standard iterative multiplication algorithm, with additional considerations to add Laplace noise for privacy and to bound the degree of the graph (privately).
The authors' experiments on SBM graphs show good results compared with the edge randomized response baseline. The authors also explore a real graph, from which they need to extract only the 100-core. For this real graph the results are less clear but still non-trivial.
## update after rebuttal
I have read the rebuttal and it has not affected my score.
Claims And Evidence: yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I checked the privacy proofs and they are convincing.
Experimental Designs Or Analyses: Yes, the experiments are sound but the results on real graphs are not convincing (See questions and summary)
Supplementary Material: No
Relation To Broader Scientific Literature: Related work looks good to me.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: See summary
Other Comments Or Suggestions: Minor:
Formatting issues: "AUTHORERR: Missing \icmlcorrespondingauthor."
Line 167: "for a sufficiently large T, applying B^T to x gives". Here T can be confused with the transpose. Perhaps I would use a different letter.
Algo 1: I would replace delta with another letter to not confuse it with DP delta.
Questions For Authors: While the communication complexity is smaller, the running time is N^2, since each node does N computation (Section 4.1). It would be useful to highlight this in the contribution part where you discuss the running time of prior work.
Can you discuss more how to interpret the results on the real graph?
What result do you get if you don't use the k-core?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful consideration of our paper. We sincerely appreciate your comments, which will be very helpful in significantly improving our manuscript. Since your evaluation of our paper is a weak reject, please feel free to let us know if there are any specific comments or concerns we can clarify or address to help improve our manuscript.
> The author’s algorithm is defined based on the standard iterative multiplication algorithm but with additional considerations to add laplace noise for privacy and bounding the degree of the graph (privately).
We would like to clarify that the key feature of our algorithm lies in Difference 4: Elimination of the Leading Eigenvector, which allows us to significantly limit the magnitude of Laplace noise required for privacy. This is an important contribution of our work, and we have highlighted it in the abstract as follows:
[Given that the noise introduced by the largest eigenvector constant can be significant, we incorporate a technique to eliminate this constant.]
We also mentioned this contribution in the introduction (Lines 87–97). However, your comment made us realize that this key point may not have been emphasized clearly enough. In response, we will ensure that this contribution is more explicitly highlighted in “Section 3: Our Algorithm” in the next version of our manuscript.
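To convey the intuition behind this elimination step, here is a stylized sketch (ours, NOT the paper's Algorithm 1; the lazy random walk, the noise placement, and all parameters are illustrative): power iteration on the random-walk matrix with the leading-eigenvector component removed each round, with optional Laplace noise standing in for privacy noise.

```python
import numpy as np

def deflated_power_iteration(A, T=60, noise_scale=0.0, seed=0):
    """Stylized sketch, not the paper's Algorithm 1: power iteration on the
    lazy random-walk matrix, removing the leading-eigenvector component
    (the all-ones direction, weighted by degrees) every round."""
    rng = np.random.default_rng(seed)
    deg = A.sum(axis=1)
    B = 0.5 * (np.eye(len(A)) + A / deg[:, None])  # eigenvalues in [0, 1]
    pi = deg / deg.sum()                           # leading left eigenvector
    w = rng.normal(size=len(A))
    for _ in range(T):
        w = B @ w
        if noise_scale > 0:
            w = w + rng.laplace(scale=noise_scale, size=len(w))
        w = w - pi @ w          # eliminate the leading (all-ones) component
        w = w / np.linalg.norm(w)
    return np.sign(w)           # sign pattern approximates the 2nd eigenvector

# Two triangles joined by one bridge edge: clusters {0,1,2} and {3,4,5}.
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0

labels = deflated_power_iteration(A)                   # noiseless run
noisy = deflated_power_iteration(A, noise_scale=0.05, seed=1)
```

Without the deflation line, the iterate collapses onto the all-ones leading eigenvector, so noise calibrated to that large constant would swamp the clustering signal; removing it each round keeps the iterate balanced between the two clusters.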
> While the communication complexity is smaller the running time is N^2 each node does N computation (section 4.1). I would be useful to highlight it in the contribution part where you discuss the running time of prior work.
Thank you very much for your kind comment. We will highlight this contribution in the next version of our manuscript.
> Can you discuss more how to interpret the results on the real graph?
Thank you very much for your kind comment. We will expand our discussion of the results on the real-world graph in the next version of our manuscript. In particular, as suggested by Reviewer PjPy, we will provide a more detailed explanation of the results shown in Figure 2, using the clustering outcomes for illustration. Additionally, we will further discuss the experimental results involving the 700-core of the Reddit graph, which were obtained based on the suggestion from Reviewer XrvP. Please let us know if there are any specific aspects you would like us to address in the revised manuscript.
> What result do you get if don't use the kcore?
Thank you very much for your question. We use the k-core decomposition in our experiments because we need to ensure that the minimum degree of the input graph is large. If we do not use the k-core decomposition, the minimum degree can be as small as one, which would violate our assumption (1). Indeed, when the minimum degree is small, the value of $\delta$ and the size of the noise in our algorithm can be large; the value $w_i^{(t)}$ calculated at Line 6 would then be essentially random, which makes the normalized discrepancy of our algorithm larger than 0.99.
We observe that the requirement for a large minimum degree also applies to the benchmark algorithm—randomized response followed by spectral clustering. As discussed in [Mukherjee and Suppakitpaisarn, arXiv2309.06867], achieving robust results often requires applying core decomposition to the graph. In our experiments, we found that the benchmark algorithm tends to perform poorly when low-degree nodes are present. After the randomized response step, the number of edges from such nodes to each cluster becomes nearly uniform, causing them to act as bridges and obscuring the underlying clustering structure of the input graph.
We have already highlighted the importance of using core decomposition in Lines 408–409: “To ensure that the noise added in our algorithm is not too large,” and also in Assumption (1): “The first assumption is essential for any graph clustering algorithm under edge LDP with a constant privacy budget. Protecting the connections of low-degree nodes requires adding so much noise that their contributions are obscured, resulting in unstable clustering outcomes for these nodes.” However, this question has made us realize that we should emphasize this point more clearly in the experimental results section. We will address this in the next version of our manuscript. | Summary: This paper considers the problem of graph clustering under privacy constraints. Specifically, the algorithm must satisfy Local Differential Privacy according to Definition 2.2 with budget \epsilon. The authors propose an algorithm, Private Power Iteration Clustering, which approximates B^T x, where x is an initial vector and B is the random walk matrix of G. In turn, the entries B^T x have approximately the same sorted order as those of the second-largest eigenvector of B, which is the one that can cluster the vertices. Since the algorithm injects noise at the user level, it is shown to satisfy the privacy budget. It requires O(n log n) time and \Theta(n) space. Experiments on real and synthetic data investigate the performance of the proposed algorithm.
Claims And Evidence: Theorem 4.1 shows that the algorithm satisfies the privacy budget. Some evidence is given that the algorithm returns a similar classification to that of non-private spectral clustering (Theorem D.7), but I found the statement hard to interpret.
Methods And Evaluation Criteria: Experiments on the SBM and Reddit graph are provided. For the synthetic instances, the performance of the proposed algorithm is compared to that of the algorithm of Hehir et al (2022) involving privatization via randomized response. These algorithms are compared over different privacy budgets and SBM parameters, where the performance metric is normalized discrepancy.
Theoretical Claims: I checked the main text.
Experimental Designs Or Analyses: The SBM comparison is well-designed. I am not sold on the Reddit comparison. The authors state “For graphs generated using the SBM, we observe that when an algorithm fails to classify the graph in a particular setting, the normalized discrepancy exceeds 0.99. In contrast, our normalized discrepancy remains below 0.99 when the privacy budget is at least 4 for the 100-core decomposition and at least 1 for the 500-core decomposition.” They are proposing that 0.99 is the threshold between succeeding and failing to classify, but it seems a bit arbitrary. It would perhaps be more informative to compare to non-private spectral clustering, and also to visualize the resulting clustering for each algorithm.
Supplementary Material: I read the statement of Theorem D.7 to try to understand the theoretical claim that the algorithm performs similarly to non-private spectral clustering.
Relation To Broader Scientific Literature: Differential privacy is a popular topic, so clustering under privacy constraints is a valuable contribution.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: In “Comparison across Different Privacy Budget”, “Budget” should be “Budgets”; same comment for the next heading. The word choice of “publish/publishing” is a bit out of place to me in several spots.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your kindness in accepting to review our paper and in giving several comments that will help improve our manuscript.
> Theorem 4.1 shows that the algorithm satisfies the privacy budget. Some evidence is given that the algorithm returns a similar classification to that of non-private spectral clustering (Theorem D.7), but I found the statement hard to interpret.
Thank you very much for your kind comment. We agree that the statement of Theorem D.7 is hard to interpret. We have given our best effort to explain the intuition behind the statement at the last paragraph of Section 4. However, we will give our best effort to even increase the readability of this part.
(Our statement at Section 4) [Recall that the outcome of the spectral clustering algorithm is $S_{\rm orig} = \{v_i : v_{1,i} > 0\}$. Thus, when $c_1 > 0$, the result $S_{\rm alg}$ closely resembles $S_{\rm orig}$ with high probability. Conversely, when $c_1 < 0$, the result $S_{\rm alg}$ is similar to $V \setminus S_{\rm orig}$ with high probability. Therefore, our algorithm is likely to produce a small $d_{\rm vol}(S_{\rm alg}, S_{\rm orig})$.]
> I am not sold on the Reddit comparison. The authors state “For graphs generated using the SBM, we observe that when an algorithm fails to classify the graph in a particular setting, the normalized discrepancy exceeds 0.99. In contrast, our normalized discrepancy remains below 0.99 when the privacy budget is at least 4 for the 100-core decomposition and at least 1 for the 500-core decomposition.” They are proposing that 0.99 is the threshold between succeeding and failing to classify, but it seems a bit arbitrary.
Thank you very much for your comment. When the flipping probability in the randomized response mechanism is too high (i.e., the privacy budget $\epsilon$ is too small), the clustering structure of the graph becomes obscured, leading the spectral clustering algorithm to produce essentially random results. Similarly, if both $\epsilon$ and the minimum node degree are too small, the noise introduced in Line 6 of Algorithm 1 can overwhelm the true signal, causing the values of $w_i^{(t)}$ to appear nearly random. In such cases, the resulting cluster assignments are largely meaningless, and the discrepancy typically exceeds 0.99. Conversely, when the algorithm is able to correctly recover even small portions of the network structure, we observe that the discrepancy falls below 0.99. Based on these observations, we set 0.99 as our threshold. We will include a more detailed explanation of this choice in the next version of our manuscript.
> It would perhaps be more informative to compare to non-private spectral clustering, and also to visualize the resulting clustering for each algorithm.
Thank you very much for your helpful suggestion. We will include a figure comparing our results with the non-private case in the next version of our manuscript. | Summary: The authors consider the problem of finding a partition (aka clustering) of a graph privately. The goal is to provide a locally differentially private (LDP) algorithm. There the notion of privacy used in edge privacy, i.e., two neighboring adjacency lists differ by a single edge.
The authors show how to make a spectral graph clustering approach, the so-called power iteration clustering algorithm of Lin & Cohen '10, LDP.
The approach consists in adding noise at the different steps of the procedure to ensure that the overall procedure is private. Then the authors conduct experiments on synthetic data and one real world dataset.
The main theoretical result is the proof of the privacy guarantee of the algorithm.
The paper also relies on several assumptions about the input graph.
I think the paper has been written in a rush: it is poorly written and a bit sparse on theoretical and experimental results.
Writing quality: It is very confusing, and mathematically problematic, the way the assumptions are written. Indeed, some of the assumptions are required for the privacy guarantees to hold, while others are only needed for the algorithm's convergence or the output quality. For example, are Assumptions 2 and 3 needed for Theorem 4.1 to hold? It is clear that Assumption 1 is, but it does not appear in the statement of Theorem 4.1. So concretely, you need to rewrite Theorem 4.1 so that the statement is well defined and there is a correct setting for the quantifiers...
The paper contains several such examples and so requires a careful polishing.
Theoretical results: While the privacy is fully analyzed, it is less clear how much utility the pipeline retains. In particular, how do the bounds provided relate to other works? It seems that the current algorithm would not improve the state-of-the-art on the stochastic block model (SBM).
Practical results: I see very little value in experimenting on the SBM. This always looks like a poor man's solution to the lack of theoretical analysis. Moreover, SBMs are also known to be pretty far from real-world graphs, with very peculiar degree distributions, very large communities, lack of triadic closure, etc. My suggestion would be to instantiate the utility results you have on the SBM, hence providing some clean bounds which hopefully are state-of-the-art or at least comparable, and then to remove these experiments.
Furthermore, the authors consider only a single real-world graph and so it seems that the results are cherry picked or the authors didn't care of conducting more thorough experiments. In both cases it calls for a lot more experiments.
Claims And Evidence: See summary for details.
Methods And Evaluation Criteria: Mostly, except from the SBM experiments (see summary).
Theoretical Claims: Yes, though not in all details.
Experimental Designs Or Analyses: Looked at the experiments. They seem legit, though the fact that only one real-world dataset is used is very suspicious.
Supplementary Material: Yes, I went over the utility analysis.
Relation To Broader Scientific Literature: I think the paper makes a good job at relating to previous work
Essential References Not Discussed: None
Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: It is pretty much useless to have [User/Server] in the algorithm since, e.g., server is not referenced in the proofs. So it just confuses the reader for free.
Questions For Authors: No specific question. I would be happy to raise my score if the authors could provide the bounds they obtain for the SBM and how it improves over previous work, at least in some regime.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you very much for your kind consideration and comments.
> some of the assumptions are required for the privacy guarantees to hold, while others are only needed for the algorithm convergence or the output quality. For example, are assumptions 2 and 3 needed for Theorem 4.1 to hold?
While we initially omitted the assumptions to keep the statements simple, we agree with the reviewer that it is important to clearly state the assumptions used for each lemma and theorem. We will include them in the next version of the manuscript. In fact, all the assumptions pertain to the precision result (Theorem D.7).
> It is clear that assumption 1 is but it does not appear in the statement of Theorem 4.1. So concretely, you need to rewrite Theorem 4.1...
We emphasize that no assumptions are needed to prove Theorem 4.1. The statement at Lines 192–198 — “The first assumption is essential for any graph clustering algorithm under edge LDP with a constant privacy budget” — serves to justify Assumption 1.
However, this does not imply that the assumption is required for Theorem 4.1 itself. Rather, our point is that achieving both the privacy guarantee of Theorem 4.1 and a meaningful precision result in Theorem D.7 is not possible without this assumption.
> In particular, how do the bounds provided relate to other works? It seems that the current algorithm would not improve the state-of-the-art on the stochastic block model (SBM).
> I would be happy to raise my score if the authors could provide the bounds they obtain for the SBM and how it improves over previous work, at least in some regime.
Our precision results match the state-of-the-art performance for the Stochastic Block Model (SBM) and the Degree-Corrected Block Model (DCBM) [Hehir, Slavkovic, Niu, 2022], while our method extends to a significantly broader class of graphs. In particular, we provide results for models that have received little to no prior attention in the literature, such as the Geometric Block Model [Galhotra et al., AAAI 2018] and the SBM with triadic closure [Peixoto, Physical Review X, 2022]. More importantly, our approach offers guarantees even for graphs that are not generated by any specific model—which is often the case in practice. The only requirements are that the graph has a large minimum degree (Assumption 1) and a well-clustered structure (Assumptions 2 and 3).
Although we discussed at Lines 104-108 ("Our algorithm, however, provides precise results under the same minimum degree condition but applies to general graphs, not limited to those generated by the model.") that our results subsume the previous results on the SBM, we realize from this comment that we should have formally demonstrated this inclusion in our paper.
Equation (5.3) of [Hehir, Slavkovic, Niu, 2022] shows that when each community has size $\Omega(n)$, randomized response followed by standard spectral clustering results in a misclassification error of order $O(1)$, provided that the maximum connection probability between any two nodes $u$ and $v$ is at least $\Omega(1/\sqrt{n})$. The degree distribution in such an SBM follows a binomial distribution, which can be approximated by a Gaussian distribution for large $n$. It is also known that the minimum degree in these graphs is $\tilde{\Omega}(\sqrt{n})$ with high probability (i.e., at least $1 - o(1)$). Therefore, the SBM and DCBM considered in [Hehir, Slavkovic, Niu, 2022] satisfy our Assumption (1).
Regarding Assumption (2), let $p$ denote the probability that two nodes within the same cluster are connected and $q$ the probability that two nodes from different clusters are connected. We can infer from [Deng, Ling, and Strohmer, JMLR 2021] that when the number of nodes $n$ is large, the ratio $g = (\lambda_2(B) + 1)/(\lambda_3(B) + 1)$ is at least $2p / (p + q)$. Since typically $p \gg q$, this implies $g \approx 2$, indicating that the stochastic block model (SBM) satisfies the assumption. Empirically, any well-clustered graph—including those generated by the DCBM or similar models—tends to have a large $g$, and thus also satisfies this assumption.
Any well-clustered graph in which each cluster has size in $\Theta(n)$ satisfies the conditions of Proposition A.1. Therefore, graphs generated from the SBM and other clustered models satisfy assumption (3).
> the authors consider only a single real-world graph and so it seems that the results are cherry picked or the authors didn't care of conducting more thorough experiments.
As demonstrated by the graphs generated using the SBM, our algorithm performs well when the graph is sufficiently large and dense. Aside from the Reddit graph, most publicly available datasets, such as GitHub, PolBlogs, Twitch, and DeezerEurope, are neither large nor dense enough to fully showcase the effectiveness of our method. We have discussed this limitation in Lines 416–418 of our manuscript, and would be happy to discuss it further. | Summary: The paper considers graph clustering under edge-level local DP. The work proposes private power-iteration clustering to obtain the partition of nodes. They show that under certain assumptions, this method obtains a good approximation to spectral clustering with $O(1)$-valued $\epsilon$. Previous work obtains the same results only for the restrictive case of graphs from the SBM family, whereas this work identifies a property of well-clustered graphs that extends this result to a broader family of graphs. They also show empirically that their algorithm performs better, especially for larger graphs.
Claims And Evidence: The central claim of paper is that interactivity for LDP algorithms for graph clustering can help achieve better privacy-utility tradeoff. The claims are accompanied with valid proofs and experimental evaluation.
Methods And Evaluation Criteria: The paper proposes privatizing the power-iteration clustering, making modifications to ensure that the final bipartition produced by the approximate eigenvector provides a good clustering. They also directly compare against the prior work; the results confirm their claims: for smaller $\epsilon$ they obtain better results, whereas for larger $\epsilon$ the results are comparable.
Theoretical Claims: The proofs are correct to the best of my knowledge.
Experimental Designs Or Analyses: The experimental setup and the baseline used are appropriate. The results are convincing, but the baseline for the real-world dataset is not provided, citing computational issues. I wonder, in this case, why experiments on other real-world datasets with smaller graphs were not considered, or why the graph was not downsampled appropriately?
Supplementary Material: The proofs in appendix seem correct to the best of my knowledge. I did not verify the code.
Relation To Broader Scientific Literature: The work makes contributions to a line of work exploring interactive algorithms for LDP. They cite the appropriate works and describe where their contributions sit among the literature.
Essential References Not Discussed: None, to the extent of my knowledge.
Other Strengths And Weaknesses: In addition to the other contributions, their algorithm is memory- and communication-efficient. The class of graphs satisfying their assumptions still seems like a small subset.
Other Comments Or Suggestions: There is an error -- "Missing icmlcorrespondingauthor" -- in the footnote of page 1.
Questions For Authors: No additional questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your kind consideration and comments. Please kindly find our answers to the comments below.
> Experimental setup and the baseline used are appropriate. The results are convincing, but the baseline for real-world dataset is not provided citing computational issues. I wonder in this case why experiments for other real-world datasets with smaller graph size were not considered? Or resorting to downsample the graph appropriately?
We need the input to have a large minimum degree. However, we were unable to find a real-world dataset (or its core decomposition) that has a large minimum degree but a smaller number of nodes.
Thanks to this insightful comment, we have begun exploring the use of available real-world datasets for our experiments. During the rebuttal process, we successfully conducted an experiment on the 700-core of the Reddit graph, which consists of 22,370 nodes and 31,053,002 edges. The average normalized discrepancy over 10 runs for this graph is presented below. As shown in the table, our algorithm consistently outperforms the randomized response method across all evaluated privacy budgets.
| Privacy Budget (ε) | Our Algorithm | Randomized Response |
|-------------|----------------|----------------------|
| 0.2 | 0.4940 | 0.4966 |
| 0.4 | 0.4917 | 0.4932 |
| 0.6 | 0.3387 | 0.4950 |
| 0.8 | 0.3112 | 0.4666 |
| 1.0 | 0.2627 | 0.4244 |
| 1.2 | 0.2513 | 0.3565 |
| 1.4 | 0.2380 | 0.3732 |
| 1.6 | 0.1748 | 0.3077 |
| 1.8 | 0.2370 | 0.2722 |
| 2.0 | 0.2187 | 0.2586 | | null | null | null | null | null | null |
Schwarz–Schur Involution: Lightspeed Differentiable Sparse Linear Solvers | Accept (poster) | Summary: This paper investigates efficient methods for solving sparse linear equations that commonly arise in applications related to partial differential equations (PDEs) and convolutional neural networks. The key insight of the study is the exploitation of hidden structures within convolutional kernels, allowing the division of an image into small, independent patches. Within each patch, computations can be performed independently by leveraging the concept of Gaussian elimination.
This technique follows a divide-and-conquer approach, where the problem is progressively broken down into smaller subproblems. The solution process involves solving these subproblems independently and then performing back-substitution from the final step to the initial step. By structuring the problem in this manner, the authors effectively harness the parallel computing capabilities of GPUs to handle multiple small but dense square matrices efficiently.
## update after rebuttal
I want to thank the authors for updating the codebase to include a comparison with scipy's direct solve method. I can verify scipy is significantly slower than the proposed method.
However, I also implemented indirect method using cupy+GPU, here are the preliminary results:
```
# GPU sparse solve alternative
# (A_scipy, lb, and gsol come from the earlier cells of the notebook)
import time
import cupy as cp
import cupyx.scipy.sparse.linalg

A_cupy_sparse = cp.sparse.csr_matrix(A_scipy)  # Convert SciPy sparse to CuPy sparse
lb_cupy = cp.asarray(lb.flatten())  # Keep RHS as dense CuPy array
x_cupy_sparse = None
print("Timing CuPy sparse solve (with synchronization):")
for _ in range(5):
    time_start_sparse = time.time()
    # x_cupy_sparse = cupyx.scipy.sparse.linalg.spsolve(A_cupy_sparse, lb_cupy)  # method 1, 89 seconds, norm difference with gsol is 0.955
    # x_cupy_sparse = cupyx.scipy.sparse.linalg.splu(A_cupy_sparse).solve(lb_cupy)  # method 2, 17 seconds, norm difference with gsol is 0.005
    # x_cupy_sparse = cupyx.scipy.sparse.linalg.lsqr(A_cupy_sparse, lb_cupy)  # method 3, 84 seconds, norm difference with gsol is nan
    x_cupy_sparse = cupyx.scipy.sparse.linalg.cg(A_cupy_sparse, lb_cupy)  # method 4, 0.15 seconds, norm difference with gsol is 0.085
    # x_cupy_sparse = cupyx.scipy.sparse.linalg.gmres(A_cupy_sparse, lb_cupy)  # method 5, 0.54 seconds, norm difference with gsol is 1.40
    # x_cupy_sparse = cupyx.scipy.sparse.linalg.cgs(A_cupy_sparse, lb_cupy)  # method 6, 0.14 seconds, norm difference with gsol is 0.606
    # x_cupy_sparse = cupyx.scipy.sparse.linalg.minres(A_cupy_sparse, lb_cupy)  # method 7, 0.066 seconds, norm difference with gsol is 141.08
    # x_cupy_sparse = cupyx.scipy.sparse.linalg.lsmr(A_cupy_sparse, lb_cupy)  # method 8, 14 seconds, norm difference with gsol is 165.35
    cp.cuda.Stream.null.synchronize()  # Wait for GPU to finish
    print(f"cupy sparse solve time: {time.time() - time_start_sparse}")
```
For comparison, the proposed method (on GPU) runs in 0.032 seconds. The scipy direct method runs in 11.12 seconds. The norm difference between gsol and x_scipy is 0.021.
Hence, we can see the proposed method is still faster than these CuPy indirect-solve implementations, but the speedup is now much smaller than anticipated. The accuracy of the indirect method is acceptable in my view, since the norm is calculated as np.linalg.norm(x_vec - y_vec) and the vector has length 263169. So the speedup over the indirect methods is more like 5-10x, I would say.
Also, for this current example image size, transferring A_scipy to the GPU doesn't seem to take that long. Therefore, having the claim of 1000x speedup in the title is *a little bit* of an exaggeration. I would probably add a qualifier in the title saying that this impressive speedup is only for direct solves.
Personally, I still support the acceptance of this paper due to this nice observation/idea of being able to do lots of local parallelization for direct solves, although I am a little less excited than when first reading this work (I thought this was a HUGE breakthrough). My current score is more like 3.5 than 4, but I am going to keep it as 4. For revision, I would strongly suggest the authors compare thoroughly with indirect CuPy methods (as I show above) and also modify the title and abstract to reflect that there is a place for the indirect CuPy method.
If the authors can show that the speedup of the proposed method over the indirect CuPy method is much more significant (2 or 3 orders of magnitude) on large images (2561 x 2561), that would be wonderful. However, since I do not have that information, I cannot champion something I have not seen or experimented with myself.
If the authors feel strongly about the advantage of their method on large image instances, they are welcome to update the codebase in the anonymous code repo and send a message to the AC with clear evidence so that AC may check it out.
Claims And Evidence: The claims made in the paper are supported by clear and compelling evidence, particularly in Section 4.2, where the authors use color-coded visualizations and mathematical derivations to illustrate key computational concepts. While the explanations are well-structured, those unfamiliar with convolutional neural networks may find some parts challenging to follow. However, after manually performing the Gaussian elimination steps from Equations (4.2) to (5) and verifying the authors' calculations, the divide-and-conquer approach became evident.
The computational results presented are particularly impressive. Compared to CUDA-based implementations, the proposed method achieves a speedup of two orders of magnitude. Additionally, when compared to SciPy implementations, the authors report a computational speedup exceeding three orders of magnitude. These results strongly support the effectiveness of the proposed approach.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria (running time comparison) are valid for the problem and application.
Theoretical Claims: I have manually checked the Gaussian Elimination step from Equation 4 to Equation 5 and can verify that they are correct.
Experimental Designs Or Analyses: I have checked the experimental designs and analyses. It's mainly solving the same sparse linear systems and comparing the running times, which is very straightforward.
Supplementary Material: There is no supplemental material submitted.
However, I have looked through the appendix, which is attached after the main text. The appendix elaborated on different applications that upon which the proposed method can have an impact. The discussions are very comprehensive in my opinion.
Relation To Broader Scientific Literature: Yes, this work is at the intersection of scientific computing and differential equations. Different branches of literature reviewers are discussed extensively in this work.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The proposed method is both novel and clever, demonstrating significant potential for applications in various domains. However, one notable weakness lies in Section 4, particularly Subsection 4.2, where the methodology is discussed. The explanation lacks sufficient detail, making it difficult for readers unfamiliar with convolutional neural networks to grasp the key ideas immediately. In particular, understanding the notation and subscripts used in the matrices required considerable effort. The color-coded elements in Subsection 4.2.1 are not well explained, which adds to the confusion. Additionally, the process of Gaussian elimination from Equation (4.2) to (5) is not elaborated on sufficiently, requiring extra effort from the reader to reconstruct the steps.
Another point of concern is Figure 4, where the rationale behind dividing patches in the presented manner is unclear. A suggestion for improvement would be to clarify in the caption that the convolutional kernel is 3x3, which influences the selection of points included in the computation. Specifically, when the kernel focuses on a brown dot, it only includes nearby red and green dots while never incorporating the light blue dots. A clearer explanation would help readers understand why certain dots incorporate information while others do not. Adding such clarifications would significantly improve the paper’s clarity and accessibility.
Other Comments Or Suggestions: Thank you very much for writing this paper. I have enjoyed reading, understanding, and learning from your proposed idea. Below are some writing suggestions:
Clarify the meaning of the color-coded subscripts for matrix A in Section 4.2.1.
Provide a step-by-step breakdown of Gaussian elimination from Equation (4.2) to (5). At least mention that $u_a = A_{aa}^{-1}(v_a - A_{ar} u_r - A_{as} u_s)$.
Improve the caption of Figure 4 to explicitly mention the kernel size (3x3) and its impact on patch selection.
Consider adding a brief explanation or visual aid to illustrate why the green dots incorporate information from all other colored dots while the brown and light blue dots do not incorporate information from all colored dots.
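The back-substitution formula suggested above can be checked numerically in a few lines. This is a minimal NumPy sketch, assuming arbitrary toy block sizes for the index groups a, r, s and a random diagonally dominant matrix (not the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)
na, nr, ns = 2, 3, 2                      # hypothetical sizes for index groups a, r, s
n = na + nr + ns
A = rng.random((n, n)) + n * np.eye(n)    # diagonally dominant, hence invertible
v = rng.random(n)

a, r, s = slice(0, na), slice(na, na + nr), slice(na + nr, n)
u = np.linalg.solve(A, v)                 # full solution of A u = v

# The a-rows of A u = v give exactly u_a = A_aa^{-1} (v_a - A_ar u_r - A_as u_s)
u_a = np.linalg.solve(A[a, a], v[a] - A[a, r] @ u[r] - A[a, s] @ u[s])
assert np.allclose(u_a, u[a])
```

The identity holds exactly (up to floating-point error) because it is just the block of rows of $Au = v$ indexed by a, rearranged.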
Questions For Authors: 1. Regarding Figure 4, is our understanding correct that the kernel size is 3x3, and that this is why the brown dots only incorporate information from red, brown, and green dots but not from the light and dark blue dots?
2. In Line 218, what do $A_{SS}^P$ and $A_{SS}^Q$ represent? Do they both equal 1/2 of $A_{SS}$?
3. In Line 678, you discuss the running time of your method but do not mention the running time of the gradient descent method using L-BFGS. How long does L-BFGS take in comparison?
4. In the appendix, Line 749 (right column), you mention that it takes 25 seconds to solve the top-six eigenvalue problem in MATLAB. Which method is this referring to? Is it your proposed method or the baseline? If it is your method, how long does the baseline take to solve the same problem?
5. In Line 813 (right column), you state that matrices are not treated as symmetric, even when they are. Do the baseline methods exploit this prior knowledge of symmetry?
6. In Appendix Section B5, you discuss optimizing matrix A for Problem 11. Could you clarify how this optimization is related to solving the sparse linear system?
7. There is no code submission for this paper. While the overall methodology is clear on a high-level, verification through implementation would be helpful. Would it be possible to provide a sample implementation during rebuttal, specifically for the deconvolution operation shown in Figure 7? An anonymous GitHub repository or similar would greatly enhance reproducibility and allow for further validation of your approach.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for the insightful comments and efforts to verify our method & derivation!
We appreciate the detailed feedback on improving clarity, and will be sure to elaborate on full details to reduce readers’ efforts to reconstruct the steps.
Please also refer to Re:eGtJ for extra descriptions, and refer to the Re:D4Wa for background information and the visualizations in https://anonymous.4open.science/r/icml-rebuttal-268F/rebuttal-figures.pdf (Figures 21-24)
# Clarifications (+D4Wa):
> Clarify the meaning of the color-coded subscripts for matrix A in Section 4.2.1:
please refer to Figures 22,23 in the anonymous link: each color indicates a group of pixels: r = [0, 1, 2, 7, 14, 15, 16], s = [3, 10, 17], t = [4, 5, 6, 13, 18, 19, 20], a = [8, 9], b = [11, 12], in the case of 3x7 image (Figure 21).
A_rr, A_rs represent the 7x7 and 7x3 submatrices (since the group r, s have 7, 3 nodes, resp.):
A_rs is the slicing into matrix A that uses r as indices to select rows and s to select columns: A_rs = A[r, s]. A_rr = A [r,r].
See Figure 24 for the definitions of P_rr, P_rs:
P_rr = Arr − Ara Aaa^{-1} Aar,
P_rs = Ars − Ara Aaa^{-1} Aas.
P, Q are 10×10 matrices (since after eliminating the nodes in a, b, each of the subdomains has 10 nodes remaining, as shown in Figure 21).
P_rr is a 7x7 matrix, as a submatrix of P, selecting the rows and columns that correspond to the red nodes (there are 7 of them).
P_rs is a 7x3 matrix, similar to P_rr except that its columns correspond to the green nodes (3 of them).
In Figure 24, we provide a step-by-step breakdown of Gaussian elimination from Equation (4.2) to (5), including the derived value of $u_a$ same as suggested.
# Re: Questions
Q1: Yes, it is exactly right that because the kernel size is 3x3, a 3x3 window centered at a brown dot cannot incorporate information from the light and dark blue dots. We add a demonstration in Figure 21.
Q2: The values of $A_{SS}^P$ and $A_{SS}^Q$ can be arbitrary, including $1/2 A_{SS}$, as long as their sum is $A_{SS}$. In fact, the values of $A_{SS}^P$ and $A_{SS}^Q$ are never used standalone; only their sum $A_{SS}^P + A_{SS}^Q$ is used (they become parts of the matrices P, Q, which are summed later).
The terms $A_{SS}^P$ and $A_{SS}^Q$ represent a partition of the value $A_{SS} = A_{SS}^P + A_{SS}^Q$ into contributions from the patches P and Q separately. This partition exists so that the computation for patches P and Q can be done in parallel. Especially for applications like 1st-order FEM-discretized PDEs, each patch P and Q contributes to $A_{SS}$, and their contributions $A_{SS}^P$ and $A_{SS}^Q$ depend only on information within the patches P and Q, respectively. This makes the computation within P and Q exactly independent from each other, so it can be done in parallel.
Q3: The overall runtime of L-BFGS is 20x that of our method. Note that L-BFGS is a very different type of method---Newton's method using a sparse Hessian solver with a SciPy backend remains our primary point of comparison, which is 400x slower than our method.
Q4: Only the baseline method [Shi & Malik 2000] uses MATLAB (the "eigs" function), and it takes their MATLAB implementation 25s to solve the top-6 eigenvalues. In contrast, it takes less than 0.3s for our method (since the major computation is sequentially calling A^{-1} b 20 times, each of which takes 0.011s using our method).
Q5: For fair comparison, both ours and all baseline direct solvers (Scipy, CUDA) do not use symmetry. Theoretically, methods that use symmetry can roughly reduce half of the computation. Note the baselines AMG and (effectively) neural operators do exploit symmetry: In other words, the actual improvement of our method is more significant than what is presented in the paper. In theory, our method could use symmetry to get another 2x speedup. Some relevant discussions are in Appendix A.6, C.4.
Q6: Our basic routine A^{-1} b is a forward problem: it solves for x given both A and b. When A is not known, we need to search for the value of A using the gradient of E w.r.t. the entries of A, which further depends on the Jacobian of x over the entries of A. As explained in B.5, our solver is already differentiable in PyTorch, so no extra work is needed to make it differentiable to obtain the Jacobian. Otherwise, one has to explicitly implement the adjoint method that solves for A^{-T}, as done in [Wang et al. 2023] and also here:
https://arxiv.org/pdf/2404.17039v2
Q7: We provide sample code in the link: https://anonymous.4open.science/r/icml-rebuttal-268F and promise the final code will be released. In the code repo, the major effort to implement our solver is “index-tracking”: i.e., in Figure 3, when some nodes are eliminated, what will be the new indices of these nodes (so that subsequent matrix slicings can find the correct submatrices). These are tedious details that can be found in the code. Actually, the 2-lines code in line 179 is a minimum validation to trace down why our solver can be much faster.
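To illustrate the kind of index-tracking described above, here is a toy sketch using the node groups a = [8, 9] and b = [11, 12] of the 3x7 example from Figures 21-23; the snippet is an illustrative assumption, not the repo's actual code:

```python
# After eliminating the interior nodes a = [8, 9] and b = [11, 12] of the
# 3x7 image (21 pixels), map each surviving node's old index to its new
# position so that subsequent matrix slicings select the right submatrices.
eliminated = {8, 9, 11, 12}
survivors = [i for i in range(21) if i not in eliminated]
new_index = {old: new for new, old in enumerate(survivors)}

assert len(survivors) == 17
assert new_index[10] == 8     # node 10 shifts down past the eliminated 8, 9
assert new_index[13] == 9     # node 13 shifts down past 8, 9, 11, 12
```

The real implementation has to maintain such maps consistently across every level of the recursion, which is where the tedium lies.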
---
Rebuttal Comment 1.1:
Comment: Thank you very much!
I am not sure whether it is allowed to attach figures in anonymous links during rebuttal. I am generally fine, since I figured out the derivation while reading the paper. However, I strongly recommend you put in the details, since I see other reviewers complain about the scarcity of the mathematical derivations as well. To me, I don't understand why you left out many of the details in the submission. This lack of mathematical detail doesn't match the high quality of the other parts of the paper. Were you trying to hide the details so that you could later fill them out and submit the full version to a journal??? That's my only hypothesis for why this could happen. If so, please don't do that. Even if this paper gets accepted, I think it is necessary to include all details so that readers can understand what is going on and reproduce the results if they need to.
I am able to run the code provided in the anonymous repository. Is it possible for you to update the repo so that the Jupyter notebook also has a regular linear solver to solve the large system without using your trick? I want to compare the running time and confirm that the speedup is real. Right now, I only have access to the 3D representation of the image, not the 2D matrix representation of the image.
I have an additional question. What happens if the kernel size is not 3x3? What if the kernel size is 5x5 or 7x7? Can the current method be easily extended to these larger kernel sizes? How would you handle all the boundary pixels? Do we have to rewrite a different parallel Gaussian elimination method? The pixel overlap would be different in the cases of 3x3 and 5x5, in my understanding.
---
Reply to Comment 1.1.1:
Comment: We appreciate the insightful feedback and careful examination of our method!
> “…attach figures…”
Yes, the official ICML guidance allows “figures, proofs, and code” in the anonymous link in the response. https://icml.cc/Conferences/2025/PeerReviewFAQ
> “...put in details…”
We thank the reviewer for affirming the high quality of the paper. We appreciate the desire to include additional mathematical details and we will do so in the revised version of the paper. We kept the method section brief due to space limitations as we prioritized the paper to focus on the broad impact of an efficient sparse solver in machine learning.
During the rebuttal period, we made new figures and visualizations at the request of Reviewer D4Wa. We remark that readers do not necessarily need to follow them to reproduce our method. They provide one possible example of implementing the block elimination outlined in Figure 3, under a specific node indexing---which can be arbitrary, so there are many different ways to implement the elimination. We had omitted indexing-specific descriptions to keep the method general, but have now added them to be specific. With the extra exposition, the work should appeal to a larger audience in machine learning, requiring minimal background knowledge.
> “...larger kernel…”
Great question (we will incorporate the discussion in our paper)! Our approach can be generalized to a larger kernel size of 5x5 or 7x7, but the implementation will be more complicated. Currently, one layer of boundary pixels can separate two subdomains P, Q; it will require two layers of boundary pixels in the case of 5x5 to separate two subdomains P, Q (or 3 layers of boundary pixels in the case of 7x7). Similarly, the boundary pixels for the whole image at the last Schur step will have two layers of pixels. The parallel elimination procedure will be similar but has to account for the fact that the "wire-frame" has two layers of pixels.
In fact, in some sense, our current method *already supports* kernels larger than 3x3. The actual constraint we have is that every pixel can only contribute to pixels in the same 5x5 patch (and recall that pixels at the patch boundary belong to multiple patches). Thus, the convolution window for a pixel can cover the entire 1/2/4 patches it belongs to (for example, pixels at the patch boundary can use a local window of 9x9 or 9x5; and it is 5x5 for an interior pixel though the window may not be centered at it). Also recall the 5x5 patch size is a hyperparameter that we are free to change arbitrarily in the current method.
> code for the baseline
Thank you for taking the time to run the code of our method and verify that an expected result is produced!
We have also added the code for the baseline method in the same anonymous link https://anonymous.4open.science/r/icml-rebuttal-268F, in addition to the code for our method.
For timing, we also realize we forgot to mention that in the demo.ipynb one will need to add a line like ```torch.set_default_device('cuda:0')``` before executing the code, otherwise the notebook will fall back to using the cpu. We have added this to the code repo as well.
Please note this message will be our last opportunity to reply to your questions. We are absolutely certain about the reproducibility and the level of speedup of our method as reported in the paper. Our code will be released and we envision *a very broad range of users* will benefit from our method and use it to verify its effectiveness. | Summary: The authors propose an efficient method to solve sparse linear systems. Current algorithms for solving such systems are slow, which hinders their applications in real-time scenarios such as interactive graphics. The authors propose a direct solver, which uses a divide-and-conquer strategy to efficiently solve sparse linear systems. The proposed method is differentiable, thus can be integrated into modern machine learning framework.
Claims And Evidence: The authors claim their method is faster than current baselines, Table 1 shows that indeed the runtimes are orders of magnitude faster.
Methods And Evaluation Criteria: The major problem right now with the manuscript is the description of the method. It is currently hard to follow, particularly Sections 4.2.1 and 4.2.2. Although the authors tried to add visuals to make the text easier to follow, certain symbols are introduced without enough detail. For example, how do we arrive at Equation 4, and what exactly do the submatrices A_rr, A_rs represent? The same question applies to Equation 5: what do P_rr, P_rs represent? It may be better if the authors start with even smaller P, Q in Figure 4, say 3x7, and show exactly how the submatrices in Equation 4 are composed. If there are too many details, some can be put into a supplement. Some of the results can also be moved to the supplement; it is essential that the method be clear enough to understand so that it can be reproduced by interested researchers.
Theoretical Claims: None
Experimental Designs Or Analyses: The experimental analysis looks good.
Supplementary Material: I looked at the supplementary materials, however I haven’t carefully reviewed them.
Relation To Broader Scientific Literature: Solving sparse linear systems is a general problem, with wide applications to scientific domains, some of which are mentioned in the manuscript. The authors innovation on efficiently solving such sparse systems can greatly accelerate scientific computation in many domains.
Essential References Not Discussed: None
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: None
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We appreciate the thoughtful feedback on improving our paper. Please also refer to Re:eGtJ for extra descriptions of our methods.
# Method motivations and descriptions
We are committed to improving the exposition and we are confident we can make the paper significantly more accessible to a broader ICML audience.
We have added supporting figures in the following link:
https://anonymous.4open.science/r/icml-rebuttal-268F/rebuttal-figures.pdf
This includes a step-by-step illustration of our method.
We realize our presentation assumed too much background in Gaussian elimination which we are happy to expand upon. To address this, we have added a brief summary and will expand it into a detailed explanation in the revised paper.
## Parallel Gaussian elimination
The Schur complement reduces solving the 2x2 block matrix
[ X Y; Z W ]
to solving the smaller matrix [ X - Y W^{-1} Z ].
While equations in the paper might look a bit dense, at the high level the overall objective is quite simple and intuitive: recursively applying the Schur complement many times to reduce the problem to a smaller system.
Two extra considerations have to be added: 1) always first re-order the rows and columns of the matrices, so that we know the part we want to eliminate is located at W (or wherever we prescribe); 2) apply many Schur complements in parallel.
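As a minimal sanity-check sketch of this idea (illustrative NumPy code with arbitrary sizes, not our actual GPU implementation), eliminating the W block via the Schur complement recovers exactly the same solution as directly solving the full system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random, well-conditioned 2x2 block system [X Y; Z W] u = b.
n, m = 3, 2                       # sizes of the X and W blocks
A = rng.standard_normal((n + m, n + m)) + (n + m) * np.eye(n + m)
X, Y = A[:n, :n], A[:n, n:]
Z, W = A[n:, :n], A[n:, n:]
b1, b2 = rng.standard_normal(n), rng.standard_normal(m)

# Eliminate the W block: the Schur complement S = X - Y W^{-1} Z
# gives a smaller n x n system for the first block of unknowns.
S = X - Y @ np.linalg.solve(W, Z)
u1 = np.linalg.solve(S, b1 - Y @ np.linalg.solve(W, b2))
u2 = np.linalg.solve(W, b2 - Z @ u1)   # back-substitute for the rest

# Compare against solving the full system directly.
u_direct = np.linalg.solve(A, np.concatenate([b1, b2]))
assert np.allclose(np.concatenate([u1, u2]), u_direct)
```

Recursively applying this reduction is what shrinks the system; the actual method additionally batches many such reductions into dense BLAS calls.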
## Graph algorithm perspectives
Our method, as Gaussian elimination, can be simply understood as a *graph algorithm* that removes nodes from a graph. The actual procedure looks involved because the data is put in a tensor to leverage BLAS for parallel computing, and we avoid explicitly constructing the graph.
As in Figure 21, initially each pixel in the image is a node in the graph, and two nodes are connected by an edge with weight $A_{ij}$ if pixels i and j are adjacent (as defined in Sec. 2). In other words, the matrix A plays the same role as the adjacency matrix in graph theory.
Gaussian elimination removes nodes one by one from the graph. When removing a node $k$ from the current graph, the only modifications we need to make to matrix A are: 1) for every pair of nodes i, j that are both adjacent to k, update $A_{ij}$ by subtracting the term $A_{ik}A_{kk}^{-1}A_{kj}$ from it; 2) delete the row and column corresponding to node $k$.
In other words, when deleting $k$, the indirect influence of node j on i via k is absorbed into a direct influence of node j on i.
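A minimal NumPy sketch of this single-node update (illustrative only; the matrix, node index, and sizes are arbitrary, and this is not our actual implementation) verifies that the reduced system has the same solution on the remaining nodes:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# A symmetric, diagonally dominant matrix standing in for the graph weights.
A = rng.standard_normal((n, n))
A = A + A.T + 2 * n * np.eye(n)
f = rng.standard_normal(n)

k = 2                                   # node to eliminate
keep = [i for i in range(n) if i != k]  # remaining nodes

# Update rule: for every pair (i, j) adjacent to k,
# A_ij <- A_ij - A_ik A_kk^{-1} A_kj, then drop row/column k.
# The right-hand side gets the matching update f_i <- f_i - A_ik A_kk^{-1} f_k.
A_red = A[np.ix_(keep, keep)] - np.outer(A[keep, k], A[k, keep]) / A[k, k]
f_red = f[keep] - A[keep, k] * f[k] / A[k, k]

# The reduced system has the same solution restricted to the remaining nodes.
u_full = np.linalg.solve(A, f)
u_red = np.linalg.solve(A_red, f_red)
assert np.allclose(u_red, u_full[keep])
```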
## Block elimination
The updating formula generalizes to the case where $i,j,k$ are not single nodes but sets of nodes. For example, for r=[1,2,3], s=[6,7], $A_{rs}$ refers to the 3x2 submatrix that selects the first three rows and the 6th and 7th columns of matrix A.
Then, we have block Gaussian elimination: first divide the domain into three sets of nodes r, s, and t, such that no node in r is connected to any node in t (the two are separated by s).
The 3x3 block:
[Arr, Ars, Art==0;
Asr, Ass, Ast;
Atr==0,Ats, Att;]
becomes the 2x2 block:
[Ass - Asr Arr^{-1} Ars, Ast;
Ats, Att;]
Namely, the update rule is to subtract the adjustment term: Asr Arr^{-1} Ars.
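The block rule above can be sanity-checked with a small NumPy sketch (illustrative sizes and a random matrix; not our actual implementation). Note that with Art = Atr = 0, only the (s, s) block needs the adjustment term, and the t rows/columns are untouched:

```python
import numpy as np

rng = np.random.default_rng(2)
nr, ns, nt = 3, 2, 3
n = nr + ns + nt
A = rng.standard_normal((n, n)) + 2 * n * np.eye(n)
r = np.arange(0, nr)
s = np.arange(nr, nr + ns)
t = np.arange(nr + ns, n)
# Enforce the separator structure: r and t are not directly connected.
A[np.ix_(r, t)] = 0
A[np.ix_(t, r)] = 0
f = rng.standard_normal(n)

# Block elimination of r: subtract the adjustment term Asr Arr^{-1} Ars
# from the (s, s) block, with the matching right-hand-side update.
Arr = A[np.ix_(r, r)]
adj = A[np.ix_(s, r)] @ np.linalg.solve(Arr, A[np.ix_(r, s)])
keep = np.concatenate([s, t])
A_red = A[np.ix_(keep, keep)].copy()
A_red[:ns, :ns] -= adj
f_red = f[keep].copy()
f_red[:ns] -= A[np.ix_(s, r)] @ np.linalg.solve(Arr, f[r])

# The reduced 2x2 block system agrees with the full solve on s and t.
u_full = np.linalg.solve(A, f)
assert np.allclose(np.linalg.solve(A_red, f_red), u_full[keep])
```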
In Sec 4, we simply do two block Gaussian elimination at the same time.
Then our algorithm is basically a parallel block Gaussian elimination in which many groups of nodes (marked in yellow in Figure 3) are removed by concurrently subtracting many adjustment terms.
The major effort in the code is "index-tracking": carefully tracking what the indices of the remaining nodes become after some nodes are removed.
Figure 24 illustrates a step-by-step breakdown of the algebraic manipulation: we apply a few steps of transformations, where equations in the gray boxes are the transformed equations, and definitions in the white boxes (including P, Q and their submatrices) are intermediate symbols we introduce to simplify the notation. The steps simply apply Schur complements many times and in parallel.
# Re: Questions
We add visualization, sample code, and derivation diagrams to help to understand the method in the anonymous link above.
> how do we arrive at equation 4
Please refer to Figure 21-23, for a visualization of the matrix partitioning process used to derive Equation (4), demonstrated on a 3×7 image.
> what are the submatrices A_rr, A_rs, P_rr, P_rs represent
Please refer to Re:QcrL-Clarifications for definitions of submatrices.
# Plug-and-play for end users
Our method is designed to be *black-box usable* for the vast majority of users—understanding the internal details is often not required (though we will clearly document them). Just as convolution and matrix multiplication are accessed via CUDA (cuDNN/cuBLAS), we envision our solver being similarly exposed through low-level APIs (e.g., via a cuDSS-like interface), allowing users to easily integrate it without needing to implement anything themselves. Our code will be released publicly. A demo and code that include all implementation details are already available at the anonymous link. | Summary: The paper proposes a method for accelerating sparse linear and PDE solvers by transforming sparse Laplacian matrices into dense tensors. This procedure uses dense GPU BLAS kernel to batch and run such system in parallel. This method is differentiable, which can be potentially useful for machine learning pipeline integration.
Claims And Evidence: The major claims of this paper include the following: First, the significant speedup compared to the existing solutions. This is supported by Table 1, where it shows the average runtime to solve Laplacian systems under the proposed method, CUDA, Scipy, and AMG. Second, the paper claims that the proposed method is applicable to more problems than PDE, which is demonstrated in examples such as image segmentation.
Methods And Evaluation Criteria: The evaluation contains multiple PDE problems, including anisotropic and isotropic Laplacian systems, along with Darcy flow experiments compared to a neural operator baseline. Another evaluation aspect is the comparison of average runtime, performed against SciPy, CUDA, and AMG. The authors use the relative error tolerance as the main evaluation metric, which is a standard measurement of solution accuracy.
Theoretical Claims: I have no comments on the theoretical claims due to a lack of expertise in this specific area.
Experimental Designs Or Analyses: The experimental designs in this paper cover multiple domains, including PDE solving and real-world applications in graphics and vision. In Section 5.2, “A zero-shot baseline: learn-to-solve PDEs,” the author evaluates the proposed method against a state-of-the-art neural operator on the Darcy Flow dataset. While this comparison demonstrates the method’s applicability to PDE solving, it primarily focuses on a single PDE equation. While the paper shows that the method is widely applicable, it is suggested to include experiments on additional PDE types, such as Navier-Stokes equations.
Supplementary Material: The supplementary material includes the visualization, applications, and experiments across multiple facets of optimizations, including physical simulation, image segmentation, fast eigen solvers, and so on. It additionally includes ablation studies on solving anisotropic systems under different conditions. Along with the implementation details, these materials increase the reproducibility for researchers.
Relation To Broader Scientific Literature: The paper builds on literature in sparse linear solvers, domain decomposition methods, such as the Dirichlet-to-Neumann methods operator, and the backends for direct sparse solvers.
Essential References Not Discussed: I have no comments on the essential references.
Other Strengths And Weaknesses: Strength:
The proposed method significantly speeds up the PDE solver and the method can be applied to extensive areas more than PDE solving.
Other Comments Or Suggestions: I do not have specific comments on this aspect as it is outside my primary research area.
Questions For Authors: I have no further questions to the authors.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: # General responses
We thank the reviewers for their careful examination of our paper. We note that all reviewers appreciate the broad impact that our method will have (eGtJ: “can be applied to extensive areas”, D4Wa: “greatly accelerate scientific computation in many domains”, QcrL: “both novel and clever”, “significant potential for applications in various domains”).
We also acknowledge that our paper is rather unusual, in that it revisits an area that is often taken for granted (linear solvers), and in its current form assumes textbook knowledge about classical methods, in particular Gaussian elimination. It’s now clear that by bypassing some steps in our derivations (for reasons of readability and space), we have made the method more difficult to follow for some readers. However, our derivations are correct (as confirmed by QcrL and numerical experiments), and we are happy to add the intermediate derivations in the Appendix. These extra steps, along with a more extensive graphical presentation of the problem and its specific sparsity, are available at https://anonymous.4open.science/r/icml-rebuttal-268F and will be added to the Appendix.
The reviewers' comments highlight a specificity of our paper, in that it identifies a problem that is *critical yet long overlooked and considered to have little room for improvement*. It is generally accepted that existing direct solvers are close to optimal and haven’t been improved in decades. However, we do show that the approach we propose accelerates the inversion of large systems with Laplace-like sparsity by several orders of magnitude. There is nothing magical though: we simply make the observation (which has already been made for *forward* operators and is critical to the success of deep networks) that extremely parallelizable problems can be implemented extremely efficiently on GPUs, such that they outperform "optimal" (flop-wise) sequential algorithms. Our *main contribution* is in transforming a problem that is not parallel by construction into a form that enables the use of dense parallel operations. Rather than focusing on the mathematics of our method (which builds on simple ideas), we have chosen to focus on its potential applications and on quantifying its performance gains on an array of real-world applications.
# Broad impact
Due to the foundational role of linear solvers, our method can broadly impact:
* We provide the *first method to invert convolution* efficiently and exactly (in the generalized case of spatially-varying kernels) at interactive rates, for vision & image processing.
* Immediate improvements across many areas: PDE solvers, learning-for-PDEs, image segmentation, physical simulation, shape optimization, and so on.
* Since solvers are used in all subareas of science and engineering, many computing, solver-in-the-loop and learning systems can benefit from ours by substituting their existing solvers.
* Our finding---sparse solvers can be 1000x faster than current algorithms---qualifies them as a module in neural networks, especially those involving geometry/physics, since conventional methods in e.g. physics and geometry processing heavily rely on sparse solvers.
* Sparse solvers have been a long requested feature in e.g. PyTorch; the lack of efficient implementation hinders its adoption in neural architectures, especially for scientific ML.
* Optimization: by making Newton’s method much faster on images.
* Spectral methods are accelerated: vision, graphics, geometric deep learning.
* Partially explain and reduce the performance gap between conventional methods and deep learning for many tasks.
# Other PDEs
We appreciate the insightful suggestions on more PDEs like the Navier–Stokes equations. We note that *Laplace FEM solvers are effectively sparse linear solvers*, making them ideal for evaluating our method. Many nonlinear PDEs—minimal surfaces, deformation, heat flow, optimal control of PDEs (diffeomorphisms), and eigenvalue problems—*repeatedly call Laplace solvers as subroutines*. As shown in the Appendix, our method *directly accelerates these different types of PDEs*.
Some classic Navier–Stokes solvers involve solving Laplace equations with *constant* kernels—these can often be handled efficiently by FFT-based methods, and do not require general-purpose sparse solvers like ours. In contrast, our method targets **spatially-varying kernels** (e.g., inhomogeneous materials so FFTs no longer apply) or **generalized deconvolution** with spatially-varying kernels. While our method *can* handle more general fluid models with varying diffusivity or anisotropy, we are not aware of widely-used PDE learning benchmarks involving such settings.
We discuss in the Appendix (line 962) how future work can employ **a simplified variant** of our method to accelerate constant-kernel PDEs like the regular Navier–Stokes equations, and we will try to implement the discussed variants and apply them to Navier–Stokes experiments if time permits. This will further extend the applicability of our method.
AAAR-1.0: Assessing AI’s Potential to Assist Research | Accept (poster) | Summary: This paper introduces AAAR-1.0, a benchmark designed to assess the capabilities of Large Language Models (LLMs) in assisting with research-specific tasks. While most existing benchmarks focus on general-purpose tasks, AAAR-1.0 specifically targets high-level academic reasoning and research assistance, addressing three core challenges:
Equation Inference (EQINFER): Evaluates an LLM’s ability to verify the correctness of equations within research papers, a critical aspect of scientific validation.
Experiment Design (EXPDESIGN): Tests whether LLMs can propose well-structured experimental plans aligned with research objectives.
Paper Weakness Identification (WEAKNESS): Assesses an LLM’s capacity to critically analyze research methodologies and identify key weaknesses.
The dataset is constructed using expert-annotated research examples, ensuring that evaluation aligns with real-world academic challenges. The authors conduct extensive comparative evaluations across a range of LLMs, including GPT-4o, Claude 3.5, Gemini 1.5, Mistral, and Mixtral, revealing:
LLMs struggle significantly with equation reasoning, with performance barely exceeding random baselines.
While LLMs can generate diverse experimental plans, these plans are often misaligned or infeasible for real-world research applications.
In research critique tasks, LLMs can identify broad weaknesses but frequently fail to provide specific, actionable insights comparable to expert reviews.
The paper positions AAAR-1.0 as a necessary step toward evaluating and improving AI’s role in research, highlighting the current limitations and potential future directions for AI-assisted research workflows.
Claims And Evidence: Claim 1: AAAR-1.0 is the first benchmark specifically designed to evaluate LLMs in research-oriented tasks.
While prior work has explored AI-driven research assistance, most datasets focus on code generation, summarization, or retrieval, rather than high-level research tasks.
The authors present AAAR-1.0 as a unique benchmark addressing critical reasoning tasks encountered by academic researchers.
Claim 2: LLMs perform poorly in verifying equations and mathematical reasoning.
Experimental results from the Equation Inference (EQINFER) task show that even the best-performing models (GPT-4o, Claude 3.5) achieve only 46% F1-score, indicating a fundamental gap in symbolic reasoning capabilities.
Open-source models like Mistral and Mixtral perform close to random guessing, reinforcing the difficulty of formal equation verification for LLMs.
Claim 3: LLM-generated experimental designs are often misaligned with real-world research constraints.
In the Experiment Design (EXPDESIGN) task, LLMs produce syntactically correct but practically infeasible experiment proposals.
Human evaluations of 15 model-generated plans show that many experiments were either unnecessary, redundant, or impossible to implement with available resources.
Claim 4: LLMs provide general but shallow critiques in peer review tasks.
In the Paper Weakness Identification (WEAKNESS) task, LLM-generated critiques highlight surface-level issues but often lack depth and specificity compared to human reviewers.
Using an ITF-IDF-based informativeness metric, the paper shows that human-written critiques outperform LLM-generated ones in specificity and actionable feedback.
Methods And Evaluation Criteria: The evaluation framework is well-structured, incorporating:
Three benchmark tasks covering distinct aspects of AI-assisted research.
Comparisons across multiple models, including both open-source (Mistral, Mixtral, Qwen, LLaMA) and closed-source (GPT-4o, Claude, Gemini) LLMs.
Task-specific metrics:
F1 Score for equation inference.
Semantic Precision, Recall, and ITF-IDF Informativeness for weakness identification.
Human evaluation for experiment design.
Theoretical Claims: The paper does not propose new theoretical models but provides empirical insights into LLM limitations in symbolic reasoning, research critique, and experiment design.
A discussion on why LLMs fail in equation inference (e.g., limitations in token-level vs. structural mathematical reasoning) would strengthen the study.
A deeper exploration of LLMs’ long-context reasoning capabilities in academic texts could provide additional insights.
Experimental Designs Or Analyses: The experiments are controlled and reproducible, ensuring fair model comparisons.
Ablation studies explore the impact of context length, multimodal inputs (figures, tables), and response verification methods.
The finding that multimodal inputs (figures, tables) do not significantly improve performance is valuable, suggesting that LLMs struggle with processing visual research data effectively.
Supplementary Material: The supplementary materials provide:
Dataset construction details, ensuring reproducibility.
Additional model performance comparisons.
Sample task outputs, illustrating where models succeed and fail.
Relation To Broader Scientific Literature: This work contributes to ongoing research in AI for research automation, building on:
AI-powered academic assistants (Lu et al., 2024; Si et al., 2024).
Mathematical reasoning in LLMs (Song et al., 2023).
Automated peer review tools (Gao et al., 2024; Liang et al., 2024).
Essential References Not Discussed: Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models
Other Strengths And Weaknesses: Strengths:
Well-structured benchmark tailored to research tasks.
Expert-validated dataset, ensuring high annotation quality.
Provides critical insights into LLMs' limitations in academic reasoning.
Weaknesses:
Limited failure case analysis.
No discussion of computational efficiency.
Evaluation relies heavily on automated metrics instead of human judgments.
Other Comments Or Suggestions: No
Questions For Authors: What are the primary failure patterns in equation inference? Are LLMs failing due to lack of symbolic reasoning or misunderstanding notation?
Would AAAR-1.0 generalize to non-STEM fields like law, medicine, or social sciences?
Could incorporating multi-turn dialogue improve the performance of LLMs in research critique tasks?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your detailed review and comments. Below, we provide our comprehensive responses to your questions.
---
>Q1. Why LLMs fail on `EQINFER` task & provide the failure patterns discussion.
`EQINFER` leverages a challenging binary inference setting, where LLMs are forced to examine each option separately rather than relying on superficial shortcuts from multiple-choice QA (please refer to the Q1 of Reviewer `qTbh` for more detailed discussion). According to our observations, most LLMs tend to predict any given equation as correct (**a common error pattern for most LLMs**) without performing in-depth reasoning over the paper context. Thus, we assume that the inferior performances of most LLMs result from their **limited long-context reasoning capacity**.
To verify our assumption, we ran the OpenAI o1 model with varying levels of reasoning effort.
|Model |F1 |
|--------|-----|
|o1-low |42.98|
|o1-medium |46.35|
|o1-high |**47.12**|
The o1’s performance consistently improves as we increase the reasoning effort. Although this is a simple empirical verification, we believe it highlights the importance of reasoning capacity for this task, particularly for open-source LLMs.
---
>Q2. Heavy reliance on automatic metrics for evaluation.
Thanks for your suggestions. We proposed task-specific automatic metrics for all the tasks in AAAR to ensure that the public can easily and efficiently reproduce the experimental results reported in our paper.
At the same time, we agree that human judgment is also valuable. As shown in Tables 3 and 4, we conducted a small-scale human evaluation for the model-generated experiment ideas, where we found that despite a few unexpected evaluation results (i.e., false negatives), the results from the automatic metrics generally align well with human judgments, especially when compared with conventional generation metrics like ROUGE. This suggests the reliability of the proposed automatic metrics.
---
>Q3. Discussion on computational efficiency.
Apologies for missing this detail. When running the open-source LLMs on our local machine, we used [vllm](https://docs.vllm.ai/) to accelerate computational efficiency. Given 4 NVIDIA A100 GPUs for LLM inference, the largest model we utilized in our experiment, Qwen-72B, took approximately 1.25 hours, 0.4 hours, and 1.75 hours for `EQINFER`, `EXPDESIGN`, and `WEAKNESS`, respectively. All the running hyperparameters, such as the maximum model length, can be found in our paper.
In our next version, we will include more details about the computational costs of various LLMs on AAAR.
---
>Q4. Would AAAR generalize to other fields?
Yes. Our proposed data collection method can ideally be generalized to other disciplines, with AAAR serving as a representative benchmark in the AI/ML field. However, the main constraint is still the requirement for domain experts, as recruiting a reliable and large annotation team is extremely expensive for this kind of research benchmark (see the Q2 discussion of Reviewer `qTbh`).
---
>Q5. Could incorporating multi-turn dialogue improve the performance of LLMs in research critique tasks?
We assume so. To our knowledge, multi-turn dialogue context can be seen as a structured reasoning path, potentially benefiting LLMs in complex tasks (based on our experience). Meanwhile, some studies have explored dialogue-based collaboration between humans and LLMs or among different LLMs, showing that intra-perspective message sharing can enhance performance in reasoning-intensive tasks [1][2].
---
### References:
[1]. [Collaborative Gym: A Framework for Enabling and Evaluating Human-Agent Collaboration.](https://arxiv.org/abs/2412.15701) (*arxiv 2025*)
[2]. [Chain of Agents: Large Language Models Collaborating on Long-Context Tasks.](https://arxiv.org/abs/2406.02818) (*NeurIPS 2024*) | Summary: This paper introduces AAAR-1.0, a novel benchmark designed to evaluate the ability of Large Language Models (LLMs) to assist researchers in expert-level tasks. The benchmark comprises three distinct tasks: EquationInference (EQINFER), which assesses the LLM's ability to validate the correctness of equations within a given context; ExperimentDesign (EXPDESIGN), which evaluates the LLM's capacity to design reliable experiments based on a research idea; and PaperWeakness (PAPERWEAKNESS), which tests the LLM's ability to identify weaknesses in research paper drafts. The authors meticulously curated datasets for each task, employing expert annotators to ensure high-quality data. The evaluation process involved a range of both open-source and closed-source LLMs, and the results were analyzed using a combination of quantitative metrics and qualitative assessments. The study found that while LLMs demonstrate some capability in these tasks, their performance is often only slightly above chance, particularly in the EquationInference task, highlighting limitations in their practical utility for advanced research assistance. The paper's core contribution lies in the creation of a benchmark that targets complex, reasoning-intensive tasks that are highly relevant to the research process, moving beyond superficial applications of LLMs.
Claims And Evidence: 1. The paper lacks a clear and operational definition of what it means for an LLM to "assist researchers." This ambiguity makes it difficult to interpret the results and understand the practical implications of the benchmark. While the motivation section outlines the challenges researchers face, and the introduction specifies the three tasks, a broader definition is missing. The notion of assistance is too broad and could encompass a wide range of activities, from generating research ideas to writing code, and the paper does not specify which of these are targeted. This lack of clarity makes it challenging to assess the real-world utility of the benchmark.
2. The paper does not provide a clear explanation of how the performance of LLMs on the benchmark translates to their ability to assist researchers in practical scenarios. The connection between the benchmark tasks and real-world research assistance is not well-established, leaving the reader to speculate about the practical utility of the results. For example, it is unclear how well performance on the EquationInference task correlates with the ability of an LLM to help a researcher identify errors in their own equations. While the paper attempts to connect the tasks to real-world scenarios, the explanation is not always explicit and lacks strong supporting evidence or citations.
Methods And Evaluation Criteria: Convincing enough.
Theoretical Claims: None.
Experimental Designs Or Analyses: It does not explicitly discuss the limitations of the proposed benchmark. The benchmark does not cover all aspects of research assistance, and the tasks may be biased towards certain types of research or domains. The tasks may also be too narrow, focusing on specific sub-tasks rather than the broader context of research.
Supplementary Material: None.
Relation To Broader Scientific Literature: See in Essential References Not Discussed.
Essential References Not Discussed: I believe the authors have overlooked discussing a very closely related work: CycleResearcher: Improving Automated Research via Automated Review. In fact, when we compare the data construction of the two papers, it's readily apparent that both AAAR and CycleResearcher's collected Review-5K and Research-14K are oriented towards assisting researchers. For AAAR's ExperimentDesign task, it is actually similar to the "Idea" and "Experiment" sections of Research-14K (and I think they are very similar). For AAAR's PaperWeakness task, it is almost identical to the content of the Review-5K dataset. I understand the authors of AAAR want to focus on benchmarking and evaluation, but the lack of discussion of CycleResearcher, and the absence of using CycleResearcher and CycleReviewer as baseline methods, is difficult to accept.
Other Strengths And Weaknesses: In my opinion, the true brilliance of this paper absolutely shines through in its introduction of the AAAR-1.0 benchmark. This isn't just another benchmark; it's genuinely novel and undeniably relevant to the booming interest in leveraging LLMs for research support. What I particularly appreciate is how the benchmark tackles expert-level tasks – equation validation, experiment design, and paper weakness identification – this is a remarkably significant contribution. It's refreshing to see a benchmark that moves beyond the shallow, commonplace uses of LLMs and dives into something truly meaningful. Frankly, the authors have brilliantly identified a real gap in existing benchmarks, and they've masterfully filled it with a resource that is precisely what the research community desperately needs.
Other Comments Or Suggestions: First and foremost, please, please take the time to clearly define what you mean by "assisting researchers." This isn't just a minor detail; it's fundamental. Make this definition specific and operational – something that truly guides the scope of your benchmark. For instance, you could explicitly state that "assistance" encompasses tasks like identifying errors in equations, suggesting relevant experimental designs, or pinpointing weaknesses in research papers. Ground this definition in the real needs of researchers – what do we actually struggle with? – and let that guide the very design of your benchmark tasks.
It's crucial that you provide a clear and compelling rationale for why you chose these specific tasks. Don't just assume their relevance is obvious. Walk us through your thinking. Discuss the common hurdles researchers face, and explain precisely how your benchmark tasks directly address these challenges. Think about the broader research landscape. Consider how this benchmark could be used across different research domains, and honestly discuss its limitations in those diverse contexts. Transparency here is key.
You need to explicitly and honestly discuss the limitations of your proposed benchmark. Don't shy away from this. Analyze potential biases in the tasks. Acknowledge domains not covered. Be upfront about potential biases – is it skewed towards certain research types or domains? Does it truly capture all aspects of research assistance? Discuss the very real possibility of "gaming" – could LLMs be specifically trained to excel on your benchmark without truly understanding the underlying research tasks? This discussion needs to be honest, transparent, and insightful.
Questions For Authors: What is the random guess baseline for the PaperWeakness task? How is it set?
How do you ensure that the experts involved in the data collection and evaluation process are not biased towards certain LLMs or research domains?
---
Do you know "DeepReview"? I believe this work may be helpful to you.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Your comments are very much appreciated! We took your comments carefully and tried to address them one by one.
---
>Q1. The critical references that were missed (i.e., CycleResearch and DeepReview).
Thanks for highlighting these important concurrent works. Specifically, we ran the CycleResearcher and CycleReviewer models on our benchmark during the rebuttal.
| Method | S-F1 | ITF-IDF |
|----------------|------|--------|
| Llama3.1-70B |42.78 |2.60 |
| GPT-4o |**47.73** |**5.95** |
| AI-SCI |45.05 |2.23 |
| CycleReviewer-70B|46.68 |2.65 |
The table above presents the `WEAKNESS` results, where the 70B CycleReviewer model (based on LLaMA-3.1) achieves an S-F1 score nearly comparable to GPT-4o, highlighting the benefits of post-training open-source LLMs for this task. However, CycleReviewer's ITF-IDF score (our proposed diversity metric) remains similar to LLaMA-3.1's due to a lack of specificity—a common error pattern of LLMs discussed in our paper.
| Method | En-F1 |
|-----------------|-------|
| Llama3.1-70B | 22.92 |
| GPT-4o | **25.03** |
| CycleResearcher-72B| 21.16 |
The table above presents the `EXPDESIGN` results. In our view, CycleResearcher may not be a suitable baseline for this task, as it is a policy model designed for whole-paper writing rather than specializing in experiment design. Consequently, applying it to our task requires modifying its original system prompt and task objective, which could make the comparison unfair.
We will provide further observations on CycleResearcher in our next manuscript.
---
>Q2. The definition of “assisting research”.
In terms of 'AI for research', our benchmark differs from existing works in two key aspects.
- i) **The scope of 'research'**: Since research activities are broad and diverse, we focus on domain-specific, expertise-demanding, and reasoning-intensive tasks that highlight the **irreplaceability of researchers**. For example, writing experimental code is a reasoning-light task, often done by students, whereas determining 'what experiments are necessary' to support a paper’s primary claim is an expertise-demanding task, typically decided by senior advisors. The latter is clearly more challenging and demonstrates that AI models cannot easily replace senior researchers.
- ii) **'Assisting researchers' rather than 'replacing researchers'**: For high-level research tasks, our benchmark primarily serves an educational purpose — LLMs assist junior researchers by offering imperfect yet insightful ideas rather than governing the entire research process [1]. Relying on LLMs to oversee research and replace human effort compromises academic integrity. For example, we can use LLMs to suggest weaknesses as feedback to help us refine our own manuscripts, rather than directly using model-generated comments for peer review.
---
>Q3. Reason for choosing the three tasks and their connection with the real-world scenario.
As addressed in Q2, our benchmark focuses on expertise-demanding research tasks that highlight the irreplaceability of researchers. Though more tasks could be included, we prioritize those that are **widely underestimated** in existing works. For example, while writing a paper review is well-studied, identifying a paper’s weaknesses is significantly more challenging than writing a paper summary/strengths [2][3].
`EQINFER` relates to scenarios where LLMs assist in double-checking the correctness of our own equation writing. `EXPDESIGN` mirrors a PhD student seeking a professor’s advice before writing experimental code. `WEAKNESS` represents using LLM feedback to refine our entire research project. Each corresponds to a real-world scenario and strictly aligns with our 'assisting researcher rather than replacing researcher' perspective.
---
>Q4. The limitation of the benchmark.
Thanks for your suggestion. We agree that discussing limitations would benefit readers and future research. In fact, we initially included a limitation discussion section in our manuscript but removed it to comply with the ICML submission format. Due to the rebuttal word limit, we have provided that section in **[this link](https://anonymous.4open.science/r/ICML2025-rebuttal-A9BF/limitation%20section.png)** and will aim to reintegrate it in future versions.
---
>Q5. Details about random baseline and how we ensure the data collection is not biased.
For the PaperWeakness task (Table 5), we reported only human performance and did not include a 'random baseline'. Please refer to Q2 of Reviewer `qTbh` for more details on how we ensured unbiased data collection.
---
### References:
[1]. [Collaborative Gym: A Framework for Enabling and Evaluating Human-Agent Collaboration.](https://arxiv.org/abs/2412.15701) (*arxiv 2025*)
[2]. Can large language models provide useful feedback on research papers? A large-scale empirical analysis. (*NEJM AI 2024*)
[3]. [LLMs Assist NLP Researchers: Critique Paper (Meta-)Reviewing.](https://arxiv.org/pdf/2406.16253) (*EMNLP 2024*)
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for addressing my concerns! I have set my score to 4 to support this work being accepted!
Summary: The paper introduces AAAR-1.0, a benchmark for measuring the ability of LLMs to perform 3 key research tasks: mathematical equation understanding, designing experiments, and identifying weaknesses in paper submissions.
The authors curate datasets for each of their chosen research tasks by scraping public research papers and reviews, transforming it, and using expert human annotation to filter and assure quality.
Finally, they evaluate both open and closed-source LLMs on their constructed benchmark and report results along with ablations and analyses.
Claims And Evidence: The main contribution of the paper is their dataset and benchmark, which is well-supported by construction.
Otherwise, there are no significant claims besides their main results which are directly supported by their experiments. I leave specific criticisms of the benchmark and experiments to later sections.
Methods And Evaluation Criteria: The overall benchmark is built around a good choice of 3 main tasks, and the authors have done a great job creating interesting datasets for all 3.
- EquationInference - no complaints here, this appears to be a solid and useful dataset for mathematical reasoning with straightforward metrics.
- ExperimentDesign - The dataset is well-constructed. Precision/Recall are suitable metrics. One issue is that you’re making an assumption that the ground truth experiments are exactly what are needed - nothing more, nothing less. If a paper includes an extraneous experiment, or misses a useful experiment that the LLM thinks of, the LLM gets penalized unfairly.
- PaperWeakness - Dataset is well-constructed, and metrics are fine. However, this has the same issue of unfairly penalizing LLMs as I mentioned for ExperimentDesign.
Given the subjective “correctness” of real-world papers/reviews, treating these datasets as ground truths to match exactly likely makes this benchmark unfair or impossible in some respects. It would be good to have a more careful analysis / discussion of the extent of false positives and false negatives in the dataset.
Theoretical Claims: No key theoretical claims.
Experimental Designs Or Analyses: - The choice of unifying context lengths is an unusual one. This arbitrarily limits the performance of models and adds a lot of complexity to the paper’s discussion and results.
- The paper dedicates many experiments to this topic, which would be relevant for a long-context-focused benchmark, but is only a practical consideration for the AI-research benchmark. (The results are surprising in some cases but it feels mostly like performing hyperparameter sweeps which minimally change the overall conclusions)
- In my opinion, the benchmark should always directly provide all the necessary information - if models can’t handle it, the models may do worse, but future models will soon have more context length given the quick rate of progress in AI.
Supplementary Material: No supplementary material has been submitted.
Relation To Broader Scientific Literature: This work contributes valuable benchmarks and metrics for understanding the ability of LLMs to perform 3 key research tasks that are not currently well-covered by existing datasets. This will be a valuable dataset for future work to build upon. The Related Work in Appendix A does a good job laying out the relevant literature.
Essential References Not Discussed: None that come to mind.
Other Strengths And Weaknesses: Strengths
- All 3 tasks give really interesting datasets! Even leaving the main benchmark aside, I think they are all really interesting datasets in their own right.
- Very valuable data collection and annotation of papers with expensive experts. Multi-stage annotation with multi-annotation and peer review shows a great care for data quality.
- Interesting results! I was surprised to see the overall weak performance of models on this benchmark, suggesting that it might reveal useful new information about our understanding of models' capabilities.
Weaknesses
- The main concern I have is of data contamination. All the datasets and answers are scraped from publicly available papers on the internet. Before long, models will train on the data and then the benchmark results will no longer be trustworthy. This doesn't seem to be an issue now given the low results, but this will likely limit the lifespan of the benchmark.
Other Comments Or Suggestions: None.
Questions For Authors: None.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Thanks for your efforts in reviewing our manuscript! We're glad you found our dataset interesting, the proposed evaluation metrics reasonable, and the experimental results useful. Below, we address your concerns in detail.
---
>Q1. Problem for setting the 'ground truth' for `EXPDESIGN` and `WEAKNESS`.
This is a great point. While research is inherently 'open-ended,' human annotations still meet the regular acceptance standards of research — **not a gold standard, but at least a reasonable one**. Therefore, despite its limitations, using human annotations as ground truth remains a practical approach for benchmark construction, as evidenced by recent works that continue to adopt this methodology [1][2].
In this work, to mitigate potential bias from setting human annotations as the ground truth, we establish the evaluation framework that integrates both **automatic metrics** and **human assessment**. For example, in `EXPDESIGN`:
- First, we use an automatic metric to compute the En-F1 score (Table 2) and identify 'negative' predictions.
- Then, to *quantify false negative judgments*, experts manually assess the true correctness of these ‘negative’ predictions (Table 3).
To our knowledge, combining automatic metrics with further human assessment represents the 'best' current practice for evaluating research outputs.
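As a rough illustration of the automatic-metric step, a soft set-matching F1 can be sketched as follows (a simplified sketch only; the character-level similarity function and the 0.6 threshold are hypothetical placeholders, not our actual En-F1 implementation):

```python
# Simplified sketch of a soft set-matching F1: a predicted item counts as
# correct if it is sufficiently similar to some reference item, and a
# reference item counts as covered if some prediction matches it.
from difflib import SequenceMatcher

def soft_f1(predictions, references, threshold=0.6):
    """Return (precision, recall, F1); `threshold` is a hypothetical choice."""
    def similar(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

    if not predictions or not references:
        return 0.0, 0.0, 0.0
    precision = sum(any(similar(p, r) for r in references)
                    for p in predictions) / len(predictions)
    recall = sum(any(similar(r, p) for p in predictions)
                 for r in references) / len(references)
    if precision + recall == 0:
        return precision, recall, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)
```

In practice, a semantic-similarity model would replace the character-level matcher used in this sketch; the 'negative' predictions identified here are then passed to human experts for the second stage.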
---
>Q2. Analysis for the false positive / false negative cases w.r.t. the 'ground truth'.
Thanks for your helpful suggestion. We have indeed included such analyses in our paper. Specifically, as shown in Table 3, for those model-predicted experiment ideas that do not match the human ground truth, we asked human experts to evaluate them manually.
We found a few model-generated experiment ideas that deviate from the ground truth but are deemed reasonable in manual checks (implying **some false negatives**). Meanwhile, our manual examination confirms that our rigorous peer review process ensures the ground truth's correctness and objectivity (indicating **no notable false positives**).
We hope these results can inspire future work on fairly evaluating research outputs without reliance on ground truth while maintaining efficiency and reproducibility.
---
>Q3. Concerns on data contamination.
This is an important point to discuss. First, we believe that `EQINFER` and `EXPDESIGN` are less likely to be affected by data contamination, as the ground truth outputs for both tasks have been **reformulated** or **rewritten** from the original source. Output rewriting or distribution perturbation is a widely adopted method to mitigate data contamination [3][4].
However, we acknowledge the potential data leakage issue in the `WEAKNESS` task, as all outputs are directly taken from OpenReview (though we believe this is a common challenge faced by most current benchmark datasets [5]). We maintain that our experimental results still offer valuable insights and can **serve as an upper bound for certain LLMs**, especially if they were pretrained on papers from OpenReview.
Notably, this work introduced AAAR-1.0. Our ongoing efforts involve collecting confidential data that no LLM has touched; we will include that as blind test sets in AAAR-2.0, our next version.
---
>Q4. Concerns on unifying input length.
Since not all LLMs support long-context reasoning and different models have varying maximum context sizes, we standardized the input length for a fair comparison. We agree that it is crucial to ensure all input information is provided to the LLMs.
To address this, we conducted extensive context-length analyses in our manuscript (please refer to Appendix D1 --- 'Input Context Scaling Investigation'). For example, Figure 4 in the Appendix demonstrates that not all long-context LLMs benefit from the full input information.
---
### References:
[1]. [ResearchTown: Simulator of Human Research Community.](https://arxiv.org/abs/2412.17767) (*arxiv 2024*)
[2]. [LLMs Assist NLP Researchers: Critique Paper (Meta-)Reviewing.](https://arxiv.org/pdf/2406.16253) (*EMNLP 2024*)
[3]. [ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery.](https://arxiv.org/abs/2410.05080) (*ICLR 2025*)
[4]. [ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code.](https://arxiv.org/abs/2311.09835) (*ICLR 2025*)
[5]. [CycleResearcher: Improving Automated Research via Automated Review.](https://arxiv.org/abs/2411.00816) (*ICLR 2025*)
Summary: This paper aims to measure the capability of Large Language Models (LLMs) in research-relevant tasks. Specifically, those tasks include 1) Equation Inference, which measures whether the equation is relevant to the given context of the paper, 2) Experiment Design, which measures whether the experimental designs generated by LLMs align with the designs generated by humans, and 3) Paper Weakness, which measures whether the weaknesses of the paper identified by LLMs align with humans. Through extensive experiments, this paper shows that even state-of-the-art LLMs are not sufficient for those advanced research-relevant tasks, and points out that there is room for improvement.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are mostly reasonable. However, one critical concern about the overall quality and reliability of the proposed dataset collection process and evaluation setup is that the authors seem to be annotating data with five (or a few) PhD students. While the authors claim that they are senior researchers, I view them as still students, and it may be beneficial to check the quality of the collected data and the annotated experimental results with more seasoned researchers. Also, according to this, I think the authors may need to tone down their claim of performing annotations with senior researchers. Lastly, in addition to them, it is questionable how they were recruited, how much they were compensated, and how diverse they are across domains. Providing this information seems conventional for benchmark work.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The evaluation setup for the Equation Inference is not clear. For the given context of the paper, there is one positive equation and three negative equations. As described in Figure 11 (the prompt template), the authors provide those four options as well as the given context, and prompt the model to predict one that is the relevant equation for the given context. If so (i.e., if the setting is the multiple-choice question answering), it is not clear how to formulate the "All-Positive baseline" that predicts all equations as positive. Also, as shown in the results of Table 1, most LLMs tend to predict the equation as positive, and there are no substantial improvements over the "All-Positive baseline". In this regard, I am wondering whether the LLMs still do not show substantial performance improvement over the "All-Positive baseline" if the authors use the multiple-choice question-answering setup.
Supplementary Material: Yes, I skimmed through it, mostly checking the prompt templates.
Relation To Broader Scientific Literature: There is a growing body of literature on using AI for science (to accelerate it), and this paper is relevant to this topic, which is very important and timely.
Essential References Not Discussed: For the proposed Experiment Design task, indeed there are few recent studies that evaluate the capability of LLMs in generating experiment designs [A, B]. The authors may cite them, and potentially include their approaches in their benchmark evaluation.
[A] ResearchAgent: Iterative Research Idea Generation over Scientific Literature with Large Language Models
[B] Chain of Ideas: Revolutionizing Research Via Novel Idea Development with LLM Agents
Other Strengths And Weaknesses: Please see my previous comments.
Other Comments Or Suggestions: It would be interesting to see the performance where LLMs are prompted with the few-shot examples to perform the given task. The LLMs might not be familiar with the given tasks (as they are a bit different from a typical suite of evaluation benchmarks); however, by learning from examples in-context, they can capture the core principle of the given task and then achieve significantly higher performance.
Questions For Authors: I feel like this paper is borderline, and I would like to increase my score if the authors would address all my comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: Many thanks for your detailed and comprehensive review. We're pleased you found our evaluation criterion reasonable and the literature topic important. As shown below, we address your main concerns one by one:
---
> Q1: Unclear evaluation setup for the Equation Inference.
We apologize for the confusion. Although `EQINFER` was originally designed for an MCQA setting (which explains the 1:3 positive-to-negative distribution), we ultimately adopted a **binary inference setup**, as shown in Figure 1 / Footnote 1. This change was made for two reasons:
- i) We found that the MCQA setup contains shortcuts --- LLMs may focus on superficial character differences among the options rather than reasoning through the paper context. In contrast, binary inference requires LLMs to evaluate the correctness of **each option separately**, eliminating any straightforward shortcuts.
- ii) In real-world equation reviewing during paper submissions, the task also involves determining whether the author-written equation is valid based on the context. We are making our 'AI assists researcher' **setting more realistic**.
If you're interested, you can find the MCQA setup performances at this **[link](https://anonymous.4open.science/r/ICML2025-rebuttal-A9BF/EQINFER_MCQA_performance.png)**. As shown, LLMs generally achieve higher scores due to the MCQA shortcut. We apologize again for the misleading MCQA prompt template in the appendix and will update it in our next manuscript.
---
> Q2. The reliability of the employed annotators.
Thank you for pointing out this critical argument. We will include the following recruitment details in our next version:
- **Recruiting**: we posted an online recruiting form with strict qualification requirements: i) more than 4 years of AI research experience; ii) more than 5 publications, with at least one first-authored publication in leading AI venues; iii) reviewed 10+ peer publications. The form was then shared through social media.
- **Annotator Profile**: over 10 annotators were selected, all from renowned academic institutions with strong ML/AI research backgrounds (**not just five students**), including some professors.
- **Domains**: we aimed to include annotators with diverse AI expertise, including NLP, CV, and ML theory.
- **Reward**: given the challenge of collecting expertise-level data, we offer a high payment of \\$70 per hour to all annotators. Additionally, they receive a \\$5 bonus for each low-quality sample identified or valid response made during peer discussions.
We will follow your suggestion to tone down the term 'senior researcher' in the next version. It is important to emphasize that we have made every effort to ensure all annotators have a strong background and are fully engaged in rigorous data collection and examination, which was also acknowledged by Reviewer `WuHG`.
---
> Q3. Performance of adding few-shot examples.
This is a good suggestion. Here, we provide the performance with few-shot examples.
| Model | 0-shot | 2-shot |
|---------------|--------|--------|
| Mistral-7B | 28.45 | 30.45 |
| Qwen-72B | 31.22 | 33.09 |
| GPT-4o | 40.35 | 40.61 |
| o1 | 46.35 | 46.28 |
The table above illustrates the results on `EQINFER`; we found that adopting few-shot examples only improves the F1 score of smaller open-source LLMs. We believe the examples mainly serve as format guidance for classification in this case, while the reasoning ability of LLMs plays a more critical role in EQINFER (see our Q1 discussion with Reviewer `xpyH`).
| Model | 0-shot | 2-shot |
|-------------|-----|-----|
| Mistral-7B |1.17 |0.89 |
| Qwen-72B |1.21 |0.94 |
| GPT-4o |5.95 |5.02 |
| o1 |5.63 |4.78 |
The table above illustrates the results on `WEAKNESS`; we found that few-shot examples even negatively impact the ITF-IDF scores of various LLMs. Our observation suggests that adding examples restricts LLMs' creativity in generating novel weaknesses. For instance, if we include 'lack of novelty' as an example weakness, LLMs tend to repeat this weakness across different papers.
For `EXPDESIGN`, due to its list-format generation, this is the only task where we observed significant and consistent performance improvement after adopting few-shot examples (which we have already used in our paper, as seen in Figure 12).
---
> Q4. Missing references.
Thanks for suggesting highly relevant references. We will include a discussion on them in our next version. Additionally, we have conducted further baseline experiments to make our evaluation more comprehensive (please see our Q1 discussion with Reviewer `aPKE`).
---
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response, which effectively addresses all the concerns I raised. I increase my score to 4 (accept) from 2. | null | null | null | null | null | null |
PISA Experiments: Exploring Physics Post-Training for Video Diffusion Models by Watching Stuff Drop
Paper Decision: Accept (poster)
Summary: Current large-scale pre-trained video generation models excel in content creation but are not suitable as physically accurate world simulators out of the box. Therefore, this paper introduces the PISA framework, providing diagnostic tools for assessing the physical modeling capabilities of video generation models. Experimental validation highlights the crucial role of post-training (fine-tuning and reward modeling) in enhancing physical accuracy.
Claims And Evidence: The claims are supported by clear evidence.
Methods And Evaluation Criteria: The methods and evaluation criteria make sense for the problem.
Theoretical Claims: The proofs are correct.
Experimental Designs Or Analyses: The authors designed various metrics to comprehensively reflect how well the generated falling trajectories fit the real trajectories. However, some issues still require further research, such as the lack of an assessment of human subjective perception of physical realism. I believe this is crucial for the practical application of generated videos, and it is difficult to convey the degree of this realism intuitively with only the quantitative metrics provided in the paper.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: The methods and benchmarks proposed in this paper are significant for further investigating the key deficiencies in current video generation models.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The methods and benchmarks proposed in this paper are significant for enhancing the physical realism of generated videos. The authors' experiments found that while post-training can effectively improve physical realism, there are certain limitations, such as the incorrect distribution of dropping times, which has not been thoroughly discussed. Moreover, although the problem addressed in this paper is important and the proposed methods and benchmarks are valuable, I believe that this post-training approach struggles to fundamentally resolve the lack of physical realism. The authors' experiments also revealed generalization issues with post-training.
Other Comments Or Suggestions: I have no further suggestions.
Questions For Authors: I have no questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to review our work. We are extremely grateful for the feedback given, and below we address the main concerns raised.
> **Q1:** Lack of human evaluation.
This is a great point. While our work focused on quantitative evaluation, human evaluation is important too. To address this limitation, we asked four volunteers (none of them authors) to rank their preferences between generations from our PSFT+ORO model and Sora. We showed 30 videos: 10 from our sim seen test set, 10 from our sim unseen test set, and 10 from our real-world test set. For each video, we asked the annotators to select a preference for each of the following three questions:
1. Which video does better at following basic physics?
2. Which video has better visual realism?
3. Which video does a better job at preserving the state/identity of objects, i.e. not hallucinating or introducing implausible state changes?
Overall, our model is preferred over Sora 90% of the time in physical accuracy, 56% of the time in visual realism, and 68% of the time in preserving object identity. The full results are shown [here](https://anonymous.4open.science/r/ICML-7650-Rebuttal-1645/7_human_eval.md).
> **Q2:** Concern about distribution of dropping times.
We believe that the distribution matching failure is an important insight for researchers developing world models. A goal of this paper is to make researchers aware of the strengths and weaknesses when post-training video models that struggle with basic physics. As such, we hope the reviewer agrees that negative results like this are important scientific findings that should be shared, even if they are limitations.
> **Q3:** Struggle to fundamentally resolve the lack of physical realism.
Our benchmark and human evaluation indicate that our model is more physically realistic than all other baselines, including frontier closed-source models like Sora, on our dropping task. Our goal is not to solve physics as a whole, but rather to shed light on the post-training process that, in our view, will become an increasingly critical part of the world-modeling stack. The strengths and limitations of this post-training process presented in our paper are valuable insights for future research.
> **Q4:** Generalization issues.
We evaluated our data on challenging and OOD settings in both real and simulated data. These videos include scenarios such as objects sliding down ramps, falling into containers, or domino-like setups. A summary of our relative improvement over the base OpenSora model is shown below.
| Scenarios | L2 | CD | IoU |
| --------------------- | ------ | ------ | ------- |
| Domino (Real) | 47% | 54% | 42% |
| Ramp (Real) | 41% | 47% | 18% |
| Stairs (Real) | 25% | 20% | 103% |
| Ramp (Simulated) | 87% | 90% | 55% |
| Container (Simulated) | 75% | 67% | 3.7% |
Please see [here](https://anonymous.4open.science/r/ICML-7650-Rebuttal-1645/3_ood.md) for more information about the dataset construction and a full table breakdown across settings and comparisons with baselines.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' response, which has provided clearer answers to some of my questions.
On one hand, the issue of physical rules in probabilistic generative video models represents a crucial yet challenging research direction with limited existing studies. This paper makes valuable exploratory contributions by proposing a feasible method to enhance physical realism, analyzing its effectiveness, and offering inspiration for subsequent research. However, on the other hand, the approach of "incorporating physics-specific data for post-training" appears overly simplistic and direct, lacking methodological innovation and analytical depth. Both its effectiveness and limitations seem easily foreseeable. Therefore, I maintain my original "weak accept" rating.
From my perspective, the ultimate solution to this challenge should focus on model architecture and training paradigms rather than relying solely on data manipulation. For instance, why does the post-training process lead to significant divergence in performance between spatial trajectories and temporal trajectories in free-fall motion? Could this be related to the distinct modeling approaches for spatial and temporal dimensions in current video generation frameworks? I encourage the authors to conduct more in-depth investigations into these fundamental questions in future work.
Summary: The paper argues that large-scale video generative modeling has enabled creative content generation but that accurate world modeling is missing. This is due to the complexities of the physical laws and perspectives that underpin real-world videos. To solve this problem, the paper proposes to use targeted post-training. Specifically, the paper studies the potential of post-training in image-to-video generative models for freefalling objects under gravity.
The paper studies free falling because it is simple to simulate and evaluate to gain insights into the post-training stage. The experimental results suggest that: (a) the existing video generative models are quite bad at physically accurate object dropping, (b) simple finetuning of the video model on 1000s of examples fixes this problem significantly, (c) they consider further RL-tuning with multiple reward models and target multiple axes for physical improvement.
The paper also finds that finetuning struggles to generalize beyond the unseen depths and heights, and its struggle with trajectory distribution and dropping-time distribution. Overall, the paper performs insightful data collection and post-training that can serve as a useful data point for the community. I do have several comments on the work’s limitations and potential suggestions.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: NA
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes (parts referred in the main paper)
Relation To Broader Scientific Literature: Mentioned in the summary, and strengths.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: **Strengths**
- The authors collect 361 real-world videos to assess model performance in real environments, which is crucial for quantifying the sim-to-real gap—an aspect missing from prior work [1].
- The entire framework is well-designed for the object dropping phenomenon -- having simulated videos for training the models and assessing them at different heights and depths for generalization study.
- The paper also goes beyond mere supervised finetuning and shows the ability to improve performance with further RL-tuning.
**Weaknesses**
- The study lacks a reliability measure with human evaluation; it is unclear whether humans prefer the finetuned model’s outputs over the generalist model’s.
- Figure 7 shows that object dropping under gravity can be roughly broken down into two parts: straight-line motion before impact and collision/rolling motion after impact. That figure suggests that the finetuned model has learned the behavior before impact pretty well, but it is not clear about the later part. A good way to test this would be to break down the Table 1 finetuned-model numbers into before-impact and after-impact.
- The study does not establish a strong conclusion on whether post-training improves world modeling in video models. The generalization capability appears limited, and experiments focus only on object falling—a relatively simple physical phenomenon. There is uncertainty about how the results extend to other physical behaviors, given the difficulty in acquiring simulated data for more complex scenarios.
[1] Section 5.3.2 in https://arxiv.org/abs/2501.03575v1
[2] https://arxiv.org/abs/2406.03520
[3] https://arxiv.org/abs/2501.09038
Other Comments Or Suggestions: - The evaluation metrics closely resemble those in the Physics-IQ paper [3], and proper attribution should be provided.
Questions For Authors: Mentioned in the strengths and weaknesses
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to review our work. We are extremely grateful for the feedback given, and below we address the main concerns raised.
> **Q1:** Lack of human evaluation.
This is a great point. While our work focused on quantitative evaluation, human evaluation is important too. To address this limitation, we asked four volunteers (none of them authors) to rank their preferences between generations from our PSFT+ORO model and Sora. We showed 30 videos: 10 from our sim seen test set, 10 from our sim unseen test set, and 10 from our real-world test set. For each video, we asked the annotators to select a preference for each of the following three questions:
1. Which video does better at following basic physics?
2. Which video has better visual realism?
3. Which video does a better job at preserving the state/identity of objects, i.e. not hallucinating or introducing implausible state changes?
Overall, our model is preferred to Sora 90% of the time in physical accuracy, 56% of the time in visual realism, and 68% of the time in preserving object identity. The full results are shown [here](https://anonymous.4open.science/r/ICML-7650-Rebuttal-1645/7_human_eval.md).
> **Q2:** Evaluating before and after impact.
We estimated the contact frame using the method described in Appendix B of the paper and ran our benchmark. Overall, we find that ORO most improves the results *after* the point of contact. This indicates that ORO is strongest at improving the most difficult aspects of physics modeling. The full set of evaluation tables can be found [here](https://anonymous.4open.science/r/ICML-7650-Rebuttal-1645/8_impact.md).
> **Q3:** OOD generalization.
We evaluated our data on challenging and OOD settings in both real and simulated data. These videos include scenarios such as objects sliding down ramps, falling into containers, or domino-like setups. A summary of our relative improvement over the base OpenSora model is shown below.
| Scenarios | L2 | CD | IoU |
| --------------------- | ------ | ------ | ------- |
| Domino (Real) | 47% | 54% | 42% |
| Ramp (Real) | 41% | 47% | 18% |
| Stairs (Real) | 25% | 20% | 103% |
| Ramp (Simulated) | 87% | 90% | 55% |
| Container (Simulated) | 75% | 67% | 3.7% |
Please see [here](https://anonymous.4open.science/r/ICML-7650-Rebuttal-1645/3_ood.md) for more information about the dataset construction and a full table breakdown across settings and comparisons with baselines.
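For reference, the relative-improvement percentages in tables like the one above can be computed with a small helper; the sign convention depends on whether the metric is lower-is-better (L2, Chamfer distance) or higher-is-better (IoU). A minimal sketch — the baseline numbers below are illustrative, not taken from the paper:

```python
def relative_improvement(base, new, higher_is_better=False):
    """Relative improvement of `new` over `base`, as a percentage.

    For lower-is-better metrics (L2, Chamfer distance), improvement is
    the fractional reduction; for higher-is-better metrics (IoU), it is
    the fractional increase.
    """
    if higher_is_better:
        return 100.0 * (new - base) / base
    return 100.0 * (base - new) / base

# e.g. a hypothetical baseline L2 of 0.20 reduced to 0.106 is a 47% improvement
print(round(relative_improvement(0.20, 0.106)))  # -> 47
```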
> **Q4:** Attribution for Physics-IQ.
Thank you for pointing this out. We did cite this paper in our related work section of our submission, though we did not point out the similarity between our concurrently developed metrics. We will definitely add an acknowledgement of this.
Please let us know if you have any further questions or concerns. We are grateful that you feel our paper should be accepted, and if our response has sufficiently addressed the concerns you have mentioned, we would appreciate it if you consider raising your score further. Thank you very much again for taking the time to review our work! | Summary: **Main Findings:**
- This paper addresses the physics-based task of modeling object freefall in video diffusion models, specifically formulated as follows: given an initial image of an object suspended midair, the goal is to generate a video depicting the object realistically falling, colliding with the ground, and potentially interacting with other objects.
- A new evaluation framework called PISA (Physics-Informed Simulation and Alignment) is proposed, including a video dataset. Results reveal that current state-of-the-art video generation models significantly struggle with accurately performing this fundamental physics task.
**Main Algorithmic/Conceptual Ideas:**
- The paper introduces a post-training method aimed at enhancing video generation models through Physics-Supervised Fine-Tuning (PSFT) and Object Reward Optimization (ORO).
**Main Results:**
- Evaluations using the proposed PISA framework demonstrate that existing SOTA video generation models have limited capabilities in accurately generating object freefall, highlighting weaknesses in their physical modeling abilities.
- While the proposed PSFT and ORO methods significantly enhance model performance on the benchmark task, their generalization to out-of-distribution (OOD) scenarios remains limited.
Claims And Evidence: Yes, it is clear and convincing evidence.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation make sense for the target problem.
Theoretical Claims: Not applicable. No theoretical claim is proposed.
Experimental Designs Or Analyses: Yes, I have checked.
Supplementary Material: Yes, all contents in supplementary materials (both text and videos) are reviewed.
Relation To Broader Scientific Literature: Evaluating video generation models' capabilities in physical modeling is crucial for developing effective world models. Addressing this research problem is particularly significant for advancements in embodied AI.
Essential References Not Discussed: No, the related work is solid.
Other Strengths And Weaknesses: **Strengths:**
- The proposed PISA (Physics-Informed Simulation and Alignment) evaluation framework is well-motivated and thoughtfully designed. Experimental results clearly highlight current limitations in the physical modeling capabilities of state-of-the-art video generation models.
- The introduced PSFT and ORO methods demonstrate significant performance improvements on the PISA benchmark. The limitations in generalizing to out-of-distribution (OOD) scenarios are explicitly and effectively discussed.
**Weaknesses:**
- The scope of this paper is somewhat limited. Evaluating only object freefall addresses just a narrow aspect of the physical modeling capabilities of video generation models. The assessment would be more comprehensive and convincing if additional physical scenarios, such as collisions or diverse movement interactions (like in CLEVR dataset), were included.
- Post-training specifically on object freefall scenarios introduces a bias. Although PSFT and ORO effectively enhance performance on the PISA benchmark, the restricted variability in direction and speed during object freefall could allow for dataset-specific optimization or "hacking." This limitation is further corroborated by experimental evidence.
- The improvement methods, while effective, still demonstrate limited generalization to out-of-distribution (OOD) scenarios. This limitation should be addressed by introducing greater variability and complexity in training conditions.
Other Comments Or Suggestions: No additional comments.
Questions For Authors: - Do the video generation models become biased toward object freefall after post-training and lose their original capabilities? How might their performance be affected in scenarios involving collisions or movements in different directions (e.g., similar to CLEVR)? Additionally, how do these models perform on standard video generation benchmarks such as V-Bench?
- Regarding the GIF demonstrations provided in the supplementary materials, evaluating models based on higher-resolution object freefall experiments could yield more robust results. Lower-resolution demonstrations might limit the models' ability to accurately generalize semantic categories and effectively reason about object interactions, thereby posing additional challenges for evaluation.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to review our work. We are extremely grateful for the feedback given, and below we address the main concerns raised.
> **Q1:** Limited scope of the paper.
Our goal in this paper is not to create a generalist state-of-the-art physics video model, but rather to treat the post-training process itself as the primary object of study. As shown in the paper, leading commercial models are unable to reliably simulate some of the most basic aspects of physics. As such, we believe that post-training is an overlooked aspect of video modeling that will soon become a mainstream research area, especially in world models/simulators for embodied AI applications.
The simplicity of our task was deliberately chosen because it can serve as a probing mechanism for understanding aspects of the post-training process that would otherwise be overlooked in a more general setting. For example, we use the fact that our task can be analytically described by the laws of gravity and perspective to conduct the analysis in Sec. 5. We believe the distribution matching limitations are an important finding because if they are present in a setting as simple as ours, then the problem is likely to persist in more complex settings that are of interest to world model researchers. Hence this contribution is a highly valuable insight for future research looking to leverage video models as planners or simulators in embodied applications.
> **Q2:** Bias in post-training.
The problem of models overfitting/memorizing statistical patterns in the training data is ubiquitous across all areas of deep learning, even in the post-training of leading LLMs [1]. Our paper does not study solutions to this broader generalization problem, and the extent to which it is possible to solve is not clear either. In this sense, some of the results presented in this paper, such as those in section 5.1, are not surprising. We hope that you would agree with our view that presenting these results is scientifically important, even if they are somewhat negative and unsurprising.
> **Q3:** OOD generalization.
We evaluated our data on challenging and OOD settings in both real and simulated data. These videos include scenarios such as objects sliding down ramps, falling into containers, or domino-like setups. A summary of our relative improvement over the base OpenSora model is shown below.
| Scenarios | L2 | CD | IoU |
| --------------------- | ------ | ------ | ------- |
| Domino (Real) | 47% | 54% | 42% |
| Ramp (Real) | 41% | 47% | 18% |
| Stairs (Real) | 25% | 20% | 103% |
| Ramp (Simulated) | 87% | 90% | 55% |
| Container (Simulated) | 75% | 67% | 3.7% |
Please see [here](https://anonymous.4open.science/r/ICML-7650-Rebuttal-1645/3_ood.md) for more information about the dataset construction and a full table breakdown across settings and comparisons with baselines.
> **Q4:** Degradation of original capabilities.
We evaluated our model on VBench to understand the effect that our post-training process has on the model's original capabilities. Overall, there is a degradation in aesthetic quality and image quality, likely stemming from the limited realism of our simulated videos. The degradations could potentially be mitigated by adding aesthetic samples into post-training, which has been shown to be effective for both image [2] and video [3] models. Aside from these degradations, the performance on the other metric categories either improves or remains intact. Please see [here](https://anonymous.4open.science/r/ICML-7650-Rebuttal-1645/6_vbench.md) for a full breakdown.
> **Q5**: Concern about resolution.
Due to computational limitations, we were not able to train models at resolutions higher than 256. However, the video architecture of OpenSora supports zero-shot evaluation at 512 resolution. Overall, our metrics slightly degrade, though performance could be improved with finetuning at 512. A summary on real data is shown below and the full table can be found [here](https://anonymous.4open.science/r/ICML-7650-Rebuttal-1645/5_512_resolution.md).
| Output Resolution | L2 (⬇️) | CD (⬇️) | IoU (⬆️) |
| ----------------- | ----------------- | ----------------- | ---------------- |
| $256\times256$ | 0.153 | 0.432 | 0.069 |
| $512\times512$ | 0.175 | 0.502 | 0.069 |
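As a reference point for the L2/CD/IoU metrics reported throughout, here is a generic sketch of IoU between two object masks and the symmetric Chamfer distance between two point sets; the paper's exact definitions (e.g. normalization and temporal averaging across frames) may differ.

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union of two boolean object masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 1.0

def chamfer(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Symmetric Chamfer distance between two (N, 2) point sets."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

a = np.zeros((4, 4), dtype=bool); a[:2, :2] = True  # 4-pixel mask
b = np.zeros((4, 4), dtype=bool); b[:2, :] = True   # 8-pixel mask
print(iou(a, b))  # -> 0.5
```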
Please let us know if you have any further questions. If our response has sufficiently addressed the concerns you have mentioned, we kindly ask that you raise your score. Thank you very much again for taking the time to review our work!
References
[1] Embers of autoregression show how large language models are shaped by the problem they are trained to solve
[2] Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack
[3] Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets | Summary: This work finds the generations of SOTA video generation models are visually impressive but are physically inaccurate. This work rigorously examines the post-training process of video generation models by focusing on the simple yet fundamental physics task of modeling object freefall which is highly challenging for state-of-the-art models.
They find that fine-tuning on a small amount of simulated video can effectively improve physical consistency. They further introduce two reward models for reward-gradient training. The study also reveals key limitations of post-training in generalization and distribution modeling. The released benchmark can also serve as a useful diagnostic tool for measuring the emergence of accurate physics modeling in video generative models.
Overall, the proposed benchmark is quite valuable for evaluating physics modeling, but the technical contribution is lacking and claims are not quite solid.
Claims And Evidence: The claims are good overall.
A few issues:
- The improvement from ORO is quite marginal. For example, ORO increases IoU scores from 0.139 to 0.142. It's uncertain if the proposed reward models are useful or not.
- The authors draw conclusions from experiments on a single model, Open-Sora, which is quite weak nowadays. It's unknown whether the observations transfer to other models.
- The paper doesn't evaluate whether the learned physics can transfer to more OOD settings, i.e., dramatically different scenes (e.g., indoor, outdoor) and objects (e.g., cat, human, etc.).
Methods And Evaluation Criteria: - The proposed methods (PSFT, ORO) make sense and can improve the physical consistency.
- While the proposed reward models make sense, they are quite specialized and may not work for general cases. For example, do these reward models work for human actions?
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: Overall, the experimental design and analyses are solid and convincing.
A few issues:
- In the dataset-size ablation, all models are trained for 5k steps, which may leave models trained on larger dataset sizes under-trained.
- The fine-tuning video resolution is unspecified in the paper. From supplementary material, the models are trained on 256p, which may be too small for many objects in the proposed benchmark. How does the model perform on higher resolution, e.g. 512p?
Supplementary Material: Yes. It's great that the supplementary material provides many implementation details.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: See issues in previous sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to review our work. We are extremely grateful for the feedback given, and below we address the main concerns raised.
> **Q1:** Marginal improvement from ORO.
On simulated data, ORO yields substantial gains. As the goal of our work is to study the process of post-training in a rigorous and controlled manner, we chose simulation as the primary domain for both training and evaluation. Even though we do not directly address the sim2real gap in this work, our method does also show generalizability to real data, and importantly, it maintains its outperformance of all other state-of-the-art commercial models.
To see if we could push real world performance even further, we ran an experiment using depth reward. The depth reward improves performance on the real world dataset by a large margin in L2 (11% improvement) and Chamfer distance (15% improvement). Please see [here](https://anonymous.4open.science/r/ICML-7650-Rebuttal-1645/1_depth_reward.md) for the full results.
We also found that ORO significantly improves the results in the phase of the trajectory *after* collision with the ground, which is the most complex part to model. Please see our response to hCaK for more details.
> **Q2:** Weakness of OpenSora.
We agree that it is important to make sure the claims made for OpenSora hold for other models. We applied the PSFT procedure to Pyramid-Flow[1], a more recent and performant open video model than OpenSora. On our real test set, Pyramid-Flow outperforms the baselines.
| Model | L2 ⬇️ | CD ⬇️ | IoU ⬆️ |
| ------------------- | ----------------- | ----------------- | ---------------- |
| Pyramid-Flow + PSFT | 0.081 | 0.194 | 0.121 |
Training curves and more evaluation tables can be found [here](https://anonymous.4open.science/r/ICML-7650-Rebuttal-1645/2_pyramid_flow.md).
> **Q3:** OOD evaluation.
We evaluated our data on challenging and OOD settings in both real and simulated data. These videos include scenarios such as objects sliding down ramps, falling into containers, or domino-like setups. A summary of our relative improvement over the base OpenSora model is shown below.
| Scenarios | L2 | CD | IoU |
| --------------------- | ------ | ------ | ------- |
| Domino (Real) | 47% | 54% | 42% |
| Ramp (Real) | 41% | 47% | 18% |
| Stairs (Real) | 25% | 20% | 103% |
| Ramp (Simulated) | 87% | 90% | 55% |
| Container (Simulated) | 75% | 67% | 3.7% |
Please see [here](https://anonymous.4open.science/r/ICML-7650-Rebuttal-1645/3_ood.md) for more information about the dataset construction and a full table breakdown across settings and comparisons with baselines.
> **Q4:** Reward method is specialized.
We disagree that our reward modeling framework is specialized. In fact, it is highly general, since it only requires dense annotation maps, such as segmentation, depth or flow, to be provided for the generated and ground truth video.
Accuracy on physics tasks besides dropping is an important problem though, and concurrent work has shown evidence that ORO could be effective in more general settings. VideoJam [2] uses optical flow supervision in a similar manner to ORO and finds that it dramatically improves motion accuracy in video diffusion models, including in modeling human motion.
> **Q5:** Potential for under-training on larger datasets.
We continued to finetune our model trained on 20k data samples for 5k more steps (batch size of 128). As can be seen in the figure [here](https://anonymous.4open.science/r/ICML-7650-Rebuttal-1645/4_20k_curve.md), our metrics do not consistently improve further as a result of doing this.
> **Q6:** Concern about resolution.
Due to computational limitations, we were not able to train models at resolutions higher than 256. However, the video architecture of OpenSora supports zero-shot evaluation at 512 resolution. Overall, our metrics slightly degrade, though performance could be improved with finetuning at 512. A summary on real data is shown below and the full table can be found [here](https://anonymous.4open.science/r/ICML-7650-Rebuttal-1645/5_512_resolution.md).
| Output Resolution | L2 (⬇️) | CD (⬇️) | IoU (⬆️) |
| ----------------- | ----------------- | ----------------- | ---------------- |
| 256x256 | 0.153 | 0.432 | 0.069 |
| 512x512 | 0.175 | 0.502 | 0.069 |
Please let us know if you have any further questions. If our response has sufficiently addressed the concerns you have mentioned, we kindly ask that you raise your score. Thank you very much again for taking the time to review our work!
References
[1] Pyramidal Flow Matching for Efficient Video Generative Modeling
[2] VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models | null | null | null | null | null | null |
Toward Robust Hyper-Detailed Image Captioning: A Multiagent Approach and Dual Evaluation Metrics for Factuality and Coverage | Accept (poster) | Summary: This paper studies how to evaluate and tackle the hallucination phenomenon of MLLMs. It first conducts a motivating experiment and concludes that existing hallucination detection methods struggle with long captions. It then proposes a new multi-agent approach that involves an LLM to decompose the original long detailed caption into atomic propositions, and another MLLM for fact-checking. Additionally, a new evaluation metric and benchmark for factuality and coverage are proposed. Experiments on the IIW-400 and DOCCI datasets with various common LLMs and MLLMs demonstrate the effectiveness of the proposed approach.
Claims And Evidence: Yes, the motivation justified by the experiments in Section 3.1 is straightforward and makes intuitive sense to me. Long-context understanding has always been a problem for LLMs/MLLMs, and it naturally comes to mind that hallucination detectors would lean more on nearby text and less on the far-away image when the generated caption is extremely long.
Methods And Evaluation Criteria: Yes. Breaking the detailed caption into atomic propositions naturally tackles the long-context understanding issue.
I wonder, do the authors have some experimental evidence to prove that, after CapMAS correction, the new caption is on par with or better than the original caption beyond the degree of hallucination? A high-quality image caption should not hallucinate, and should also truly and faithfully describe the whole image content in fluent, human-style language.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Experiments in Table 4 do show non-trivial improvement when the proposed CapMAS is applied. However, with a stronger captioner like GPT-4V, the gain becomes smaller. Thus, one wonders: if a truly powerful MLLM with superb long-context understanding capability were employed, how much gain would CapMAS bring?
Secondly, how does the corrector LLM take as input the $\pi$-thresholding results and correct the original caption? I also wonder whether the authors have checked if the hallucination scores would continue to improve if CapMAS were applied iteratively.
Supplementary Material: I checked Appendix D and found that a larger threshold $\pi$ generally leads to better performance. I wonder, will it eventually plateau? The authors are encouraged to provide more data points on the selection of $\pi$.
Relation To Broader Scientific Literature: Tackling LLM/MLLM hallucination is an important aspect of current foundational model researches.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: This CapMAS approach requires two extra LLMs and one MLLM. I wonder what the memory and computational costs of CapMAS are compared with the naive baseline and other hallucination-tackling methods. Could this strategy be interpreted as a kind of inference-time scaling? If so, how well does CapMAS perform compared to a simple inference-time scaling strategy for MLLM image captioning?
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **We are pleased to share that, in support of open-source research, we have decided to release our carefully curated VQA dataset and evaluation codes. This dataset includes 1k images, each paired with approximately 36 question-answer sets.**
We sincerely thank you for your thorough review. We have made our best efforts to address your remaining concerns as follows:
>**Q1**. Do the authors have some experimental evidence to prove that after CapMAS correction, the new caption is on par with or better than the original caption besides the hallucination degree?
**A1**. We sincerely thank the reviewer for suggesting this experiment, which allowed us to highlight a new strength of CapMAS in terms of fluency. Due to space limitations, we regret that we cannot provide a detailed response here, and kindly ask you to refer to our response **A2** to reviewer **quCw** for further details.
>**Q2**. Thus, one wonders if a truly powerful MLLM with superb long-context understanding capability is employed, how much gain would CapMAS bring?
**A2**. CapMAS is designed to improve the captioning performance of MLLMs, which currently have limited long-context capabilities and are prone to hallucinations. As such, its effect may be less pronounced for future MLLMs with powerful capabilities. However, it is important to highlight that even for GPT-4V, one of the current SOTA MLLMs, we were able to improve factuality by over 5% points using CapMAS—without sacrificing coverage—by leveraging relatively weaker open-source models. Given that today’s best MLLMs still fall short of a truly reliable MLLM, our work carries meaningful implications, particularly for visually impaired users.
>**Q3**. How does the corrector LLM take as input the $\pi$-thresholding results and correct the original caption?
**A3**. It can be easily understood by referring to the corrector LLM's prompt template:
```
system:
I want to create a caption that includes only facts. Please help me correct the given caption.
The given caption contains things that are not true. Based on the given FACTS and NON-FACTS,
remove the non-factual elements from the caption.
user:
Caption: {caption}
FACTS:
{propositions classified as non-hallucinatory}
NON-FACTS:
{propositions classified as hallucinatory}
```
>**Q4**. I wonder have the authors check if the hallucination scores would continue to improve if CapMAS is applied iteratively?
**A4**. CapMAS is designed to remove detected hallucinations rather than correct them. This design choice is based on our observation that attempts at correction often lead to the introduction of new hallucinations. As a result, applying CapMAS iteratively to a single caption has an effect equivalent to applying it once with a lower threshold $\pi$.
Instead, we demonstrate that applying CapMAS individually to multiple captions for a single image and subsequently summarizing the results can lead to an overall improvement in the caption quality for that image.
Table B: Results of LLaVA-Next on a subset of IIW400
|Number of CapMAS Applications|CLAIR|Factuality|Coverage|
|-|-|-|-|
|1|59.8|83.0|24.3|
|2|63.3|76.8|32.6|
|3|67.4|80.2|37.7|
|4|68.2|79.9|38.9|
|5|70.6|80.1|41.4|
Table B demonstrates that the overall quality of the resulting captions improves as the number of CapMAS applications increases.
>**Q5**. I checked Appendix D and found that a larger threshold generally lead to better performances. I wonder, will it eventually plateau?
**A5**. In CapMAS, as the value of $\pi$ increases, the criterion for judging a proposition as true becomes more relaxed. This results in improved coverage but also causes more propositions to be classified as true, thereby introducing more hallucinations and lowering factuality. Consequently, in Appendix D, further increases in $\pi$ lead to a significant drop in the factuality score, ultimately degrading the overall quality of the captions. We summarize the results of additional experiments in the following table:
Table C: Ablation results on $\pi$ using LLaVA-NeXT-7B
|$\pi$|CLAIR|Factuality|Coverage|
|-|-|-|-|
|3.0|71.1|63.0|47.8|
|2.0|70.6|65.0|47.1|
|1.0|74.1|72.2|46.9|
|0.5|73.6|76.9|43.7|
>**Q6**. Could this strategy be interpreted as some kind of inference-time scaling?
**A6**. Yes, CapMAS can indeed be considered an inference-time scaling strategy, and the experimental results, including those in Table B, support this interpretation. Additionally, we compared CapMAS with Self-Refine, another inference-time scaling method. CapMAS achieves significantly better performance while requiring a comparable level of cost. Due to space limitations, we regret that we cannot provide further details here and kindly ask you to refer to our responses **A3 and A4** to reviewer **bFJq** for more information.
**We have additional experimental results that we were unable to include here. To help us share them, we kindly ask you to click “Rebuttal Comment” to allow us to leave a supplementary comment.**
---
Rebuttal Comment 1.1:
Comment: Please share your additional results in a supplementary comment. Thanks!
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment!
>**Q7**. How well does CapMAS perform when compared to simple inference-time scaling strategy of MLLM image captioning?
**A7**. Motivated by the reviewer’s comment, we tested the effectiveness of inference-time scaling via Self-Refine (SR) [1]. Since validating and revising one's output requires advanced reasoning capabilities, we conducted the experiments using GPT-4V. For SR, we used the following prompt:
```
You are given an image and its corresponding caption.
Your task is to:
1. Analyze the image and compare it with the caption.
2. Identify and correct any factual or descriptive errors in the caption based on the image.
3. Refine the caption for clarity, correctness, and completeness — even if the original caption is mostly accurate.
Show your reasoning and then provide a final refined caption.
```
<Table D>
|Method|CLAIR|Factuality|Coverage|
|-|-|-|-|
|Base|82.4|77.1|53.5|
|+SR (x1)|79.9|72.3|50.4|
|+SR (x2)|79.1|70.6|50.1|
|+SR (x3)|78.5|69.8|49.7|
Table D shows that having the model revise its own captions iteratively does not lead to better results.
CapMAS is an approach that achieves better results by incurring additional cost at inference time. Although it involves a multi-model pipeline, CapMAS can offer a better cost-performance trade-off than Self-Refine methods for the following reasons:
1. Most of CapMAS’s cost lies in the final step, where the corrector LLM generates a refined caption based on the initial caption and the True/False classification results of the propositions. The decomposition step involves shorter sequences, and the MLLM used for proposition classification can process them in parallel, generating only a single token (True or False) per proposition. SR processes long sequences that include the original caption, detailed feedback, and the refined caption. Considering the length and complexity of the feedback [1], CapMAS and SR can be seen as comparable in cost.
2. As shown in Table 4 of our manuscript, CapMAS’s performance does not heavily depend on the capability of the LLM used. This suggests that the LLM-related cost in the pipeline could potentially be further reduced.
[1] SELF-REFINE: Iterative Refinement with Self-Feedback
**Evaluation of CapMAS on an Additional Dataset**. To demonstrate the generalizability of CapMAS, we additionally tested its effectiveness on a subset of DOCCI (400 samples).
<Table E>
|Method|CLAIR|Factuality|Coverage|
|-|-|-|-|
|LLaVA-NeXT-7b|68.0|56.2|54.0|
| +CapMAS|72.8|68.1|53.5|
|LLaVA-NeXT-13b|70.1|58.2|55.6|
| +CapMAS|74.5|74.6|52.3|
Table E demonstrates that CapMAS is also effective on DOCCI.
We sincerely appreciate your thoughtful feedback and have done our best to address your concerns. If there are any remaining or additional questions, please don’t hesitate to let us know. If you find our response satisfactory, we would be grateful if you could consider reflecting that in your score.
Thank you again for your time and consideration. | Summary: This paper looks at preventing hallucination in long-form image captions, proposing a system "CapMAS" which decomposes generated captions into atomic statements, which are then generated/corrected using a VLM. The paper also introduces two metrics for image caption evaluation based on a similar pipeline: Factuality (which represents the portion of "true" atomic statements), and Coverage (which uses a dataset of human annotated examples to evaluate if a caption can be used to answer all questions in an images). The paper shows that the resulting factuality metric correlates better with human judgements than both FAITHSCORE and FACTSCOR. The paper then shows that CapMAS outperforms base models alone when applied across several base captioning models (on CLAIR, Factuality, and Coverage scores), and significantly improves performance compared to hallucination reduction methods.
## update after rebuttal
Thanks to the authors for providing some clarifications to my comments - I particularly appreciate the additional discussion comparing to SR, and would love to see this included in the paper (A3). In general, while the paper is similar to some released methods, and I agree with reviewer QHjW that it is unlikely to have broad impact due to its limited scope, it's a solid paper.
Claims And Evidence: The paper makes several well-supported claims:
- CapMAS improves on factuality/clair/coverage averages in the DOCCI/ IIW-400 dataset
- VQA benchmarks do not correlate with captioning performance
- Existing hallucination detection methods fail on long captions (Though it's worth noting that CLAIR appears to be as effective in the meta-evaluation in Table 2, and ALOHa performs well in Table 2 for object-hallucination, which it is designed for).
The claim on L346 (Sec 4.3) that CapMAS exhibits a factuality-coverage tradeoff is somewhat over-stated. While the factuality and coverage scores do indeed have opposite trends in Table D, it doesn't seem to me that these are inherently conflicting values (and indeed, this is just an artifact of the fact that the LLM cannot achieve well-grounded performance).
Methods And Evaluation Criteria: The methods/evaluation criteria are fairly well-designed, though I would like to see some experiments on a wider set of images (particularly compared to IIW-400 which is quite small at only 400 images). Experimental comparisons on COCO (though less centered on detail captions), would be quite helpful in placing this against other captioning methods.
Theoretical Claims: This paper makes no theoretical claims.
Experimental Designs Or Analyses: I did not validate the experimental design, however no tables in the paper have any estimate of variance (for example, standard error around the mean), making it challenging to determine which experiments have significant effects. Also notable is that there is little ablation of the CapMAS components, for example, is it necessary to break the caption down into atomic points before re-captioning, or is just asking the external VLM to re-caption sufficient?
Supplementary Material: Yes - I reviewed appendices C, D, E, F, G and H.
Relation To Broader Scientific Literature: This paper consists of two key components, the method CapMAS and the evaluation measures/dataset. The evaluation measures are, while useful, quite similar to related work. Coverage is quite similar in structure to the POPE metric, albeit with a different goal (rather than hallucination detection, image factuality). Factuality is quite similar to the ALOHa metric, but expands somewhat from objects to atomic statements (though the motivating experiments seem to use a version that is object-detector based, which aligns quite well with ALOHa's contribution). CapMAS itself is quite similar to methods employed by both LURE and Woodpecker (See below), and uses rewriting models to directly rewrite output captions (though here, the focus is on coverage/correctness compared to hallucination).
While the approach is quite similar to existing methods, this appears to be the first approach which puts these all together, and the simplicity of the combination is quite compelling. The performance seems strong, and demonstrates that many of the ideas introduced in prior work can be combined to strong effect.
Essential References Not Discussed: The references are generally sufficient, however the paper might consider discussing a comparison to Woodpecker [1], which is a method similar to LURE for caption hallucination reduction which detects captions using an external model, and corrects the caption using information from this process. The system prompt in Figure 9 is very closely related to the system prompt in [2], and should probably be cited. The paper could consider citing related work in NLP on self-rationalization models which look at breaking sentences down into atomic claims [3].
[1] Yin, Shukang, et al. "Woodpecker: Hallucination correction for multimodal large language models." Science China Information Sciences 67.12 (2024): 220105.
[2] Chan, David, et al. "IC3: Image Captioning by Committee Consensus." The 2023 Conference on Empirical Methods in Natural Language Processing.
[3] Wiegreffe, Sarah, and Ana Marasović. "Teach me to explain: A review of datasets for explainable natural language processing." arXiv preprint arXiv:2102.12060 (2021).
Other Strengths And Weaknesses: Strengths:
- The paper is quite strong in terms of performance on the discussed metrics, and clearly leads to overall improvements on a limited data subset.
- The paper requires no additional training (is fully zero-shot)
- Factuality seems to be quite good compared to both baseline measures
Weaknesses:
- As discussed above, the test datasets used here are fairly small and limited in scope, and it would be good to see evaluations on something a bit broader.
- There's not really any explanation or discussion of the efficiency of the model, which now requires several LLMs (and probably significantly increases the cost of generating individual captions)
Other Comments Or Suggestions: - It would be quite helpful to have some additional qualitative analysis, or error analysis, to look at potential directions for future research, or to evaluate weaknesses in the approach (for both CapMAS and for the evaluation metrics).
Questions For Authors: - I couldn't find the dataset used for evaluation in Tables 4 and 5, is this on IIW-400 or the DOCCI dataset?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: **We are pleased to share that, in support of open-source research, we have decided to release our carefully curated VQA dataset and evaluation codes. This dataset includes 1k images, each paired with approximately 36 question-answer sets.**
We sincerely thank you for your thorough review. We have made our best efforts to address your remaining concerns as follows:
>**Q1**. The claim on L346 (Sec 4.3) that CapMAS exhibits a factuality-coverage tradeoff is somewhat over-stated.
**A1**. CapMAS is designed to remove detected hallucinations rather than correct them. This design choice is based on our observation that attempts at correction often lead to the introduction of new hallucinations. As CapMAS may remove not only hallucinations but also factual content, it inherently involves a trade-off between factuality and coverage, as discussed in Section 4.3.
Nevertheless, through inference-time scaling, CapMAS can improve the coverage of captions while maintaining a certain level of factuality. Due to space limitations, we regret that we cannot provide further details here and kindly ask you to refer to our response **A4** to reviewer **fLiQ** for more information.
>**Q2**. I would like to see some experiments on a wider set of images.
**A2**. Using benchmarks like COCO for evaluating detailed image captioning can lead to misleading conclusions:
1. Short captions in COCO bias evaluation metrics to favor simpler captions over accurate, detailed ones.
2. Reference-free metrics, intended to remove this bias, often introduce another bias stemming from the evaluation model itself, which can significantly affect the scores [1].
We believe reliable evaluation requires detailed human supervision. To address your concern, we conducted additional experiments on DOCCI. Due to space limitations, we kindly ask you to refer to our response **A2** to reviewer **QHjW** for details, which confirm CapMAS is effective on DOCCI as well.
[1] LLM Evaluators Recognize and Favor Their Own Generations
>**Q3**. Is it necessary to break the caption down into atomic points before re-captioning, or is just asking the external VLM to re-caption sufficient?
**A3**. CapMAS can be understood as an inference-time scaling strategy. Motivated by the reviewer’s comment, we tested the effectiveness of inference-time scaling via Self-Refine (SR) [2]. Since validating and revising one's output requires advanced reasoning capabilities, we conducted the experiments using GPT-4V. For SR, we used the following prompt:
```
You are given an image and its corresponding caption.
Your task is to:
1. Analyze the image and compare it with the caption.
2. Identify and correct any factual or descriptive errors in the caption based on the image.
3. Refine the caption for clarity, correctness, and completeness — even if the original caption is mostly accurate.
Show your reasoning and then provide a final refined caption.
```
<Table A>
|Method|CLAIR|Factuality|Coverage|
|-|-|-|-|
|Base|82.4|77.1|53.5|
|+SR (x1)|79.9|72.3|50.4|
|+SR (x2)|79.1|70.6|50.1|
|+SR (x3)|78.5|69.8|49.7|
Table A shows that having the model revise its own captions iteratively does not lead to better results.
[2] SELF-REFINE: Iterative Refinement with Self-Feedback
>**Q4**. There's not really any explanation or discussion of the efficiency of the model
**A4**. CapMAS is an approach that achieves better results by incurring additional cost at inference time. Although it involves a multi-model pipeline, CapMAS can offer a better cost-performance trade-off than Self-Refine (SR) methods for the following reasons:
1. Most of CapMAS’s cost lies in the final step, where the corrector LLM generates a refined caption based on the initial caption and the True/False classification results of the propositions. The decomposition step involves shorter sequences, and the MLLM used for proposition classification can process them in parallel, generating only a single token (True or False) per proposition. SR processes long sequences that include the original caption, detailed feedback, and the refined caption. Considering the length and complexity of the feedback [2], CapMAS and SR can be seen as comparable in cost.
2. As shown in Table 4, CapMAS’s performance does not heavily depend on the capability of the LLM used. This suggests that the LLM-related cost in the pipeline could potentially be further reduced.
>**Q5**. I couldn't find the dataset used for evaluation in Tables 4 and 5, is this on IIW-400 or the DOCCI dataset?
**A5**. For Tables 4 and 5, we used IIW-400, based on findings [3] indicating that IIW-400 serves as a better reference caption set than DOCCI. As noted in A2, our additional experimental results confirm that CapMAS is effective on both IIW-400 and DOCCI.
[3] ImageInWords: Unlocking Hyper-Detailed Image Descriptions
**We have additional results and discussions that we were unable to include here. We kindly ask you to click “Rebuttal Comment” to allow us to add a comment.**

Summary: This paper focuses on generating long, detailed captions for images. A key idea in the paper is to decompose long captions into atomic claims using an LLM, and then verify every claim individually in the context of the image using a VLM. The paper motivates this by showing that this approach outperforms alternative ways of identifying hallucinations, like token confidence or consistency-based approaches. Based on this observation, the paper proposes a pipeline called CapMAS, which decomposes a long caption into atomic claims, verifies the correctness of each claim, and removes incorrect claims from the caption using an LLM. The paper also proposes two metrics to measure factuality and coverage of claims in captions by using GPT-4o to break down captions into individual claims and verifying them. The paper compares these metrics to prior metrics like ROUGE and BLEU, and shows that they outperform them with respect to finding the correct caption for an image. Using these metrics, they evaluate CapMAS and prior approaches for the task of long-caption generation. CapMAS performs quite well on a wide range of VLMs, outperforming prior approaches. Finally, the paper highlights that current models are typically trained to generate short, concise responses and struggle with longer responses.
## Update after rebuttal
I think this is a solid paper. Thus, I increased my score to Accept. I hope the added results and discussions are reflected in the final version.
Claims And Evidence: The claims are mostly supported by empirical evidence:
- The paper motivated the use of decomposing long captions into atomic claims by showing that it is better at detecting hallucinations as compared to simple baselines like token confidence or consistency-based approaches. This forms the basis of their pipeline, CapMAS.
- The paper proposed a metric for measuring factuality and showed that it is better than prior metrics for this purpose.
- They also proposed a new metric for measuring coverage, but did not compare it to any prior metrics for this purpose – *are there any relevant alternatives?*
- They evaluated their pipeline CapMAS using these metrics and demonstrated that it consistently improved factuality for a range of MLLMs (like LLaVA and GPT-4o), supporting the efficacy of their approach.
- They also demonstrated that models which are good at visual question answering might not necessarily be good at detailed image captioning as they might be biased toward shorter responses, reducing their coverage.
Methods And Evaluation Criteria: Yes. The paper focuses on removing hallucinations from detailed captions. They identify two key criteria for this setting: factuality and coverage. They propose metrics for these criteria and demonstrate their efficacy. They evaluate their approach on two datasets – DOCCI and IIW-400 – which contain images and corresponding detailed, factual captions. They test their method on a wide range of MLLMs to assess how general it is.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes, the experiments in the paper are mostly sound. The factuality metric is compared to multiple existing metrics on 3 types of perturbed captions that contain hallucinations, and it is the most accurate at identifying captions across all of them. Further, the CapMAS pipeline is also compared against prior works on a variety of MLLMs and shows strong performance consistently.
*One major question is: which datasets do Tables 4 and 5 correspond to? This wasn’t clear, and is an important detail.*
Supplementary Material: Yes, I reviewed all sections of the supplementary material. It provides some details and examples like the prompts used, qualitative examples, and a few additional results to complement those in the main paper.
Relation To Broader Scientific Literature: Prior literature in vision-language understanding has focused less on highly detailed captions. While approaches have been proposed to reduce hallucinations in shorter responses, the paper shows that these approaches don’t work well for longer captions. They propose a new method to improve the factuality of detailed captions and show that it outperforms existing approaches.
Essential References Not Discussed: Mostly, the literature is well-covered.
You could additionally discuss and compare against [1] as a baseline.
[1] Petryk et al. Simple Token-Level Confidence Improves Caption Correctness. WACV'24
Other Strengths And Weaknesses: Strengths:
- The proposed pipeline performs well on the task of reducing hallucinations in detailed captions. It leads to significant improvement in performance for a range of MLLMs (small models like LLaVA to large models like GPT-4v). It also outperforms prior approaches.
- The pipeline can be easily plugged into existing MLLMs without requiring any additional training.
- The paper is mostly well-written and easy to follow.
Weaknesses:
- A few details were not clear. Which datasets do Tables 4 and 5 correspond to? This wasn’t clear, and is an important detail.
- While this approach helps improve factuality, it doesn’t seem to improve the coverage of captions, which is also important for detailed captions. (I understand that one can tune the hyperparameter pi to change coverage but coverage is still upper-bounded by the original captions generated by the MLLM).
- This is a minor point, but for the sake of completeness, you could also evaluate your pipeline on GPT-4o, as it is more widely used than GPT-4v.
Other Comments Or Suggestions: You can consider giving your proposed metrics a specific name (for example, the title of Section 4.2 reads a bit weird to me). Also, clearly highlight that this metric uses GPT-4o’s vision capabilities as well (i.e., there is no other model serving as the VLM, only GPT-4o).
Questions For Authors: - How is coverage computed? I couldn’t find a clear formula for this in the paper.
- I believe this pipeline cannot improve the coverage in captions (and only improve factuality). How can we improve coverage?
- For measuring factuality, your method uses the image and the reference caption. If the reference caption is already available, is the image really required? Can’t you verify the claim against the reference caption? An ablation about this might be interesting, as removing the image can save significant amounts of compute/ API credits.
- Similarly, for coverage, the questions are generated based on the image. The coverage of these questions themselves would be bottlenecked by the vision capabilities of GPT-4o, and the questions might miss some details. Could you try incorporating the captions of images in the question-generation pipeline?
Code Of Conduct: Affirmed.
Overall Recommendation: 4

Rebuttal 1:
Rebuttal: **We are pleased to share that, in support of open-source research, we have decided to release our carefully curated VQA dataset and evaluation codes. This dataset includes 1k images, each paired with approximately 36 question-answer sets.**
We sincerely thank you for your thorough review. We have made our best efforts to address your remaining concerns as follows:
>**Q1**. They also proposed a new metric for measuring coverage, but did not compare it to any prior metrics for this purpose – are there any relevant alternatives?
**A1**. There have been prior attempts to measure the coverage of image captions [1]. These methods typically involve extracting visual entities from reference captions and checking whether they appear in the caption being evaluated. However, such approaches have notable limitations:
1. MLLMs can describe the same visual content in highly diverse ways. As a result, it is often difficult to determine with precision whether the visual content in the reference captions is present in the evaluated caption.
2. These methods can only evaluate pre-defined types of visual content—typically objects, attributes, and spatial relations—since they rely on accurate extraction and comparison of such elements within captions. As a result, they cannot assess whether a caption captures conceptual aspects of an image, such as mood or semantic relationships between entities.
In contrast, our coverage metric avoids these limitations. Any information can be turned into a question and given to an LLM grounded in the caption.
[1] Object Hallucination in Image Captioning
>**Q2**. Which datasets do Tables 4 and 5 correspond to?
**A2**. For Tables 4 and 5, we used IIW-400, based on findings [2] indicating that IIW-400 serves as a better reference caption set than DOCCI. However, to demonstrate the generalizability of CapMAS, we additionally tested its effectiveness on a subset of DOCCI (400 samples).
<Table A>
|Method|CLAIR|Factuality|Coverage|
|-|-|-|-|
|LLaVA-NeXT-7b|68.0|56.2|54.0|
| +CapMAS|72.8|68.1|53.5|
|LLaVA-NeXT-13b|70.1|58.2|55.6|
| +CapMAS|74.5|74.6|52.3|
Table A demonstrates that CapMAS is also effective on DOCCI.
[2] ImageInWords: Unlocking Hyper-Detailed Image Descriptions
>**Q3**. I believe this pipeline cannot improve the coverage in captions (and only improve factuality). How can we improve coverage?
**A3**. Through inference-time scaling, CapMAS can improve the coverage of captions while maintaining a certain level of factuality. Due to space limitations, we regret that we cannot provide further details here and kindly ask you to refer to our response **A4** to reviewer **fLiQ** for more information.
>**Q4**. This is a minor point, but for the sake of completeness, you could also evaluate your pipeline on GPT-4o, as it is more widely used than GPT-4v.
**A4**. We used GPT-4o to evaluate the captions, and it is known that LLMs tend to favor their own outputs [3]. Therefore, to ensure a fair comparison, we did not use GPT-4o at any stage of the proposed captioning pipeline.
[3] LLM Evaluators Recognize and Favor Their Own Generations
>**Q5**. How is coverage computed? I couldn’t find a clear formula for this in the paper.
**A5**. We apologize for the confusion. Let N be the total number of VQA samples and C the number correctly answered by GPT-4o using only the captions. Then, the coverage score is C/N.
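The C/N computation above can be sketched in a few lines. This is an illustrative sketch only, not the authors' evaluation code: the `answer_from_caption` callable is a hypothetical stand-in for the GPT-4o call that answers each VQA question grounded solely in the caption text.

```python
# Illustrative sketch of the coverage score: the fraction of VQA questions
# answered correctly using only the caption (C correct out of N total).
# `answer_from_caption` is a hypothetical stand-in for the grounded GPT-4o call.
def coverage_score(vqa_samples, answer_from_caption):
    """vqa_samples: list of (question, ground_truth_answer) pairs."""
    correct = sum(
        1 for question, truth in vqa_samples
        if answer_from_caption(question) == truth
    )
    return correct / len(vqa_samples)  # C / N

# Toy example with a stubbed answerer that gets one of two questions right:
samples = [("What color is the car?", "red"), ("How many dogs are there?", "two")]
stub = {"What color is the car?": "red", "How many dogs are there?": "three"}
print(coverage_score(samples, stub.get))  # 0.5
```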
>**Q6**. For measuring factuality, your method uses the image and the reference caption. If the reference caption is already available, is the image really required?
**A6**. Metrics based solely on reference captions are prone to stylistic bias, favoring or penalizing captions based on phrasing. To illustrate this, we compare a reference-based metric with our factuality metric using human-labeled captions (HUMAN) that are hallucination-free but stylistically different from the references. As shown in Table B, the reference-based metric correlates poorly with human judgment by assigning lower scores to HUMAN while our factuality metric remains robust.
Table B: Meta-evaluation with model- and human-generated captions; see Sec. 4.2 and Appx. B for details.
|Method|Correlation with Human Evaluation ↑|
|-|-|
|Reference-based|18.3|
|Ours|61.4|
>**Q7**. The coverage of these questions themselves would be bottlenecked by the vision capabilities of GPT-4o, and the questions might miss some details.
**A7**. Building a detailed VQA dataset is costly, so we relied on GPT-4o, making our dataset quality depend on its capabilities. To address this, we carefully guided human annotators during the labeling process.
While we considered using reference captions, we first aimed to show that our coverage metric works reliably without them, as reliance on references limits its applicability. Our results confirm this, and we are exploring ways to further improve the method by incorporating reference captions.
**We have more results—please click “Rebuttal Comment” to allow us to add a comment.**
---
Rebuttal Comment 1.1:
Comment: Thanks for your efforts in the rebuttal period. A lot of my concerns were addressed. Here are more specific comments:
Q1: While these arguments make sense, it would be good to include a quantitative comparison of the proposed coverage metric with prior work you cited [1].
Q2: This makes sense. Please update the submission to explicitly state which dataset the table corresponds to, as well as the new results.
Q3: This is an interesting result! You can include it in the main text.
Q4: Okay, that makes sense.
Q5: Thanks for the clarification; please add this to the submission.
Q6: Thanks for the additional results. It’s surprising to see the reference-based approach perform so poorly. If you plan to add this to your paper, including a few qualitative examples of failure modes of the reference-based approach might also be helpful.
Q7: Okay, that makes sense.
Overall, the paper is above the threshold for acceptance; hence, I maintain my original score of Weak Accept.
[1] Object Hallucination in Image Captioning
---
Reply to Comment 1.1.1:
Comment: Thank you for the comment! We will incorporate all of your suggestions into the next version of the paper. The comparison with the prior coverage metric requires manually identifying and organizing visual entities from the reference captions, so it will take some time. We will also include the results of this experiment in the next version.
In this additional comment, we would like to highlight a new strength of CapMAS in terms of fluency.
To assess the fluency of the captions generated by CapMAS, we employ an LLM-based evaluation. Specifically, we utilize GPT-4o with the following prompt:
```
You are a language expert evaluating the fluency of image captions.
Fluency refers to how grammatically correct, natural, and well-formed the text sounds to a native English speaker. A fluent caption should be grammatically correct, free of awkward phrasing, and read smoothly.
Evaluate the fluency of the following caption and return your output **strictly in JSON format** with:
- "reason": a key reason for your scoring
- "score": a number between 0 (completely disfluent) and 100 (perfect fluency)
Caption: "{caption}"
```
In addition to evaluating the captions generated by CapMAS, we also assess human-written detailed captions for the same set of images.
**Table C**
|Captions generated by|Fluency ↑|
|-|-|
|Human|89.0|
|CapMAS (LLaVA-v1.5-7B)|93.4|
|CapMAS (LLaVA-NeXT-7B)|93.4|
|CapMAS (LLaVA-NeXT-13B)|93.6|
|CapMAS (InternVL-Chat-V1.5)|94.1|
The results in Table C demonstrate that the captions generated by CapMAS achieve even higher fluency scores than the human-generated captions. This can be attributed to the final stage of CapMAS, in which the corrector LLM helps preserve or even improve the fluency of the captions.

Summary: This paper proposes a multiagent approach that leverages an LLM and an MLLM to correct given captions and designs two metrics for evaluating generated captions. A dataset is collected for one of the metrics.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: - In Table 4, after the captions were corrected, the coverage score decreased. This may be because: 1) the design of the metric is not suitable, or 2) the method sacrifices coverage for factuality, indicating that the method has some drawbacks. The same phenomenon is observed in Table 5; please provide more explanation.
- Since Factuality and Coverage cannot reflect the basic quality of language, how about including the traditional metrics like CIDEr and METEOR?
Theoretical Claims: Not Applicable.
Experimental Designs Or Analyses: In Table 2, it appears that the CLAIR metric can reflect the introduced noises. It's not as problematic as the article suggests.
Supplementary Material: I reviewed Appendix D. Ablation Study.
Relation To Broader Scientific Literature: This paper is related to helping MLLMs generating highly detailed captions.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: - Strengths:
- The proposed method enhances the factual accuracy of captions.
- Existing approaches require the corrector model training, while the proposed method employs collaboration between an MLLM and LLM.
- Weaknesses:
- Previous studies focus on measuring the factuality of generated text, while the coverage metric is newly proposed. However, the rationale for using this metric is questionable, as the two metrics appear to be contradictory based on the experimental results. Or if there is no problem with the metric, does this mean that the proposed method may not work as intended?
Other Comments Or Suggestions: None.
Questions For Authors: See weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We appreciate your recognition of our method’s potential to improve the factual accuracy of image captions and our novel approach leveraging MLLM–LLM collaboration without training a separate corrector. We also thank you for your suggestions, which have enriched our work.
**We are pleased to share that, in support of open-source research, we have decided to release our carefully curated VQA dataset and evaluation codes. This dataset includes 1k images, each paired with approximately 36 question-answer sets. We kindly ask that you consider this a contribution to the open-source community.**
In the subsequent sections, we address each of your concerns as follows:
>**Q1**. In Table 4, after the captions were corrected, the coverage score decreased. This may because: 1) the design of the metric is not suitable, 2) the method sacrifices coverage for factuality, indicating that the method has some drawbacks. Same phenomenon is observed in Table 5, please provide more explanation.
**A1**. We would like to clarify a potential misunderstanding regarding the decrease in the coverage score caused by the application of CapMAS. First and foremost, CapMAS was proposed to enhance the factuality of detailed image captions. Ideally, identifying and correcting hallucinations within captions would improve both factuality and coverage. However, through empirical analysis, we observed that such correction attempts often lead to the generation of new hallucinations due to the limitations of current MLLMs.
Therefore, CapMAS is designed to remove detected hallucinations rather than correct them. As CapMAS may remove not only hallucinations but also factual content, it inherently involves a trade-off between factuality and coverage, as discussed in Section 4.3. This trade-off is controlled by its hyperparameter, $\pi$.
Consequently, the decrease in coverage scores observed in Tables 4 and 5 arises not from a flaw in the proposed metrics, but rather from the design of CapMAS, which reflects the current limitations of MLLMs.
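To make the role of $\pi$ concrete, the removal step can be sketched as a simple threshold filter. This is a hypothetical illustration, not the authors' implementation: `verify_prob` stands in for the MLLM that scores each atomic proposition with a "True" probability, and propositions scoring below `pi` are dropped before the corrector LLM rewrites the caption.

```python
# Hypothetical sketch of threshold-based hallucination removal: a higher `pi`
# drops more propositions, trading coverage for factuality.
def filter_propositions(propositions, verify_prob, pi=0.5):
    """Keep only propositions whose P(True), per the verifier, is at least pi."""
    return [p for p in propositions if verify_prob(p) >= pi]

claims = ["a red car is parked", "two dogs are playing", "it is snowing"]
probs = {"a red car is parked": 0.95, "two dogs are playing": 0.60, "it is snowing": 0.10}

print(filter_propositions(claims, probs.get, pi=0.5))  # keeps the first two claims
print(filter_propositions(claims, probs.get, pi=0.9))  # keeps only the first claim
```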
>**Q2**. Since Factuality and Coverage cannot reflect the basic quality of language, how about including the traditional metrics like CIDEr and METEOR?
**A2**. We employ an LLM-based evaluation to measure the basic quality of language in the captions generated by CapMAS. This approach is motivated by recent studies [1,2] indicating that conventional automatic caption evaluation methods are biased and not well-suited for assessing detailed captions. Specifically, we utilize GPT-4o with the following prompt:
```
You are a language expert evaluating the fluency of image captions.
Fluency refers to how grammatically correct, natural, and well-formed the text sounds to a native English speaker. A fluent caption should be grammatically correct, free of awkward phrasing, and read smoothly.
Evaluate the fluency of the following caption and return your output **strictly in JSON format** with:
- "reason": a key reason for your scoring
- "score": a number between 0 (completely disfluent) and 100 (perfect fluency)
Caption: "{caption}"
```
In addition to evaluating the captions generated by CapMAS, we also assess human-written detailed captions for the same set of images.
**Table A**
|Captions generated by|Fluency ↑|
|-|-|
|Human|89.0|
|CapMAS (LLaVA-v1.5-7B)|93.4|
|CapMAS (LLaVA-NeXT-7B)|93.4|
|CapMAS (LLaVA-NeXT-13B)|93.6|
|CapMAS (InternVL-Chat-V1.5)|94.1|
The results in Table A demonstrate that the captions generated by CapMAS achieve even higher fluency scores than the human-generated captions. This can be attributed to the final stage of CapMAS, in which the corrector LLM helps preserve or even improve the fluency of the captions.
We sincerely thank the reviewer for suggesting this experiment, which enabled us to highlight a new strength of CapMAS in terms of fluency.
>**Q3**. In Table 2, it appears that the CLAIR metric can reflect the introduced noises. It's not as problematic as the article suggests.
**A3**. Yes, CLAIR is indeed capable of reflecting the introduced noise, which is precisely why we adopted it in our experiments. However, as shown in the prompt used below, the meaning of the score it produces is not clearly defined:
```
On a precise scale from 0 to 100, how likely is it that the candidate caption is describing the same image as the reference caption? (JSON format, with a key "score", value between 0 and 100, and a key "reason" with a string value.)
```
In contrast, our proposed metrics have clearly defined interpretations and enable a more detailed analysis of MLLMs in terms of both factuality and coverage.
[1] ImageInWords: Unlocking Hyper-Detailed Image Descriptions
[2] Benchmarking and Improving Detail Image Caption
**We have additional experimental results that we were unable to include here. To help us share them, we kindly ask you to click “Rebuttal Comment” to allow us to leave a supplementary comment.**
---
Rebuttal Comment 1.1:
Comment: Please show additional experimental results.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment! In this additional response, we demonstrate that inference-time scaling can improve the factuality–coverage trade-off, while alternative inference-time scaling methods are ineffective. We also provide experimental results for CapMAS on an additional dataset.
# Improved Factuality–Coverage Trade-Off via Inference-Time Scaling #
As we previously noted, CapMAS is designed to remove detected hallucinations rather than correct them. This design choice stems from our observation that attempts at correction often introduce new hallucinations. Consequently, applying CapMAS may eliminate not only hallucinations but also factual content, inherently leading to a trade-off between factuality and coverage.
However, we show that this trade-off can be improved through inference-time scaling. Specifically, we demonstrate that applying CapMAS individually to multiple captions for a single image and then summarizing the results can lead to an overall enhancement in the caption quality for that image.
Table B: Results of LLaVA-Next on a subset of IIW400
|Number of CapMAS Applications|CLAIR|Factuality|Coverage|
|-|-|-|-|
|1|59.8|83.0|24.3|
|2|63.3|76.8|32.6|
|3|67.4|80.2|37.7|
|4|68.2|79.9|38.9|
|5|70.6|80.1|41.4|
Table B demonstrates that the factuality–coverage trade-off and overall quality of the resulting captions improve as the number of CapMAS applications increases.
# Effectiveness of the Self-Refine Method #
We tested the effectiveness of a representative inference-time scaling approach, Self-Refine (SR) [3]. Since validating and revising one's output requires advanced reasoning capabilities, we conducted the experiments using GPT-4V. For SR, we used the following prompt:
```
You are given an image and its corresponding caption.
Your task is to:
1. Analyze the image and compare it with the caption.
2. Identify and correct any factual or descriptive errors in the caption based on the image.
3. Refine the caption for clarity, correctness, and completeness — even if the original caption is mostly accurate.
Show your reasoning and then provide a final refined caption.
```
Table C: Results of GPT-4V with Self-Refine
|Method|CLAIR|Factuality|Coverage|
|-|-|-|-|
|Base|82.4|77.1|53.5|
|+SR (x1)|79.9|72.3|50.4|
|+SR (x2)|79.1|70.6|50.1|
|+SR (x3)|78.5|69.8|49.7|
Table C shows that having the model revise its own captions iteratively does not lead to better results. This demonstrates the superiority of CapMAS in terms of inference-time scaling.
[3] SELF-REFINE: Iterative Refinement with Self-Feedback
# Evaluation of CapMAS on an Additional Dataset #
To demonstrate the generalizability of CapMAS, we additionally tested its effectiveness on a subset of DOCCI (400 samples).
Table D: Results on a subset of DOCCI (400 samples)
|Method|CLAIR|Factuality|Coverage|
|-|-|-|-|
|LLaVA-NeXT-7b|68.0|56.2|54.0|
| +CapMAS|72.8|68.1|53.5|
|LLaVA-NeXT-13b|70.1|58.2|55.6|
| +CapMAS|74.5|74.6|52.3|
Table D demonstrates that CapMAS is also effective on DOCCI.
We sincerely appreciate your thoughtful feedback and have done our best to address your concerns. If there are any remaining or additional questions, please don’t hesitate to let us know. If you find our response satisfactory, we would be grateful if you could consider reflecting that in your score.
Thank you again for your time and consideration. | null | null | null | null | null | null |
Categorical Schrödinger Bridge Matching | Accept (poster) | Summary: The authors:
- provide a proof for the convergence of discrete-time IMF in discrete-state spaces.
- develop an algorithm called "Categorical SBM" that approximates a solution to the SB problem for discrete-state spaces.
Claims And Evidence: I'm not well equipped to answer this question.
Methods And Evaluation Criteria: The experiments make sense to me (inter-domain translation of images), and the method performs on par with what was compared.
I don't know the literature enough to fully appreciate these results.
Theoretical Claims: I checked the proofs to the best of my ability, and they look correct.
However, I can't provide any merit on how relevant it is to the current understanding of the topic.
Experimental Designs Or Analyses: These experiments are standard.
Supplementary Material: No.
Relation To Broader Scientific Literature: I don't know.
Essential References Not Discussed: I don't know.
Other Strengths And Weaknesses: The paper is very well written.
Other Comments Or Suggestions: There are many typos regarding word order (see lines 112 and 116) as well as the use of the word "the".
They don't affect the understanding.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your comments and positive evaluation. We will correct the typos you mentioned. If you're interested in the topic, we recommend checking the other reviews and these works on the Schrödinger Bridge Problem [1, 2, 3].
[1] Kim, Jun Hyeong, et al. "Discrete Diffusion Schrödinger Bridge Matching for Graph Transformation." arXiv preprint arXiv:2410.01500 (2024).
[2] Shi, Yuyang, et al. "Diffusion Schrödinger bridge matching." Advances in Neural Information Processing Systems 36 (2023): 62183-62223.
[3] Gushchin, Nikita, et al. "Adversarial Schrödinger Bridge Matching." The Thirty-eighth Annual Conference on Neural Information Processing Systems. | Summary: The paper proposes an algorithm based on Iterative Markovian Fitting (IMF) for solving Schrödinger Bridge (SB) in discrete (categorical) space. The contribution of the paper therefore lies in the extension of SB, originally constructed in continuous state spaces, and its data-driven learning-based algorithm to discrete setup. Experiments are conducted on 2D synthetic dataset and latent-space images.
Claims And Evidence: - Can the author clarify the difference between proposed method to DDSBM, which does present theoretical results for continuous-time IMF in discrete spaces? I'm fairly familiar with DDSBM and, since both are based on IMF, the methods seem to collapse in practice when time is discretized.
Methods And Evaluation Criteria: Y
Theoretical Claims: Y
Experimental Designs Or Analyses: - I'm not convinced by Table 2. The author should report ASBM and DSBM in a similar GAN-based continuous latent spaces for a fair comparison.
- It seems faulty to claim the dimensionality of CelebA Faces to be 1024^256. Given the factorized parametrization, the dimension that the proposed method handles should be 1024*256.
- Fig 3's caption should clarify that images are generated in VQ-GAN latent spaces. I think it's borderline misleading not to mention specifically in the caption that these are GAN-based latent-space image experiments. GAN latent spaces are not only lower-dimensional but also much more structured.
Supplementary Material: Y
Relation To Broader Scientific Literature: Discrete space is an important extension of data-driven methods. Thm 3.1 could potentially handle other data types/spaces, which may also be of independent interests.
Essential References Not Discussed: N
Other Strengths And Weaknesses: N
Other Comments Or Suggestions: N
Questions For Authors: My main questions and concerns, as listed above, are the similarity to DDSBM (ICLR'25) and experiment setups. The only comparison to prior SB works is Table 2 which is not conducted fairly.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear reviewer 1AGf thank you for your questions and commentaries.
**[Q. 1] Can the author clarify the difference between proposed method to DDSBM, which does present theoretical results for continuous-time IMF in discrete spaces? I'm fairly familiar with DDSBM and, since both are based on IMF, the methods seem to collapse in practice when time is discretized.**
The answer to this question can be found in our response to reviewer [MBG2](https://openreview.net/forum?id=RBly0nOr2h&noteId=20eaTHc0ic) [W. 1] and [W. 2].
**[W. 1] I'm not convinced by Table 2. The author should report ASBM and DSBM in a similar GAN-based continuous latent spaces for a fair comparison.**
Regarding the setup on the CelebA dataset, we agree with your concerns. We did attempt to train DSBM in the latent space. For a fair comparison, we ran DSBM on the same latent space used for CSBM, following the approach in [1, Appendix G]. However, the results were not satisfactory, as the model tended to collapse to the identity mapping with $\epsilon = 1$ and $\epsilon = 3$ (**LINK:** see [figures](https://anonymous.4open.science/r/images-64B3/) with prefix 'latent'). Due to these limitations, we did not proceed with training ASBM and chose not to compare both methods with CSBM in such settings. One may ask why CSBM performs better in this setting. We hypothesize that this is due to the choice of the reference process, with $q^{\text{unif}}$ being more suitable for the latent space of VQ-GAN.
[1] Rombach, Robin, et al. "High-resolution image synthesis with latent diffusion models." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
**[W. 2] It seems faulty to claim the dimensionality of CelebA Faces to be $1024^{256}$. Given the factorized parametrization, the dimension that the proposed method handles should be $1024*256$.**
This is not a mistake but rather a point that could be clarified more explicitly. When we refer to $1024^{256}$, we are indicating the complexity of the data, not the complexity of the model parametrization. The quantity that you mentioned, $1024 \times 256$, corresponds instead to the complexity of the generated samples under $q_\theta(x_1 | x_{t_{n-1}})$.
It is also worth noting that as the number of sampling steps increases, the complexity of the resulting composition of distributions also grows. As mentioned in our response to reviewer [McJt](https://openreview.net/forum?id=RBly0nOr2h&noteId=FJm17HbJsP) [Q. 1], this increase in sampling steps can lead to higher-quality samples and help mitigate issues related to factorization.
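To make the counting argument concrete, here is a quick numerical illustration (ours, added for exposition, not from the paper) of the gap between the complexity of the data and the complexity of the factorized parametrization:

```python
import math

# Joint state space of the CelebA VQ latents: 256 token positions,
# each taking one of 1024 values -> 1024**256 configurations in total.
log10_joint = 256 * math.log10(1024)  # number of decimal digits, ~770

# Per-sample complexity of the factorized q_theta(x_1 | x_{t_{n-1}}):
# one categorical distribution (1024 logits) per token position.
logits_per_sample = 256 * 1024  # = 262144 numbers
```

The first quantity describes how many joint configurations exist, while the second is what a single factorized conditional actually outputs.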
**[R. 1] Fig 3's caption should clarify that images are generated in VQ-GAN latent spaces. I think it's boarderline misleading to not mention specifically in the caption that it's a GAN-based latent-space image experiments. GAN's latent spaces are not only of lower dimension but also much structural.**
We will specify this aspect of training in the caption, as you have suggested.
**Concluding remarks.** We hope that, with the above clarifications, you will kindly reevaluate our work and find it deserving of a higher rating. | Summary: The paper addresses the Schrodinger Bridge (SB) problem for discrete spaces (categorical data). It proposes CSBM: a method that extends IMF (actually D-IMF) to discrete categorical spaces, proving theoretical convergence, propoing a concrete implementation and showing experimental evidence with two practical reference processes. Evaluations on synthetic datasets, Colored MNIST, and CelebA demonstrate competitive or superior performance in unpaired image-to-image translation tasks compared to baseline methods (ASBM and DSBM).
## update after rebuttal
The authors addressed my concerns and I maintain my recommendation.
Claims And Evidence: (1) Theoretical Claim: The uniqueness and convergence of the discrete-time IMF procedure for categorical spaces.
Evidence: A formal theorem is provided, clearly proving the convergence under stated conditions. The proof appears rigorous to the best of my judgement
(2) practical algorithm: The proposed CSBM is claimed to be effective in practice for categorical SB problems.
Evidence:supported by experimental results on several datasets: gaussian to swiss roll, colored MNIST and VQ CelebA, demonstrating visually good translations and quantitative improvements compared to baseline methods.
Methods And Evaluation Criteria: The methods and evaluation criteria (FID, CMMD) make sense given the problem of discrete unpaired translation. I liked the use of VQ representations of the celeb-A images as it aligns with common practices in generative modeling. The select reference processes (Uniform and Gaussian-like) are well suited for common scenarios (unordered vs ordered categories).
Theoretical Claims: Theorem 3.1's proof appears correct and rigorously executed
Experimental Designs Or Analyses: The experimental designs appear solid, with clear evaluation metrics (FID, CMMD) and visual evidence to assess qualitative performance. However, the chosen stochasticity-level parameters (alpha) appear somewhat ad hoc. Further explanation of the choice of particular values would be helpful.
Supplementary Material: I skimmed through the supplementary material provided, which includes detailed derivations, experimental setups, loss formulations, and additional training details and results. These details clarify the implementation and evaluation methodologies
Relation To Broader Scientific Literature: The authors do a really good job positioning their work within existing literature on SB, optimal transport, and diffusion models. They distinguish their contributions from related methods -- continuous-space IMF (e.g., Shi et al., 2023; Gushchin et al., 2024) and the part clarifying difference from discrete optimal transport methods (Sinkhorn algorithm, gradient-based approaches) was also a good addition (even though i felt the distinction was clear, it was nice to read and to be stated explicitly).
Essential References Not Discussed: Hoogeboom et al., and Gat et al, are mentioned but could use further discussion and perhaps comparison
Other Strengths And Weaknesses: Strengths:
+ Clear motivation and justification for extending IMF to discrete spaces.
+ Thorough theoretical support and convincing experiments showing effectiveness across tasks and settings
+ Very clear presentation -- the paper is fun to read
Weaknesses:
- Could benefit from additional experiments on other categorical datasets (e.g., discrete text tokens or molecules) and perhaps more comparison with baselines.
Other Comments Or Suggestions: none
Questions For Authors: * Clarification: In Alg 1, when sampling (x0,x1) using q_eta or q_theta (x1|x0), do the authors mean x1 is simulated from x0 with N steps? or estimated directly from the x1 timestep? if it is the latter -- could you provide more details on how this is done in practice?
* Can the authors discuss ways to reduce the information loss arising from the factorization of the conditional distributions?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear reviewer atNH thank you for your questions and commentaries.
**[R. 1] Hoogeboom et al., and Gat et al, are mentioned but could use further discussion and perhaps comparison**
Regarding the references [1, 2] you mentioned, we believe they are not suitable for comparison in our setting. For [1], the practical differences from D3PM [3] (which is our backbone method) are minimal, as [3] extends the discrete diffusion framework introduced in [1]. While the second method [2] could, in principle, be used as a backbone for our method, it is well known that lower values of $\epsilon$ (or $\alpha$ in our work) make it significantly harder for the model to approximate the target distribution (see [4], Figure 7). This discussion will be added to the revised version.
[1] Hoogeboom, Emiel, et al. "Argmax flows and multinomial diffusion: Learning categorical distributions." Advances in neural information processing systems 34 (2021): 12454-12465.
[2] Gat, Itai, et al. "Discrete flow matching." Advances in Neural Information Processing Systems 37 (2024): 133345-133385.
[3] Austin, Jacob, et al. "Structured denoising diffusion models in discrete state-spaces." Advances in neural information processing systems 34 (2021): 17981-17993.
[4] Shi, Yuyang, et al. "Diffusion Schrödinger bridge matching." Advances in Neural Information Processing Systems 36 (2023): 62183-62223.
**[W. 1] Could benefit from additional experiments on other categorical datasets (e.g., discrete text tokens or molecules) and perhaps more comparison with baselines.**
We believe the current set of experiments is sufficient to support our claims. In particular, the practical implementation of the VQ-GAN setup does not differ significantly from potential setups with text data. The model and reference process will remain the same. While graphs are indeed more challenging due to their structural complexity, this domain has already been addressed by DDSBM. As explained in our response to reviewer [MBG2](https://openreview.net/forum?id=RBly0nOr2h&noteId=20eaTHc0ic) [W. 2], we do not include a direct comparison with DDSBM and instead focus on exploring other experimental settings.
**[Q. 1] Clarification: In Alg 1, when sampling $(x_0,x_1)$ using $q_\eta$ or $q_\theta(x_1|x_0)$, do the authors mean $x_1$ is simulated from $x_0$ with $N$ steps? or estimated directly from the $x_1$ timestep? if it is the latter -- could you provide more details on how this is done in practice?**
Your first suggestion is correct. To generate samples, we follow a standard diffusion sampling procedure. After the $l$-th step of D-IMF, we obtain $q^l_\theta(x_1 | x_{t_{n-1}})$. Moving to $(l+1)$-th D-IMF iteration, we first apply the trained model to generate $x_1$ taking as an input a dataset point $x_0$. We then perform posterior sampling using $q^{\text{ref}}(x_{t_n} | x_{t_{n-1}}, x_1)$ to obtain $x_{t_n}$. This procedure is repeated for $N+1$ steps to obtain samples from coupling $q^l_\theta(x_0, x_1)$. For the backward parametrization, we use the same scheme.
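In pseudocode terms, this procedure is roughly the following (a minimal sketch for illustration; `predict_x1` and `posterior_sample` are hypothetical interfaces standing in for the trained model $q_\theta(x_1 | x_{t_{n-1}})$ and the reference posterior $q^{\text{ref}}(x_{t_n} | x_{t_{n-1}}, x_1)$, not actual code from the paper):

```python
def sample_coupling(x0, predict_x1, posterior_sample, N):
    """Simulate x1 from x0 over the time grid t_0 < ... < t_N.

    predict_x1(x, n): hypothetical stand-in for drawing an endpoint x1
        from the trained q_theta(x_1 | x_{t_{n-1}}).
    posterior_sample(x, x1, n): hypothetical stand-in for drawing x_{t_n}
        from the reference posterior q_ref(x_{t_n} | x_{t_{n-1}}, x1).
    """
    x = x0
    for n in range(1, N + 1):
        x1_hat = predict_x1(x, n)            # endpoint prediction
        x = posterior_sample(x, x1_hat, n)   # one posterior step
    return x0, x  # an (x0, x1) sample from the learned coupling
```

The backward parametrization follows the same loop with the roles of the endpoints swapped.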
**[Q. 2] Can the authors discuss ways to reduce the information loss arising from the factorization of the conditional distributions?**
Please refer to our response to reviewer [McJt](https://openreview.net/forum?id=RBly0nOr2h&noteId=FJm17HbJsP) [Q. 1].
**[R. 2] However, the chosen parameters for stochasticity level (alpha) appear somewhat ad-hoc. further explanation of the choice of particular values would be helpful**
The pattern of selecting $\alpha$ follows the same intuition as choosing $\epsilon$ in continuous SB methods. Specifically, lower values of $\alpha$ lead to less stochasticity in the trajectories, resulting in higher similarity to the input data but a lower-quality approximation of the target distribution. At very low values, the model may collapse due to insufficient stochasticity. Conversely, higher values of $\alpha$ introduce more variability, improving the quality of the approximation but reducing similarity to the initial data. Beyond a certain point, excessively large $\alpha$ values make the model difficult to train, leading to a drop in both quality and consistency. Unfortunately, the effective range of these behaviors is highly dependent on the dataset and the chosen reference process. Nonetheless, we provide reasonable baseline values from which one can begin and adjust as needed. | Summary: This paper introduces Categorical Schrödinger Bridge Matching, an approach that extends the Schrödinger Bridge (SB) framework to discrete spaces. While SB has gained traction in generative modeling and domain translation, most prior work has been confined to continuous spaces. The paper addresses this gap by developing a theoretical foundation and a computational algorithm tailored for discrete data.
## update after rebuttal
I have decided to maintain my score, as two main concerns remain:
1. It remains unclear how the discrete-time formulation offers meaningful insights to the field. While the authors argue that prior work relying on continuous-time models is theoretically unsound, I find that the improvements in mathematical rigor on such a fine-grained detail do not constitute a substantial breakthrough from a research perspective. In particular, I do not perceive a theoretical barrier to translating the results from continuous to discrete time.
2. The similarity to DDSBM is also a concern, as the proposed framework appears to have significant overlap and does not clearly offer novel contributions beyond existing approaches.
Claims And Evidence: **Limited Empirical Support for Discrete Claims:**
While the paper is motivated by discrete data applications (e.g., text, molecular graphs), all experiments focus on image-based tasks, which, despite using vector quantization, are inherently less discrete than the originally mentioned data types. A stronger demonstration on truly discrete datasets (e.g., molecular graphs, categorical sequences, or text-based data) would significantly bolster the claims.
Methods And Evaluation Criteria: A central issue with this paper is the lack of comprehensive comparison to existing methods. While the authors position CSBM as an advancement for discrete-state Schrödinger Bridge problems, several relevant baselines are missing from the experiments.
For example, DDSBM (Discrete Diffusion Schrödinger Bridge Matching) has been proposed specifically for graph-structured discrete data, yet it is not included in the evaluation. Given that DDSBM also addresses discrete Schrödinger Bridge problems, a direct comparison would be necessary to assess whether CSBM provides meaningful improvements or is simply an alternative formulation. The omission of such a baseline weakens the empirical claims of the paper. In particular, this paper would greatly benefit by replicating the experimental setup in the DDSBM paper and demonstrating improvement.
Additionally, while the paper compares CSBM to ASBM and DSBM, these are methods designed for continuous spaces, meaning they are not necessarily the most appropriate baselines for a method explicitly aimed at discrete settings. A fairer assessment would include methods developed for discrete generative modeling, such as categorical diffusion models.
Theoretical Claims: **Questionable Novelty of the Theoretical Contribution:**
The discrete-time theory presented in this paper appears to be a rather trivial extension of the existing continuous-time Schrödinger Bridge theory. The results largely follow from prior work and do not introduce fundamentally new insights beyond what has already been established in the continuous-time setting. While the authors claim to provide a theoretical foundation for discrete-time setting, it is unclear whether this contribution is substantially novel or merely a straightforward adaptation of known results.
Moreover, there is a deeper concern regarding the algorithm itself. The submission presents CSBM as a new computational approach, but it is unclear whether this is substantively different from DDSBM. DDSBM is explicitly motivated by continuous-time considerations, yet it is naturally discretized for implementation. Since the proposed CSBM method operates in discrete time, one must question: Is CSBM simply DDSBM with a different motivation? The paper does not make a clear distinction between the two, and without this clarification, the novelty of the algorithm is questionable.
If the authors aim to claim CSBM as a distinct method, they must explicitly differentiate it from DDSBM and explain what aspects of their approach do not follow directly from the standard discretization of the continuous-time Schrödinger Bridge framework.
Experimental Designs Or Analyses: The experimental design does not seem to substantiate the advantages of the proposed algorithm; see my comments in **Methods And Evaluation Criteria**.
Supplementary Material: I have examined all the supplementary materials.
Relation To Broader Scientific Literature: The problem studied in this paper is **important and valuable**, as extending **Schrödinger Bridge methods to discrete spaces** is highly relevant for applications like **molecular generation, text modeling, and vector-quantized representations**.
Essential References Not Discussed: **Missing Baseline: Discrete Static OT Solver:**
A missing baseline in this paper is a **discrete static optimal transport (OT) solver** built on top of **Reference [1]**. Since the proposed approach is fundamentally an extension of Schrödinger Bridge methods to discrete spaces, it is essential to test whether a **simpler discrete OT solver**—one leveraging existing **static OT formulations**—could serve as an effective alternative within the **bridge matching framework** along side ASBM and DSBM.
[1] Somnath et al., Aligned Diffusion Schrödinger Bridges, 2023.
Other Strengths And Weaknesses: The paper is well-written and effectively conveys its ideas.
Other Comments Or Suggestions: N/A
Questions For Authors: - In what ways does the derived algorithm differ from DDSBM?
- What values does the discrete-time perspective bring to the SB framework for categorical data?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Dear reviewer MBG2, thank you for raising important questions regarding our paper.
**[W. 1] Questionable Novelty of the Theoretical Contribution**
We respectfully disagree. First, as highlighted in Table 1, the continuous-time frameworks DDSBM [1] and DSBM [2] rely heavily on the theoretical foundation established in [3], which does not extend to discrete time. Therefore, our work should not be viewed as a trivial extension of these frameworks but rather as a distinct theoretical foundation that shares certain similarities. Furthermore, the core theoretical contribution of our paper goes beyond simply generalizing the D-IMF procedure from [4] to discrete data: the discrete case emerges as a consequence of a broader generalization of the D-IMF procedure to **arbitrary reference processes**. As noted in the article (see footnote 1, page 5), our generalization enables ASBM [4] to operate with any Markov process $q^{ref}$.
**[W. 2] Moreover, there is a deeper concern regarding the algorithm itself..., one must question: Is CSBM simply DDSBM with a different motivation?... For example, DDSBM has been proposed specifically for graph-structured discrete data, yet it is not included in the evaluation... In what ways does the derived algorithm differ from DDSBM?**
We agree that adding practical distinctions will strengthen the comparison. Thus, we will include them in the revision.
First, let us clarify how DDSBM derives its loss. In theory, it matches the generator matrices $A$ (see [1, Equations (5, 6)]). In practice, however, the authors of [1] discretize time, which leads to minimizing $D_{KL}(q^{ref}(x_{t_n}|x_{t_{n-1}}, x_1)|| q_{\theta}(x_{t_n}|x_{t_{n-1}}))$ (see the derivation in [1, Appendix E.1]). Thus, as one can see, this is indeed the same loss as presented in our article. However, since we theoretically derive the alternative objective $D_{KL}(q^{ref}(x_{t_n}|x_{t_{n-1}})||q_{\theta}(x_{t_n}|x_{t_{n-1}}))$ (see Prop. 3.3), we can match distributions directly by employing **various loss functions**, such as MSE or even adversarial training as in ASBM [4]. We conducted extra experiments using only an MSE loss and observed results comparable to the KL loss (**LINK**: see [figures](https://anonymous.4open.science/r/images-64B3) with prefix 'toy'). Thus, the similarity in practical implementation reflects our design choice to use this particular parametrization; alternative approaches could easily be used. This also explains why we did not include a comparison with DDSBM.
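To make the parametrization concrete, the factorized transition KL above could be computed roughly like this (a minimal NumPy sketch under assumed tensor shapes; the function name and shapes are our assumptions, not the authors' implementation):

```python
import numpy as np

def kl_transition_loss(ref_probs, model_logits):
    """KL( q_ref(x_{t_n} | x_{t_{n-1}}, x_1) || q_theta(x_{t_n} | x_{t_{n-1}}) ),
    factorized over D positions with S categories each.

    ref_probs:    (batch, D, S) reference posterior probabilities.
    model_logits: (batch, D, S) model logits for q_theta.
    """
    # numerically stable softmax over the category axis
    z = model_logits - model_logits.max(axis=-1, keepdims=True)
    q_theta = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    log_ratio = np.log(np.clip(ref_probs, 1e-12, None)) - np.log(q_theta)
    kl = (ref_probs * log_ratio).sum(axis=-1)  # KL per position
    return kl.sum(axis=-1).mean()              # sum positions, mean batch
```

Swapping this KL term for an MSE between the two distributions (or an adversarial objective) leaves the rest of the procedure unchanged, which is the flexibility discussed above.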
**[W. 3] A missing baseline discrete OT with [5]**
If we correctly understand, you propose first using classical discrete OT methods to get aligned data followed by the application of [5]. However, there is one crucial problem with this setup. The SB problem that lies behind [5] is still considered to be continuous, i.e., continuous Brownian motion is used as a reference process. This breaks the assumptions on discrete data. We will cite [5] in the final revision and discuss it, but we think that a comparison with it is impossible due to the reasons mentioned above.
**[W. 4] Additionally, while the paper compares CSBM to ASBM and DSBM, these are methods designed for continuous spaces, meaning they are not necessarily the most appropriate baselines...**
To our knowledge, there are no other discrete domain translation models for unpaired data. Thus, the only methods we compare against are the continuous-space methods ASBM and DSBM.
**[W. 5] ...all experiments focus on image-based tasks, which, despite using vector quantization, are inherently less discrete than the originally mentioned data types...**
We apologize, but the meaning of "less discrete" is unclear to us. Still, for further clarification regarding the choice of experiments (especially texts), please refer to our response to reviewer [atNH](https://openreview.net/forum?id=RBly0nOr2h&noteId=0v01XUlBGp) [W. 1].
**[Q. 1] What values does the discrete-time perspective bring to the SB framework for categorical data?**
The main theoretical advantage is the ability to consider CSBM with $N = 1$, which is guaranteed to converge to the SB. In contrast, continuous-time setups typically require assuming $N=\infty$ to achieve convergence. On the practical side, our framework also enables the flexible selection of loss functions for matching the transition distributions, as mentioned in our answers to your previous questions.
**Concluding remarks.** We hope that, with the above clarifications, you will kindly reevaluate our work and find it deserving of a higher rating.
[1] Kim, Jun Hyeong, et al. "Discrete Diffusion Schrödinger Bridge Matching for Graph Transformation."
[2] Shi, Yuyang, et al. "Diffusion Schrödinger bridge matching."
[3] Léonard, Christian. "A survey of the Schrödinger problem and some of its connections with optimal transport."
[4] Gushchin, Nikita, et al. "Adversarial Schrödinger Bridge Matching."
[5] Somnath, Vignesh Ram, et al. "Aligned diffusion schrödinger bridges." | Summary: The paper does what the title says: it establishes the basic framework for the version of Schrodinger bridge diffusion models, for the case of discrete (categorical) spaces. This means that it has a theoretical result describing why and how an iterated projection method for finite-steps markov processes can be made to converge to the optimum bridge process, and then it describes how to implement this iteration in practice using neural networks, and shows a few examples to illustrate the phenomena.
Claims And Evidence: Yes I think that the claims are all carefully argued and convincing.
Methods And Evaluation Criteria: I think that the examples are toy model examples, even the VQ-VAE one which is the hardest is a toy model. I don't see a big issue with that, given that this is really the first formulation of SB's for categorical data, toy model experiments are to be awaited. Nevertheless, I may be biased towards theory, a referee interested in practical applications may require stronger evidence that the model helps to advance applications. (The fact that this model would be better than any competitor, is not part of the experimental validation.)
Theoretical Claims: Yes, I checked all the proofs. Furthermore the main results are in line (similar hypotheses and theses) with well established versions as indicated from Table 1. This means that there is little surprise that the results are true, and on the positive side, it gives good reason to trust that the results are valid, even for people who would not have read the proofs.
Experimental Designs Or Analyses: I did not check in detail the experiment implementation (i.e. I did not run the codes from the supplementary material), but I believe that the experiment outcomes are realistic and that the declared setup is sound.
Supplementary Material: I reviewed the part of supplementary material present in the appendices from the PDF of the paper.
Relation To Broader Scientific Literature: As said before, the SB framework was previously restricted to continuous spaces, and no version for discrete spaces was available. Thus this paper fills a gap.
The gap was not hard to fill: it was sufficient to build similar versions of proofs as in cited reference [Gushchin et al 2024b], but for the discrete space case, and no surprises appeared anywhere. Some simplifying tricks, like the one for passing from S^D to S\times D for practical purposes, were inherited from previous literature.
Even if the goal of the paper does not face strong new difficulties, it is important that the gap in the literature has been filled. Also, the new theoretical result (grey box in the paper) is elegant and simple to state so it's worth publishing.
Essential References Not Discussed: I don't know of any.
Other Strengths And Weaknesses: A strength of the paper is its simplicity. Some referees may shun that, saying that the work is somehow minor since it is not technically involved, but I disagree.
Other Comments Or Suggestions: Here are a few places where some rewriting may help:
Line 110 column 2: "additional properties" sounds too vague, maybe state some examples
Line 235 column 2: "D=1" seems a weird way to put it, and it confused me because I had forgotten what D was.. maybe just say that you work with \mathbb S and it'll be clear that D=1
Line289 column 1: when you introduce this factorization, maybe briefly write about the limitation that this imposes (I know that this has been written in the "limitations" part, but I think it's worth putting it here too)
Line 290-299 column 1 : "Since, in fact, we need N+1 neural nets to do the prediction of endpoints at each time step, we simply use a single neural network with an extra input n" this sentence is unclear to me. I don't follow the implication beyond the "since [...]", and I don't see how "n" is an input of a neural network. I feel that this sentence tries to abbreviate stuff too much and it became unintelligible. Can you expand and state this clearly please?
Line 354-355 column 2: "discrete space of images but not continuous" this looks weird.. yeah it's discrete and not continuous, what did you want to say with the "but" part? Please correct/erase whatever is needed to make this right.
Questions For Authors: I'd like to know more precisely some examples of what kind of dependencies will be lost in the dimensional factorization, and also if the authors have any ideas on how to "do better than" the factorization S^D \to D\times S mentioned in the un-numbered formula following (9). I mean, how would you increase a bit the complexity of the models to try to make it "forget less" about the dependencies between dimensions? In the case of a discrete space this is easier than in general, so it's worth giving some pointers.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer McJt, thank you for pointing out the unclear parts you encountered. We will revise the text where you have highlighted, as much as possible.
**[R.1] Line 110 column 2: "additional properties" sounds too vague, maybe state some examples**
In line 110, by "additional properties", we refer to the reciprocal and Markovian properties, which allow us to use diffusion models (specifically, bridge matching models) as the backbone of our domain translation framework. Without these properties, we would be limited to training one-step models such as GANs, which have since been largely replaced by modern diffusion-based approaches.
**[R.2] Line 235 column 2: $D=1$ seems a weird way to put it, and it confused me because I had forgotten what $D$ was.. maybe just say that you work with $\mathbb{S}$ and it'll be clear that $D=1$**
In line 235, the purpose of $D = 1$ is to simplify the transition matrices defined in Equations (7, 8). Since we are using factorization, we consider feature-wise rather than data-point-wise transition matrices. So, to maintain compactness of the text, we omit introducing $Q_n$ for arbitrary $D$, as it is not used in our approach, again, due to the factorization.
**[R.3] Line289 column 1: when you introduce this factorization, maybe briefly write about the limitation that this imposes (I know that this has been written in the "limitations" part, but I think it's worth putting it here too)**
We agree that it is indeed important to clarify the factorization in line 289. We will include a reference to the limitations section in order not to distract the reader from the flow of the main text.
**[R.4] Line 290-299 column 1 : "Since, in fact, we need N+1 neural nets to do the prediction of endpoints at each time step, we simply use a single neural network with an extra input n" this sentence is unclear to me...**
In lines 290–299, we intended to explain that we use a single neural network for all time steps rather than separate networks for each step, which is a common practice. To simulate the stochastic process, one would typically require $N+1$ distinct functions $q(x_1 | x_{t_{n-1}})$ for each $n$. However, training $N+1$ neural networks is computationally expensive. Instead, we use a single neural network for all transition steps (i.e., the transition function $q(x_1 | x_{t_{n-1}})$), with additional time conditioning $q_\theta(x_1 | x_{t_{n-1}}, t_{n-1})$ for all $n \in [1, N+1]$. If you are also interested in the sampling procedure using the trained model, please refer to our response to the reviewer [atNH](https://openreview.net/forum?id=RBly0nOr2h¬eId=0v01XUlBGp) [Q. 1].
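For illustration, this amortization over time steps can be sketched as follows (a hypothetical numpy stand-in using linear maps in place of neural networks, not the actual model; all names and dimensions here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4            # number of transition steps (illustrative)
d_in, d_out = 8, 8

# Naive approach: N + 1 separate predictors q_n(x_1 | x_{t_{n-1}}), one per step.
separate = [rng.standard_normal((d_in, d_out)) for _ in range(N + 1)]

def predict_separate(x, n):
    """Endpoint prediction with a dedicated predictor per step n."""
    return x @ separate[n]

# Amortized approach: a single predictor that receives the (normalized) step
# index as an extra input feature, q_theta(x_1 | x_{t_{n-1}}, t_{n-1}).
shared = rng.standard_normal((d_in + 1, d_out))

def predict_shared(x, n):
    """Endpoint prediction with one time-conditioned predictor for all steps."""
    t = np.full((x.shape[0], 1), n / N)   # time conditioning appended to input
    return np.concatenate([x, t], axis=1) @ shared

x = rng.standard_normal((2, d_in))
assert predict_shared(x, 0).shape == predict_separate(x, 0).shape == (2, d_out)
```

The point of the sketch is only the interface: one set of parameters (`shared`) serves every step because the step index is part of the input, instead of maintaining `N + 1` parameter sets.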
**[R.5] Line 354-355 column 2: "discrete space of images but not continuous" this looks weird.. yeah it's discrete and not continuous, what did you want to say with the "but" part? Please correct/erase whatever is needed to make this right.**
In lines 354–355, we intended to highlight that image spaces are typically treated as continuous rather than discrete, i.e., the space of image pixels is commonly represented as the interval $[0, 1]^D$ rather than the discrete set $\{0, 1, \dots, 255\}^D$.
**[Q. 1] I'd like to know more precisely some examples of what kind of dependencies will be lost in the dimensional factorization, and also if the authors have any ideas on how to "do better than" the factorization $S^D \to D\times S$ metioned in the un-numbered formula following (9). I mean, how would you increase a bit the complexity of the models to try to make it "forget less" about the dependencies between dimensions? In the case of a discrete space this is easier than in general, so it's worth giving some pointers.**
Regarding factorization, to the best of our knowledge, the work [1] is the only existing work addressing this issue. Authors introduce an additional generative model that models the copula of the factorized distributions, which is, informally, the part of the joint distribution $q_{\theta}(x_1 | x_{t_{n-1}}) = q_{\theta}(x^1_1, ..., x^D_1 | x_{t_{n-1}})$ that is lost due to the factorization.
However, practically, it seems that the issue of factorization can also be mitigated just by increasing the number of steps, as demonstrated, for example, in our experiments with C-MNIST. The more steps that are taken during sampling, the more expressive the resulting composition of distributions becomes. As a result, repeatedly incorporating the full previous state leads to more correlated features. Even though factorization has implications for our practical implementation, it is important to recall, as stated in the article, that this issue is inherent to all discrete diffusion models.
[1] Liu, Anji, et al. "Discrete Copula Diffusion." arXiv preprint arXiv:2410.01949 (2024). | null | null | null | null |
No Task Left Behind: Isotropic Model Merging with Common and Task-Specific Subspaces | Accept (poster) | Summary: This paper demonstrates that alignment between the individual components of task-specific and merged matrices is strongly correlated with performance improvements over a pre-trained model. Building on this finding, the authors propose an isotropic merging framework that flattens the singular value spectrum of task matrices, thereby enhancing alignment and narrowing the performance gap. Furthermore, they incorporate both common and task-specific subspaces to further optimize alignment and boost performance. The proposed approach achieves state-of-the-art results.
Claims And Evidence: The proposed Normalized Accuracy Improvement and Subspace Alignment Ratio are supported by evidence. These metrics provide quantitative validation for the proposed isotropic merging framework, showcasing the alignment improvements and their direct impact on model performance.
Methods And Evaluation Criteria: It is intuitive to keep task-specific knowledge in the near-zero singular values part of the common subspace and discard the unimportant part.
Theoretical Claims: This paper does not conduct theoretical analysis.
Experimental Designs Or Analyses: The experimental design followed Task Singular Vectors, but was limited to vision tasks.
Supplementary Material: Supplementary material provides the code.
Relation To Broader Scientific Literature: The method is a further improvement of Task Singular Vectors, removing noise through SVD decomposition.
[1] Task Singular Vectors: Reducing Task Interference in Model Merging. arXiv.
Essential References Not Discussed: Section 4.2, which discusses retaining components from the common subspace and the orthogonal projection in Equation 10, bears resemblance to the shared subspace optimization concept in DOGE [2]. It is recommended to discuss this.
[2] Modeling Multi-Task Model Merging as Adaptive Projective Gradient Descent. arXiv.
Other Strengths And Weaknesses: **Strengths**: The article is well-written and clear, with simple and effective methods achieving state-of-the-art results in model merging. The proposed methods and metrics are novel and intuitive.
**Weaknesses**: Lack of further analysis, e.g., why SVD can be used for model merging given the parameter redundancy induced by fine-tuning. As it stands, the method reads more like an experimental discovery; such further analyses would elevate the article to a higher level.
Other Comments Or Suggestions: **Suggestions**: Supplement experiments on NLP tasks to verify the generalizability of the method and make the article more complete.
Questions For Authors: 1. Why can subspace alignment eliminate conflicts and improve performance?
2. How is the ratio of singular values controlled between the common and task-specific subspaces?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are pleased that the Reviewer appreciates the soundness of our introduced metrics, the simplicity and effectiveness of the proposed approaches, and clear writing. We thank the Reviewer for the comments and we respond below to specific points.
>[Reference 1 (R1)]: *Section 4.2, which discusses retaining components from the common subspace and the orthogonal projection in Equation 10, bears resemblance to the shared subspace optimization concept in DOGE [2]. It is recommended to discuss this.*
Thank you for pointing us to this recent reference. We were not aware of DOGE paper at the time of preparing a submission as the preprint appeared two weeks before the deadline. Here we compare this approach with ours:
- *Definition of common/shared subspace:* We define the common subspace as top-k components from sum of individual task matrices. DOGE defines shared as concatenation of top-k components from each task matrix followed by SVD, which resembles the TSV method.
- *Orthogonal projection:* Both Iso-CTS and DOGE use an idea of orthogonal projection. DOGE uses it on the gradient of $\Delta$ to restrict the optimization process from changing the shared space. Iso-CTS uses the orthogonal projection on the level of weight matrices to determine the task-specific subspace that is orthogonal to the common subspace.
Iso-CTS and DOGE both use ideas of common/shared subspace and orthogonal projection in different ways. Moreover, the results of our approaches are better than DOGE. We believe that this discussion is very significant and we will add a detailed version of it to the revised manuscript.
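For concreteness, the weight-level orthogonal projection used by Iso-CTS can be sketched as follows (a minimal numpy illustration with made-up dimensions; only the projection idea is taken from the paper, the matrices are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 16, 12, 4

# Common subspace: top-k left-singular vectors of the summed task matrices.
delta_sum = rng.standard_normal((m, n))
U, _, _ = np.linalg.svd(delta_sum, full_matrices=False)
U_common = U[:, :k]                       # (m, k) orthonormal basis

# Project a task matrix onto the orthogonal complement of the common subspace,
# keeping only its task-specific directions.
delta_t = rng.standard_normal((m, n))
delta_t_perp = delta_t - U_common @ (U_common.T @ delta_t)

# The residual carries no energy in the common subspace.
assert np.allclose(U_common.T @ delta_t_perp, 0.0, atol=1e-10)
```

This is the sense in which the task-specific subspace is determined at the level of weight matrices rather than on gradients, as in DOGE.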
>[Weakness 1 (W1)]: *Lack of further analysis, such as why SVD can be used for model merging due to the redundancy of parameters caused by fine-tuning. It now seems more like an experimental discovery, and these further analyses would elevate the article to a higher level.*
It is known that fine-tuning of large pre-trained models results in low-rank parameter update. This observation enables efficient fine-tuning of models using inherently low-rank adaptation techniques such as LoRA. Consequently, the recent TSV paper shows how low-rank approximation of the parameter update matrices, obtained using SVD, can be used to facilitate model merging.
In our paper, we propose to extend the scope of SVD-based analysis for the purpose of model merging. Most importantly, by introducing the SAR metric we show that SVD can help in understanding the overlap between task-specific and merged matrices. Moreover, we show that by modifying the spectrum of singular values of merged matrix we can increase the alignment between task and merged matrix (see the Response to Reviewer **ff9S** (section Q2) for detailed discussion).
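The spectrum modification at the core of Iso-C can be sketched in a few lines of numpy (an illustrative reconstruction of "flattening the singular values", not the actual implementation; matrix sizes and task deltas are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
T, m, n = 8, 16, 12

# Task Arithmetic: sum of task matrices (random stand-ins for fine-tuning deltas).
deltas = [rng.standard_normal((m, n)) for _ in range(T)]
delta_ta = sum(deltas)

# Iso-C: keep the singular vectors, replace the spectrum by its mean.
U, s, Vt = np.linalg.svd(delta_ta, full_matrices=False)
delta_iso = U @ np.diag(np.full_like(s, s.mean())) @ Vt

# The merged matrix is now isotropic: all singular values are equal.
s_iso = np.linalg.svd(delta_iso, compute_uv=False)
assert np.allclose(s_iso, s.mean())
```

Since only the singular values are changed, the subspace spanned by the singular vectors of the Task Arithmetic matrix is preserved while no direction dominates the others.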
>*[Suggestion 1 (S1)]: Supplement experiments on NLP tasks to verify the generalizability of the method and make the article more complete.*
We present NLP results in response to Reviewer **mnUL** (section C3). Iso-C and Iso-CTS outperform other baselines across two presented settings.
>[Question 1 (Q1)]: *Why can subspace alignment eliminate conflicts and improve performance?*
Consider the Subspace Alignment Ratio between a task matrix and the merged task matrix. SAR quantifies the overlap between the subspaces spanned by the dominant singular vectors of these matrices. If SAR is low, the subspaces barely overlap and the corresponding singular vectors are close to orthogonal, so the merged matrix cannot reliably represent the dominant components of the task matrix, which leads to low performance on the corresponding task. Conversely, high SAR indicates high subspace overlap, meaning the merged matrix can reliably represent the important components of the task matrix, which results in high performance.
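A minimal numpy sketch of this projection-based overlap measure (our paraphrase of the SAR computation; the cut-off `k` and matrix sizes are arbitrary):

```python
import numpy as np

def sar(delta_task, delta_merged, k):
    """Subspace Alignment Ratio: fraction of the task matrix's Frobenius norm
    captured by the top-k left-singular subspace of the merged matrix."""
    U, _, _ = np.linalg.svd(delta_merged, full_matrices=False)
    proj = U[:, :k] @ (U[:, :k].T @ delta_task)  # projection onto merged subspace
    return np.linalg.norm(proj) / np.linalg.norm(delta_task)

rng = np.random.default_rng(0)
delta = rng.standard_normal((16, 12))

# Perfect overlap: a matrix lies entirely within its own full subspace.
assert np.isclose(sar(delta, delta, k=12), 1.0)
# Partial overlap: projecting onto a small, misaligned random subspace.
assert sar(delta, rng.standard_normal((16, 12)), k=2) < 1.0
```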
>[Q2]: *How is the ratio of singular values controlled between the common and task-specific subspaces?*
The ratio of singular values between the common and task-specific subspaces is controlled by the hyperparameter $k$ that is fixed for all the experiments. $k$ is chosen such that $k/r$ for a single layer is equal to 0.8. The final paragraph of Section 5.3 from the paper contains an analysis of impact of $k$ on performance.
---
Rebuttal Comment 1.1:
Comment: Thank you for the author's rebuttal. Most of the explanations you provided were things I already understood during my review, and I was trying to ask for deeper explanations, such as why subspace alignment eliminates conflicts. You have merely repeated the definition and findings of SAR from the paper. Because averaging the singular values reduces the Frobenius norm and condition number of the task vectors, you need to search for a larger $\lambda$, which is unstable on LLMs. As I said before, these questions are meant to encourage further analysis to elevate the article to a higher level. I also want to point out that the checkpoints used by ISO and TSV are different from those used in most model merging methods (from Task Arithmetic), which leads to slightly higher results. Additionally, I would like to ask why the authors did not compare with methods such as EMR merging or Twin merging.
The current response does not satisfy me. If the further responses are better, I will increase my score.
---
Reply to Comment 1.1.1:
Comment: >*Why subspace alignment eliminates conflict.*
We thank the Reviewer for encouraging us to think more deeply about the relationship between subspace alignment and merging conflicts. We will incorporate this analysis in the revised manuscript.
Intuitively, we can minimize the task interference (i.e. eliminate the conflicts) by ensuring that the internal representations of task $j$ remain stable after merging. Let $\theta_0$ be the pre-trained weights for a layer $l$. Define the task matrix $\Delta_j=\theta_j-\theta_0$ and the merged task matrix $\Delta_M$ for the layer $l$. Then, for an input $x_j^{(l)}$, we desire that the post-merging activation $h_j^{(l)}=(\theta_0+\alpha\Delta_M)x_j^{(l)}$, with $\alpha$ chosen on a validation set, be close to the task-specific activation $\hat{h}_j^{(l)}=(\theta_0+\Delta_j)x_j^{(l)}$. Hence, we can quantify the interference using:
$$||\hat{h}_j^{(l)}-h_j^{(l)}||=||(\Delta_j-\alpha\Delta_M)x_j^{(l)}||\leq||\Delta_j-\alpha\Delta_M||\cdot||x_j^{(l)}||$$
To show that the interference is lower when the Subspace Alignment Ratio (SAR) between $\Delta_j$ and $\Delta_M$ is higher, we decompose $\Delta_j$ into components aligned with and orthogonal to $\Delta_M$:
$$\Delta_j=\Delta_j^{||}+\Delta_j^{\perp}\mbox{ for }\Delta_j^{||}=\Pi_{k_M,M}\Delta_j\mbox{ and }\Delta_j^{\perp}=(I-\Pi_{k_M,M})\Delta_j,$$
where $\Pi_{k_M,M}$ is the projection matrix onto the subspace spanned by the top $k_M$ left-singular vectors of $\Delta_M$ (see Eqs. 5-6 for their definitions). By rewriting the SAR we have:
$$SAR(\Delta_j,\Delta_M)=\frac{||\Delta_j^{||}||_F}{||\Delta_j^{||}+\Delta_j^{\perp}||_F}.$$
Similarly, decomposing $\Delta_M$ in $\Delta_M^{||}$ and $\Delta_M^{\perp}$, we write:
$$||\Delta_j-\alpha\Delta_M||=||\Delta_j^{||}-\alpha\Delta^{||}_M+\Delta_j^{\perp}-\alpha\Delta^{\perp}_M||\approx||\Delta_j^{||}-\alpha\Delta^{||}_M+\Delta_j^{\perp}||,$$
since $k_M$ minimizes the approximation error of $\Delta_M$ (i.e., $\Delta^{\perp}_M\approx0$).
If SAR is close to 1, then $||\Delta_j^{\perp}||$ is small, so interference mainly depends on $||\Delta_j^{||}-\alpha\Delta^{||}_M||$. Conversely, if SAR is near zero, the large orthogonal component $\Delta_j^{\perp}$ increases the overall interference, regardless of the choice of $\alpha$. Even with optimal $\alpha$ chosen via validation, **interference cannot be reduced below the norm of the orthogonal component**.
Iso-C increases SAR of $\Delta_t$ with the merged model — bringing it close to 1, as shown in the paper — by flattening the singular values. Thus, the optimal $\alpha$ can adjust the merged model such that interference is minimized. In contrast, Task Arithmetic (TA), with SAR varying across tasks, exhibits interference that cannot be reduced below the norm of the orthogonal component.
We also experimentally show that interference, measured as L1 distance between the final embeddings of task-specific and merged models (following [1]), for Iso-C is lower than the interference for TA for ViT-B/16: https://imgur.com/a/b9Lpk8q.
[1] Representation Surgery for Multi-Task Model Merging, ICML 2024
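As a numerical sanity check, the Pythagorean identity $\|\Delta_j^{\perp}\|_F^2 = \|\Delta_j\|_F^2(1-\mathrm{SAR}^2)$, which lower-bounds the irreducible interference term above, can be verified directly (a hedged numpy sketch with random stand-in matrices; the cut-off $k=4$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 16, 12, 4

delta_j = rng.standard_normal((m, n))
delta_m = rng.standard_normal((m, n))

# Projector onto the top-k left-singular subspace of the merged matrix.
U, _, _ = np.linalg.svd(delta_m, full_matrices=False)
P = U[:, :k] @ U[:, :k].T
delta_par, delta_perp = P @ delta_j, (np.eye(m) - P) @ delta_j

sar = np.linalg.norm(delta_par) / np.linalg.norm(delta_j)

# Pythagoras: the orthogonal residual shrinks as SAR approaches 1,
# so high alignment caps the irreducible interference term.
lhs = np.linalg.norm(delta_perp) ** 2
rhs = np.linalg.norm(delta_j) ** 2 * (1 - sar ** 2)
assert np.isclose(lhs, rhs)
```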
>*search for a larger $\lambda$, this is unstable on LLM.*
It is true that the singular value average reduces the Frobenius norm and we need to search for a larger $\lambda$. However, we did not observe instabilities for $\alpha \in [0.5, 3.1]$ (plot: https://imgur.com/a/GgB6nFD) in NLP experiments on T5-Large -- a 770M parameter LLM (see response to Rev. mnUL, Sec. C3).
>*checkpoints used by ISO and TSV are different*
Thank you for pointing out this important detail. We use checkpoints introduced by Consensus Merging in all the experiments in our paper (both for our and competing methods) providing a fair comparison.
However, many other papers use TA checkpoints, and we were not aware of this when comparing with additional methods during this rebuttal. We reran Iso-C and Iso-CTS using the TA checkpoints to fairly compare with methods that reported merging using them:
||ViT-B/32|ViT-L/14|
|-|-|-|
|Fisher|68.3|83.7|
|RegMean|71.8|82.2|
|PCB|76.3|87.5|
|CART|83.0|90.8|
|**Iso-C**|_84.1_|_92.5_|
|**Iso-CTS**|**84.3**|**93.0**|
Iso-C and Iso-CTS still outperform all of the added methods.
>*compare with EMR merging or Twin merging.*
We would like to highlight that, during this rebuttal, we added comparisons with 4 vision methods, including the recent SOTAs CART and PCB, as well as PEFT evaluations — e.g. recent KnOTS — and NLP experiments. Moreover, we consider merging methods that result in a **single set of multi-task weights and do not change the inference procedure**, which can be used as a drop-in replacement for the pre-trained model. Twin-Merging, however, composes task-specific components at test-time and alters the inference algorithm increasing its cost over two times. Similarly, EMR-Merging uses additional per-task parameter masks and rescalers to perform inference. We will include this discussion in the revised manuscript. | Summary: This paper proposes a novel model merging framework that enhances alignment between subspace of task models and merged model. The framwork includes two algorithms, (1) Iso-C that achives isotropic by flattenning the spectrum to the averaged singular values and (2) Iso-CTS in which lowest spectral components are further replaced by task-specific directions. Experiments on merging 8, 14, and 20 CLIP models demonstrated the effectiveness of this framework.
## update after rebuttal
Thank you for the very detailed and complete rebuttal. I appreciate authors' newly added theoretical justification and arguments under responses to Reviewer ff9S. Also, thank you for providing the comparisons to those new baselines. It clearly shows the advantages of Iso-C and Iso-CTS. The new results authors provided for LoRA FT models and also T5 models (under responses to Reviewer mnUL) are also convincing. The new results provided during the rebuttal have greatly improved the quality of the paper and I hope the authors can include these results in the final version. I have updated my scoring to reflect this.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: N/A
Experimental Designs Or Analyses: The multi-task model merging experimental designs and the metrics for evaluating merged model follow the standard procedure of model merging work. The three analyses experiments in Section 5.3 are sound and valid.
Supplementary Material: Yes, skimmed the whole appendix.
Relation To Broader Scientific Literature: Yes. This paper provides an analysis of how an isotropic merged matrix can enhance model merging performance, complementing existing spectral-based merging methods by showing that solely modifying singular values can be a powerful approach. This work also analyzes individual task performance, offering a fresh evaluation angle in areas such as fairness.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: ### Strengths
- Both Iso-C and Iso-CTS exhibit low hyperparameter sensitivity, as demonstrated in the analysis and appendix.
- The proposed framework enhances model merging performance as well as fairness.
- The proposed framework uses subspace alignment ratio as the metric for quantifying subspace similarity.
- The paper is generally clear, and the overall structure is easy to follow.
### Weaknesses
- Lacks theoretical justification or motivation for why making the merged matrix isotropic enhances the average subspace alignment ratio.
- The SVD baseline is insufficient.
[1] Choi, J., Kim, D., Lee, C., & Hong, S. (2024). Revisiting weight averaging for model merging. arXiv preprint arXiv:2412.12153.
- The main experiments are conducted on a single model family.
- Current results are based on merging fully fine-tuned models. Evaluating the approach on PEFT models (e.g. LoRA) would provide a more complete understanding of methods' capabilities.
- As acknowledged by the authors in the Limitations section, the methods have not been tested in the NLP domain.
Other Comments Or Suggestions: - Iso-CTS requires multiple SVD operations, can authors provide a complexity analysis of the proposed method?
- Why Table 1 only reports “average absolute accuracy” and “average normalized accuracy” but not the proposed NAI?
- The authors should provide more motivation for the performance gains by Iso-C and Iso-CTS and include supporting numerical results to echo "no task left behind". e.g., in Figure 3(a), tasks that were less represented in the TA model (such as Cars, DTD, and SUN397) exhibit greater performance improvement after Iso-C, which aligns with expectations.
Questions For Authors: - The claim in Section 3.2 (lines 145–147) is somewhat confusing. TA suspected their effectiveness arises from the cosine similarity between the vectorized representations of the task matrices being close to zero. From Fig2(a), they are indeed close to zero?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are glad that the Reviewer appreciates the soundness of the experimental protocol and analyses, the effectiveness of the proposed approach, and clear writing. We thank the Reviewer for their comments and we respond below.
> [Weakness 1 (W1)]: *Lacks theoretical justification...*
We provide a detailed explanation why making merged matrix isotropic increases subspace alignment in the response to Reviewer **ff9S** (section Q2).
>[W2]: *SVD baseline is insufficient.*
Thank you for pointing out CART which is a recent and very relevant baseline. We added it (alongside baselines suggested by Reviewer **ff9S**, section Q3) to our main Table for 8 tasks (as these are the only results reported in the CART paper):
||ViT-B/32|ViT-L/14|
|-|-|-|
|Fisher|68.3|83.7|
|RegMean|71.8|82.2|
|PCB|76.3|87.5|
|CART|83.0|90.8|
|**Iso-C**|**86.3**|_94.2_|
|**Iso-CTS**|_86.2_|**94.7**|
Iso-C and Iso-CTS outperform all these methods.
>[W3]: *Experiments on a single model family.*
In this rebuttal we add NLP experiments highlighting the effectiveness of Iso methods on the T5 -- an encoder-decoder language transformer (see the response to Reviewer **mnUL**, section C3).
>[W4]: *...Evaluating the approach on PEFT (e.g. LoRA)...*
Thank you for the suggestion - it helps to emphasize the generalizability of our approach. We follow the evaluation protocol of KnOTS[1], a recent SOTA (ICLR 2025) method for merging LoRA fine-tuned models, tested on 8 vision tasks using ViT-B/32 and ViT-L/14. For comparison, we merge the task-specific LoRA weights - provided by the authors - to the pre-trained models, and then we apply Iso-C and Iso-CTS. Below, we present the average normalized accuracy:
||ViT-B/32|ViT-L/14|
|-|-|-|
|KnOTS-TIES|68.0|78.2|
|KnOTS-DARE|63.9|75.6|
|**Iso-C**|_74.4_|_89.4_|
|**Iso-CTS**|**75.0**|**89.6**|
Iso-CTS achieves SOTA results in LoRA merging setting. Note that our method is a general purpose merging technique while KnOTS is specifically designed for the LoRA merging. This highlights the versatility of Iso methods.
[1] Stoica et al. Model merging with SVD to tie the Knots, ICLR 2025.
>[W5]: *The methods have not been tested in the NLP domain.*
See our response to Reviewer **mnUL** (section C3) for NLP results.
>[Comment 1 (C1)]: *...can authors provide a complexity analysis?*
Let $\Delta_t \in \mathbb{R}^{m\times n}$, with $m\geq n$ and let $T$ and $L$ be the number of tasks and network layers, respectively. For simplicity, assume that each layer has a single matrix, whose dimensions are $m$ and $n$. In the analysis below, the lines refer to Algorithm 2 in the main paper.
- Iso-CTS: One SVD on $\Delta_{TA}$ (lines 2-3) whose complexity is $O(mn^2)$ and this is applied to each layer, so the complexity is $O(Lmn^2)$; one SVD on each $\Delta_{t}, t= 1..T$, for each layer (line 5) so the complexity is $O(LTmn^2)$; finally line 11 requires two SVDs on matrices $U \in \mathbb{R}^{m \times r}$ and $V \in \mathbb{R}^{n \times r}$. Since $m \geq n$, then $r=n$, the complexity is $O(2Lmn^2)$. The total complexity is:
$$O(IsoCTS)=O(Lmn^2+LTmn^2 + 2Lmn^2)=O(LTmn^2).$$
- Iso-C: One SVD on $\Delta_{TA}$ with complexity:
$$O(IsoC)=O(Lmn^2).$$
- TSV (our nearest competitor): $T$ SVDs per layer on each task matrix (line 1 Alg. 1, TSV paper), and two additional SVDs per layer(line 10-11 Alg.1 TSV paper) and thus:
$$O(TSV)=O(LTmn^2+2Lmn^2)=O(LTmn^2).$$
While Iso-CTS and TSV share the same asymptotic complexity, Iso-CTS incurs slightly more overhead due to the SVD on $\Delta_{TA}$ (lines 2-3). Both methods can be further optimized by computing Truncated SVDs for Iso-CTS (line 7) and TSV (line 1 Alg. 1, TSV paper), since only a few components are retained. This reduces the complexity for both approaches. Iso-C is the most computationally efficient algorithm.
>[C2]: *Why Table 1 only reports “average absolute accuracy” and “average normalized accuracy” but not NAI?*
We report these two metrics to stay consistent with previous literature (Consensus TA, TSV-M). In the revised manuscript we will add a Table reporting NAI.
>[C3]: *The authors should provide more motivation...*
Thank you for the suggestion. In the revised manuscript we will put more emphasis on "no task left behind" achieved by Iso methods highlighting higher performance improvements for tasks underrepresented in TA.
>[Question 1 (Q1)]: *The claim in Section 3.2...*
Yes, each pair of task vectors has a near-zero cosine similarity (Fig. 2a). However, our analysis goes a step further by comparing the cosine similarity between individual task vectors and the task arithmetic vector, demonstrating that this measure alone does not correlate with normalized accuracy improvement (Fig. 2b). Since cosine similarity alone does not explain performance gains, we introduce SAR. Unlike cosine similarity, SAR allows for meaningful differentiation among task matrices by highlighting shared subspaces (Fig. 3b). Additionally, SAR positively correlates with NAI (Fig. 3a). | Summary: This paper focuses on bridging the performance gap between the merged and task-specific models. They first show that the subspace alignment of merged and task-specific models correlates with performance improvement. Then, they propose an isotropic merging method to improve the merging performance via flattening the singular values. An extension is proposed to further improve the alignment and performance by considering the task-specific subspaces. Empirical results show that their method consistently outperforms the baselines.
## After rebuttal:
I think most concerns are well addressed after the rebuttal. I particularly like the interesting findings about the reasons that lead to the performance gap in TA. I believe those insights are valuable and critical to the community. Theoretical analysis is provided to enhance the understanding. Experimental design and results are also improved.
I have no questions now. Just suggest that the authors summarize all the reviews and appropriately integrate them in the paper, no matter in the main paper or the appendix. I am very glad to raise my score to 4.
Claims And Evidence: I am confused about Fig.2, where the authors propose their motivation. The motivation itself makes sense to me, but Fig.2 is confusing. See questions for details.
---
**After rebuttal:**
I think the claims are clear to me right now and are supported by sufficient evidence.
Methods And Evaluation Criteria: I think some important baselines are missing. The motivation of the method is unclear to me as well. See questions for details.
Theoretical Claims: N/A
---
**After rebuttal:**
They propose some theoretical analysis, which I believe will strengthen the interpretability of this paper and provide insightful views.
Experimental Designs Or Analyses: There is a dataset in which the performance is inconsistent. Also, the improvement of Iso-CTS over Iso-C is not sufficiently discussed. See questions for some comments.
---
**After rebuttal:**
Concerns are well addressed.
Supplementary Material: I only checked the README.md file.
Relation To Broader Scientific Literature: Model merging is a popular method to construct a multi-task model without retraining. However, the performance gap between the task-specific and merge models is critical. This paper first analyzes the key reason why the merged model has a worse performance and then proposes a novel method to address it. I believe this paper can help enhance the understanding of model merging and be applied as an effective method to merge models.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The overall organization and writing of this paper are good. I also like the findings that the subspace alignment is critical to model merging, which I believe is important to the understanding of model merging. My main concern is two-fold. First, it lacks some important baselines. Second, there are some potential issues regarding the motivations, the analysis of experimental results, and the effectiveness of the extended method. Please see the questions for details.
---
**After rebuttal:**
I think most concerns are well addressed after the rebuttal. I particularly like the interesting findings about the reasons that lead to the performance gap in TA. I believe those insights are valuable and critical to the community. Theoretical analysis is provided to enhance the understanding. Experimental design and results are also improved.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. In Fig.2, the author tries to show that the task vector orthogonality is not the reason for TA performance improvement. However, Fig.2(b) is confusing to me. The x-axis is the similarity between $\Delta_i$ and $\Delta_{TA}$. Why do we need to compare these similarities? Fig.2(a) shows the task-vector orthogonality, but 2(b) shows some bad NAI, which is enough to conclude the lack of correlation.
2. I may be missing or misunderstanding something, but I have a question regarding the motivation of the proposed method. In Sec.3.3, the author shows that the merging performance is correlated with $SAR_{avg}$. However, in Sec.4.1, the author uses Fig.1(a) as their motivation to “flatten” the singular values. I wonder what the relationship is between these two motivations. The authors claim that the variability of $SAR_{avg}$ is due to the skewness in Fig.1(a), but I didn't see a clear relationship.
3. Some important baselines are missing in the experiments, such as Fisher merging, RegMean, and a recent SOTA PCB-merging. Recent literature also competes with them. Is there any reason the authors do not compare their method with those baselines?
4. What are the values of $k$ in Tab.1?
5. In Fig.4(c), the performance of SUN397 is worst with $\beta=0.5$. Though this is the only inconsistency, I am curious if there is any explanation.
6. In Fig.5(a), the performance of Iso-C improves significantly when $\beta \to 1$, while in Fig.6, the improvement of Iso-CTS is marginal compared to Iso-C. Does it mean that most of the improvement is due to the isotropic singular values rather than the design in Alg.2? I am curious about the performance of Iso-CTS w/o line 12 (i.e., when there are no isotropic singular values).
---
I have no questions now. I just suggest that the authors summarize all the reviews and appropriately integrate them into the paper, whether in the main paper or the appendix. I am very glad to raise my score to 4.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are pleased that the Reviewer appreciates the novelty of the proposed method, the significance of our contribution, and the clear writing. We thank the Reviewer for their constructive feedback, and below we respond to specific points raised.
> [Question 1 (Q1)]: *In Fig.2, the author tries to show...*
Providing Fig. 2(a) along with only the Normalized Accuracy Improvement (NAI) for each task would show that all task vectors $\Delta_t$ exhibit similar cosine similarities close to zero, yet the NAIs vary significantly (e.g. the DTD task vector is orthogonal to all others but has the lowest NAI). We agree that this alone suggests that mere orthogonality among task vectors does not explain differences in performance.
Fig. 2(b) however takes this analysis one step further by examining whether the cosine similarity between each $\Delta_t$ and the task arithmetic vector $\Delta_{TA}$ correlates with NAI. Intuitively, one might expect that a higher similarity between $\Delta_t$ and $\Delta_{TA}$ would result in a higher NAI, but no clear correlation is found. This reinforces the observation that cosine similarity is not a good predictor of performance improvement. We conducted this analysis because we believe that a key factor in understanding the effectiveness of task arithmetic model merging is to directly compare each task matrix with the *merged* model matrix. Since cosine similarity between task vectors and task arithmetic vector alone does not explain performance gains, we propose analyzing the Subspace Alignment Ratio (SAR) between each individual and merged task matrices, which indeed shows a positive correlation with performance improvement (Fig. 3). In the revised manuscript we will include elements of this discussion to more clearly link the results in Figs. 2 and 3.
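This point can be illustrated with a toy numerical sketch (our own illustration, with random matrices standing in for real task vectors; all names and shapes are arbitrary): in high dimension, independent task vectors are mutually near-orthogonal, yet each still carries a sizable and nearly uniform similarity to their sum, so cosine similarity to $\Delta_{TA}$ alone says little about per-task behavior.

```python
import numpy as np

def cos_sim(a, b):
    """Cosine similarity between two matrices, flattened to vectors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
# Random matrices stand in for task vectors Delta_t.
deltas = [rng.standard_normal((32, 32)) for _ in range(8)]
delta_ta = sum(deltas)  # task arithmetic vector: the sum of task vectors

# Pairwise similarities are close to zero (near-orthogonality, as in Fig. 2(a)).
pairwise = [cos_sim(deltas[i], deltas[j])
            for i in range(8) for j in range(i + 1, 8)]
# Similarities to the merged vector are all comparable and clearly positive.
to_merged = [cos_sim(d, delta_ta) for d in deltas]
```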
> [Q2]: *I may be missing or misunderstanding something, but I have a question regarding the motivation of the proposed method...*
The subspace alignment ratio $SAR_\text{avg}(\Delta_t, \Delta_{TA})$ quantifies how well a task matrix $\Delta_t$ is represented by the subspace of the task arithmetic matrix $\Delta_{TA}$. The subspace dimension, $k_M$ as defined in Eq. (6), is determined by the number of singular vectors required to minimize the reconstruction error in terms of Frobenius Norm. Because the singular value spectrum of $\Delta_{TA}$ is skewed, only a few singular values are large, leading to a low $k_M$ (see Fig. 4(a) - $\beta=0.0$ (TA), marked by a vertical red dashed line). Relying on these few singular vectors to represent each task matrix produces a highly variable $SAR_{avg}$ across tasks (Figure 4(b) - $\beta=0.0$), indicating that some tasks are not well captured by this limited subspace.
The motivation for "flattening" the singular values via Iso-C in Sec. 4.1 is to address this issue. By scaling the singular values, the influence of less dominant singular values increases while that of the dominant ones decreases. This adjustment raises the effective subspace dimensionality $k_M$ (as shown by the vertical dashed lines for $\beta > 0$ in Figure 4a), resulting in a subspace that better represents all task matrices. Consequently, this leads to a higher $SAR_{avg}$ (Fig. 4(b)) and Normalized Accuracy Improvement (Fig. 4(c)). Thus, the skewness in Fig 1(a) explains the variability in $SAR_{avg}$ (and hence merging performance), and the singular value flattening is introduced as a solution to this limitation.
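To make the flattening operation concrete, here is a minimal NumPy sketch (our own illustration, not the paper's implementation; the choice to preserve the Frobenius norm when picking the uniform singular value is an assumption):

```python
import numpy as np

def isotropic_flatten(delta_ta):
    """Keep the singular vectors of delta_ta but make its spectrum uniform.

    The uniform value preserves the Frobenius norm -- an assumption for
    illustration; the exact normalization in the paper may differ.
    """
    U, s, Vt = np.linalg.svd(delta_ta, full_matrices=False)
    iso = np.sqrt(np.sum(s**2) / len(s))
    return iso * (U @ Vt)  # equals U @ diag(iso, ..., iso) @ Vt

rng = np.random.default_rng(0)
deltas = [rng.standard_normal((16, 16)) for _ in range(4)]
delta_ta = sum(deltas)                   # generally has a skewed spectrum
delta_iso = isotropic_flatten(delta_ta)  # flat spectrum, same singular vectors
```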
> [Q3]: *Some important baselines are missing in the experiments, such as Fisher merging, RegMean, and a recent SOTA PCB-merging...*
Thank you for pointing out PCB method which is a recent and relevant baseline. Originally, we omitted Fisher Merging and RegMean for brevity as they are outperformed by many recent methods. We additionally include the CART baseline requested by Reviewer **BR4L**. The PCB paper reports the average absolute accuracy for merging 8 tasks across 2 model sizes and we compare these results in the Table below:
|Method|ViT-B/32|ViT-L/14|
|---|---|---|
|Fisher|68.3|82.2|
|RegMean|71.8|83.7|
|PCB|76.3|87.5|
|CART|83.0|90.8|
|**Iso-C**|**86.3**|_94.2_|
|**Iso-CTS**|_86.2_|**94.7**|
Iso-C and Iso-CTS outperform all of the added baselines.
> [Q4]: *What are the values of $k$ in Tab.1?*
We use $\frac{k}{r} = 0.8$ as a default for all the experiments (see L408-410, right column), where $r$ is the number of singular values for a given layer. Therefore, $k$ can vary across layers according to the $r$ of each particular layer.
> [Q6]: *...I am curious about the performance of Iso-CTS w/o line 12...*
We present the comparison of the performance of Iso-CTS and Iso-CTS w/o line 12: https://imgur.com/a/39CrGKJ. We observe that isotropic scaling is indeed a crucial component of Iso-CTS. However, the design in Alg. 2 also plays an important role, especially when the number of merged models increases, leading to up to 2.8% improvement on 20 tasks (see Table 1).
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for your detailed response! While most of my concerns are addressed, I still have a question regarding your response to Q2.
Based on the response, my understanding is that a skewed spectrum leads to a lower $k_M$, which further results in a higher $SAR$. However, the latter relationship is still unclear to me.
From your response, the logic of "causal chain" is $\lbrace\sigma_i\rbrace_i \to k_M \to SAR$. But I don't know why a lower $k_M$ leads to a higher $SAR$ from Eq.5 and 6. The results in Fig.4 that you mentioned only imply that $\lbrace\sigma_i\rbrace_i \to SAR$ **or** $k_M \to SAR$, but I am still unsure whether it is due to a lower $k_M$.
While I think this does not influence the quality and contribution of this paper, I'd still like to know whether $k_M$ affects $SAR$ and how it does so. Could you explain a bit (theoretically) based on Eq.5 and 6? Empirical results are also acceptable but I think it could be hard to verify it via experiments. I am glad to raise my score if this can be addressed.
Due to the limited number of communication rounds, I'd like to summarize my review here. I really like the findings of this paper, esp. Fig.2 and 3, which provide new views to understand task arithmetic. Though some unclear points may be due to my misunderstanding, I strongly encourage the authors to make them clearer for readers, as other reviewers also post similar questions. Overall, this is an interesting and solid paper.
---
Reply to Comment 1.1.1:
Comment: We thank the Reviewer for engaging with us in this discussion. Below, we formally clarify the causal chain $\{\sigma_i\}\rightarrow k_M\rightarrow SAR$.
The Subspace Alignment Ratio (SAR) between a task matrix $\Delta_t$ and a merged matrix $\Delta_M$ is:
$$\text{SAR}(\Delta_t,\Delta_M;k_M)=\frac{||\Pi_{k_M,M}\Delta_t||_F}{||\Delta_t||_F},$$
where $\Pi_{k_{\text{M}},\text{M}}=U_{k_{\text{M}},\text{M}}U^\top_{k_{\text{M}},\text{M}}$ is the projection onto the subspace spanned by the top $k_{\text{M}}$ left-singular vectors of $\Delta_{\text{M}}$. The rank $k_M$ minimizes the approximation error:
$$k_{M}=\text{min}\lbrace k:||\Delta_M-\Pi_{k,M}\Delta_M||_F\leq\epsilon||\Delta_M||_F\rbrace.$$
### **$\sigma_i\rightarrow k_M$: The connection between the skewness of the spectrum of $\Delta_{M}$ and $k_M$**
Using the SVD, $\Delta_{M}=U\Sigma V^T$, where $\Sigma=\text{diag}(\sigma_1,\ldots,\sigma_r)$, by definition of Frobenius norm we have:
$$\Vert \Delta_M\Vert_F^2=\sum_{i=1}^r\sigma_i^2,\quad\Vert\Delta_M-\Pi_{k,\text{M}}\Delta_M\Vert_F^2=\sum_{i=k+1}^r\sigma_i^2.$$
Hence, the relative approximation error becomes:
$$\frac{\Vert\Delta_M-\Pi_{k,M}\Delta_M\Vert_F^2}{\Vert\Delta_M\Vert_F^2}=\frac{\sum_{i=k+1}^r\sigma_i^2}{\sum_{i=1}^r\sigma_i^2},$$
and $k_M$ can be defined as:
$$k_M=\text{min}\left\lbrace k:\frac{\sum_{i=k+1}^r\sigma_i^2}{\sum_{i=1}^r\sigma_i^2}\leq\epsilon^2\right\rbrace.$$
This formulation is equivalent to the one used in the paper but **explicitly shows how the skewness of the spectrum $\lbrace \sigma_i\rbrace$ controls $k_M$**. When $\Delta_M$ has a skewed spectrum (e.g. $\sigma_1^2 \gg \sum_{i=2}^r \sigma_i^2$), a small $k_M$ is enough to satisfy the condition. This explains why Task Arithmetic $\Delta_{TA}$ ($\beta=0$ in Fig. 4(a)) — which has a skewed spectrum — yields a smaller $k_M$ than Iso-C, whose flatter spectrum leads to a larger $k_M$. We believe that expressing $k_M$ directly in terms of singular values highlights the link between the spectral skewness and subspace dimensionality. We will adopt this definition in the revised version.
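The skewness-to-$k_M$ link can be demonstrated with a small numerical sketch (our own illustration: the two spectra below are made up, and $\epsilon=0.05$ is an assumed tolerance):

```python
import numpy as np

def effective_rank(sigma, eps=0.05):
    """Smallest k such that sum_{i>k} sigma_i^2 <= eps^2 * sum_i sigma_i^2."""
    energy = np.sort(np.asarray(sigma, dtype=float))[::-1] ** 2
    total = energy.sum()
    for k in range(len(energy) + 1):
        if energy[k:].sum() <= eps**2 * total:
            return k

skewed = np.array([10.0, 1.0, 0.5, 0.2, 0.1])  # TA-like spectrum
flat = np.full(5, 1.0)                         # Iso-C-like spectrum
# A skewed spectrum needs few singular vectors; a flat one needs them all.
```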
### **$k_M \rightarrow SAR$: The connection between $k_M$ and SAR**.
The rank $k_M$ defines the **effective rank** of the subspace identified by the merged model, determined directly by its spectrum. Let $k_{TA}$ be the effective rank of $\Delta_{TA}$, and define
$$T=\lbrace u_1,..,u_{k_{TA}}\rbrace$$
as the orthonormal basis formed by those $k_{TA}$ singular vectors. Flattening the spectrum of $\Delta_{TA}$ (Fig. 4(a)) yields $\Delta_{Iso-C}$ with effective rank $k_{Iso}>k_{TA}$ (as discussed previously). This flattening modifies only the singular values of TA, leaving the singular vectors unchanged. Therefore, the original subspace $T$ is contained within the larger subspace spanned by the top singular vectors of $\Delta_{Iso-C}$, defined as:
$$I=\lbrace u_1,..,u_{k_{TA}},..,u_{k_{Iso}}\rbrace.$$
Thus, by construction, we have $T\subset I$.
For simplicity, let $\Pi_T=\Pi_{k_{\text{TA}},\text{TA}}$ and $\Pi_I=\Pi_{k_{Iso},\text{Iso}}$ denote the projection operators onto the subspaces spanned by $T$ and $I$, respectively. Since $T\subset I$, for any matrix $\Delta_t$ it holds that:
$$SAR(\Delta_t,\Delta_{TA})=\frac{\Vert\Pi_T\Delta_t\Vert_F}{\Vert\Delta_t\Vert_F}\leq\frac{\Vert\Pi_I\Delta_t\Vert_F}{\Vert\Delta_t\Vert_F}=SAR(\Delta_t,\Delta_{Iso-C}).$$
This inequality holds because by definition:
$$\frac{\Vert\Pi_T\Delta_t\Vert_F^2}{\Vert\Delta_t\Vert_F^2}=\frac{\sum_{i=1}^{k_{TA}}\sum_j\langle u_i,\Delta_t^{(j)}\rangle^2}{\Vert\Delta_t\Vert_F^2}\leq\frac{\sum_{i=1}^{k_{TA}}\sum_j\langle u_i, \Delta_t^{(j)}\rangle^2+\sum_{i=k_{TA}+1}^{k_{Iso}}\sum_j\langle u_i,\Delta_t^{(j)}\rangle^2}{\Vert\Delta_t\Vert_F^2}=\frac{\Vert\Pi_I\Delta_t\Vert^2_F}{\Vert\Delta_t\Vert^2_F},$$
where $\Delta_t^{(j)}$ denotes the $j$-th column of $\Delta_t$. Equality holds (i.e. $SAR(\Delta_t,\Delta_{TA}) = SAR(\Delta_t,\Delta_{Iso-C})$) only if the additional vectors added to the basis $T$ — that is $\lbrace u_{k_{TA}+1},\ldots,u_{k_{Iso}}\rbrace$ — are orthogonal to each $\Delta^{(j)}_t$ or, equivalently, if they do not intersect the column space of $\Delta_t$ (i.e. its left singular vectors).
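This monotonicity can be checked numerically; the sketch below (our own illustration with random matrices of arbitrary size, and a SAR implementation following the definition above) shows that SAR is non-decreasing in the subspace dimension and reaches 1 at full rank.

```python
import numpy as np

def sar(delta_t, delta_m, k):
    """SAR of delta_t w.r.t. the top-k left-singular subspace of delta_m."""
    U, _, _ = np.linalg.svd(delta_m, full_matrices=False)
    proj = U[:, :k] @ U[:, :k].T  # projection onto the top-k subspace
    return np.linalg.norm(proj @ delta_t) / np.linalg.norm(delta_t)

rng = np.random.default_rng(0)
delta_t = rng.standard_normal((12, 12))
delta_m = rng.standard_normal((12, 12))
# Enlarging the projection subspace can only capture more Frobenius mass.
sars = [sar(delta_t, delta_m, k) for k in range(1, 13)]
```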
Hence, in general **a lower $k_M$ yields a smaller or equal SAR compared to a larger $k_M$**. However, our empirical findings show that enriching the basis $T$ with singular vectors corresponding to smaller singular values in the original task arithmetic spectrum (i.e. $\lbrace u_{k_{TA}+1},\ldots,u_{k_{Iso}}\rbrace$) **consistently increases the alignment ratio** (Fig. 4(b)), implying that these vectors are relevant for representing each task matrix $\Delta_t$ and not orthogonal to its left singular vectors. This analysis formally supports the claim that a higher effective rank $k_M$ for the merged matrix leads to a higher SAR. We will make explicit the connection between $k_M$, Iso-C and SAR at the end of Section 4.1 in the final version of the paper. | Summary: The paper studies how to improve model merging methods by leveraging the singular value decomposition (SVD) of task matrices, defined as the differences between fine-tuned models' weight matrices and the pre-trained model. The authors first show that merging performance correlates with the alignment between the top eigenspace of task-specific and merged matrices. Building on this insight, they propose *isotropic merging* (ISO-C), which replaces the singular values of merged matrices with a uniform spectrum. Additionally, they refine the merged matrices by substituting directions associated with small singular values with task-specific eigenspaces orthogonal to the top eigenspace of the merged matrices before flattening the spectrum. These approaches achieve state-of-the-art performance on standard computer vision model merging benchmarks.
## Update after rebuttal
I maintain my positive assessment.
Claims And Evidence: The claims are well supported.
Methods And Evaluation Criteria: The paper includes 3 standard model merging benchmarks with 8, 14, and 20 tasks and evaluates 3 CLIP models with VIT base and large encoders, as it is standard in this research area.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental design is sound.
Supplementary Material: The supplementary material (appendix) was reviewed. I did not review the attached code.
Relation To Broader Scientific Literature: The paper contributes to the ongoing research on improving model merging and mitigating task interference. Similarly to concurrent studies, it considers SVD decomposition of the weight matrices. The proposed techniques – uniform singular value scaling and selective incorporation of task-specific subspaces – are novel and improve performance over previous techniques across model sizes and the number of merged tasks.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Other strengths:
- The proposed method is original and contributes novel insights into model merging.
- Model merging via weight interpolation is a relatively recent but impactful area. Improving it by reducing task interference is a significant contribution.
- The paper is clearly written and well-structured.
Minor weaknesses:
- Sec. 3, along with Fig. 2 and 3, lacks details of the experimental setting and the models considered.
- $\rho$ (L195, right) is undefined in the main text.
- L240-246 repeatedly mention *more/less correlated tasks*. I think this terminology is vague and should be clarified in terms of alignment.
- The claim in L256-258 (right) would benefit from explicit justification.
Other Comments Or Suggestions: - Have you analyzed SAR at different depths of the models?
- I assume Fig. 3(a) is obtained for a ViT base. How would it look for the larger model?
- Finally, I think the paper would be much stronger presenting some results for NLP as well, as it is standard in the field since model merging is widely relevant beyond computer vision.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are pleased that the Reviewer acknowledges the novelty of the proposed method, the contribution towards understanding model merging and clear writing. We thank the Reviewer for the constructive feedback, and below we respond to specific points raised.
> [Minor weakness 1 (MW1)]: *Sec. 3, along with Fig. 2 and 3, lacks details of the experimental setting and the models considered.*
We used ViT-B/16 and 8 tasks (see Sec. 5.1 for details). We will clarify this in the revised manuscript.
> [MW2]: *$\rho$ (L195, right) is undefined in the main text.*
$\rho$ is a Pearson correlation coefficient – defined in the caption of Fig. 3. We will unify the notation to $\rho_{\text{TA}}$ and add the definition to the main text.
> [MW3]: *L240-246 repeatedly mention more/less correlated tasks. I think this terminology is vague and should be clarified in terms of alignment.*
We will clarify this part in the revised manuscript to avoid vagueness:
*However, significant variability in the average alignment ratio across the dataset leads to a lower accuracy improvement for **less aligned tasks** compared to **the tasks belonging to groups of high alignment**. This variability stems from the skewness of the task arithmetic spectrum (Fig. 1), which is concentrated in the first few singular values (which we call top or dominant), favoring **the tasks from the highly aligned groups**.*
> [MW4]: *The claim in L256-258 (right) would benefit from explicit justification.*
We can formalize the SVD problem for the first left principal singular vector as the variance maximization problem:
$$u_1=\arg\max_{\Vert u\Vert=1}\Vert\Delta_{TA}^Tu\Vert^2,\quad\text{where}\quad\Vert\Delta_{TA}^Tu\Vert^2=u^T\left(\sum_{t=1}^T\Delta_t\Delta_t^T\right)u+u^T\left(\sum_{t,s=1,\,t\neq s}^T\Delta_t\Delta_s^T\right)u$$
If a particular task $\Delta_j$ has dominant directions with significantly lower intensity compared to the other tasks (i.e. lower Frobenius Norm), then its individual contribution $\Delta_j \Delta_j^T$ to the total variance becomes smaller. Similarly, cross terms involving $\Delta_j$ will also be comparatively small. Therefore, task $j$ explicitly contributes less to the maximized variance captured by the first principal singular direction.
Moreover, if the directions of $\Delta_j$ are orthogonal or nearly orthogonal to $u_1$ (i.e. $u_1^T\Delta_j=0$), task $j$ contributes minimally or not at all along this principal direction. Similar considerations apply to the subsequent singular vectors $u_2, \ldots, u_k$ defining the common subspace. Finally, as the number of tasks $T$ increases and tasks become more diverse, it becomes increasingly likely that tasks with distinct but smaller-magnitude directions will be underrepresented or absent in the dominant singular directions identified by the task arithmetic decomposition. This is empirically supported by our results: Iso-CTS provides the most improvement when the number of tasks increases.
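The variance decomposition above can be verified numerically; below is a toy sketch (our own illustration, with random matrices and an arbitrary number of tasks):

```python
import numpy as np

rng = np.random.default_rng(0)
deltas = [rng.standard_normal((8, 8)) for _ in range(3)]
delta_ta = sum(deltas)
u = rng.standard_normal(8)
u /= np.linalg.norm(u)  # unit-norm direction, as in the argmax constraint

# The variance captured along u splits into per-task and cross-task parts.
self_terms = sum(u @ (d @ d.T) @ u for d in deltas)
cross_terms = sum(u @ (deltas[i] @ deltas[j].T) @ u
                  for i in range(3) for j in range(3) if i != j)
captured = np.linalg.norm(delta_ta.T @ u) ** 2
```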
> [Comment 1 (C1)]: *Have you analyzed SAR at different depths of the models?*
We analyze SAR at different depths. For the ViT-B/16 model, we calculate SAR between fine-tuned and merged weight matrices and an average of all the matrices from a given layer. We present the results here: https://imgur.com/a/tLnEoAi. We observe that the alignment is higher for Iso across all layers of the vision transformer. One may expect early layers to be more aligned but we find that for both approaches the alignment is similar across the layers.
> [C2]: *I assume Fig. 3(a) is obtained for a ViT base. How would it look for the larger model?*
Yes, we obtain Fig. 3(a) for ViT-B/16. See the Figure for ViT-L/14 here: https://imgur.com/a/3V6xv7T. It closely resembles Fig. 3(a) for ViT-B/16 from the paper.
> [C3]: *I think the paper would be much stronger presenting some results for NLP...*
We present NLP results following the experimental setup from [1]. We use T5-Large-LM-Adapt base model fine-tuned on tasks from T0 mixture. We consider subsets of 8 and 7 NLP tasks adhering to the setup from Table 1 from [1] and compute an average accuracy of Iso-C and Iso-CTS in these settings:
|**Method**|**8 tasks (Zhou et al., 2022)**|**7 tasks (Yadav et al., 2023)**|
|-|-|-|
|Weight Avg.|56.4|60.5|
|TA|63.8|69.2|
|TIES|62.8|71.9|
|Fisher|57.7|61.0|
|RegMean|69.1|74.3|
|MaTS|72.5|81.5|
|**Iso-C**|**75.6**|**83.3**|
|**Iso-CTS**|_75.2_|_82.8_|
Both Iso-C and Iso-CTS significantly outperform the competing approaches, which highlights the versatility of our proposed methods. We observe that Iso-CTS achieves slightly worse results than Iso-C. This is consistent with our vision results, in which both approaches performed very similarly when merging 8 models. We argue that the common space captures all the directions necessary to reliably represent these 7 and 8 NLP tasks, while task-specific subspaces may become more effective when merging more models.
[1] Tam et al. Merging by Matching Models in Task Parameter Subspaces, TMLR 2024
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I maintain my positive assessment. | null | null | null | null | null | null |
EpiCoder: Encompassing Diversity and Complexity in Code Generation | Accept (poster) | Summary: The paper introduces *EpiCoder*, a novel approach to enhancing code-generation performance through hierarchical feature trees extracted from seed code. Empirical results demonstrate that EpiCoder surpasses similarly sized baselines in functional correctness (measured via pass@k) and complexity (measured using Halstead complexity and LLM-as-a-Judge). Additionally, the authors explore EpiCoder's potential to handle code generation tasks from function-level generation to multi-file and repository-level code synthesis.
Claims And Evidence: Claims made in this submission are supported by experimental evidence.
Methods And Evaluation Criteria: Correct to me.
Theoretical Claims: No theoretical claims in this paper.
Experimental Designs Or Analyses: Correct to me.
Supplementary Material: I read the appendix.
Relation To Broader Scientific Literature: This paper builds on existing research in **LLM-driven code generation** by introducing a feature tree–based framework that systematically captures hierarchical semantic relationships. It is inspired by **abstract syntax trees (ASTs)** yet extends beyond snippet-based approaches. By clustering features and enabling control over depth and breadth, their method moves beyond simple function-level tasks to complex multi-file or repository-level code.
Essential References Not Discussed: The key contribution of this paper is a feature tree-based code generation algorithm. It only cites common code LLMs and code synthesis datasets. There are several more relevant works [1, 2] that should be discussed.
[1] Li, H., Zhou, X., & Shen, Z. (2024). Rewriting the code: A simple method for large language model augmented code search. arXiv preprint arXiv:2401.04514.
[2] Koziolek, H., Grüner, S., Hark, R., Ashiwal, V., Linsbauer, S., & Eskandani, N. (2024, April). LLM-based and retrieval-augmented control code generation. In Proceedings of the 1st International Workshop on Large Language Models for Code (pp. 22-29).
Other Strengths And Weaknesses: ## Strengths
- **Feature Tree–Based Code Synthesis**:
The hierarchical “feature tree” approach captures fine-grained code features (e.g., data structures, control flows), enabling adjustable levels of complexity in code generation.
- **Promising Performance**:
The authors synthesize 433k instruction data and train EpiCoder, which achieves state-of-the-art performance among comparably sized models in multiple function-level and file-level benchmarks.
- **Extensive Experimental Validation**:
The authors conduct extensive experiments to compare their proposed method with baselines, demonstrating their approach outperforms others in terms of Complexity and Diversity.
## Weaknesses
- **Complex and Resource-Intensive Pipeline**:
Constructing and evolving a feature tree, then repeatedly refining the generated code through test-and-debug cycles, can be intricate and computationally expensive. An alternative approach—retrieving and filtering real-world code—may offer greater efficiency and control to provide high-quality training data.
- **Potential for Hallucinations or Overfitting**:
Although feature tree–based synthesis can increase the diversity and complexity of generated code, the reliance on evolved features may introduce distribution shifts from the 'real-world' code. This can lead to hallucinations or overfitting, potentially compromising real-world code generation quality.
Other Comments Or Suggestions: - **Definition of 'Feature' and 'Feature Tree':** Since this is a feature tree-based code generation framework, providing a clear, upfront definition of “feature” and “feature tree” would greatly improve the paper’s clarity and readability.
* **Overview illustration (Figure 2)**:
* **A) Feature Tree Extraction**: Could you provide what the raw code samples look like in your code set? Also, please clarify what the 'blue box' represents and how 'clustered features' differ from 'extracted features'.
* **B) Feature Tree Evolution**: Since the notion of “feature” remains somewhat unclear, please specify if any restrictions or guidelines govern how new feature nodes are added during evolution.
* **C) Code Generation**: If the two histograms represent feature distribution or frequency, it would be helpful to show the exact steps from the feature tree (Section B) to the final code (Section C). Additionally, please explain Docker’s role—is it an execution sandbox? If so, you should include some configuration details. The folder tree in the code block is also somewhat confusing; more explanation would be beneficial.
* **Overall**: The overview could be made clearer by indicating what each stage takes as input and produces as output, along with more explicit navigation through the framework.
Questions For Authors: - My main concern is about the purpose of the feature tree. In the code generation pipeline, you sample features from the subtrees into a set and feed it to an LLM. This raises the question of whether the tree structure is necessary. Could you clarify why you need such a complex pipeline to construct a tree rather than a simple frequency-based representation?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thorough review and valuable feedback on our paper.
## 1. Additional References
We appreciate your suggestion and will include discussions on them in our paper. Li et al. (2024) explore the use of LLMs for rewriting code to enhance code search performance, while Koziolek et al. (2024) propose a retrieval-augmented method for controlled code generation.
## 2. Necessity of Synthetic Data
Certain types of data are extremely scarce in real-world scenarios, making synthetic data an essential and widely adopted approach in both academia and industry. For example, Qwen2.5-Coder [1] utilizes tens of millions of synthetic instruction samples, and models like DeepSeek-V3 [2] and R1 [3] also incorporate synthetic data during training. For code instruction data, while collecting raw code is relatively easy, obtaining well-structured tasks along with their corresponding solution code is significantly more challenging. Other works also adopt synthetic code instruction data, such as WaveCoder [4], MagiCoder [5], SelfCodeAlign, etc.
While potential hallucinations are an inherent challenge in any synthetic data approach, we have implemented several measures to mitigate them, including frequency-based distribution adjustment, verification through test cases, and enhancement of complexity and diversity. Therefore, we believe these concerns are not specific weaknesses of our method, but rather common challenges faced by all synthetic data approaches. In addition, a recent study from Stanford University [6] investigates which factors are critical for a model to benefit from synthesized data. Its main conclusion is that the presence of reasoning behaviors, rather than the correctness of answers, is the critical factor, which suggests that although hallucinations do exist in synthesized data, the data may still improve models because of other factors that must not be neglected.
Regarding the domain-shift risk of feature tree evolution, we acknowledge that it has both benefits and drawbacks.
On the positive side, we curate feature trees from seed data that has undergone a preprocessing pipeline and therefore cannot capture the entire domain distribution of real-world data; feature tree evolution alleviates this inherent limitation.
On the negative side, feature tree evolution can introduce noise.
However, to mitigate this noise, we apply filtering during feature combination when generating new questions. Furthermore, training on our synthesized data yields consistent improvements across multiple benchmarks, which demonstrates that the benefits outweigh the drawbacks.
## 3. Definition of Feature and Feature Tree
Features are abstractions of code. We organize features into a tree structure based on their logical relationships, where parent and child nodes represent a hierarchical containment relationship. To aid understanding, we provide illustrative examples in Figure 1(a) and Appendix C.
## 4. Clarifications on Figure 2
### Example of Raw Code Sample
You can refer to the dataset at bigcode/the-stack-v2 on Hugging Face. https://huggingface.co/datasets/bigcode/the-stack-v2
### Difference Between Clustered Features and Extracted Features
The key difference lies in how they are generated. Extracted features are directly obtained during the extraction process, while clustered features are introduced during clustering to ensure structural completeness. For example, in Figure 1(a), we extract features like "computation," "XOR," and "AND." During clustering, we introduce the clustered feature "Logical Operation" to connect them in a meaningful hierarchy.
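To make the distinction concrete, here is a toy feature tree as nested dicts (the node names follow the paper's Figure 1(a) example, but the nested-dict representation itself is our illustrative sketch, not the paper's data format):

```python
# Clustered features (e.g. "logical operation") are introduced as internal
# nodes to connect extracted features (the leaves) into a hierarchy.
feature_tree = {
    "computation": {
        "logical operation": {  # clustered feature, added during clustering
            "XOR": {},          # extracted feature
            "AND": {},          # extracted feature
        },
    },
}

def leaf_features(node, path=()):
    """Enumerate the most concrete features as root-to-leaf paths."""
    if not node:
        return [path]
    return [p for name, child in node.items()
            for p in leaf_features(child, path + (name,))]

paths = leaf_features(feature_tree)
```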
### Evolution Process
We use an LLM to guide this process. Relevant prompts and detailed examples can be found in Appendix C.3.
### Code Generation
An example of the full generation process is provided in Appendix C.4 and C.5.
### Docker
Docker serves as an execution sandbox. We will open-source our code along with the relevant configuration details.
## 5. Advantages of the Tree Structure
We first clarify a misunderstanding: our method does not sample individual features from subtrees but instead samples entire subtrees, ensuring compatibility during code generation. Please also refer to our response to reviewer eNxJ for a clarification of the advantages of the tree structure.
## References
[1] "Qwen2.5-Coder Technical Report." arXiv 2409.
[2] "DeepSeek-V3 Technical Report." arXiv 2412.
[3] "DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning." arXiv 2501.
[4] Yu Z, et al. "WaveCoder: Widespread And Versatile Enhancement For Code Large Language Models By Instruction Tuning." ACL2024.
[5] Wei Y, et al. "Magicoder: Empowering Code Generation with OSS-Instruct." ICML2024.
[6] Kanishk Gandhi, et al. "Cognitive Behaviors that Enable Self-Improving Reasoners, or, Four Habits of Highly Effective STaRs" arXiv 2503. | Summary: This paper presents EpiCoder, a novel framework designed for code generation, addressing the limitations of existing methods that rely on code snippets as seed data. It introduces a feature tree-based synthesis approach that captures hierarchical code features, enhancing complexity and diversity in generated code. By refining a structured feature tree, EpiCoder allows precise control over code synthesis, supporting function-level and multi-file scenarios. Extensive experiments demonstrate that EpiCoder-trained models achieve state-of-the-art performance on multiple benchmarks.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes. EpiCoder models are evaluated on several common code benchmarks.
Supplementary Material: No. The authors don't upload any supplementary material.
Relation To Broader Scientific Literature: This work is related to code generation with large language models and code instruction fine-tuning for LLMs.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: **Strengths**
1. The authors propose a novel method to synthesize code instruction fine-tuning data.
2. This method can generate diverse instruction data; furthermore, it can be adapted to repo-level code generation.
**Weaknesses**
1. The method of constructing a feature tree is very complicated, and the description in the main text is not detailed enough and difficult to understand.
2. Many key parameters for building the tree are missing. It’s difficult to reproduce or follow without code.
Other Comments Or Suggestions: No.
Questions For Authors: 1. What do you think are the advantages of building features through trees compared to directly extracting independent features?
2. When generating task data, how do you ensure that the provided feature combinations are reasonable? For example, some features may contradict each other.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We address your concerns below and hope these clarifications help resolve them.
## 1. Missing Supplementary Material
Our paper includes a 26-page appendix at the end, which provides extensive details on our methodology, implementation, and experiments.
## 2. Implementation Details
Appendix C includes a step-by-step breakdown with detailed prompts and execution examples for better understanding. Additionally, we plan to **open-source our code and data** as soon as possible (in a month) while adhering to anonymity policies, ensuring full reproducibility. We hope this addresses your concern.
## 3. Rationality of Feature Combination
As detailed in Appendix C.4 (line 1663-1664), our approach ensures that the LLM selects mutually compatible feature subsets when generating tasks. This guarantees that all generated data samples maintain a reasonable and coherent combination of features.
## 4. Advantages of the Feature Tree
Our feature tree is constructed based on real code, leveraging structured modeling of the hierarchical relationships between code features to explicitly capture semantic associations among code elements. Furthermore, by utilizing the hierarchical topology of the feature tree, we can synthesize new data that has not appeared in real-world scenarios (seed code corpus), thereby expanding the model's generalization boundary for complex code patterns.
As mentioned in the introduction (line 70-84), the key advantages are:
### (1) Controllable Complexity
We control complexity by adjusting the tree's shape, such as depth and width. In contrast, using independent features relies solely on increasing feature count, which often leads to incompatible features or unnatural combinations that do not reflect real-world scenarios.
Section 3.4 (Figure 5) and Appendix A.3 show the effectiveness of feature tree for generating complex file-level and repo-level code data.
### (2) Targeted Learning
For example, if we need to generate some data focused on data processing, we can adjust the distribution to increase the probability of sampling the node "data processing" and its subnodes. This structured relationship is not possible with independent features, where it is unclear which features are related to "data processing".
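A minimal sketch of this kind of targeted re-weighting follows; the node names, the flat weight dictionary, and the boost factor are illustrative, not the authors' implementation:

```python
def boost_weights(tree, weights, target, factor=5.0, inside=False):
    """Multiply the sampling weight of `target` and all of its
    descendants by `factor`, leaving other nodes untouched."""
    inside = inside or tree["name"] == target
    if inside:
        weights[tree["name"]] = weights.get(tree["name"], 1.0) * factor
    for child in tree["children"]:
        boost_weights(child, weights, target, factor, inside)
    return weights

# Hypothetical feature tree with a "data processing" branch to emphasize.
tree = {
    "name": "root",
    "children": [
        {"name": "data processing", "children": [
            {"name": "parsing", "children": []},
        ]},
        {"name": "networking", "children": []},
    ],
}
weights = {n: 1.0 for n in ["root", "data processing", "parsing", "networking"]}
weights = boost_weights(tree, weights, "data processing", factor=5.0)
```

The hierarchy is what makes this possible: boosting one node automatically boosts its subnodes, which a flat list of independent features cannot express.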
### (3) Evolution Efficiency
The tree structure provides clear and organized directions (depth and breadth) for evolution, making the process more efficient and achieving broader coverage. An example is shown in Appendix C.3.
Besides, in our response to Reviewer AwKV, we have added the comparison with SelfCodeAlign [1], which follows an approach closer to independent feature-based methods. The result is shown in Table 2 of [`https://anonymous.4open.science/r/epicoder_rebuttal-C619/tables.md`](https://anonymous.4open.science/r/epicoder_rebuttal-C619/tables.md), and the 4% improvement highlights the effectiveness of our approach.
---
## References
[1] Wei Y, Cassano F, Liu J, et al. "SelfCodeAlign: Self-Alignment for Code Generation." *NeurIPS*, 2024. | Summary: This paper introduces EpiCoder, a feature tree-based framework for code generation that addresses diversity and complexity in generated code. The authors propose a hierarchical feature to represent features like concepts used in the code. The framework consists of three components: (1) Feature Tree Extraction, where features are extracted from seed data and organized into a tree structure, (2) Feature Tree Evolution, which iteratively expands the tree to increase diversity beyond the seed data, and (3) Feature Tree-Based Code Generation, which samples from the tree to create code conditioned on a sampled subtree. The generated code can range from single function to file levels. They not only show good performance on function-level benchmarks including HumanEval, MBPP, and BigCodeBench, but they also created a new file-level benchmark XFileDep to evaluate the performance of their method, showing great performance compared to other open models.
Claims And Evidence: Their claim that hierarchical feature trees can enable the generation of more complex and diverse code is supported by the experimental results. The authors demonstrate performance improvements on multiple standard benchmarks and provide analysis of complexity metrics compared to existing approaches. They also show that their approach can handle multiple levels of complexity, from function-level to file-level generation with the newly proposed XFileDep benchmark.
Methods And Evaluation Criteria: They address the train/test leakage problem by using the EvoEval benchmark to show that their method is not overfitting the benchmark data. Although the score on EvoEval showing the model has a similar score to closed models like Claude 3 is a bit strange.
The proposed XFileDep benchmark seems like a good benchmark for file-level code generation. There are details about the benchmark design in the appendix. One issue is that since it uses a similar synthetic generation pipeline, it may unfairly favor their own model as the finetune data may be more aligned with the distribution of this benchmark dataset.
Theoretical Claims: N/A: The paper is an empirical studies on synthetic data generation for code and finetuning LLMs.
Experimental Designs Or Analyses: The experiments for the function-level code follow the standard coding benchmarks. The experiments for the file-level code are run on their own proposed XFileDep benchmark.
Supplementary Material: The appendix contains more details about how the XFileDep benchmark is constructed, including the test case generation.
Relation To Broader Scientific Literature: The work is related to many code synthetic data generation methods and finetune work like WaveCoder and Evol-Instruct, etc.
Essential References Not Discussed: Most of the related work is discussed.
Other Strengths And Weaknesses: The paper is well written and the method is novel and simple. The experimental results are comprehensive. The main weakness is that the function-level benchmarks may suffer from a train/test leakage problem, and I don't think EvoEval, as a synthetic benchmark, is a gold standard for fully addressing this issue. The file-level benchmark is very interesting, but since its data is synthetic and comes from a very similar pipeline, it may give an unfair advantage to the proposed method.
Other Comments Or Suggestions: Please see the above sections for some comments and suggestions.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback and for recognizing the novelty and comprehensiveness of our work. We address your concerns below.
## 1. Train/Test Leakage in Function-Level Benchmarks
We present a train/test leakage analysis in Appendix B.2 (Figure 9), which demonstrates that EpiCoder has a low risk of data leakage. We further validated the low potential for leakage on EvoEval; the updated figure is available at the following anonymous link:
[`https://anonymous.4open.science/r/epicoder_rebuttal-C619/leakage_analysis.png`](https://anonymous.4open.science/r/epicoder_rebuttal-C619/leakage_analysis.png)
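Leakage analyses of this kind are commonly based on verbatim n-gram overlap between training data and benchmark items; the following generic sketch is only illustrative and not necessarily the procedure used in Appendix B.2:

```python
def ngram_overlap(train_texts, test_text, n=3):
    """Fraction of the test text's word n-grams that appear
    verbatim anywhere in the training corpus."""
    def ngrams(tokens):
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    train = set()
    for t in train_texts:
        train |= ngrams(t.split())
    test = ngrams(test_text.split())
    return len(test & train) / len(test) if test else 0.0

# Toy example: most, but not all, test trigrams occur in training data.
overlap = ngram_overlap(["a b c d e"], "a b c d e f", n=3)
```

An overlap close to 1.0 for a benchmark item would flag likely contamination, while low values support a claim of limited leakage.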
## 2. Potential Bias in the XFileDep Benchmark
The details of the XFileDep construction process are in Appendix A.2. To mitigate potential bias, we have implemented the following measures:
- **Pipeline Difference**: Additional filters and human checks make the benchmark distribution differ from that of the training data.
- **Data Isolation**: The benchmark data is strictly separated from the training data to prevent direct overlap.
- **Similarity Filtering**: We apply similarity-based filtering to remove benchmark data that exceeds the leakage threshold according to embedding similarity, reducing potential bias.
- **Data Format Difference**:
- In Supervised Fine-Tuning (SFT), our model generates the entire code given the task.
- In the benchmark, the model generates code based on the key class/function name/docstring in the file.
We believe these measures help ensure that XFileDep serves as a fair and valuable benchmark for evaluating file-level code generation. | Summary: This paper presents a new data synthesis method to generate complex and diverse code data. Given some seed code data, this method prompts an LLM to extract code features (e.g., functionality concepts, programming paradigm, etc.) from each code and organize them into a tree structure (i.e., feature tree). It then prompts the LLM to expand the tree by introducing new concepts in the same category or sub-categories. The authors used this method to generate 380K code functions and 53K code files. Using this new data, they finetuned base LLMs to obtain EpiCoder series and demonstrated that these models achieved better performance than existing models on five benchmarks.
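As an illustration of the similarity-filtering measure above, here is a minimal sketch using cosine similarity over embeddings; the threshold value and embedding model are unspecified in the rebuttal, so everything here is illustrative:

```python
import numpy as np

def keep_mask(bench_emb, train_emb, threshold=0.9):
    """Boolean mask: True for benchmark items whose maximum cosine
    similarity to any training embedding stays below `threshold`."""
    b = bench_emb / np.linalg.norm(bench_emb, axis=1, keepdims=True)
    t = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    max_sim = (b @ t.T).max(axis=1)
    return max_sim < threshold

train = np.array([[1.0, 0.0], [0.0, 1.0]])
bench = np.array([[1.0, 0.01],   # near-duplicate of a training item
                  [0.7, 0.7]])   # dissimilar enough to keep
mask = keep_mask(bench, train, threshold=0.9)
```

Items flagged `False` (near-duplicates of training data) would be removed from the benchmark before evaluation.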
## update after rebuttal
I want to thank the authors for conducting the additional experiments based on my suggestion. My major concern about the unfair comparison has been addressed by the new comparison results using datasets with comparable sizes. The new comparison between EpiCoder and SelfCodeAlign is also helpful.
On the other hand, while the authors claimed that the newly proposed benchmark is more challenging/better than ClassEval etc., it would be more convincing if the authors could evaluate EpiCoder on some existing benchmarks, since those are peer-reviewed and widely used. I also suggest the authors increase the number of manually analyzed data points in the test case analysis; 30 data points do not seem sufficient.
Nevertheless, my major concerns have been addressed. So I am happy to raise my score from 2 to 3.
Claims And Evidence: The claims about the technical approach and novelty are largely reasonable. To the best of my knowledge, SelfCodeAlign is the only work that shares a similar idea of extracting high-level code features (or code concepts as called by the authors of SelfCodeAlign) from seed data and generating new data based on the extracted features. Compared with SelfCodeAlign, this work considers more kinds of code features, represents the extracted features in a nice and clean tree structure, and proposes a new component to enrich the features in the tree structure. So I think this work is novel enough.
- Wei, Yuxiang, et al. "SelfCodeAlign: Self-Alignment for Code Generation." The Thirty-eighth Annual Conference on Neural Information Processing Systems.
My main concern is about the claims on the effectiveness of the data generated by the proposed method. The authors compared EpiCoder with existing models like MagiCoder and WaveCoder and attributed the better performance of EpiCoder to the higher complexity and diversity of their synthetic data. However, this is not a fair comparison, since the EpiCoder data is much bigger than the synthetic data used by MagiCoder and WaveCoder. Specifically, the EpiCoder data includes 380K code functions and 53K code files, while the MagiCoder data only includes 75K code functions, and the WaveCoder data only includes 111K code functions. It is likely that the better performance of EpiCoder is simply because of the significantly larger finetuning dataset. Furthermore, the performance improvement over the baseline models is not significant (2-3% compared to the second-best baseline). This makes it less convincing that the feature-based synthesis method is really effective.
Methods And Evaluation Criteria: The proposed method makes sense. I do have some concerns about LLM hallucinations, since this method makes heavy use of LLMs for feature extraction, clustering, and evolution. Yet other methods like OSS-Instruct also make use of LLMs, and it seems LLMs are able to learn meaningful patterns from noisy data. So I don't think this is a major issue. But some analysis or manual validation would be helpful.
A more significant issue is about the feature sampling and code generation steps in the proposed method. Currently, this method performs weighted sampling over the features in the tree and then prompts the LLM to generate code based on the sampled features. It does not consider whether it makes sense to put some features together when generating code. So it may generate code with potentially irrelevant or even contradicting features. The generated code may be syntactically correct but doesn't make much sense in practice. This may be a potential reason why the synthetic dataset does not gain a significant performance improvement even though its size is 3-4 times bigger than other datasets. I suggest the authors sample some generated code functions/files and conduct a manual analysis to check whether they make sense.
The last step of the data synthesis method is about iterative refinement. However, it is questionable whether the test files generated by the proposed methods are indeed effective. It is likely that the LLM generates some weak or even invalid test files. While these tests are executable, they do not really examine the generated code properly.
Theoretical Claims: There are no theoretical claims in this paper.
Experimental Designs Or Analyses: As mentioned above, a major issue of the evaluation is the unfair comparison to the baseline models like MagiCoder and WaveCoder. The authors need to make their synthetic dataset the same size as the baselines to eliminate this confounding factor. Otherwise, it is not convincing that EpiCoder's better performance is really due to the diversity and complexity of the synthetic dataset.
Since SelfCodeAlign also leverages code concepts to generate data and achieves good performance, it is a more related and state-of-the-art method to compare with.
The authors constructed a new benchmark called XFileDep for file-level code generation. Since there are many known class-level or repo-level code generation benchmarks, it is questionable why the authors chose to create a new benchmark instead of using existing benchmarks, especially given that XFileDep is not carefully evaluated. The authors should evaluate EpiCoder on at least one known and peer-reviewed class-level or repo-level benchmark.
In Section 4.1.2, the authors used GPT-4o to estimate code complexity in four dimensions. There is no evaluation of the accuracies of GPT-4o. If GPT-4o is not very accurate, it doesn't make much sense to analyze or interpret the results generated by GPT-4o.
Supplementary Material: I checked all appendices and they look fine.
Relation To Broader Scientific Literature: The proposed method can be inspirational to data synthesis research beyond code generation.
Essential References Not Discussed: As discussed in my comments to claims, a very related method is SelfCodeAlign. The authors should discuss this work and compare EpiCoder with SelfCodeAlign.
- Wei, Yuxiang, et al. "SelfCodeAlign: Self-Alignment for Code Generation." The Thirty-eighth Annual Conference on Neural Information Processing Systems.
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: The order of the appendices is confusing. It would be easier to follow if the authors first showed the prompts for each component in the data synthesis method, followed by additional experiments and results.
Questions For Authors: 1. How do you know that the better performance of EpiCoder is because of data complexity and diversity instead of data size?
2. How many generated code functions/files have a reasonable combination of features?
3. Why not use existing class-level or repo-level code generation benchmarks?
4. How many generated test cases are valid?
5. What is the accuracy of GPT-4o in estimating code complexity?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback and for recognizing the contribution of our work. Below, we address your key concerns.
The table link [https://anonymous.4open.science/r/epicoder_rebuttal-C619/tables.md](https://anonymous.4open.science/r/epicoder_rebuttal-C619/tables.md) contains all the tables referenced in our responses.
## Questions
### Q1: How do you know that the better performance of EpiCoder is due to data complexity and diversity instead of dataset size?
A: We acknowledge that data complexity, data diversity, and dataset size all contribute to performance improvements. To isolate the effect of dataset size, we randomly sampled 75K examples from EpiCoder and compared the results with MagiCoder (75K) and WaveCoder (20K + 110K). The results, presented in Table 1 of the table link, show improvements of 5.4% and 3.0%, respectively.
### Q2: How many generated code functions/files have a reasonable combination of features?
A: Our data generation process incorporates constraints to ensure feature compatibility, as detailed in Appendix C.4 (line 1663-1664). When generating tasks, the LLM selects a subset of features that are mutually compatible.
### Q3: Why not use existing class-level or repo-level code generation benchmarks?
A: Existing benchmarks present several limitations:
- Class-Level Benchmarks (e.g., ClassEval)
- These are essentially function-level generation or completion tasks, a simplification of our file-level code generation.
- Repo-Level Benchmarks face two major issues:
- Metrics such as Exact Match (EM) and Edit Similarity (ES) are only suitable for base models and not for content-rich instruct models (e.g., CrossCodeEval, RepoBench).
- EvoCodeBench focuses mainly on the local context (within the same file) and does not take into account the dependencies of the whole repository.
Our benchmark covers complete tasks and end-to-end generation in various instruction formats. Rather than focusing on function-level tasks, we prefer to evaluate the LLM's ability to generate complete files based on natural language instructions.
### Q4: How many generated test cases are valid?
- Manual Evaluation: We manually examined 30 data samples and found that all the generated test cases correctly reflected the task requirements. However, we observed that these test cases tend to be relatively simple and may not cover all edge cases. A concrete example is provided in Appendix C5.2 and C5.3.
- Pass Rate Improvement: According to our statistics, before the first refinement iteration, only 32% of the generated code passed all test cases. After three iterations, the pass rate increased to 61%, demonstrating that the test cases effectively filter out low-quality data and improve overall data quality, even if they do not guarantee 100% correctness.
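The pass-rate-driven refinement described above can be sketched generically; the `passes_tests` and `revise` callbacks below stand in for test execution and LLM-based repair, and are purely illustrative:

```python
def iterative_refinement(samples, passes_tests, revise, max_iters=3):
    """Keep samples that pass their generated tests; send failures
    back for revision, for up to `max_iters` rounds."""
    accepted = []
    for _ in range(max_iters):
        failing = []
        for s in samples:
            (accepted if passes_tests(s) else failing).append(s)
        samples = [revise(s) for s in failing]
        if not samples:
            break
    return accepted, samples

# Toy stand-ins: integers pass once they reach 3; "repair" increments.
accepted, still_failing = iterative_refinement(
    samples=[1, 3],
    passes_tests=lambda s: s >= 3,
    revise=lambda s: s + 1,
)
```

Each round filters out samples that still fail, so a rising pass rate across iterations (e.g., 32% to 61%) reflects both filtering and successful repairs.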
### Q5: What is the accuracy of GPT-4o in estimating code complexity?
A: To estimate the accuracy of GPT-4o in assessing code complexity, we conducted pairwise comparisons on a dataset using both human evaluation and GPT-4o, showing an average win rate of 84.4% and 74% for our data and a consistency of 79.375% between human evaluation and GPT-4o.
In Section 4.1, we use both GPT-4o and software engineering metrics to estimate code complexity, and both approaches consistently indicate the higher complexity of our data. We also assessed complexity using DeepSeek-V3-0324 and Llama3.1-70B-Instruct, following the same metrics as in Section 4.1.2. The results align with our original conclusions. Detailed results are in Tables 3-6 of the table link.
## Other Comments
### Comparison with SelfCodeAlign
Thank you for acknowledging our contribution. We will discuss SelfCodeAlign in our paper. As you mentioned, the key difference between EpiCoder and SelfCodeAlign is that EpiCoder organizes features into a tree structure and uses evolution to enrich features. To make a fair comparison, we first sample a subset of our data with the same size as the SelfCodeAlign dataset and then finetune CodeQwen-Base to compare with the results in their paper. Table 2 of the table link shows that our data yields a 4% improvement.
### Performance Improvement
As you mentioned, we have achieved a 2-3% improvement compared to the second-best baseline, which is already a notable gain at the instruction tuning stage. Additionally, the second-best baseline, Qwen2.5-Coder-7B-Instruct, utilizes tens of millions of synthetic instruction samples, as stated in Section 4.2 of their technical report, which further demonstrates the effectiveness of our data.
### Manual Check
During the whole pipeline, we incorporated manual inspection and optimization to ensure that the generated code is reasonable and coherent. We provide concrete examples in Appendix C.
### Order of Appendices
Thanks for your valuable suggestions and we will adjust it accordingly.
We hope our responses effectively resolve your concerns. | null | null | null | null | null | null |
Advancing Constrained Monotonic Neural Networks: Achieving Universal Approximation Beyond Bounded Activations | Accept (poster) | Summary: Authores generalize previous theoretical results, showing that MLPs with non-negative weight constraint and activations that saturate on alternating sides are universal approximators for monotonic functions. Additionally, they show an equivalence between saturation side in the activations and sign of the weight constraint. This allows them to prove that MLPs with convex monotone activations and non-positive constrained weights also qualify as universal approximators, in contrast to their non-negative constrained counterparts.
Experimental evaluation reinforce the validity of the theoretical results, showing that their approach compares favorably to traditional monotonic architectures
Claims And Evidence: Claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Proposed methods and/or evaluation criteria make sense for the problem or application at hand.
Theoretical Claims: I checked all proofs in the main part and did not have any issues.
Experimental Designs Or Analyses: Yes
Supplementary Material: I did not read the supplementary material
Relation To Broader Scientific Literature: I am not familiar enough with the broader scientific literature in this area to give a statement here.
Essential References Not Discussed: I am not familiar enough with the broader scientific literature in this area to give a statement here.
Other Strengths And Weaknesses: strengths:
- The problem considered is interesting
- The analysis is non-trivial and intuitive
- Experiments support the validity of the theoretical results
weaknesses:
- Results are restricted to monotonic functions
- The analysis is fairly simple
Other Comments Or Suggestions: For $h_j$ and $A_{j/i}$ it would be nice if $j$ were defined. The same goes for $n \geq 4$ in "our contributions".
Questions For Authors: no questions
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you very much for your positive feedback. We appreciate your suggestion and will clarify the definitions of the terms you've highlighted, in our revised manuscript.
If you have any additional recommendations or improvements you'd like to see to elevate your rating further, please let us know—we would greatly appreciate your advice.
Thank you again for your support and constructive comments. | Summary: This paper extends the body of work on monotonic neural networks.
It focuses on relaxing the existing constraints that limits the architecture, use, and performance of such networks.
Specifically, this work identifies limitations within the existing architecture including the use of threshold neurons, which limits the choice of activation functions and the expressivity of monotonic neural networks.
It also identifies potential benefits in reducing the required number of hidden layers for universal approximation with monotonic networks.
In addressing these limitations and others, this work presents theorems that carefully answer the research questions raised.
It proves that universal approximation theorem for non-threshold activation, thereby relaxing the constraint on the choice of activation functions for monotonic networks.
This work carefully reviews the non-negative constraint on the weights and claims that using a non-positive weight constraint is more expressive. It also presented a new parametrization method to better determine the weight constraint.
This work is a valuable contribution to the body of work on monotonic neural representation and its extension to the design of interpretable models.
Claims And Evidence: The claims and evidence are clear and convincing.
The presentation is clear and easy to follow.
The paper provides sufficient background information to understand this work.
It carefully presents and discusses the focus of this work under subsection 3.1, including a clear description of the limitations of existing monotonic MLPs.
Subsequently, the paper provides convincing evidence to support the central claim of addressing these limitations.
Subsections 3.2 provides the proofs that address the use of non-threshold activations and the required number of hidden layers for universal approximation.
Subsection 3.3 provides the justification for a new weight parametrization method.
Section 4 presents the new method for the weight parametrization.
Methods And Evaluation Criteria: The methods and evaluation criteria are relevant to the work in this paper.
Theoretical Claims: Yes, I checked the proofs for Theorem 3.5 and the Lemmas.
Experimental Designs Or Analyses: Yes, I checked the experimental analysis.
The experimental results are relevant and illustrate the benefit of the new approach proposed in this work.
Supplementary Material: Yes, I reviewed the supplementary material. I checked the proofs.
Relation To Broader Scientific Literature: The results in this paper are important to the broader scientific literature because they offer a new perspective on the design of monotonic neural architectures. This in turn contributes to the development of explainable models.
Essential References Not Discussed: None
Other Strengths And Weaknesses: ORIGINALITY: This work is original because it clearly identifies the limitations of existing monotonic neural architectures and seeks to address the identified issues.
SIGNIFICANCE: The potential impact of this work in designing easy-to-interpret models underscores its significance.
CLARITY: This paper is well-organized, and its presentation is very clear.
WEAKNESSES:
(1) You should consider additional experimental information to further strengthen the central claim of this work. (Check my comment under Questions For Authors)
(2) The definitions of some notations are not clear. Define the notations under the proofs. (Check my comment under Questions For Authors)
Other Comments Or Suggestions: Here are some suggestions on typos:
Line 426-426, Column 2, Section 6, Page 8
"We then use this theoretical analysis to construct of a novel parametrization that relaxes the weight constraint making the network less sensible to intialization."
I think "construct of a" should be "construct a". I think "sensible" should be "sensitive". I think "intialization" should be "initialization".
Line 055-056, Column 1, Section 2, Page 2
"Hard monotonicity instead gives guarantees by construction by imposing constraints in the model architecture"
This sentence is not clear to me. Please review.
I do not think the body / text of this paper references Figure 1 directly.
Questions For Authors: I think there are other experiments or discussions that could further strengthen the contributions and central claims of this paper.
(1). How do non-threshold and threshold activations compare on the training datasets?
(2). Can you illustrate the effect of the non-negative and non-positive weight constraints on the training dataset?
(3). What is the additional computational cost of the new weight parametrization method?
Please consider defining the hyperplanes $A_{i_2/i_1}^{+}$, $A_{i_2/i_1}^{-}$ under the Proof for Theorem 3.5 mathematically on pages 5 and 13 (Just like we have in Lemma 3.6). This will improve the readability of the proofs. Also consider adding the explicit definition of $A_{j/i}^{-}$ in equation (6).
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your constructive review and valuable comments. Below we address each of your concerns in detail:
- _Typos & Figure 1_:
Thank you for highlighting these issues. We have thoroughly revised the manuscript, corrected all identified typos, and will explicitly cite Figure 1 in the main text.
- _"How do non-threshold and threshold activations compare on the training datasets?"_:
We omitted this comparison since it has already been explored in [1] (under the name "Non-Neg-DNN"), where threshold activations were demonstrated to outperform non-threshold approaches. Our proposed method, in turn, matches or surpasses the results obtained by [1]. In the revised version, space permitting, we will add this result to Table 1.
- _"Can you illustrate the effect of the non-negative and non-positive weight constraints on the training dataset?"_:
We appreciate your question, but we would kindly ask for clarification on this point to ensure we accurately address your concern.
- _"What is the additional computational cost of the new weight parameterization method?"_:
This aspect is addressed at the end of Section 4.1 (around line 403). Practically, our proposed method introduces negligible overhead and is often computationally cheaper compared to sigmoid-constrained monotonic MLPs, as it avoids the computational costs associated with sigmoid activations and their gradients.
- _"Please consider defining the hyperplanes... Also consider adding the explicit definition of..."_:
Thank you for noting these ambiguous definitions. We will clearly define these terms in the revised manuscript to ensure precision and readability.
Your feedback significantly helps in improving our manuscript. ICML guidelines this year permit uploading revised manuscripts only during the second half of the reviewing process. We will therefore submit our revised paper as soon as the submission system allows. We hope these clarifications address your concerns and reinforce your positive assessment.
Thank you once again for your support and valuable suggestions.
[1] Runje, Davor, and Sharath M. Shankaranarayana. "Constrained monotonic neural networks." International Conference on Machine Learning. PMLR, 2023. | Summary: This paper proposes a novel Monotonic Neural Network as a universal approximator for monotone functions. Unlike previous works, this approach provides theoretical proof that the proposed Monotonic Neural Networks can serve as universal approximators and successfully removes the constraint of activation function boundedness. As a result, the proposed method enables ReLU-like activation functions to construct monotone networks with universal approximation properties.
Claims And Evidence: The motivation behind studying Monotone MLPs is not clearly stated. The paper lacks a compelling explanation of the practical importance of this research, especially given the current popularity of LLMs and modern deep-learning frameworks. Emphasizing the relevance and potential applications of Monotone MLPs would significantly enhance the paper’s impact. Without this context, the work risks being perceived as a theoretical exercise with limited practical utility.
Methods And Evaluation Criteria: seems good.
Theoretical Claims: The parameterization method described in Equation (12) effectively addresses the weight constraint and eliminates the need to alternate between activations and their point-reflected counterparts manually. However, this parameterization appears to alter the behavior and capacity of the resulting MLP significantly. Consider the following analysis:
- Suppose $x$ is given, and the entries of $W_1, W_2$ are drawn from the standard Gaussian distribution. For a one-layer MLP $f(x) = \mathrm{ReLU}(W_1x)$, each output coordinate satisfies:
$$
\mathbb{E}[f(x)] = \|x\|/\sqrt{2\pi}
$$
For a two-layer MLP $f(x) = \mathrm{ReLU}(W_2 \cdot \mathrm{ReLU}(W_1x))$, it follows that:
$$
\mathbb{E}[f(x)] = \frac{\|x\|\sqrt{d}}{2\sqrt{\pi}}
$$
where $d$ denotes the hidden layer dimension.
- Now consider the proposed parameterization: $f(x) = W^+_2 \cdot \mathrm{ReLU}(y) + W^-_2 \cdot \mathrm{ReLU}(-y)$, where $y = W^+_1 \cdot \mathrm{ReLU}(x) + W^-_1 \cdot \mathrm{ReLU}(-x)$. Then:
$$
\mathbb{E}[y] = \frac{\sum_i x_i}{\sqrt{2\pi}}, \quad \mathbb{E}[f(x)] = \frac{d \cdot \sum_i x_i}{2\pi}.
$$
From this, two key issues arise:
1. For a standard MLP, the network can inherently capture non-zero mean outputs. However, in your proposed MLP, if the input is zero-mean normalized, the network may fail to learn meaningful features in terms of $\mathbb{E}[f(x)] $.
2. Even if we assume that the summation and the norm are comparable in some intuitive sense, the proposed MLP expands the mean value magnitude by $O(d)$ from one layer to the next, whereas a standard MLP only scales by $O(\sqrt{d})$. This faster growth suggests that the proposed Monotonic MLP may suffer from instability and increased training difficulty.
In light of these concerns, I strongly recommend further discussion on how the proposed parameterization does not unintentionally introduce new challenges in model stability and learning effectiveness despite addressing gradient vanishing issues in some sense.
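The first-moment claims above are easy to sanity-check numerically. The following NumPy sketch (our own illustration, not taken from the paper; it assumes standard Gaussian weights and no biases, as in the reviewer's setup) compares Monte Carlo means against the closed-form expressions for the one-layer standard MLP and for the first layer of the split parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

d, trials = 64, 20_000
x = rng.normal(size=d)

# Standard one-layer MLP: w.x ~ N(0, ||x||^2), so E[ReLU(w.x)] = ||x|| / sqrt(2*pi)
W = rng.normal(size=(trials, d))
mc = relu(W @ x).mean()
theory = np.linalg.norm(x) / np.sqrt(2 * np.pi)

# Split parameterization, first layer: y = W+ ReLU(x) + W- ReLU(-x).
# Since E[max(w,0)] = -E[min(w,0)] = 1/sqrt(2*pi), E[y_j] = sum_i x_i / sqrt(2*pi)
Wp, Wm = np.maximum(W, 0.0), np.minimum(W, 0.0)
y = Wp @ relu(x) + Wm @ relu(-x)
theory_y = x.sum() / np.sqrt(2 * np.pi)

print(mc, theory)        # Monte Carlo vs. closed form
print(y.mean(), theory_y)
```

Both pairs agree to within Monte Carlo error, consistent with the reviewer's observation that the split parameterization tracks the sum of the inputs rather than their norm.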
Experimental Designs Or Analyses: The experimental evaluation is relatively small in scale. Although the authors compare their method with prior works that also use small-scale experiments, the diversity of datasets and task types is insufficient. To convincingly demonstrate the effectiveness and robustness of the proposed method, additional experiments on more complex and diverse tasks are recommended.
Supplementary Material: No.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: None
Other Comments Or Suggestions: Based on the above observations, the paper should provide a more rigorous theorem that formally establishes the universal approximation property of the proposed Monotonic Neural Networks under the parameterization described in Equation (12). The current discussion is somewhat informal and lacks the necessary theoretical depth to convincingly prove this property.
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your insightful review and constructive suggestions. Below, we address each of your points individually:
- _"Emphasizing the relevance and potential applications of Monotone MLPs would significantly enhance the paper’s impact."_:
We fully agree with your suggestion. Monotonic neural networks indeed have diverse applications beyond interpretability and fairness, such as their established use in quantile regression [1]. We will expand the paper to elaborate on the practical applications of Monotonic NNs, thereby better emphasizing the broader relevance of the topic.
- _"I strongly recommend further discussion on how the proposed parameterization does not unintentionally introduce new challenges in model stability and learning effectiveness."_:
Your point is well-taken and insightful. We will include additional analyses in the appendix examining this issue more closely. However, a thorough and detailed analysis of initialization methods would merit an entire standalone study, so we will also note it as future work. We also note that this potential concern is common among state-of-the-art architectures employing constrained weights and ReLU activations [2, 3] (for more details, see the next point).
- _"However, in your proposed MLP, if the input is zero-mean normalized, the network may fail to learn..."_ & _"This faster growth suggests that the proposed Monotonic MLP may suffer from instability and increased training difficulty."_:
We appreciate this important observation.
The practical implementation provided in our paper employs PyTorch’s default initialization (Kaiming uniform or Xavier), which does not strictly follow the analysis you provided. Nonetheless, results similar to yours can be obtained for this variant.
In the revised appendix, we will add further details addressing your concern, also taking biases into account and empirically comparing our setting to standard MLPs. To anticipate the results that will be reported in the paper: we analyzed the empirical expansion factor and, even though it is not the same as that of a standard MLP (as predicted by your analysis), it remains in a reasonable range.
- _"Additional experiments on more complex and diverse tasks are recommended."_:
As mentioned in the Introduction (around line 92), our primary aim is theoretical exploration rather than proposing a novel state-of-the-art architecture. While our theoretical results open opportunities for creating more expressive monotonic MLPs, we believe that the included benchmark comparisons are sufficient for validating our theoretical contributions. Nonetheless, we acknowledge the value of further experiments and agree they would enhance validation.
- _"The current discussion is somewhat informal and lacks the necessary theoretical depth to convincingly prove this property."_:
We agree that the discussion in Section 4.1 (around line 373) could benefit from a more rigorous presentation. We will carefully revise this section, within the constraints of available space, to ensure greater theoretical clarity and rigor.
ICML guidelines this year allow authors to submit revised manuscripts only during the second half of the review process. Accordingly, we will upload the revised version as soon as the submission system permits, with the requested improvements and analysis.
Your feedback is highly valuable and has helped us improve the manuscript significantly. We hope these clarifications address your concerns and reinforce your positive assessment.
Thank you once again for your constructive comments and support.
References:
[1] Chilinski, Pawel, and Ricardo Silva. "Neural likelihoods via cumulative distribution functions." Conference on Uncertainty in Artificial Intelligence. PMLR, 2020.
[2] Runje, Davor, and Sharath M. Shankaranarayana. "Constrained monotonic neural networks." International Conference on Machine Learning. PMLR, 2023.
[3] Kim, Hyunho, and Jong-Seok Lee. "Scalable monotonic neural networks." The Twelfth International Conference on Learning Representations. 2024. | Summary: This paper constructs universal approximators for monotonic functions with MLPs with non-negative weight constraint and activations that saturate on alternating sides. Based on the result, the paper shows MLPs with convex monotone activations and non-positive constrained weights can also be universal approximators. Furthermore, the authors proposes the pre-activation and post-activation formulations for monotonic ReLU and ReLU' MLPs to address the optimization challenge of monotonic neural networks constructed with weight constraints.
## update after rebuttal
My concerns are well addressed in the rebuttal. I have updated my score to 3.
Claims And Evidence: In Section 4, the paper claims the pre-activation and post-activation formulations can address the weight constraint challenge. However, it seems that the two formulations are not equivalent to the original monotonic ReLU MLP (line 344). Furthermore, to guarantee monotonicity and apply Theorem 3.5, the weight constraints still seem to be required.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The proofs seem correct.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes, but I did not check the proof details.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strength:
The paper constructs universal approximators for monotonic functions with one-sided saturating activations by simply switching non-negativity to non-positivity, which is concise and insightful.
Weakness:
The universal approximation result seems direct, considering the construction of the Heaviside function (Figure 2) and the universal approximation results for NNs in [1]. The advantages of the paper's results and proof techniques need to be highlighted.
[1] Hornik, Kurt, Maxwell Stinchcombe, and Halbert White. "Multilayer feedforward networks are universal approximators." Neural networks 2.5 (1989): 359-366.
Other Comments Or Suggestions: No.
Questions For Authors: 1. Can you clarify why the pre-activation and post-activation formulations are equivalent to the original monotonic ReLU MLP and address the weight constraint challenge? For example, give a concrete algorithm of optimizing the monotonic ReLU MLP with the proposed formulations.
2. Can you clarify the advantages of the paper's results and proof techniques? (see Weakness)
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your thoughtful comments and suggestions. Below, we address each of your points in detail:
- _"The two formulations are not equivalent to the original monotonic ReLU MLP."_:
It is unclear whether the "original monotonic ReLU MLP" refers specifically to the setting described in Theorem 1 or to the "naive" ReLU with non-negative weights. For the former case, at the end of page 7 (around line 381), we illustrate how the proposed formulation can be mapped back to the setup used in Theorem 1, establishing that the MLP in Theorem 1 is indeed a special instance of the general formulation we propose. Another reviewer also noted that our description of the former formulation lacked rigor and clarity. We acknowledge this concern, and we plan to significantly revise this section to enhance its clarity and precision in the updated manuscript.
In the latter case, if all weights are constrained to be non-negative, both formulations reduce equivalently to a "naive" ReLU MLP. However, this "naive" MLP lacks universal approximation capabilities, highlighting the generality and advantage of our pre- and post-activation formulation.
- _"The weight constraints still seem to be required."_:
In Section 4.1, we demonstrate that by explicitly splitting the weights into positive and negative parts, the need for explicit weight constraints is effectively removed. This is crucial to simplify the implementation, avoiding the activation alternation, and relaxing the need for explicit weight constraints.
- _"Provide a concrete algorithm for optimizing the monotonic ReLU MLP with the proposed formulations."_:
The proposed parametrization can straightforwardly be optimized using standard gradient-based optimization algorithms, consistent with typical neural network training. This is explicitly demonstrated in the provided reproducible code accompanying our submission.
- _Weakness 1 and "the advantages of the paper's results and proof techniques."_:
Existing proof techniques from [1] cannot directly provide a straightforward proof for constrained monotonic MLPs, primarily because they require negative weights in the proof construction$^*$. To overcome this limitation, we provided in Appendix A.1 an alternative, albeit naive, proof built upon the approach of Runje & Shankaranarayana (2023), in a similar spirit to the one you proposed. This alternative requires up to 8 layers (a loose bound), while our primary contribution (Theorem 1) significantly improves on this by establishing that only 4 layers are necessary.
We sincerely appreciate your valuable feedback and hope the clarifications above resolve your concerns and potentially improve your assessment. We believe these refinements strengthen our manuscript and illustrate its novelty and practical value.
Thank you once again for your time and constructive suggestions.
[1] Hornik, Kurt, Maxwell Stinchcombe, and Halbert White. "Multilayer feedforward networks are universal approximators." Neural networks 2.5 (1989): 359-366.
$^*$ from [1]: "By adding, subtracting and scaling a finite number of affinely shifted versions...". They take the difference of threshold functions to create Rect/Box functions. However, we cannot use the "subtract" part since it requires negative weights, while Theorem 1 necessitates non-negative weights.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, which addresses part of my concerns. I can now understand the paper's theoretical contribution of the universal approximation result.
However, I am still unclear about the pre-activation and post-activation formulations addressing the weight constraint challenge.
- While $f(x) = \text{ReLU}(|W| x + b)$ is always nonnegative, (11) and (12) are not. For example, in (11), let $W=-1, b=0$ and $x\to-\infty$; in (12), let $W=1, x=1$ and $b\to-\infty$. The two formulations are not equivalent to $f(x) = \text{ReLU}(|W| x + b)$.
- While these alternative formulations remove the need for explicit weight constraints, they involve rearranging the signs of the weight matrices. Since the motivation for avoiding explicit constraints is to ease optimization, it is unclear whether this rearrangement introduces new optimization challenges. Could you clarify whether this reparameterization changes the optimization landscape, and if so, how?
I would be willing to raise my score if these two issues are satisfactorily addressed.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer
Apologies for the late reply; we thought that in the second part of the review process we would be able to upload a revised version of the paper prepared to address your concerns. However, uploading a revised version is permitted only during the camera-ready phase, and only for accepted papers.
For this reason, we include directly in this response a summary of the sections we revised in the paper to address your concerns. Let us briefly recap the main points of the following response:
1. We do not aim to be equivalent to $\text{ReLU}(|W|x+b)$. We think this misunderstanding comes from the phrasing used in Section 4.1, which is the section we revised and summarize at the end of this reply. $\text{ReLU}(|W|x+b)$ is not a universal approximator; thus, being equivalent to it would imply that our approach is also not a universal approximator. Instead, it was just a way to introduce the reasoning behind the activation switch.
2. We do not need to rearrange the signs of the matrices explicitly; instead, we only need to split the matrix $W$ into its positive and negative parts. Furthermore, consider that in order to apply Theorem 3.5, you would need to use activations with alternating saturation sides. This might require additional care in the implementation, and possibly additional hyperparameter tuning. Furthermore, as highlighted by another reviewer, there might be concerns about the initialization. We have also prepared another section of the paper addressing this point, where we show that the proposed method, though it does not scale like a standard MLP, behaves much better than the naive weight constraint.
## **Summary of the revised Section 4.1** (addressing point 1):
Instead of constraining weights $f(x)={\sigma(|W| x + b)}$, we can separate ${W}$ into its positive and negative parts ${W^+= \max(W,0)}$ and ${W^-=\min(W,0)}$. This allows us to express the affine transformation as
$$
|W|x + b = W^+ x - W^- x + b.
$$
Applying the non-linearity to each term of the equation above individually instead of applying it to $|W|x$, and sharing the bias term, leads to the parametrization:
$$
\hat{f}(x) = \sigma(W^+ x + b) - \sigma(W^- x + b).
$$
**Proposition:**
Any function representable using an affine transformation with non-negative weights followed by either $\sigma$ or $\sigma'$ can also be represented using the equation above, up to an additive constant.
**Proof:**
If all entries of $W$ have the same sign, one of the two terms in the equation above collapses to ${\pm\sigma(b)}$. Specifically, when $W\ge 0$, the expression reduces to ${\sigma(|W| x + b)-\sigma(b)}$, while when $W\le0$ it reduces to ${\sigma(b)-\sigma(-|W| x + b)}$ instead. To conclude the proof, recall that ${-\sigma(-x) = \sigma'(x)}$.
The additive constant can be accounted for in the bias term of the following layer. Therefore, the proposition above covers both cases employed in Theorem 3.5, using $\sigma$ and $\sigma'$ as left-saturating and right-saturating activations.
This shows that an MLP obtained by stacking at least $4$ blocks parametrized as above is a universal approximator for monotonic functions. Hence, the proposed formulation is more expressive than a simple weight constraint, given that weight-constrained layers are only a special case of the equation above.
A similar reasoning can be applied to the alternative formulation working backward from the last layer of the network, leading to:
$$
\hat{f}(x) = W^+ \sigma(x) + W^- \sigma(-x) + b.
$$
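A minimal NumPy sketch of the pre-activation block above (an illustrative re-implementation under our own assumptions: ReLU as $\sigma$, four stacked blocks of width 16, unconstrained Gaussian weights; not the authors' code) shows that the split parameterization yields a monotone network without any weight clipping or activation alternation:

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

def monotone_block(x, W, b):
    # sigma(W+ x + b) - sigma(W- x + b): each output coordinate is
    # non-decreasing in each input coordinate, for any unconstrained W.
    Wp, Wm = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return relu(x @ Wp.T + b) - relu(x @ Wm.T + b)

rng = np.random.default_rng(0)
shapes = [(16, 1), (16, 16), (16, 16), (1, 16)]  # 4 blocks, as in the theorem
params = [(rng.normal(size=s), rng.normal(size=s[0])) for s in shapes]

def net(x):  # scalar-in, scalar-out monotone network
    h = np.array([[x]])
    for W, b in params:
        h = monotone_block(h, W, b)
    return float(h[0, 0])

xs = np.linspace(-3.0, 3.0, 201)
ys = [net(v) for v in xs]
assert all(a <= b + 1e-6 for a, b in zip(ys, ys[1:]))  # non-decreasing
```

Since a composition of coordinate-wise non-decreasing maps is non-decreasing, monotonicity holds by construction; the weights themselves remain unconstrained during optimization.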
## **Summary of the new appendix chapter** (addressing point 2):
From an empirical perspective, in [Figure](https://ibb.co/C5Z4gTPZ), we can observe the output distribution of a multilayer MLP under different parametrizations. Unconstrained refers to a standard MLP, Constrained Naive refers to an MLP with $|W|$ as weight parameterization, while pre-/post-activation refer to the proposed formulations. While a standard MLP always has a zero-mean output, the activation switch has a very slow tendency to increase the expected output from random initialization, as predicted by theory. However, this slight increase is notably far smaller than the one induced by naively constraining the weights to be positive.
Overall, it can be seen that the activation switch alleviates this behavior by a large factor, thus helping optimization and, in turn, performance, all without employing any specific initialization scheme.
Furthermore, evidence of this improved initialization can be seen in Figures 8, 9, and 10 of the uploaded paper, where we show how a naive weight constraint tends to produce exploding gradients in deeper ReLU MLPs and vanishing gradients with sigmoid activations.
We feel that initialization is a fundamental and interesting direction, but one that deserves a dedicated study of its own. We will therefore note it as an opportunity for future research.
Zero-Shot Generalization of GNNs over Distinct Attribute Domains | Accept (poster) | Summary: The authors study the problem of generalizing GNNs to new graphs that have distinct node/edge attributes. This is a very important problem when attempting to create graph foundation models, as different graphs will often have very different attributes. These attributes will differ not only in dimension size, but in semantic meaning as well. The authors propose a new technique, STAGE, to handle this problem. It first works by constructing a new graph for each node pair (u, v) based on the node attributes (i.e., a "STAGE graph"). Each node corresponds to one attribute and the graph is fully connected. The "node attributes" in the STAGE graph consider the pdf between node attributes. A GNN is run on the STAGE graph for each edge to produce an edge embedding. A second GNN is then run on the original graph where the edge weights are the edge-level embeddings from the previous stage (original attributes are discarded). The authors study the generalization ability of STAGE, showing that it can generalize to unseen node attributes better than baselines.
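Schematically, the two-stage pipeline summarized above can be caricatured as follows (a toy sketch with made-up components: the outer-product statistic and mean/std pooling merely stand in for STAGE's conditional-pdf features and its first GNN; it only illustrates that the edge embedding size is independent of the attribute dimension):

```python
import numpy as np

rng = np.random.default_rng(0)

def stage_edge_graph(xu, xv):
    # One "node" per attribute pair (i, j) of endpoints u, v; the fully
    # connected structure is implicit. The placeholder feature xu[i] * xv[j]
    # stands in for the pairwise dependence statistics used by STAGE.
    return np.outer(xu, xv)

def edge_embedding(feats):
    # Stand-in for GNN #1: a permutation-invariant readout over the
    # attribute-pair graph, producing a fixed-size edge embedding.
    return np.array([feats.mean(), feats.std()])

# Endpoint attributes from two "domains" with different dimensionality:
e1 = edge_embedding(stage_edge_graph(rng.normal(size=5), rng.normal(size=5)))
e2 = edge_embedding(stage_edge_graph(rng.normal(size=9), rng.normal(size=9)))
assert e1.shape == e2.shape == (2,)  # same size regardless of attribute count
```

These fixed-size edge embeddings would then serve as edge weights for the second GNN on the original graph, with the raw attributes discarded.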
Claims And Evidence: Overall I believe the claims are well supported by evidence. Theoretically, they show that STAGE has good transferability potential. They further include experiments that seem to demonstrate empirically.
However, currently the authors only show that can transfer models within **graph domains**. Note that this is distinct from attribute domains. For example, all the E-commerce datasets are E-commerce graphs (with the same being said for H&M). Furthermore, Pokec and Friendster are both social networks. As such, even though they contain distinct attributes there is still overlap in the semantic meaning of many node attributes.
This is perfectly fine, however I think it's worth emphasizing that the current results don't show the ability of STAGE to transfer across domains (e.g., train on E-commerce and test on a social network). If possible, I encourage the authors to add such results.
Methods And Evaluation Criteria: I think the GNNs used for STAGE and the baselines compared against are suitable. While other baselines exist, the authors choose those that are most common. Furthermore, the evaluation criteria is well aligned with common practice.
Theoretical Claims: I read them, however, I didn't look at them in detail.
Experimental Designs Or Analyses: Most of the experiments are well-designed with good analyses.
However, I think it can be improved in a few instances:
1. The authors argue that STAGE is the only method to consistently increase in performance when considering more graphs (Figure 4). However, this doesn't seem to be true, as NBFNet-Gaussian still increases. The authors mention that NBFNet-Gaussian seems to plateau after 3 graphs. However, the same can be said for STAGE. In reality, the mean performance of STAGE when going from 3 to 4 graphs is barely different.
2. In Table 1 the authors only show the performance when holding out E-Commerce Store and H&M. It would be better if they can also include the raw results when holding out the other 4 E-Commerce datasets. It's fine if this is in the Appendix. I think it would be good to see how consistent this observation is across different hold-out datasets.
Supplementary Material: I looked at all of it. However, for the theoretical proofs, I did not read them in detail.
Relation To Broader Scientific Literature: This paper is well-situated in the current field of Graph ML. A big impediment to the creation of graph foundation models is that different graphs can contain wildly different node features/attributes. As such, a method that can help alleviate this problem is highly sought. As such, I believe this paper should hold great interest to many in the field of Graph ML.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: I include a few other weaknesses below:
1. I think the current efficiency results (Tables 9 and 10) downplay the inefficiency of STAGE. As noted, STAGE is quadratic with regards to the number of node attributes (during graph construction and STAGE-edge inference). The current results only show a slight increase in runtime. However, this is mainly due to the fact that the current graphs tend to have very few node attributes (at most 16). However for graphs with more node attributes, STAGE will quickly become impractical to run. This can be seen by the fact that the authors reduce the number of attributes in the original Friendster dataset (644) to a much smaller number.
This isn't inherently a bad thing. However, this severely limits the real-world potential of STAGE, as it can only handle graphs with few attributes. For example, we can look at the OGB datasets [1], which are common graph datasets. ogbn-arxiv, which is used for node classification has 128 node features. For link prediction datasets, ogbl-collab and ogbl-citation2 have 128 features while ogbl-ppa has 58. It would be impractical to run STAGE on any of these methods (irrespective of graph size).
2. Currently STAGE requires separating attributes into those that are ordered and unordered. As such, some manual pre-processing must be done beforehand. In fact, for some datasets it may not even be known whether some attributes are truely ordered or unordered. This can further be quite tedious when many node attributes exist. However, this is a smaller weakness as it only needs to be done once.
[1] Hu, Weihua, et al. "Open graph benchmark: Datasets for machine learning on graphs." Advances in neural information processing systems 33 (2020): 22118-22133.
Other Comments Or Suggestions: 1. I think some more intuitive explanation can be given for why STAGE considers the conditional probabilities of different node attribute types between nodes. It becomes clearer after reading Section 3. However, when reading Section 2, it's unclear why STAGE is designed in this manner. It may help if the authors can provide a brief intuitive explanation early in Section 2 to better motivate this design before jumping into the details.
Questions For Authors: 1. Have you tested the ability of STAGE to generalize across *graph domains* (e.g., from E-commerce and social networks)? If not, would it be possible to test this using additional datasets? To be clear, this experiment isn't *necessary*. However, I think if STAGE can show such transferability, it would greatly enhance the utility of the method. As currently, the authors only show that STAGE can generalize across *attribute domains* among graphs in the same domain (e.g., E-Commerce).
2. Would you be able to include the results when holding out each of the 5 E-Commerce dataset?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer for recognizing that our paper “should hold great interest to many in the field of Graph ML” and that STAGE can help alleviate the significant “impediment to the creation of graph foundation models.” We now address their remarks.
**Q1.** “There is still overlap in the semantic meaning of many node attributes.”
**A1:** Our Ecommerce dataset comprises five product categories with semantically different attributes. Even when attributes appear semantically similar, they represent fundamentally different attribute domains with different values and distributions. For instance,
- Bed material can be Wood, Metal, Composite
- Shoe materials can be Leather, Synthetic, Canvas
This highlights STAGE’s ability to transfer across attribute domains that do not overlap in the semantic meaning, making it well-suited for diverse real-world applications.
**Q2.** “Have you tested the ability of STAGE to generalize across graph domains (e.g., from E-commerce and social networks)? If not, would it be possible to test this using additional datasets? To be clear, this experiment isn't necessary.”
**A2:** Our E-commerce datasets focus on temporal link prediction, while the social networks involve node classification. The social network data lacks edge creation times, so we cannot perform the same link prediction task. Additionally, the social network classification task is binary, while the E-commerce data does not have a directly comparable binary task. That said, we share the reviewer’s curiosity, and if they have suggestions for a suitable shared task on a different dataset, we would be happy to explore them.
**Q3.** “The authors mention that NBFNet-Gaussian seems to plateau after 3 graphs. However, the same can be said for STAGE. In reality, the mean performance of STAGE when going from 3 to 4 graphs is barely different.”
**A3:** Thank you, we will revise our statement to be more precise. While not the only method showing improvement, STAGE demonstrates particularly favorable scaling properties with additional graph domains. Specifically, STAGE exhibits notably tighter interquartile ranges compared to NBFNet-Gaussian at higher domain counts, suggesting more reliable performance across different domain combinations. Additionally, STAGE's lower whiskers consistently rise with more domains, showing that even its worst-case scenarios improve with more training data - a pattern less pronounced in NBFNet-Gaussian.
**Q4.** “Would you be able to include the results when holding out each of the 5 E-Commerce dataset?”
**A4:** We will include them in the revision.
**Q5.** “STAGE is quadratic with regards to the number of node attributes” “This severely limits the real-world potential of STAGE, as it can only handle graphs with few attributes.”
**A5:** While the quadratic complexity may be perceived as a limitation, we argue that it does not substantially diminish the usefulness of STAGE. In fact, numerous popular machine learning algorithms exhibit similar complexity constraints, including the Transformer Attention mechanism, which has a quadratic constraint. In relational deep learning, methods designed for small data thrive and make huge impacts on the real world. A recent example is TabPFN [1], a foundation model for tabular data that has found extensive scientific and business applications in real-world settings despite being designed for small datasets. Certain eigen-decomposition-based GNNs also have quadratic or cubic complexity.
[1] Hollmann et al., 2025. Accurate predictions on small data with a tabular foundation model
**Q6.** “STAGE requires separating attributes into those that are ordered and unordered. As such, some manual pre-processing must be done beforehand.” “This can further be quite tedious when many node attributes exist. However, this is a smaller weakness as it only needs to be done once.”
**A6:** We agree that this step may require upfront effort. However, it's worth noting that for categorical attributes, this processing can be done in lieu of mapping them into one-hot encodings, which is a common practice in ML pipelines. Additionally, for unordered attributes that are not categorical, it is likely that a pipeline already exists and can be repurposed.
**Q7.** “Some more intuitive explanation can be given for why STAGE considers the conditional probabilities of different node attribute types between nodes.”
**A7:** Thank you, we will include it in the revision. The key intuition is that zero-shot generalization requires focusing on statistical relationships between attributes rather than absolute values. By modeling conditional probabilities, STAGE recognizes patterns across different attribute spaces. For instance, instead of learning specific rules like 'phones with 8GB RAM tend to be expensive,' STAGE learns abstract relationships such as 'high values in X_1 correlate with high values in X_2,' enabling knowledge transfer across domains with different attributes.
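As a toy illustration of this intuition (our own sketch, not the paper's estimator), a rank-based statistic such as "probability that attribute $X_2$ is above its median given that $X_1$ is above its median" depends only on the joint dependence structure, so it is unchanged under a change of units or attribute domain:

```python
import numpy as np

rng = np.random.default_rng(0)

def high_given_high(a, b):
    # Empirical P(b above its median | a above its median): a scale-free
    # dependence statistic in the spirit of STAGE's conditional probabilities.
    ha, hb = a > np.median(a), b > np.median(b)
    return (ha & hb).sum() / ha.sum()

a = rng.normal(size=1000)
b = a + rng.normal(size=1000)                 # positively dependent attribute

s1 = high_given_high(a, b)
s2 = high_given_high(100 * a + 5, np.exp(b))  # different "units" / domain
assert abs(s1 - s2) < 1e-9  # identical: raw attribute values are irrelevant
assert s1 > 0.5             # captures "high X1 correlates with high X2"
```

Because the statistic is invariant to monotone rescaling of each attribute, the same learned relationship transfers across attribute spaces with entirely different values and meanings.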
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications and promise of revisions. I think they will help strengthen the paper. I'll keep my positive score.
> That said, we share the reviewer’s curiosity, and if they have suggestions for a suitable shared task on a different dataset, we would be happy to explore them.
My apologies, but I can't think of any dataset that has a suitable number of node features for STAGE (most common Graph ML datasets tend to have >100 node features). Unrelated to this review, I think it would be interesting if the authors can find such a dataset. If STAGE can generalize across such datasets, that would be quite noteworthy.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and for maintaining your positive score. We appreciate your curiosity and are actively exploring datasets that could demonstrate generalization from e-commerce to social networks. If we identify appropriate ones, we will make sure to include them in the revision. | Summary: The paper introduces STAGE, a novel framework designed to overcome the challenge that traditional GNNs face when node attributes in test graphs differ from those seen during training. Rather than relying on raw attribute values, STAGE computes statistical dependencies between pairs of attributes by constructing a dedicated “STAGE-edge-graph” for each edge. A two-stage GNN approach is then applied: one GNN extracts edge embeddings from these graphs, and a second GNN uses these embeddings (while discarding the original node attributes) to produce the final graph representation. The method is underpinned by theoretical analysis and is validated through experiments across diverse datasets.
## Update after Rebuttal
Most of my concerns have been addressed, and I will maintain my current score.
Claims And Evidence: The paper’s main claims—that STAGE can achieve zero-shot generalization by learning domain-independent statistical dependencies between node attributes—are supported by both rigorous theory and comprehensive experiments. The theoretical claims (e.g., Theorems 3.2–3.4) are backed by detailed proofs (in Appendix B) and the experimental results show significant improvements (up to 103% gain in Hits@1 for link prediction and improved node classification accuracy) compared to baselines. One minor concern is that the evidence is demonstrated on small to medium-sized datasets; thus, the scalability and robustness on very large graphs remain somewhat unverified.
Methods And Evaluation Criteria: The proposed method—constructing STAGE-edge graphs that capture pairwise statistical dependencies and using a two-stage GNN framework—is innovative and well aligned with the problem of handling distinct attribute domains. Overall, the methodological choices and evaluation setup are sensible, though the quadratic complexity for the number of attributes may limit its applicability in larger-scale settings.
Theoretical Claims: I examined the theoretical contributions, particularly the proofs related to Theorems 3.2, 3.3, and 3.4. These proofs appear rigorous and well-founded.
Experimental Designs Or Analyses: Yes, I checked the soundness of the experiment section.
Supplementary Material: I reviewed several parts of the supplementary material, including:
1. The detailed pseudocode for the STAGE-edge-graph construction and the forward pass (Appendix A).
2. Extended experimental results and sensitivity analyses (Appendices D, E, F, and H).
3. The complete proofs and additional theoretical discussions (Appendix B).
Relation To Broader Scientific Literature: The paper is well-contextualized within the literature on graph neural networks, domain adaptation, and zero-shot learning.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: This paper’s strength lies in its innovative STAGE framework, which unifies heterogeneous node attributes via statistical dependencies backed by rigorous theory and extensive empirical validation across diverse datasets. For the weaknesses:
1. The approach is quadratic in complexity concerning the number of attributes, which may limit its applicability to very large graphs or datasets with high-dimensional attribute spaces.
2. While experiments on small to medium-sized datasets are promising, how well STAGE scales and performs on much larger real-world graphs remains to be seen.
Other Comments Or Suggestions: No.
Questions For Authors: 1. Could the authors provide an empirical comparison or toy visual example that highlights how order statistics capture invariant relationships better than normalized raw values in Sec. 3.1? How does the use of order statistics handle outliers or tied values in attribute distributions?
2. "Then, by dropping the attribute identifiers in STAGE-edge-graphs, we sacrifice maximal expressivity but ensure that STAGE is invariant to permutations of the attribute order. "Could the authors provide more theoretical or experimental illustration on how this loss of identifiers affects model expressiveness, especially in cases where attribute order might carry implicit information?
3. Could the authors discuss how sensitive STAGE is to the diversity of training domains theoretically or experimentally?
4. The limitation of STAGE seems not to be discussed in the manuscript.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer for recognizing our theory is “rigorous and well-founded.” We now address their comments.
**Q1.** “The approach is quadratic in complexity concerning the number of attributes, which may limit its applicability to very large graphs or datasets with high-dimensional attribute spaces.”
**A1:** While the quadratic complexity may be perceived as a limitation, we argue that it does not substantially diminish the usefulness of STAGE. In fact, numerous popular machine learning algorithms exhibit similar complexity, including the Transformer attention mechanism, which is quadratic in the sequence length. In the domain of relational deep learning, methods designed for small data thrive and make a huge impact on the real world. A recent example is TabPFN [1], a foundation model for tabular data that has found extensive scientific and business applications in real-world settings despite being designed for small datasets. Certain eigendecomposition-based GNNs also have quadratic or cubic complexity.
[1] Hollmann et al., 2025. Accurate predictions on small data with a tabular foundation model
**Q2.** “Could the authors provide an empirical comparison or toy visual example that highlights how order statistics capture invariant relationships better than normalized raw values in Sec. 3.1? How does the use of order statistics handle outliers or tied values in attribute distributions?”
**A2:** Thanks for the suggestion. Next, we present a toy example adapted from the Ecommerce dataset to illustrate why order statistics are superior to normalization for domain transfer.
Domain 1 (Train - Computers have attribute power_supply_watts):
- Computer A: 800 W
- Computer B: 600 W
- Computer C: 450 W
- Computer D: 300 W
Domain 2 (Test - Refrigerators have attribute energy_rating):
- Refrigerator A: 4.0 [A]
- Refrigerator B: 3.0 [B]
- Refrigerator C: 2.0 [C]
- Refrigerator D: 1.0 [D]
Now assume there exists a correspondence between Computer A and Refrigerator A, Computer B and Refrigerator B, and so on. This correspondence arises because users who prefer high-powered computers tend to select refrigerators with better energy ratings. When encoding these attributes, we need a representation that preserves this user preference pattern across domains, despite the different attribute scales.
With z-score normalization, the values become different across domains:
- Product A: 1.42 vs 1.34
- Product B: 0.34 vs 0.45
- Product C: -0.47 vs -0.45
- Product D: -1.28 vs -1.34
With order statistics, namely STAGE, *the values remain identical* in both domains:
- Product A: 0.25 (1/4 products)
- Product B: 0.5 (2/4)
- Product C: 0.75 (3/4)
- Product D: 1.0 (4/4)
Therefore, while normalization produces different values for the corresponding products, order statistics preserve consistency across domains by capturing ranking information, which is invariant to monotonic transformations. Additionally, by focusing on ranks, order statistics are inherently robust to outliers, and ties are handled by assigning the average rank to all tied items.
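The contrast above can be checked numerically. Below is a minimal sketch (helper names are ours, not from the paper) that reproduces the toy example, counting the fraction of products with a value at least as large as each item's so that the top product gets 0.25 and the bottom gets 1.0, matching the listing above; ties would conventionally receive the average rank:

```python
import statistics

def zscore(vals):
    # standard z-score normalization (population std)
    m, s = statistics.mean(vals), statistics.pstdev(vals)
    return [(v - m) / s for v in vals]

def order_stat(vals):
    # fraction of items with value >= v (ties would get the average rank)
    n = len(vals)
    return [sum(1 for u in vals if u >= v) / n for v in vals]

watts = [800, 600, 450, 300]     # Domain 1: power_supply_watts
rating = [4.0, 3.0, 2.0, 1.0]    # Domain 2: energy_rating

# z-scores disagree across domains (1.42 vs 1.34 for Product A, ...),
# while order statistics coincide exactly: [0.25, 0.5, 0.75, 1.0] in both
assert order_stat(watts) == order_stat(rating)
```

Because the order statistics depend only on ranks, any monotonic rescaling of either attribute leaves the encoding unchanged, which is exactly the invariance the rebuttal argues for.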
**Q3.** “Could the authors provide more theoretical or experimental illustration on how this loss of identifiers affects model expressiveness, especially in cases where attribute order might carry implicit information?”
**A3:** Without attribute identifiers, the model may lose the ability to distinguish attributes. For instance, consider two edges e1 and e2 in the original graph, and two features f1 and f2. If f1 has CDF value v1 in e1 and v2 in e2, while f2 has CDF value v2 in e1 and v1 in e2, with all other features having identical CDFs in both edges, then the two STAGE edge-graphs become isomorphic. In this case, STAGE produces identical edge embeddings for e1 and e2, even though the actual attribute distributions differ, limiting the model's expressivity and ability to distinguish them. However, our empirical analysis suggests this theoretical limitation has minimal practical impact. As shown in Figure 6, STAGE effectively captures meaningful attribute relationships even without explicit identifiers.
**Q4.** “Could the authors discuss how sensitive STAGE is to the diversity of training domains theoretically or experimentally?”
**A4:** Figure 4 shows that STAGE benefits from diverse training domains. STAGE’s performance consistently improves as we increase the number of training domains, with higher median values and tighter confidence intervals for both Hits@1 and MRR metrics. This suggests that STAGE effectively leverages the diversity in multiple graph domains to learn more robust and transferable representations. In contrast, the other baselines show minimal or inconsistent improvements with additional domains.
**Q5.** “The limitation of STAGE seems not to be discussed in the manuscript.”
**A5:** We acknowledge the limitations of STAGE on larger graphs in Sections 1 and 4. We appreciate the reviewer's suggestion and will further clarify them in the revision. | Summary: This paper studies the zero-shot generalization of GNNs under the shift in attribute domains. They propose the STAGE algorithm that aims to model the statistical dependencies between node attributes that can be invariant across domains instead of the original node attribute. Specifically, STAGE creates the edge graph for each edge that models the conditional probabilities between attribute pairs from each edge end points. Then, they use one GNN to generate edge embeddings based on edge graph probabilities and another GNN to output the embeddings for task. The paper also includes theoretical justification in terms of the expressiveness and transferability of STAGE. Lastly, the experiments on various node classification and link prediction tasks show the effectiveness of STAGE.
Claims And Evidence: **Strength:**
- Investigating the change of attribute domains can be useful in many real world adaptation scenarios
- The idea of statistical dependencies between attributes rather than raw features are interesting and novel
**Weakness / Questions:**
- This paper only focuses on the shift in attribute domains, but in reality it is rather rare that graphs shift only in terms of the node attribute domains. How do you view the effectiveness of this method under the structure shift and will STAGE become problematic under the presence of structure shift?
- The paper claims to work only for small to medium datasets with a controlled number of attributes, which limits the practical usage of the algorithm.
Methods And Evaluation Criteria: **Strength:**
- Interesting idea and clear explanation in the methodology section.
- The design is not restricted to a specific type of shift in attribute domains and considers detailed settings such as unordered and ordered attribute types.
- Applicable to different types of tasks, e.g., node classification and link prediction.
**Weakness / Questions:**
- The complexity of creating edge graphs remains concerning and limits usage on large datasets
- The current design only considers the pairwise relations, which is a rather simplified version of the attribute hypergraph
Theoretical Claims: **Strength:**
- Detailed and extensive theoretical motivation of STAGE
**Question:**
- The theory could connect better to the actual design of the STAGE algorithm. For instance, what design corresponds to "assigning unique attribute identifiers to label the nodes of our STAGE-edge-graphs" and what design corresponds to "dropping the attribute identifiers"?
Experimental Designs Or Analyses: **Strength:**
- The empirical improvements are significant
- The datasets span different types of tasks and domains
**Weakness:**
- Limited dataset sizes, as mentioned above
- The baselines are rather simple, and potentially relevant baselines are missing. Even if this paper targets attribute domain shift specifically, it might still be interesting to compare to foundation models that claim to have zero-shot generalization ability. Also, is it possible to compare with graph OOD works in similar settings?
Supplementary Material: Briefly went through the datasets and discussion of foundational models.
Relation To Broader Scientific Literature: The key contribution is the idea of statistical dependencies that might be transferable under a change of attribute domains.
Essential References Not Discussed: The discussion would be more comprehensive with more in-depth coverage of graph generalization under distribution shifts, such as graph OOD. Also, the works included in the discussion are rather dated. Specifically, you could discuss the similarities and differences in the shifts considered and how your method compares to previous literature.
Other Strengths And Weaknesses: Please refer to the above sections.
Other Comments Or Suggestions: N/A
Questions For Authors: Please refer to the above sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s recognition that STAGE “can be useful in many real-world adaptation scenarios,” and thank them for acknowledging STAGE’s versatility, as well as for their appreciation of our theoretical motivation and experiments. We now address their comments.
**Q1:** “...It might still be interesting to compare to foundation models that claim to have zero-shot generalization ability.”
**A1:** We already include GraphAny in Table 2, a baseline claimed to have zero-shot generalization ability. Per your suggestion, we include another foundation model, namely GCOPE [1] in Table 2 (reporting zero-shot test accuracy on Pokec, trained on Friendster), and report the results below. Notably, **STAGE outperforms the GCOPE foundation model by 21.9%.**
| | **Accuracy** (↑) | **% gain** |
|--|--|--|
| GraphAny | 0.591 ± 0.0083 | 10.3% |
| GCOPE | 0.535 ± 0.0153 | 21.9% |
| STAGE | 0.652 ± 0.0042 | 0% |
**Q2:** “How do you view the effectiveness of this method under the structure shift and will STAGE become problematic under the presence of structure shift?” And “Is it possible to compare with graph OOD works in the same settings?”
**A2:** Our experiments account for substantial structural variation, as shown in Table 3 (Appendix C), where the number of nodes, edges, and average degrees often double from training to testing. Despite this, STAGE consistently outperforms other baselines in all zero-shot settings (Figure 3), demonstrating its effectiveness in scenarios where both the attribute domain and the structure shift.
Nonetheless, we acknowledge that STAGE does not make explicit assumptions about structural shifts, but it also does not impose constraints on them, and can be integrated with any GNN supporting input edge embeddings. We anticipate that integrating STAGE with existing graph-OOD GNNs, which are explicitly designed for structural shifts (e.g., varying graph sizes), could further enhance performance in these settings.
Finally, to the best of our knowledge, existing graph OOD methods (e.g., those in [2]), focus on structural shifts but do not address the attribute space shifts we consider, which include changes in the number of attributes between train and test. As a result, **existing graph OOD methods cannot be evaluated in our setting**. We will clarify this distinction in our revision and include related works on graph OOD.
**Q3:** “The complexity of creating edge graphs still remains concerning and limits the usage on large datasets.”
**A3:** The computational complexity of STAGE is linear in the number of edges and quadratic in the number of attributes, rendering it particularly suitable for small-to-medium datasets. While this characteristic may be perceived as a limitation, we argue that it does not substantially diminish the usefulness of STAGE. In fact, numerous popular machine learning algorithms exhibit similar complexity, including the Transformer attention mechanism, which is quadratic in the sequence length. In the domain of relational deep learning, methods designed for small data thrive and make a huge impact on the real world. A recent example is TabPFN [3], a foundation model for tabular data that has found extensive scientific and business applications in real-world settings despite being designed for small datasets. Certain eigendecomposition-based GNNs also have quadratic or cubic complexity.
**Q4:** “The current design only considers the pairwise relations, which is a rather simplified version of the attribute hypergraph”
**A4:** Our STAGE-edge-graph captures marginal probabilities (e.g., P(A), P(B), P(C)) and pairwise conditional probabilities (e.g., P(A|B)) among attributes of node pairs. However, we agree that this representation requires additional assumptions about independence to recover joint probabilities of three or more attributes. While extending STAGE to incorporate these higher-order interactions is theoretically possible, it would result in increased complexity, whose exploration represents an important research direction on its own.
**Q5:** “What design corresponds to assigning unique attribute identifiers to label the nodes of our STAGE-edge-graphs and what design corresponds to dropping the attribute identifiers?”
**A5:** In a STAGE-edge-graph, labeling each node with the attribute ID of an endpoint node corresponds to assigning unique attribute identifiers. In contrast, our design of representing each node solely by its probability density function, without the attribute ID, corresponds to dropping the attribute identifiers. We will incorporate it in the next revision, thank you.
[1] Zhao et al., 2024. All in One and One for All: A Simple yet Effective Method towards Cross-domain Graph Pretraining
[2] Zhang et al., 2025. A Survey of Deep Graph Learning under Distribution Shifts: from Graph Out-of-Distribution Generalization to Adaptation
[3] Hollmann et al., 2025. Accurate predictions on small data with a tabular foundation model
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal, it addressed some of my questions so I raise my score to 3.
---
Reply to Comment 1.1.1:
Comment: We appreciate your consideration of our rebuttal and the increased score. In our revised manuscript, we will incorporate the additional GCOPE foundation model comparison, clarify the distinction between our approach and graph OOD methods, and elaborate on the technical details of our STAGE-edge-graph implementation as promised. | Summary: The paper introduces STAGE (Statistical Transfer for Attributed Graph Embeddings) to enhance the zero-shot generalization capabilities of Graph Neural Networks (GNNs). It represents node attributes using order statistics, treating node features as random variables and reconstructing them into a STAGE-edge-graph based on the probability density functions between the attributes of two nodes connected by an edge. The method employs a two-stage GNN to obtain edge embeddings and graph representations, capturing statistical information while ignoring numerical attribute values. This approach enables zero-shot generalization across datasets with different categories, names, semantics, and cardinalities. Experiments focus on small-to-medium-sized GNNs, aiming to test generalization across different datasets within the same domain.
Claims And Evidence: The claims made in the paper are supported by referenced literature and experimental data.
Methods And Evaluation Criteria: The STAGE method improves the generalization of GNNs across different datasets within the same domain (e.g., e-commerce, product recommendation): there is a shift in feature distributions between training and test sets, but the task objectives remain the same. Compared to baseline models, STAGE shows significant improvements. The evaluation criteria involve Hits@1 and MRR metrics for link prediction and node prediction tasks, providing convincing evidence.
Theoretical Claims: The paper models graph features using order statistics from statistics, with detailed theoretical derivations that do not appear to contain errors.
Experimental Designs Or Analyses: The main experiment focuses on link prediction, training on the E-Commerce Stores dataset and testing on untrained domains (e.g., training on bed and desk domains, testing on refrigerators and smartphones) and the H&M Personalized Fashion Recommendations dataset. Comparisons are made against various baseline methods (linear mapping, Gaussian noise, pure structural modeling, textual modeling, normalized features, and supervised structural modeling) on a unified GNN (NBFNet), measuring Hits@1 and MRR metrics, which is reasonable.
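As background for the Hits@1 and MRR metrics used in these experiments, here is a minimal generic sketch (with made-up rank values, not the paper's results) of how they are typically computed from the rank of the true target among candidate predictions:

```python
def hits_at_k(ranks, k=1):
    # fraction of test queries whose true item is ranked within the top k
    return sum(r <= k for r in ranks) / len(ranks)

def mrr(ranks):
    # mean reciprocal rank of the true item (rank 1 is best)
    return sum(1.0 / r for r in ranks) / len(ranks)

ranks = [1, 3, 2, 1, 10]     # hypothetical rank of the true link per query
print(hits_at_k(ranks, 1))   # 0.4
print(mrr(ranks))            # (1 + 1/3 + 1/2 + 1 + 1/10) / 5 ≈ 0.587
```

Both metrics are higher-is-better; MRR rewards near-misses (rank 2 or 3) that Hits@1 ignores, which is why the two are usually reported together.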
Supplementary Material: The supplementary material includes pseudocode for the main method, detailed steps of theoretical proofs, detailed descriptions of datasets, results of secondary experiments, ablation study results, and further discussions.
Relation To Broader Scientific Literature: The proposed STAGE method models graph data from a non-parametric statistical perspective, seeking invariant statistical properties of graphs to improve cross-domain generalization of GNNs, marking a significant step towards foundational graph models.
Essential References Not Discussed: The paper does not omit any essential related references.
Other Strengths And Weaknesses: The core of the STAGE method involves modeling and reconstructing each edge in the original graph into a STAGE-edge-graph, which increases computational overhead, limiting experiments to small-to-medium-sized graphs and preventing scalability to larger graphs.
The modeling method requires graph attributes to be continuous values or discrete categories, so the main experiments focus on the e-commerce domain. However, its effectiveness on text attribute graphs is very limited, as shown in Appendix C3, where STAGE performs no better than other baseline methods on social network graphs.
Other Comments Or Suggestions: In Section 3.1, the first paragraph, "e.g., R^d for d ≥ 1, where the total order ≤ is well defined" should have a letter after the ≤ symbol.
Questions For Authors: No questions at present.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s recognition of STAGE as *“marking a significant step towards foundational graph models”* and their positive assessment of our theory and empirical results. We will incorporate their suggestions into the revised manuscript. Below, we address their remarks:
**Q1.** Building each edge a STAGE-edge-graph “increases computational overhead, limiting experiments to small-to-medium-sized graphs and preventing scalability to larger graphs.”
**A1:** The computational complexity of STAGE is linear with respect to the number of edges and quadratic with respect to the number of attributes, rendering it particularly suitable for small-to-medium datasets. While this characteristic may be perceived as a limitation, we argue that it does not substantially diminish the usefulness of STAGE. In fact, numerous popular machine learning algorithms exhibit similar complexity, including the Transformer attention mechanism, which is quadratic in the sequence length. In the domain of relational deep learning, methods designed for small data thrive and make a huge impact on the real world. A recent example is TabPFN [1], a foundation model for tabular data that has found extensive scientific and business applications in real-world settings despite being designed for small datasets. Certain eigendecomposition-based GNNs also have quadratic or cubic complexity.
The effectiveness of STAGE is evident in our experiments, for instance on the Ecommerce Stores dataset, where zero-shot learning achieves a Hits@1 score of nearly 0.6 when predicting user behavior from desktop configurations described by just 12 attributes. This result underscores that the computational trade-off enables substantial performance gains in settings where computational complexity is less of a constraint than model quality and expressiveness.
[1] Hollmann et al., Nature 2025. Accurate predictions on small data with a tabular foundation model
**Q2.** STAGE’s “effectiveness on text attribute graphs is very limited, as shown in Appendix C3, where STAGE performs no better than other baseline methods on social network graphs.”
**A2:** We believe the reviewer is referring to Table 5, discussed in Appendix E, which presents an additional experiment focusing on zero-shot prediction of *age* on social networks. As explained in Appendix E, this experiment was designed to highlight the inherent difficulty of predicting age in a zero-shot setting, given that "age" and "gender" are the only shared attributes available for node labels. Our goal was to illustrate that age prediction is fundamentally challenging across models, justifying our focus on gender prediction (Table 2). Consistent with this, neither our STAGE nor text embeddings outperformed other approaches in age prediction. All models struggled due to the significant distributional shift between age attributes in the Friendster and Pokec networks, as visualized in Figure 5.
Nonetheless, we agree with the reviewer that STAGE is designed to handle non-text attributes, theoretically guaranteeing generalization for continuous and discrete, as well as ordered and unordered attributes. We believe that extending STAGE to handle text attributes, for instance by coupling it with an initial text encoder, represents an interesting avenue for future research, although one that requires a separate effort.
**Q3.** Minor suggestions
**A3:** We thank the reviewer for the suggestion on the manuscript and will include the change in the updated version. | null | null | null | null | null | null
Distributed Conformal Prediction via Message Passing | Accept (poster) | Summary: This work studies CP in a decentralised inference setting, where multiple devices share the same pre-trained model, and each device has a local calibration data set (motivated by e.g. privacy constraints). Given a common input, the devices aim at producing a prediction set that includes the true label of the test data with probability $1 − \alpha$. Two message-passing schemes are proposed for this Distributed CP (DCP) problem.
While the star graph topology was considered in prior works, this paper considers graphs where each node communicates only with its neighbours over an arbitrary graph topology.
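As background for the coverage requirement described in this summary, here is a minimal sketch of standard split conformal prediction with centrally pooled calibration scores — the centralized benchmark that the proposed DCP schemes approximate over a network. This is illustrative only (generic nonconformity scores, names of our choosing), not the paper's algorithm:

```python
import math
import random

def conformal_threshold(scores, alpha):
    # the ceil((n + 1) * (1 - alpha))-th smallest calibration score;
    # capping at n corresponds to an infinite threshold when k > n
    n = len(scores)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(scores)[min(k, n) - 1]

random.seed(0)
cal = [random.random() for _ in range(999)]   # pooled nonconformity scores
q = conformal_threshold(cal, alpha=0.1)
# prediction set for a test point: every label y with score(x, y) <= q,
# which covers the true label with probability >= 0.9 under exchangeability
```

In the distributed setting of the paper, each device holds only a local slice of `cal`, and the message-passing schemes (Q-DCP / H-DCP) let the devices agree on a quantile or histogram that emulates this pooled threshold without sharing raw data.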
-- update after rebuttal --
This is a well-written paper with solid theoretical contributions, so I maintain my evaluation. However, I continue to have reservations regarding the use of the terminology 'message passing' in the title. I would appreciate it if the authors could address my questions listed under the 'Other Strengths And Weaknesses' and 'Other Comments Or Suggestions' sections, not just those under the 'Questions For Authors' section.
Claims And Evidence: This paper is of high quality in terms of presentation and novelty. The theoretical claims are explained clearly and supported by proofs. I've gone through the proofs for Q-DCP and they seem correct.
The experimental results are also comprehensive and compelling.
Methods And Evaluation Criteria: Yes, the numerical experiments are rigorously designed. The proposed DCP methods are compared against the centralised CP as the benchmark, with graphs with different levels of connectivity.
Theoretical Claims: I've gone through the proofs for Q-DCP and they seem correct. The proofs are well written.
Experimental Designs Or Analyses: See above.
Supplementary Material: I checked the proofs for Q-DCP.
Relation To Broader Scientific Literature: This paper ties together quite a few interesting ideas with relevance extending beyond CP to optimisation theory, distributed computing, federated learning, and privacy.
A notable example is found in Q-DCP, where the authors transform the original objective (7) into a smooth, strongly-convex surrogate loss for accelerated convergence. This approach parallels the seminal adaptive regularisation technique from Bartlett, Hazan, and Rakhlin (2007), demonstrating how the paper builds upon established theoretical foundations while advancing them in new domains
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Title: A more precise and compelling title would better reflect the flexibility and novelty of the proposed DCP methods, and attract the intended readers. I’m not sure whether “message passing” is the best key word to use here because i) the main novelty is that arbitrary graph topologies are allowed instead of just simple star topologies, and ii) H-DCP doesn’t really use message passing?
Other Comments Or Suggestions: - It might be beneficial to update the paragraph just above (23). Explain that the local vector $x_k$ represents the local estimate at device k of the global vector p. Add one sentence explaining why the update rule (23) is enforcing consensus among the devices and is therefore called the “linear consensus step”.
- line 346: nitpick: did you want to compare 1 real number vs M real numbers as the communication overhead? If so, it’s perhaps more accurate to say M-dim real vector instead of M-dim histogram vector.
Questions For Authors: 1. Do you require the calibration data to be iid across devices?
2. My understanding is that the constraint in (10) enforces the communication constraint based on the graph topology, is this correct? Could you elaborate?
3. line 175, right column: does “hyperparameter tuning” refer to tuning $\mu$ and $\kappa$?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: First, we are happy to consider your valuable suggestions in **Other Strengths And Weaknesses** and **Other Comments Or Suggestions**. For the other questions, please find the point-to-point response below.
1. > Do you require the calibration data to be iid across devices?
Please see our reply to comment 8 of Reviewer wc5C.
2. > My understanding is that the constraint in (10) enforces the communication constraint based on the graph topology, is this correct?
Yes, this is correct. In practice, the constraint in problem (10) is equivalent to $s_i=z_{ij},\ s_j=z_{ij}$ for all $(i,j)\in \mathcal{E}$, where $s_i$ is the local copy of the shared optimization variable $s$ at device $i$, and $z_{ij}$ is an auxiliary variable imposing the desired consensus constraint between neighboring devices $i$ and $j$.
3. > line 175, right column: does hyperparameter tuning refer to tuning $\mu$ and $\kappa$?
Yes, this is correct.
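To make the consensus mechanism discussed in point 2 concrete, here is a generic sketch (not the paper's Q-DCP implementation) of a linear consensus iteration $x \leftarrow Wx$ on a 4-node chain graph. Each device mixes only its own and its neighbours' values, and because the assumed mixing matrix $W$ (Metropolis weights, symmetric and doubly stochastic) preserves the network average, all local copies converge to it:

```python
def consensus_step(W, x):
    # device k mixes only its own and its neighbours' values:
    # x_k <- sum_j W[k][j] * x[j], with W[k][j] = 0 for non-neighbours
    return [sum(W[k][j] * x[j] for j in range(len(x))) for k in range(len(x))]

# Metropolis weights for the chain 1-2-3-4: symmetric, doubly stochastic
W = [[2/3, 1/3, 0.0, 0.0],
     [1/3, 1/3, 1/3, 0.0],
     [0.0, 1/3, 1/3, 1/3],
     [0.0, 0.0, 1/3, 2/3]]

x = [4.0, 0.0, 0.0, 0.0]   # local estimates; the network average is 1.0
for _ in range(200):
    x = consensus_step(W, x)
# every local copy is now within ~1e-6 of the average 1.0
```

The error contracts geometrically at a rate set by the second-largest eigenvalue modulus of $W$ (about 0.80 for this chain), so poorly connected topologies such as chains need many more consensus rounds than well-connected ones.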
---
Rebuttal Comment 1.1:
Comment: I acknowledge the authors' response. | Summary: This paper proposes a distributed way of achieving conformal prediction intervals. It extends the current literature by looking at graphs other than a star graph, and by looking at histogram summaries in addition to quantiles.
## update after rebuttal: I have changed my recommendation to accept.
Claims And Evidence: The theoretical claims are clear and the proofs are plausible.
Methods And Evaluation Criteria: The benchmarking seems ok; a standard CIFAR dataset is used. It could be interesting to also look at synthetic examples.
Theoretical Claims: The proofs seem ok.
It is not clear how one can validate Assumption 4.1, that is, how one would find epsilon_0 which is non-trivial. The assumption seems to be key though.
Experimental Designs Or Analyses: The study is a comparison but perhaps more can be said. From Figures 2-4 it seems that all methods fail on the chain graph, in that they give prediction intervals which are far too large. What is the spectral gap for this graph? Is there an explanation? If one would want to apply any of the proposed algorithms, how could one check that they work well, are there guidelines?
Supplementary Material: I have looked through the supplementary material.
Relation To Broader Scientific Literature: There could have been a mention of federated learning more generally.
Essential References Not Discussed: None come to mind.
Other Strengths And Weaknesses: The figures are very difficult to read; there are star and torus?
Other Comments Or Suggestions: It would be good to mention split conformal prediction already in the introduction.
It would be good to detail the choice of W already in 5.1.
What if the devices share both quantiles and histograms, could the method be improved?
If instead of sharing quantiles or histograms, the devices would share their CP intervals, would there be any mileage in that?
Questions For Authors: Why do you need i.i.d. data for each device? Usually exchangeable scores suffice (an assumption which can again be weakened, see work by Rina Foygel Barber et al)
In the experiments how is the inefficiency determined when the distribution of the data is not available?
Why did you choose the torus graph for Figure 5 and not one of the other graphs?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: 1. > The benchmarking seems ok; a standard Cifar data set is used. It could be interesting to also look at synthetic examples.
Please refer to the rebuttal of Reviewer 6Gbu for details on experiments on a different data set.
2. > It is not clear how one can validate Assumption 4.1, that is, how one would find $\epsilon_0$ which is non-trivial. The assumption seems to be key though.
Indeed, as indicated in our paper, the difficulty in setting the hyperparameter $\epsilon_0$ is one of the key motivations for introducing H-DCP, which does not require hyperparameter tuning. Please refer to page 6, lines 281-291 (left) for further discussion on this point.
3. > The study is a comparison but perhaps more can be said. From Figures 2-4 it seems that all methods fail on the chain graph, in that they give prediction intervals which are far too large. What is the spectral gap for this graph? Is there an explanation? If one would want to apply any of the proposed algorithms, how could one check that they work well, are there guidelines?
The spectral gap of the chain graph is as low as $0.0123$, which is significantly smaller than for other topologies. For example, the spectral gap of the star topology is $0.095$. As a result, as formalized by Proposition 4.3 and Theorem 5.2, the convergence in terms of communication rounds is determined by the spectral gap, and is thus slower for chain graphs. Indeed, the results in Proposition 4.3 and Theorem 5.2 provide insightful guidelines about how various factors impact the CP guarantee.
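For readers who want to reproduce this comparison, a minimal sketch of the spectral-gap computation is below. The paper's exact consensus matrix may differ; here we assume an illustrative Laplacian-based construction $W = I - L/(d_{\max}+1)$, so the numbers are indicative rather than a reproduction of the $0.0123$ and $0.095$ figures.

```python
import numpy as np

def consensus_gap(adj):
    """Spectral gap 1 - |lambda_2| of a Laplacian-based consensus matrix.

    W = I - L / (d_max + 1) is an illustrative doubly stochastic choice;
    the paper's exact W may differ.
    """
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj                      # graph Laplacian
    W = np.eye(len(adj)) - lap / (deg.max() + 1)  # consensus matrix
    mags = np.sort(np.abs(np.linalg.eigvalsh(W)))[::-1]
    return 1.0 - mags[1]

K = 20
chain = np.zeros((K, K))
star = np.zeros((K, K))
for i in range(K - 1):
    chain[i, i + 1] = chain[i + 1, i] = 1.0       # path graph
for i in range(1, K):
    star[0, i] = star[i, 0] = 1.0                 # hub-and-spoke graph

print(consensus_gap(chain), consensus_gap(star))  # chain gap is much smaller
```

The poorly connected chain yields a far smaller gap than the star, consistent with the slower convergence observed in Figures 2-4.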
4. > There could have been a mention of federated learning more generally.
We would be happy to add further reference on federated learning.
5. > The figures are very difficult to read; there are star and torus?
We will increase the marker sizes in the revised version.
6. > What if the devices share both quantiles and histograms, could the method be improved?
This is an interesting question. Combining the advantages of both schemes, one could indeed apply H-DCP first to obtain an estimate of the true global quantile, which is then used as prior information in the ADMM problem solved by Q-DCP. This can help mitigate the reliance of Q-DCP on the choice of hyperparameter $s_0$.
7. > If instead of sharing quantiles or histograms, the devices would share their CP intervals, would there be any mileage in that?
Sharing and merging their CP intervals is generally less efficient than computing a single set using all distributed data [R1, R2]. For example, supposing that we have $K$ prediction sets $\\{\mathcal{C}\_k\\}\_{k\in[K]}$ and merge them using majority vote as $\mathcal{C}^M:=\left\\{y\in\mathcal{Y}:1/K\sum\_{k=1}^K\mathbb{1}\\{y\in\mathcal{C}\_k\\}>\tau\right\\}$ for some $\tau \in [0,1)$. Then, by Theorem 2.1 in [R2], the majority vote procedure gives $1-\alpha/(1-\tau)$ coverage guarantee instead of $1-\alpha$ (see also Theorem 2.8 in [R2]).
[R1] M. Gasparin and A. Ramdas, "Conformal online model aggregation," arXiv:2403.15527, 2024.
[R2] M. Gasparin and A. Ramdas, "Merging uncertainty sets via majority vote," arXiv:2401.09379, 2024.
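A toy sketch of the majority-vote merging discussed above, following the construction in [R2]; the label set and the per-device prediction sets are made up for illustration:

```python
def majority_vote_merge(sets, labels, tau=0.5):
    """Keep label y iff more than a tau-fraction of the K sets contain it."""
    K = len(sets)
    return {y for y in labels if sum(y in C for C in sets) / K > tau}

# Four devices' local prediction sets over labels {0, 1, 2, 3}
sets = [{0, 1}, {1, 2}, {0, 1}, {0, 3}]
merged = majority_vote_merge(sets, labels=range(4), tau=0.5)
print(merged)  # {0, 1}: only these labels appear in more than half the sets
```

As Theorem 2.1 in [R2] indicates, the merged set $\mathcal{C}^M$ attains only $1-\alpha/(1-\tau)$ coverage, which is why merging intervals is generally less efficient than calibrating on the pooled data.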
8. > Why do you need i.i.d. data for each device? Usually exchangeable scores suffice...
We consider i.i.d. data in the text to simplify the presentation. Indeed, this assumption could be relaxed to exchangeability, as done in [Assumption 4.1, Lu et al., 2023].
9. > In the experiments how is the inefficiency determined when the distribution of the data is not available?
We indeed do not have access to the distribution in the experiments. Therefore, the inefficiency defined as $\mathbb{E}|\mathcal{C}(X\_{\text{test}}|\mathcal{D})|$ is estimated using test data by $$1/|\mathcal{D}\_{\text{test}}|\sum\_{X\_{\text{test}}\in \mathcal{D}\_{\text{test}}}|\mathcal{C}(X\_{\text{test}}|\mathcal{D})|$$ in experiments.
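As a sketch, this estimator reduces to averaging prediction-set sizes over the test split. The scores and threshold below are synthetic stand-ins, not the CIFAR-100 pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_test, n_classes = 200, 10
scores = rng.random((n_test, n_classes))  # stand-in per-class conformity scores
q_hat = 0.7                               # stand-in global quantile estimate

# C(x) = {y : score(x, y) <= q_hat}; inefficiency is the average |C(x)|
# over the test points, as in the estimator above.
set_sizes = (scores <= q_hat).sum(axis=1)
inefficiency = set_sizes.mean()
print(inefficiency)  # ~7 classes per set for these synthetic uniform scores
```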
10. > Why did you choose the torus graph for Figure 5 and not one of the other graphs?
This choice is motivated by the fact that the spectral gap of torus graph with $20$ devices is in a moderate regime, providing a balanced setting between a complete graph and a cycle graph in terms of the spectral gap, which significantly affects the prediction performance in line with Theorem 5.2.
---
Rebuttal Comment 1.1:
Comment: Thank you for the explanations. Are you planning to revise the paper accordingly?
---
Reply to Comment 1.1.1:
Comment: Thank you. Yes, we will apply all the comments described in our reply. | Summary: The paper introduces two novel algorithms for distributed conformal prediction in decentralized networks. The Q-DCP employs ADMM to solve a distributed quantile regression problem with a smoothed pinball loss $ \tilde{\rho}\_\gamma(s)$ (incorporating a smoothing function $ \tilde{g}(x)$ and regularization term $ \frac{\mu}{2}(s-s\_0)^2$). After $T$ iterations, the device compute an average quantile estimate $\bar{s}(T)$ and the authors derive an error bound $\epsilon\_{Q-DCP}$ such that $|\bar{s}(T) - s^*| \leq \epsilon\_{Q-DCP}$, ensuring the prediction set satisfies the coverage guarantee (Thm 4.4). The second one (or H-DCP) leverages a consensus-based histogram estimation approach. Each device quantizes its calibration scores into $M$ levels and exchanges these histogram vectors with its neighbors, allowing them to compute an average global histogram. From this, a quantile is estimated, and Theorem 5.2 guarantees that with error bound $\epsilon\_{H-DCP}$ linked to the consensus convergence, the resulting prediction set meets the desired coverage $P(Y \in C(X|D)) \geq 1-\alpha$.
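A minimal centralized sketch of the histogram route described in this summary, with the gossip averaging replaced by its exact limit (the mean histogram); the bin count, score range, and quantile rule below are illustrative assumptions rather than the paper's exact choices:

```python
import numpy as np

rng = np.random.default_rng(1)
K, n_k, M, alpha = 20, 50, 100, 0.1
local_scores = [rng.random(n_k) for _ in range(K)]  # per-device scores in [0, 1]

# Each device quantizes its calibration scores into an M-bin histogram.
hists = np.stack([np.histogram(s, bins=M, range=(0.0, 1.0))[0]
                  for s in local_scores])

# Averaging consensus converges to the mean histogram; we use the exact limit.
global_hist = hists.mean(axis=0)

# Conformal quantile level and the first bin reaching it in the global CDF.
N = K * n_k
level = min(np.ceil((1 - alpha) * (N + 1)) / N, 1.0)
cdf = np.cumsum(global_hist) / global_hist.sum()
q_idx = int(np.searchsorted(cdf, level))
q_hat = (q_idx + 1) / M  # upper edge of the selected bin
print(q_hat)             # near 0.9 for uniform scores
```

The prediction set is then $C(x) = \{y : \text{score}(x, y) \le \hat{q}\}$, and Theorem 5.2's error bound accounts for both the quantization and the finite number of consensus rounds.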
***
I appreciate the author's efforts to address my concerns. Based on the explanation and promised final revisions I increased my rating to 3
Claims And Evidence: they are generally convincing
Methods And Evaluation Criteria: While CIFAR-100 is a widely used benchmark, this paper does not test its methods in real-world distributed settings or more diverse datasets.
Theoretical Claims: I didn't check the proofs line-by-line but they appear correct. The convergence error of ADMM, the bias from the smoothing approximation, etc, are just standard route in optimization analysis.
Experimental Designs Or Analyses: - the largest network tested has only 20 devices, which is very small for real-world decentralized applications
- The paper presents mean results but does not report confidence intervals or variance across multiple runs, which make it unclear whether observed differences are statistically significant or just noise. No comparison with other SOTA decentralized uncertainty quantification methods, such as federated conformal prediction or Bayesian approaches.
Supplementary Material: I have thoroughly read the entire paper.
Relation To Broader Scientific Literature: This work builds upon existing work in split conformal prediction, while also incorporating ideas from decentralized/federated optimization and message passing.
Essential References Not Discussed: The application of message passing algorithms in optimization is not novel, such as prior work [1].
[1] Clarté, Lucas, and Lenka Zdeborová. "Building Conformal Prediction Intervals with Approximate Message Passing." arXiv:2410.16493
Other Strengths And Weaknesses: - While H-DCP somehow removes hyperparameter sensitivity, it suffers from a significantly higher communication cost per iteration due to the need for transmitting full histograms.
- The presentation of this paper fails to handle mathematical complexity effectively. While the problem formulation is reasonable to me, many of the introduced techniques appear somewhat ad hoc and lack a compelling motivation. Furthermore, the authors sometimes introduce notation without prior explanation, such as $Z_i$ in (8) and $E$ in lines 230-231, etc.
Other Comments Or Suggestions: - I think integrating the pseudocode from the appendix into the main body could improve readability and help readers grasp the key ideas more effectively
- before equation (31), do you mean $\left|\bar{s}^{(T)}-\hat{s}^*\right|+\left|\hat{s}^*-s^*\right| \leq \epsilon^{(T)}+\tilde{\epsilon}_0$?
- equation (35), $E$ is already used
- equation (37), the first line uses different probabilistic notation
Questions For Authors: - How does Q-DCP compare against recent federated conformal prediction approaches like [a,b] in terms of accuracy, communication cost, and robustness?
[a] Lu, Charles, et al. "Federated conformal predictors for distributed uncertainty quantification." International Conference on Machine Learning, 2023.
[b] Plassier, Vincent, et al. "Conformal prediction for federated uncertainty quantification under label shift." International Conference on Machine Learning, 2023.
- Does your Q-DCP’s hyperparameter sensitivity worsen as the e.g., network grows?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: 1. > The largest network tested has only 20 devices...
To validate the proposed method on a larger network, we considered a network with $100$ devices, each of which collects data from a distinct class, setting $T=3000$ for both H-DCP and Q-DCP (with $\epsilon_0=0.5$). The experiment results, which can be found [here](https://anonymous.4open.science/r/ICML_Rebuttal-AE74/Rebuttal_figures_and_tables.pdf) in Figures 3 and 4, demonstrate that the proposed schemes are scalable to larger networks.
2. > The paper presents mean results but does not report confidence intervals...
Thank you for suggesting adding error bar in the figures. You may find, e.g., Fig. 2 in the original draft, with an error bar of 95%-interval added [here](https://anonymous.4open.science/r/ICML_Rebuttal-AE74/Rebuttal_figures_and_tables.pdf) (See Figure 5).
3. > No comparison with other SOTA decentralized uncertainty quantification methods...
>
> How does Q-DCP compare against recent federated conformal prediction...
To the best of our knowledge, all previous distributed CP schemes apply only to the federated setting with a parameter server. Only when focusing on the special case of a star topology can we compare the performance of the proposed protocols to the existing ones we are aware of, namely FedCP-QQ (Humbert et al., 2023), FCP (Lu et al., 2023) and WFCP (Zhu et al. (2024b)).
In this special case, the communication cost of Q-DCP coincides with FCP, while H-DCP reduces to WFCP. Experimental results with $\alpha=0.1$ can be found [here](https://anonymous.4open.science/r/ICML_Rebuttal-AE74/Rebuttal_figures_and_tables.pdf) in Table 1. These results, obtained at convergence, show that the proposed protocols have comparable performance to the existing state of the art in terms of coverage and set size in a star topology. However, in contrast to existing schemes, H-DCP and Q-DCP apply to arbitrary network topologies.
4. > The application of message passing algorithms in optimization is not novel, such as prior work [1].
Please note that this interesting prior work [1] focuses on a fully centralized setting. This is fundamentally different from our decentralized setup. Accordingly, "message passing" in [1] refers to AMP, a Bayesian inference approach, while "message passing" in our work refers to gossip-style averaging consensus among distributed agents.
5. > The presentation of this paper fails to handle mathematical complexity effectively....
>
> the authors sometimes introduce notation without prior explanation...
We believe that our designs are formally well-motivated, and are validated by our theory.
- For Q-DCP: As discussed in line 194-202 (right), the smooth function $\tilde{g}(\cdot)$ and the regularization term in Eq. (8) aims for strong convexity, so as to ensure the linear convergence rate of ADMM, which has been theoretically verified in Proposition 4.3.
- For H-DCP: As discussed in lines 292-300 (left), the calibration score is quantized so as to support linear average consensus on the local histograms of the scores. As theoretically proved in Theorem 5.2, this ensures linear convergence and guarantees coverage.
The notation $Z_i$ in (8) was a typo, and it should be $S_i$. The notation $E$ refers to the number of edges $E=|\mathcal{E}|$.
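To illustrate the role of the smoothing and the regularizer discussed above, here is a centralized (single-device) sketch that minimizes a smoothed pinball loss by gradient descent. The Huber-style surrogate below is a stand-in for the paper's $\tilde{g}(\cdot)$, and all constants are illustrative, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(2)
scores = rng.random(500)            # pooled calibration scores in [0, 1]
q, gamma, mu, s0 = 0.9, 0.01, 1e-3, 0.5

def grad(s):
    # Smooth surrogate for the indicator 1{score <= s}, transition width gamma
    # (a Huber-style stand-in for the paper's smoothing function g~).
    ind = np.clip((s - scores) / gamma + 0.5, 0.0, 1.0)
    pinball_grad = (ind - q).sum()       # derivative of the smoothed pinball loss
    return pinball_grad + mu * (s - s0)  # strongly convex regularizer toward s0

s = s0
for _ in range(4000):
    s -= 5e-4 * grad(s)
print(s)  # approaches the empirical 0.9-quantile of the scores
```

The regularizer makes the objective strongly convex, which is what enables the linear ADMM convergence rate in Proposition 4.3; the smoothing width trades off differentiability against bias in the recovered quantile.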
6. > I think integrating the pseudocode...
>
> before equation (31), do you mean...
For the equation before Eq. (31), yes, you are correct. We would be happy to fit algorithm tables in the main text.
7. > Does your Q-DCP’s hyperparameter sensitivity worsen as the e.g., network grows?
The key hyperparameter to be selected in Q-DCP is $\epsilon_0$. To evaluate the sensitivity of the performance to the choice of $\epsilon_0$, we have evaluated Q-DCP on Erdős–Rényi graphs with an increasing number of devices $K$, in which each edge is included in the graph with a probability of 0.5. The 100 classes of CIFAR100 are divided uniformly at random (without replacement) among the $K$ devices. Other parameters are the same as in the draft.
For $\alpha=0.1$ and $T=3000$, experimental results can be found [here](https://anonymous.4open.science/r/ICML_Rebuttal-AE74/Rebuttal_figures_and_tables.pdf) in Figure 6 with $\epsilon_0=1$ and in Figure 7 with $\epsilon_0=0.1$. The average spectral gap increases with $K$ from 0.44 to 0.68. As a result, for fixed $T$, the set size decreases with the level of connectivity. This observation is robust against the choice of $\epsilon_0$. However, as verified by these results, the optimal choice of $\epsilon_0$ does depend on the size of the network. In practice, for $\epsilon_0=1$, Assumption 4.1 is satisfied for all values of $K$ between $20$ and $80$, and thus convergence to the target coverage probability $1-\alpha=0.9$ is guaranteed when $T$ is large enough (see Proposition 4.3). This is not the case for $\epsilon_0=0.1$, for which Assumption 4.1 is violated as $K$ grows larger.
Claims And Evidence: The claims made in the paper are well-supported by theoretical analysis and empirical evidence. The theoretical results (Theorems 4.4 and 5.2) provide formal coverage guarantees for both proposed methods, with clear derivations and reasonable assumptions. The experimental results confirm these guarantees and illustrate the performance trade-offs across different network topologies, sample sizes, and hyperparameter settings. The comparison between Q-DCP and H-DCP regarding communication overhead versus hyperparameter sensitivity is particularly well-substantiated.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem. The authors use standard metrics in conformal prediction (coverage rate and prediction set size/inefficiency) to evaluate performance. The experimental setup using Cifar100 data distributed in a non-i.i.d. manner across devices is reasonable for demonstrating the methods' effectiveness. The comparison across different network topologies (chain, cycle, star, torus, and complete graph) provides insights into how connectivity affects performance.
Theoretical Claims: The theoretical claims in the paper appear sound. The paper provides detailed proofs for the main theorems (Theorems 4.4 and 5.2) in the appendix, establishing coverage guarantees for both Q-DCP and H-DCP. The proofs build on established results in distributed optimization and consensus algorithms, adapting them to the conformal prediction setting. The coverage guarantee for Q-DCP requires assumptions about parameter initialization that are carefully stated and verified in experiments.
Experimental Designs Or Analyses: The experimental design is comprehensive and appropriate. The authors evaluate their methods on Cifar100 data distributed across 20 devices in a non-i.i.d. manner, with each device assigned 5 unique classes. The evaluation covers various network topologies, hyperparameter settings, and communication budgets. The experiments verify theoretical results and provide practical insights on trade-offs between methods. The ablation studies effectively demonstrate the impact of key hyperparameters on performance.
Supplementary Material: The supplementary material contains detailed proofs of the theoretical results, algorithms for both proposed methods, and additional experimental results, including ablation studies and convergence analyses. The material is well-organized and supports the main paper's claims.
Relation To Broader Scientific Literature: The paper extends previous work on federated conformal prediction (e.g., FedCP-QQ, FCP, WFCP) which primarily addressed star topologies, to the more challenging case of arbitrary graph topologies. It integrates ideas from distributed optimization (ADMM), consensus algorithms, and conformal prediction. The work contributes to the growing literature on reliable and uncertainty-aware distributed machine learning, with connections to federated learning and distributed statistical estimation.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
1. The paper presents a novel and theoretically sound approach to distributed conformal prediction that extends beyond the star topology assumed in prior work.
2. The authors provide strong theoretical guarantees with clear conditions under which they hold, along with empirical validation. The comparative analysis between Q-DCP and H-DCP offers valuable insights into the trade-offs between communication efficiency and hyperparameter sensitivity.
3. The experimental evaluation is thorough, covering various network topologies, hyperparameter settings, and demonstrating convergence properties.
Weaknesses:
1. The experiments are limited to a single dataset (Cifar100). Including additional datasets, particularly those from domains mentioned as motivating applications (healthcare, IoT, autonomous vehicles), would strengthen the empirical evaluation.
2. The practical implementation details for large-scale distributed systems are somewhat limited. More discussion on handling device failures, communication delays, or asynchronous updates would enhance practicality.
3. While the methods address device-to-device communication, the initialization of both methods appears to require some coordination (e.g., for H-DCP, setting the consensus matrix W), which could be challenging in fully decentralized settings.
4. The study focuses on the post-hoc calibration of a shared pre-trained model, but does not explore scenarios where devices have different local models, which would be relevant for many real-world applications.
Other Comments Or Suggestions: None
Questions For Authors: 1. How would the proposed methods perform if devices have heterogeneous computational capabilities or experience intermittent connectivity? Could the algorithms be adapted to handle asynchronous updates or device dropouts?
2. The current work assumes all devices share the same pre-trained model. How would the approaches need to be modified for scenarios where devices have different locally trained models, as might be the case in federated learning settings?
3. Could the Q-DCP approach be extended to provide localized or conditional coverage guarantees rather than just marginal coverage? This would be valuable for handling distribution shifts between devices.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: 1. > The experiments are limited to a single dataset (Cifar100). Including additional datasets...
Following your advice, we have evaluated the proposed scheme on a healthcare dataset, namely [PathMNIST](https://medmnist.com/). PathMNIST includes $9$ classes and 107,180 data samples in total.
We considered a setting with $K=8$ devices. Seven of the devices have data from only one class, while the last device stores data for the remaining two classes. For Q-DCP, we set $T=8000$, and for H-DCP, we set $T=80$ and $M=100$. This way, both Q-DCP and H-DCP are subject to the same communication costs (in bits). Other settings remain the same as in the paper. Experimental results, which can be found [here](https://anonymous.4open.science/r/ICML_Rebuttal-AE74/Rebuttal_figures_and_tables.pdf) in Figures 1 and 2, confirm the efficiency of the proposed methods for applications of interest.
2. > Could the algorithms be adapted to handle asynchronous updates or device dropouts?
Leveraging existing literature on decentralized optimization and consensus, Q-DCP and H-DCP could indeed be extended to the above settings:
- Q-DCP: Q-DCP is based on the ADMM protocol. An asynchronous version of ADMM was studied in [R1] for distributed convex optimization problems over a large-scale network with arbitrary topology. This approach may be applicable to the pinball loss minimization problem (7), yielding a generally slower convergence than with synchronous communications [Theorem 3.2, R1]. A setting with time-varying graphs, modeling communication outages, was studied in [R3] via first-order methods, which may also be applicable to problem (7).
- H-DCP: Asynchronous consensus algorithms with linear convergence rate were studied in [R2]. These may be leveraged to extend H-DCP to asynchronous settings. Furthermore, consensus protocols have also been widely studied for time-varying graphs [R4].
[R1] E. Wei and A. Ozdaglar, "On the $\mathcal{O}(1/k)$ Convergence of Asynchronous Distributed Alternating Direction Method of Multipliers," *2013 IEEE GlobalSIP*, 2013.
[R2] Y. Tian, Y. Sun and G. Scutari, "Achieving Linear Convergence in Distributed Asynchronous Multiagent Optimization," in *IEEE Trans. Autom. Control*., vol. 65, no. 12, pp. 5264-5279, Dec. 2020.
[R3] A. Nedić and A. Olshevsky, "Distributed Optimization Over Time-Varying Directed Graphs," in *IEEE Trans. Autom. Control*., vol. 60, no. 3, pp. 601-615, March 2015.
[R4] F. Xiao and L. Wang, “Asynchronous Consensus in Continuous-Time Multi-Agent Systems With Switching Topology and Time-Varying Delays,” *IEEE Trans. Autom. Control*., vol. 53, no. 8, pp. 1804–1816, Sep. 2008.
3. > the initialization of both methods appears to require some coordination...
In experiments, we choose the consensus matrix $\boldsymbol W$ in the standard form (See the first paragraph of Section 6.3). For this case, the eigenvalues of the Laplacian matrix $L$ can be obtained efficiently in a fully decentralized manner. See page 7 lines 340-348 (left) and reference [R5]. No other initialization settings for Q-DCP or H-DCP are required by coordination.
[R5] P. Di Lorenzo and S. Barbarossa, "Distributed Estimation and Control of Algebraic Connectivity Over Random Graphs," in *IEEE Trans. Signal Process*., vol. 62, no. 21, pp. 5615-5628, Nov.1, 2014.
4. > How would the approaches need to be modified for scenarios where devices have different locally trained models...
If the devices hold different local models, collaborative inference would have to be based on a different class of protocols. As an example, each device $k$ could first construct a local CP set $\mathcal{C}_k$ using the local model and data. Then, the CP sets could be aggregated via communications, which is a non-trivial problem that has been studied in [R6] and their later work using majority vote strategies. A fully decentralized implementation of these protocols in an arbitrary topology is an open problem. It is also important to note that these types of protocols would generally yield less efficient prediction sets when applied to settings in which agents share the same model [Theorem 2.8, R6].
[R6] Gasparin, Matteo, and Aaditya Ramdas. "Merging uncertainty sets via majority vote." *arXiv:2401.09379*, 2024.
5. > Could the Q-DCP approach be extended to provide localized or conditional coverage guarantees...
Following [R7], the localized coverage condition could be equivalently stated as Eq. (2.3) in [R7]. Accordingly, [R7] suggests approximating the localized coverage condition by solving the generalized optimization problem (2.4) in [R7]. It may be possible to generalize Q-DCP to address this problem, rather than problem (7) in our submission, in a decentralized way. Ensuring localized coverage opens up interesting research directions for future extensions of our work.
[R7] Gibbs I, Cherian JJ, Candès EJ. Conformal prediction with conditional guarantees. *arXiv:2305.12616*. 2023. | null | null | null | null | null | null |
Theoretically Unmasking Inference Attacks Against LDP-Protected Clients in Federated Vision Models | Accept (poster) | Summary: This paper studies the membership inference risk for local differential-privacy (LDP) protected clients in the presence of dishonest servers who can actively manipulate the model parameters. The paper provides theoretical upper and lower bounds for the success rates of low-polynomial-time membership attacks. It also extends a prior attack to the continuous domain of Vision Transformer models. Experiments show that the proposed attack can achieve high success rate under LDP protection.
### Update after rebuttal
I’m satisfied with the author’s response and will maintain my weak accept rating. I’d like to note that this paper is highly theoretical and falls outside my core area of expertise, so I would prefer that greater weight be given to the assessments of the other reviewers.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: No
Experimental Designs Or Analyses: Yes
Supplementary Material: No
Relation To Broader Scientific Literature: This paper provides theoretical bounds to quantify the privacy risks of membership inference under an active adversary.
Essential References Not Discussed: -
Other Strengths And Weaknesses: ## Strengths
1. The motivation is explained well.
2. The paper provides theoretical bounds for attacks, providing a useful tool for quantifying the risk of membership inference.
3. Experiments show that even under LDP protection, models are susceptible to membership inference attacks.
## Weaknesses
1. The problem setting used involves an active adversary who can manipulate the model weights. Extending this to include honest-but-curious adversaries would help improve the scope of the paper.
Other Comments Or Suggestions: -
Questions For Authors: -
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the positive feedback that our research is well-motivated and the provided theoretical bounds are useful for quantifying the risk of membership inference.
## #1
`Theoretical Claims: No`
We see that the reviewer indicates we do not have theoretical claims. However, we'd like to clarify that we do provide many theoretical results in the paper, as noted in your Summary section ("The paper provides theoretical upper and lower bounds for the success rates of low-polynomial-time membership attacks"). Several theoretical proofs are presented throughout the paper and in the appendix. This is also agreed on by other reviewers.
## #2
`The problem setting used involves an active adversary who can manipulate the model weights. Extending this to include honest-but-curious adversaries would help improve the scope of the paper.`
While we agree that it is interesting to see how much the assumption of having a malicious server affects the results compared to the honest-but-curious, we remain focused on the active setting due to the following reasons:
- The honest-but-curious threat model assumes that the server still abides by the system protocol. This does not convey the true capability of the attacker and undermines the vulnerability of the FL system. The active adversary model is more practical because, in practice, the server can deviate from the protocol to strengthen the privacy attacks [1-4].
- The honest-but-curious adversary model has been extensively studied and those studies achieve much lower success rates [5-7] on protected data compared to active attacks [8,9,2]. We believe this emphasizes the point that an active server introduces a much higher privacy risk, motivating the need for a more robust defense.
- In order to tackle the more potent active threat, we intentionally focus on active adversaries (a malicious server who manipulates model weights) because this scenario represents an underexplored and more realistic threat in the federated learning literature.
We appreciate the time and effort you have dedicated to reviewing our paper. We'd be happy to discuss further should you have any other concerns that could potentially impact your rating.
---
[1] Nguyen et al. Blockchain-based secure client selection in federated learning. IEEE ICBC 2022
[2] Nguyen et al. Active membership inference attack under local differential privacy in federated learning. AISTATS 2023
[3] Boenisch et al. When the curious abandon honesty: Federated learning is not private. IEEE EuroS&P 2023.
[4] Fowl et al. Robbing the fed: Directly obtaining private data in federated learning with modified models. ICLR 2022
[5] Carlini et al. Membership inference attacks from first principles. IEEE SP 2022
[6] Ye et al. Enhanced membership inference attacks against machine learning models. CCS 2022.
[7] Jayaraman et al. Evaluating Differentially Private Machine Learning in Practice. USENIX Security 2019
[8] Vu et al. Analysis of privacy leakage in federated large language models. AISTATS 2024
[9] Nasr et al. Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning. IEEE SP 2019. | Summary: This paper examines the privacy risks posed by Active Membership Inference (AMI) attacks against
federated learning (FL) clients even when their data is protected by Local Differential Privacy (LDP). The
authors derive theoretical lower and upper bounds for the success rates of low-polynomial-time attacks
exploiting fully connected layers and self-attention layers and demonstrate that even under LDP, privacy
risks persist depending on the privacy budget.
Claims And Evidence: Claims and evidence are sound, but the evaluation is only limited to certain types of attacks and 2 LDP
mechanisms.
Methods And Evaluation Criteria: The methods and evaluation criteria are overall sound.
Theoretical Claims: The overall theoretical proof in the appendix seems sound.
Experimental Designs Or Analyses: The experimental designs and analyses are sound, and the results align with the theoretical analysis.
Supplementary Material: All sections in the Appendix are reviewed.
Relation To Broader Scientific Literature: The paper is primarily based on the analysis of [1] and [2].
[1] Vu, Minh, Truc Nguyen, and My T. Thai. "Analysis of privacy leakage in federated large language
models." International Conference on Artificial Intelligence and Statistics. PMLR, 2024.
[2] Ramsauer, Hubert, et al. "Hopfield networks is all you need." arXiv preprint
arXiv:2008.02217 (2020).
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Strengths**
1. The paper rigorously derives lower and upper bounds for the success rates of AMI attacks, providing
a mathematical basis for evaluating privacy risks under LDP.
2. The empirical results align with the theoretical analysis.
**Weaknesses**
1. The paper primarily evaluates its method under BitRand and OME, but the impact of other LDP
mechanisms remains unexplored. In particular, the study only considers noise added directly to the
data, while other common LDP approaches are perturbing the gradients before aggregation. A
discussion or experimental comparison would help clarify this aspect.
2. The paper extends the work of [1], which focuses on AMI attacks against LLMs for text data, yet it
only evaluates attacks on vision data. Given that the proposed theoretical analysis does not appear
to be domain-specific, it seems likely that it could also apply to text-based FL models. Can the authors
clarify why they chose to restrict their evaluation to image data? Providing experimental results on
NLP datasets would further strengthen the generalizability of the findings.
3. There is no Impact Statement in the paper.
[1] Vu, Minh, Truc Nguyen, and My T. Thai. "Analysis of privacy leakage in federated large language
models." International Conference on Artificial Intelligence and Statistics. PMLR, 2024.
Other Comments Or Suggestions: See Weaknesses and Questions for Authors.
Questions For Authors: 1. What is the effectiveness of the proposed method for different LDP mechanisms?
2. Why do the authors only investigate vision data? Can the proposed analysis also be applied to AMIs
on NLP data since the adopted attacks are directly evaluated on text modality? Additional
experiments on NLP datasets would provide stronger evidence supporting the proposed analysis.
3. Figure 9 appears to be unclear or possibly incorrect. The right y-axis is labelled for model accuracy,
but there is no corresponding line representing the model accuracy in the figure. Can the author
provide further clarification on this issue?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## #1
- `evaluation is only limited to certain types of attacks and 2 LDP mechanisms`
- `the study only considers noise added directly to the data, while other common LDP approaches are perturbing the gradients before aggregation.`
We would like to reiterate that our **theoretical** analysis applies to **all** LDP algorithms that add noise to clients' data, while our experiments were initially conducted using OME and BitRand. We have since conducted additional experiments using three other LDP algorithms, namely GRR, RAPPOR, and Microsoft's dBitFlipPM. Detailed results for these LDP algorithms are given in response #3 to Reviewer eoeX.
Regarding LDP approaches that perturb gradients before aggregation, the current attack can be extended to bypass these mechanisms. First, we want to emphasize that the proposed AMI attacks hinge on the fact that the adversarial server can distinguish between non-zero gradients and zero-gradients of the targeted neurons. With gradient perturbation methods, noise (mostly Gaussian noise) is added to the clients' gradients before they are sent to the server, initially hindering the server from accessing the true value of the targeted neuron's gradient.
However, previous papers [1,2] have shown that attackers can exploit knowledge of the noise distribution (e.g., Gaussian with known $\sigma$) to statistically learn the true gradient values. For instance, [1] leveraged the fact that the FL training is done in multiple iterations and the zero-mean property of Gaussian noise to average out noise samples across multiple iterations, effectively canceling the noise and revealing the true gradient values. This aligns with the central limit theorem and is consistent with the privacy composition of DP, as the privacy budget accumulates with the number of FL iterations [3]. By learning the true gradient values, the attacker can now distinguish between zero and non-zero gradients, effectively inferring whether the target sample was included in any of these iterations by applying the proposed AMI attacks.
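The averaging step in this argument can be sketched in a few lines. The following toy simulation (our illustration with hypothetical gradient values and a known noise scale $\sigma$, not the actual attack code of [1]) shows how the zero and non-zero gradient hypotheses become distinguishable:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0      # std of the Gaussian perturbation added to gradients (assumed known)
T = 10_000       # number of FL iterations observed by the server

# True gradient of the targeted neuron under the two hypotheses.
g_present, g_absent = 0.5, 0.0

# Noisy gradients the server sees at each iteration.
obs_present = g_present + rng.normal(0.0, sigma, size=T)
obs_absent = g_absent + rng.normal(0.0, sigma, size=T)

# Averaging cancels the zero-mean noise at rate sigma / sqrt(T).
est_present, est_absent = obs_present.mean(), obs_absent.mean()

# A threshold halfway between the hypotheses now separates them reliably.
threshold = (g_present + g_absent) / 2
print(round(est_present, 3), round(est_absent, 3))
```

The averaged noise shrinks at rate $\sigma/\sqrt{T}$, so with enough observed iterations a simple midpoint threshold separates the two hypotheses, which is exactly the distinguishing capability the AMI attack requires.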
To counter such an attack, the privacy budget could be set to account for the number of iterations, however, that would result in a relatively small budget [3], making it even more difficult to have good training performance.
Furthermore, the theoretical analysis of membership inference attacks under gradient-level noise has been extensively studied [4,5]. Our work is instead the first to investigate theoretical bounds on the impact of LDP applied to client data.
## #2
`can the proposed analysis also be applied to AMIs on NLP data`
First, we would like to emphasize that our theoretical analysis for the FC-based AMI adversary also translates to NLP (discrete) data. However, the reason we limited our theoretical analysis to vision data is the way we formulate the LDP noise for the Attention-based AMI adversary. In particular, the distortion imposed by LDP is modeled by a noise $r_i$ added to each pattern $x_i$ (line 247, right column). In our analysis, we assume $x_i$ and $r_i$ to be continuous, and the impact of LDP noise is visualized in Fig. 4. For NLP data, both the data and the noise should be modeled as discrete, hence our theoretical analysis might not directly apply to the NLP scenario. The key challenge is that tokens are typically represented as discrete embeddings, and adding continuous noise is not meaningful in this context.
However, we note that the attacks still experimentally work against both vision and NLP data. We have conducted comprehensive experiments across 4 NLP datasets (IMDB, Yelp, Twitter, Finance), 4 models (BERT, RoBERTa, GPT-1, DistilBERT), and 3 LDP algorithms (GRR, RAPPOR, dBitFlipPM). The results indicate that privacy risks persist even for large language models (LLMs), depending on the privacy budget (https://imgur.com/a/Wt1nlno).
## #3
`no Impact Statements in the paper`
Thanks for the comment. Due to the character limit, we will include an impact statement in the revised manuscript.
## #4
`Figure 9 appears to be unclear or possibly incorrect`
We have mistakenly put that y-axis there, thanks for pointing this out. The correct figure can be found at https://imgur.com/nBpVEbB
We hope our responses have addressed your concerns sufficiently, and we are happy to address any follow-up questions you might have for us.
---
[1] Nguyen et al. Active membership inference attack under local differential privacy in federated learning. AISTATS 2023.
[2] Hu et al. Does differential privacy really protect federated learning from gradient leakage attacks? IEEE TMC, 2024.
[3] Naseri et al. Local and central differential privacy for robustness and privacy in federated learning. NDSS 2022
[4] Thudi et al. From differential privacy to bounds on membership inference: Less can be more. TMLR 2024.
[5] Yeom et al. Privacy risk in machine learning: Analyzing the connection to overfitting. CSF 2018. | Summary: The paper "Theoretically Unmasking Inference Attacks Against LDP-Protected Clients in Federated Vision Models" investigates the vulnerability of federated learning (FL) systems, particularly those protected by Local Differential Privacy (LDP), to Active Membership Inference (AMI) attacks. The authors derive theoretical lower bounds for the success rates of low-polynomial-time AMI attacks that exploit vulnerabilities in fully connected (FC) layers and self-attention mechanisms in vision models like ResNet and Vision Transformers (ViTs). The paper demonstrates that even with LDP protection, privacy risks persist depending on the privacy budget, and the noise required to mitigate these attacks significantly degrades model utility. The authors provide both theoretical analysis and practical evaluations, confirming that AMI attacks can achieve high success rates even under stringent LDP protection.
Claims And Evidence: The claims made in the paper are supported by clear and convincing evidence. The authors provide theoretical proofs for the lower bounds of attack success rates under LDP protection (Theorems 1, 2, and 3) and validate these claims through extensive experiments on synthetic and real-world datasets (CIFAR10 and ImageNet). The experimental results align with the theoretical predictions, showing that AMI attacks can achieve high success rates even when LDP is applied, especially for smaller privacy budgets.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem at hand. The authors focus on two types of AMI attacks: one exploiting FC layers and another exploiting self-attention mechanisms in transformer-based models. The evaluation is conducted on both synthetic datasets (one-hot and spherical data) and real-world datasets (CIFAR10 and ImageNet), using state-of-the-art models like ResNet and ViTs. The use of LDP mechanisms (BitRand and OME) is well-justified, and the evaluation criteria (attack success rates and model utility degradation) are relevant to assessing the trade-off between privacy and utility in FL systems.
Theoretical Claims: The theoretical claims are well-supported by detailed proofs provided in the appendices. The authors derive lower bounds for the success rates of AMI attacks under LDP protection (Theorems 1 and 3) and an upper bound for the advantage of the adversary (Theorem 2). The proofs are rigorous and rely on established concepts in differential privacy and attention mechanisms. The theoretical analysis is a significant contribution, as it provides a formal understanding of the vulnerabilities in LDP-protected FL systems.
Experimental Designs Or Analyses: The experimental designs and analyses are sound and well-executed. The authors conduct experiments on synthetic and real-world datasets, using both FC-based and attention-based AMI attacks. The results are consistent with the theoretical predictions, showing that AMI attacks can achieve high success rates even under LDP protection, especially for smaller privacy budgets. The authors also explore the impact of hyperparameters (e.g., β) on the attack success rates, providing additional insights into the robustness of the attacks.
Supplementary Material: The supplementary material includes detailed proofs for the theoretical claims, descriptions of the security games, and implementation details of the AMI attacks. The appendices provide a thorough explanation of the FC-based and attention-based adversaries, as well as the impact of LDP mechanisms on data separation. The supplementary material is well-organized and enhances the understanding of the main paper.
Relation To Broader Scientific Literature: The paper builds on prior work in federated learning, differential privacy, and membership inference attacks. It extends the theoretical understanding of AMI attacks in FL systems, particularly under LDP protection, which has not been extensively studied in prior literature. The authors reference relevant works on LDP, FL, and AMI attacks, and their contributions are well-situated within the broader context of privacy-preserving machine learning.
Essential References Not Discussed: The paper adequately covers the relevant literature, but it could benefit from a discussion of recent advancements in privacy-preserving techniques beyond LDP, such as secure multi-party computation (SMPC) or homomorphic encryption, which are also used in FL systems. Additionally, the paper could discuss recent work on adversarial robustness in FL, as this is closely related to the problem of inference attacks.
Other Strengths And Weaknesses: Strengths:
* The paper provides a rigorous theoretical analysis of AMI attacks under LDP protection, which is a significant contribution to the field.
* The experimental results are comprehensive and validate the theoretical claims, demonstrating the practical implications of the findings.
* The paper addresses an important gap in the literature by focusing on the vulnerabilities of LDP-protected FL systems, which are often assumed to be secure.
Weaknesses:
* The paper could benefit from a broader discussion of alternative privacy-preserving techniques beyond LDP, such as SMPC or homomorphic encryption, to provide a more comprehensive view of the privacy-utility trade-off in FL systems.
* The impact of different LDP mechanisms (e.g., BitRand vs. OME) on the attack success rates could be explored in more depth, as the current analysis focuses primarily on the theoretical bounds.
Other Comments Or Suggestions: None
Questions For Authors: 1. The paper focuses on LDP as the primary privacy-preserving mechanism. Have the authors considered other privacy-preserving techniques, such as secure multi-party computation (SMPC) or homomorphic encryption, and how they might impact the success rates of AMI attacks?
2. The paper discusses the impact of the hyperparameter β on the success rates of attention-based AMI attacks. Could the authors provide more details on how
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## #1
`Have the authors considered other privacy-preserving techniques, such as secure multi-party computation (SMPC) or homomorphic encryption, and how they might impact the success rates of AMI attacks?`
We thank reviewer eoeX for their insightful comments. First, we’d like to clarify that secure aggregation (e.g., Secure Multi-Party Computation, Homomorphic Encryption) and Local Differential Privacy (LDP) are orthogonal research directions. SMPC/HE primarily aims to conceal local gradients from the server during aggregation, ensuring no individual gradient is exposed. LDP focuses on preventing local gradients from revealing membership information about specific data points by adding noise to the clients' local data.
Our threat model assumes an actively dishonest server who has access to the local gradients of clients, and conducts the proposed AMI attacks based on the local gradients. Even though using SMPC or HE could potentially conceal such info from the server, previous research has shown that an actively dishonest adversary can circumvent secure aggregation protocols [1,2] to reconstruct the targeted client's gradients. Hence, even with secure aggregation, our threat model is still applicable when the server uses the above attacks to get around it and access local gradients before conducting the AMI attacks. Therefore, assuming that the dishonest server successfully circumvents secure aggregation, such techniques as SMPC or HE do not impact the success rates or the theoretical analysis of our proposed AMI attacks.
## #2
`The paper could benefit from a broader discussion of alternative privacy-preserving techniques beyond LDP`
Thanks for the suggestion. Due to the character limit in the response, we will include a discussion on secure aggregation and adversarial robustness in the revised manuscript.
## #3
`The impact of different LDP mechanisms (e.g., BitRand vs. OME) on the attack success rates could be explored in more depth, as the current analysis focuses primarily on the theoretical bounds.`
To further explore the impact of different LDP mechanisms on the attack's success rate, in addition to BitRand and OME, we have conducted extra experiments on 3 other LDP algorithms, namely GRR [3], RAPPOR [4] and Microsoft's dBitFlipPM [5]. In short, we see that privacy risks persist across all tested LDP mechanisms for both FC-based AMI and Attention-based AMI, depending on the privacy budget.
We compare the attack success rate of AMI-FC across three different datasets and five distinct LDP mechanisms in this anonymized imgur link https://imgur.com/cxtqVHv. We also plot the success rate of AMI-FC w.r.t LDP mechanisms against the theoretical upper/lower bounds and privacy-utility trade-off at https://imgur.com/pZfx3Y3.
For Attention-based AMI, we plot the result w.r.t. the 3 new LDP mechanisms at https://imgur.com/aDptuQq. In addition to vision datasets, we have extended our experiments to NLP datasets. The reviewer can refer to Response #2 to Reviewer x8f7 for detailed results. To explore more in depth the impact of different LDP mechanisms on the attack success rates, we also conduct an ROC analysis of the attack success rates (on IMDB dataset). The results are posted in https://imgur.com/l3Byt79. GRR shows the worst privacy with attack AUCs of 0.946 ($\epsilon=6$) and 1.0 ($\epsilon=8$), while RAPPOR and dBitFlipPM provide stronger protection—achieving near-random performance at $\epsilon=6$ and moderate resistance at $\epsilon=8$. Zoomed-in plots show that GRR leaks sensitive signals even at low FPRs and high TPRs, while RAPPOR and dBitFlipPM maintain partial robustness in these critical regions.
## #4
`Could the authors provide more details on how`
Unfortunately, this question seems to have been cut off. Could the reviewer clarify what specific information about $\beta$ they would like more details on? In the meantime, we would like to reiterate that $\beta$ controls the extent to which the attention heads memorize the target pattern. Larger $\beta$ values can negatively impact the attack's success rate (Figure 9) against LDP-protected data. Further explanations are provided in Remark 6. However, $\beta$ also needs to be sufficiently large to satisfy the condition in Equation 5.
We hope our responses have addressed your questions sufficiently. We are happy to discuss further if you have follow-up questions for us.
---
[1] Dario Pasquini el al. Eluding secure aggregation in federated learning via model inconsistency. CCS 2022.
[2] Sanjay Kariyappa et al. Cocktail party attack: Breaking aggregation-based privacy in federated learning using independent component analysis. ICML 2023
[3] Arijit Chaudhuri and Rahul Mukerjee. Randomized response: Theory and techniques. Routledge, 2020.
[4] Ulfar Erlingsson, Vasyl Pihur, and Aleksandra Korolova. Rappor: Randomized aggregatable privacy-preserving ordinal response. CCS 2014
[5] Bolin Ding et al. Collecting telemetry data privately. NeurIPS 2017
---
Rebuttal Comment 1.1:
Comment: Sorry about Q3; I intended to ask how the β was chosen. My other questions are clarified.
---
Reply to Comment 1.1.1:
Comment: First, we'd like to thank reviewer eoeX for engaging with our rebuttal and we are glad to have clarified all the other questions. We note that while larger $\beta$ values reduce the attack success rate, $\beta$ still needs to be sufficiently large to satisfy the condition specified in Equation (5). Given the assumption that the server has knowledge of the client's data distribution (as outlined in the AMI threat models in Section 3.1), the server can simulate the client’s data to compute a minimally sufficient value for $\beta$.
When doing experiments, we found that setting $\beta$ to a reasonably small value (e.g., $0.01$) yielded consistently good results across realistic $\varepsilon$ values and datasets/LDP mechanisms. Unless stated otherwise, we select a fixed $\beta = 0.01$ in our experiments. As illustrated in Figure 9, for LDP-protected data, $\beta=0.01$ generally achieves better attack success rates, particularly under small $\varepsilon$.
We appreciate the time and effort you have contributed to reviewing our paper. We are glad about your positive evaluation that our paper makes *a significant contribution to the field* and addresses *an important gap in the literature* with demonstrated *practical implications of the findings*, and we hope the score reflects this.
Sharp Optimality of Simple, Plug-in Estimation of the Fisher Information of a Smoothed Density | Accept (poster) | Summary: This paper analyzes the minimax rate for estimation of the Fisher Information of a 1-dimensional Gaussian-smoothed density that satisfy an alpha-Holder condition from samples. It shows that variants of the simple plug-in estimator achieves the minimax rate, which varies depending on the amount of Gaussian smoothing, and proves matching lower bounds. It also shows the implications of this result on estimation of mutual information and entropy.
Claims And Evidence: Claims are supported by rigorous proof
Methods And Evaluation Criteria: No empirical results
Theoretical Claims: Yes, they seem to be correct -- I checked all of them.
Experimental Designs Or Analyses: No experiments.
Supplementary Material: No supplemental material
Relation To Broader Scientific Literature: Several works have looked at estimation of the mean and location of Gaussian smoothed densities with error rate depending on the Fisher Information, which relates directly to the results in this paper (see 1, 2, 3 below). There have been many works looking at density estimation of smoothed densities (for example Goldfeld et al 2020), which are mentioned. There are also previous works on Fisher Information estimation, which are also mentioned.
1) Finite-Sample Symmetric Mean Estimation with Fisher Information Rate. Shivam Gupta, Jasper C.H. Lee, and Eric Price. COLT 2023
2) High-Dimensional Location Estimation via Norm Concentration for Subgamma Vectors. Shivam Gupta, Jasper C.H. Lee, and Eric Price. ICML 2023
3) Finite-Sample Maximum Likelihood Estimation of Location. Shivam Gupta, Jasper C.H. Lee, Eric Price, and Paul Valiant. NeurIPS 2022
Essential References Not Discussed: The recent works below analyze mean and location estimation of Gaussian-smoothed densities, which is directly related to the present paper, and so, should be discussed.
1) Finite-Sample Symmetric Mean Estimation with Fisher Information Rate. Shivam Gupta, Jasper C.H. Lee, and Eric Price. COLT 2023
2) High-Dimensional Location Estimation via Norm Concentration for Subgamma Vectors. Shivam Gupta, Jasper C.H. Lee, and Eric Price. ICML 2023
3) Finite-Sample Maximum Likelihood Estimation of Location. Shivam Gupta, Jasper C.H. Lee, Eric Price, and Paul Valiant. NeurIPS 2022
Other Strengths And Weaknesses: It would be nice to include more quantitative intuition for the rates obtained -- currently the lower and upper bounds are explained well qualitatively, but a clear quantitative explanation is lacking.
Other Comments Or Suggestions: N/A
Questions For Authors: - Is there any clear and concise quantitative explanation that you can provide for the rates obtained?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your helpful review!
__Essential references not discussed:__
Thanks for pointing these very relevant papers out! We plan to add the following text (perhaps with some modification to obey space constraints) to the revised manuscript.
"The smoothed Fisher information has also been recently shown to play a critical role in a finite-sample analysis of the fundamental statistical task of mean estimation (Gupta et al., 2022; Gupta et al., 2023; Gupta et al., 2023). Gupta et al. (2023) make the following insightful observation. Consider the problem of estimating the mean $\theta$ of a density $f$ with variance $\sigma^2$, given i.i.d. samples $X_1,...,X_n$. The sample mean $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$ is an obvious choice of estimator. However, for some densities $f$, the mean $\theta$ might also coincide with the *location*, and some location estimators might outperform the sample mean. For example, consider the Laplace distribution centered at $\theta$ with variance $2$, which has density $f(x) \propto e^{-|x-\theta|}$. Note $\theta$ is both the mean and the location (which is also the median in this case). It turns out that the sample median, expressed as $X_{(n/2)}$ in order-statistic notation, beats $\bar{X}$ since $\sqrt{n}(X_{(n/2)} - \theta) \implies N(0, 1)$ whereas $\sqrt{n}(\bar{X} - \theta) \implies N(0, 2)$. The asymptotic variance of the sample median is half that of the sample mean; in fact, the sample median is the maximum likelihood estimator and is thus optimal for estimation of $\theta$ in this example.
More generally, better asymptotic variance than $\sigma^2$ can be achieved in the location estimation problem. The location estimation problem is the problem of estimating the ground truth location parameter $\theta^*$ in the parametric family $\{f(x-\theta)\}_{\theta \in \mathbb{R}}$, where $f$ is some known density. It is classical that the maximum likelihood estimator is asymptotically normal, centered around $\theta^*$, with variance given by the reciprocal of the Fisher information of $f$. This variance can be substantially smaller than $\sigma^2$. Gupta et al. (2023) ask the intriguing question of whether it is possible, in the case of an *unknown* density $f$ that is symmetric about its mean, to attain a Fisher-information-like speedup in *finite samples*.
Gupta et al. (2023) construct an estimator $\hat{\theta}$ which, with probability at least $1-\delta$ and with $n \gtrsim \log\left(1/\delta\right)$ samples, achieves $|\hat{\theta} - \theta| \leq (1+\eta)\sqrt{\frac{2\log(2/\delta)}{n \mathcal{I}(f*\varphi\_t)}}$ for $t \asymp \sigma^2$ and $\eta = (\log(1/\delta)/n)^{1/13}$. Specifically, their bound involves the smoothed Fisher information of $f$ and asserts a speedup since $\frac{1}{\mathcal{I}(f*\varphi\_t)} \leq \sigma^2 + t$. Their results build upon earlier work (Gupta et al., 2022; Gupta et al., 2023) which shows that the smoothed Fisher information is a fundamental quantity in location estimation.
The work of Gupta et al. (2023) gives a point estimator which enjoys faster convergence rates. The natural problem to consider next is hypothesis testing, or equivalently, construction of confidence intervals. When $f$ is not known, the error bound of $\hat{\theta}$ is not computable since $\mathcal{I}(f*\varphi\_t)$ is itself not known. Consequently, it is desirable to estimate the smoothed Fisher information $\mathcal{I}(f*\varphi\_t)$ to address these subsequent problems."
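(As a numerical aside, not part of the planned manuscript text above: the Laplace median-versus-mean claim is easy to verify by simulation, e.g. with the sketch below.)

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, theta = 1_000, 2_000, 0.0

# Laplace with scale 1 has density proportional to e^{-|x - theta|} and variance 2.
samples = rng.laplace(loc=theta, scale=1.0, size=(trials, n))

# Rescaled empirical variances of the two estimators across trials.
var_median = n * np.median(samples, axis=1).var()  # approaches 1 = 1/I(f)
var_mean = n * samples.mean(axis=1).var()          # approaches 2 = Var(f)
print(round(var_median, 2), round(var_mean, 2))
```

The rescaled variance of the sample median should concentrate near $1 = 1/\mathcal{I}(f)$, while that of the sample mean concentrates near $2 = \mathrm{Var}(f)$, matching the asymptotic claim.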
__Questions for authors:__
There is indeed some quantitative intuition that can be given by looking at the estimation rates of the plugged-in targets. Estimation of the derivative is the harder problem, and is thus the rate-dominating step.
In the high noise regime, derivative estimation is done by truncating $\partial_x \hat{p}(x, t) = \frac{1}{n} \sum_{i=1}^{n} \varphi_t'(x-\mu_i)$, and the truncation only improves the estimation error. This is exactly a kernel density estimator for the derivative, with kernel $\varphi_t'$ and bandwidth $\sqrt{t}$. However, observe that our target is not $f'$ but actually $\partial_x p = f*\varphi_t'$, so no bias is incurred. Hence, the error is given by the variance, which is well known to be of order $\frac{1}{n t^{3/2}}$ in squared loss, yielding $\frac{1}{\sqrt{n}\,t^{3/4}}$ in absolute loss.
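As a small numerical illustration of this unbiasedness (our own sketch, using Gaussian data $f = N(0,1)$ so that the target $\partial_x p$ has a closed form, and omitting the truncation step):

```python
import numpy as np

rng = np.random.default_rng(0)
n, t, x = 100_000, 0.5, 0.7
mu = rng.normal(0.0, 1.0, size=n)   # samples from f = N(0, 1)

def phi_prime(u, t):
    """Derivative of the Gaussian density with variance t."""
    return -u / t * np.exp(-u**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

# Plug-in estimator of the derivative of the smoothed density at x.
est = phi_prime(x - mu, t).mean()

# Ground truth: f * phi_t = N(0, 1 + t), so d/dx p(x, t) = -x/(1+t) * p(x, t).
s = 1.0 + t
truth = -x / s * np.exp(-x**2 / (2 * s)) / np.sqrt(2 * np.pi * s)
print(round(est, 4), round(truth, 4))
```

With $n = 10^5$ samples the plug-in estimate closely matches the closed-form derivative, consistent with the $\frac{1}{\sqrt{n}\,t^{3/4}}$ rate.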
In the low noise regime, recall we express via integration by parts $\partial_x p(x, t) = (f*\varphi_t)'(x) = f(-1)\varphi_t(x+1) - f(1)\varphi_t(x-1) + \int_{-1}^{1} f'(\mu) \varphi_t(x-\mu) d\mu$. Ignoring the truncation, we estimate by plugging in estimators for the unknown quantities. In Theorem 2.2, the term $n^{-\frac{\alpha-1}{2\alpha+1}}$ comes from plugging in $\hat{f}'$. The term $\frac{n^{-\frac{\alpha}{2\alpha+1}}}{\sqrt{t}}$ comes from plugging in $\hat{f}(1)$ and $\hat{f}(-1)$. The factor $\frac{1}{\sqrt{t}}$ comes from $\varphi_t(x+1) \asymp \frac{1}{\sqrt{t}}$ for $|x+1| \lesssim \sqrt{t}$ (and likewise with $\varphi_t(x-1)$ for $|x-1| \lesssim \sqrt{t}$). | Summary: This paper studies the problem of estimating the Fisher information of smoothed probability densities falling in the $\alpha$-Holder smooth class. The authors derive minimax rate bounds for the plug-in estimator, showing that a simple plug-in estimator is optimal for smoothed probability densities. The convergence results are further extended for mutual information and entropy estimation.
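This integration-by-parts identity can also be checked numerically; below is an illustrative sketch (our own, with the hypothetical density $f(\mu) = (1+\mu)/2$ on $[-1,1]$, so that $f' = 1/2$, $f(-1) = 0$, and $f(1) = 1$):

```python
import numpy as np

t, x, h = 0.3, 0.4, 1e-5

def phi(u):
    """Gaussian density with variance t."""
    return np.exp(-u**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def trap(y, grid):
    """Trapezoid rule (avoiding version-specific numpy helpers)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(grid)) / 2.0)

mu = np.linspace(-1.0, 1.0, 20_001)
f = (1.0 + mu) / 2.0            # density on [-1, 1] with f' = 1/2, f(-1) = 0, f(1) = 1

def p(y):
    """Smoothed density (f * phi_t)(y) by quadrature."""
    return trap(f * phi(y - mu), mu)

# LHS: numerical derivative of p; RHS: f(-1) phi_t(x+1) - f(1) phi_t(x-1) + int f' phi_t.
lhs = (p(x + h) - p(x - h)) / (2 * h)
rhs = 0.0 * phi(x + 1) - 1.0 * phi(x - 1) + trap(0.5 * phi(x - mu), mu)
print(round(lhs, 5), round(rhs, 5))
```

The two sides agree up to quadrature and finite-difference error, confirming that the boundary terms $f(\pm 1)\varphi_t(x \mp 1)$ enter with exactly the signs in the display above.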
Claims And Evidence: The main claim of the paper: The plug-in estimator is optimal for smoothed probability densities, is well-supported by thorough theoretical analysis.
Methods And Evaluation Criteria: NA
Theoretical Claims: I went through the proofs quickly, and they look good to me. I have not checked each detail in the proofs.
Experimental Designs Or Analyses: NA
Supplementary Material: NA
Relation To Broader Scientific Literature: The main result is of significant importance for designing efficient approximations for information quantities including Fisher information, mutual information and entropy. It complements previous results in estimators for unsmoothed probability densities.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strength:
The presented results are new as far as I know. The main claim is of significant importance to the community to design efficient algorithms for the estimation of information quantities.
Weakness:
There are some restrictions on applicable probability density, e.g. only bounded densities are considered. There is also a gap for $c < t < C$.
Other Comments Or Suggestions: It would be more convincing if the authors could provide some empirical results to verify their main claim, e.g. showing how the estimation precision changes with $t$ and $\alpha$.
Questions For Authors: What will happen if the smoothing kernel is not Gaussian? e.g. $\phi_t(x) \propto \frac{1}{t(x^2 + 1)}$? Do the main results still hold for such kinds of smoothing?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for the thoughtful report!
__Other suggestions 1:__ Reviewers pEyQ and arrG also asked about computation, which is related to your comment. Let us first describe how computation of the estimators $\widehat{\mathcal{I}}\_t$ can be done, since it involves an integral over an infinite interval.
We can approximate the integral and still obtain the claimed statistical rate. The idea is simple: we truncate the integral to a large enough bounded interval (whose length grows with $n$). The error from ignoring the complement of the interval turns out to be negligible. For illustration, let us just discuss the high and low noise regimes, where we plug in estimators of the density and its derivative. A similar argument will hold for the very high noise regime. Let $R > 0$ be a hyperparameter to be tuned later. We will approximate by
$$
\widehat{\mathcal{I}}\_{t, R} := \int_{-R}^{R} \frac{\widehat{\partial_x p}^\varepsilon(x, t)^2}{\hat{p}^\varepsilon(x, t)} dx.
$$
We can approximate this integral by Monte Carlo,
$$
\widehat{\mathcal{I}}\_{t, R}^{M} := \frac{2R}{M}\sum_{i=1}^{M} \frac{\widehat{\partial_x p}^\varepsilon(X_i, t)^2}{\hat{p}^\varepsilon(X_i, t)}
$$
where $\{X\_i\}\_{i=1}^{M}$ are $M$ i.i.d. points drawn uniformly in $[-R, R]$. The estimation error can be bounded as
$$
E\left(\left|\widehat{\mathcal{I}}\_{t, R}^{M} - \mathcal{I}\_t\right|\right) \leq E\left(\left|\widehat{\mathcal{I}}\_{t, R}^{M} - \widehat{\mathcal{I}}\_{t, R}\right|\right) + E\left(\left|\widehat{\mathcal{I}}\_{t, R} - \widehat{\mathcal{I}}\_{t}\right|\right) + E\left(\left|\widehat{\mathcal{I}}\_{t} - \mathcal{I}\_{t}\right|\right).
$$
The last term is exactly the statistical rate we want. The first term can be made of smaller order by taking $M$ sufficiently large. It remains to argue about the second term. To do so, consider that for $|x| > 1$, we have from calculations similar to those employed frequently in the paper (e.g. using Lemmas A.2 and A.3)
$$
\frac{\left|\widehat{\partial_xp}^{\varepsilon}(x, t)\right|^2}{\hat{p}^\varepsilon(x, t)} \leq \frac{\overline{\varepsilon}'(x, t)^2}{\underline{\varepsilon}(x, t)} \lesssim \frac{\varphi_t(x-1)^2 + \varphi_t(x+1)^2 + \underline{\varepsilon}(x, t) \frac{e^{-\frac{(|x|-1)^2}{2t}}}{\sqrt{t}}}{\underline{\varepsilon}(x, t)} \lesssim \left(1 \vee \frac{|x|-1}{\sqrt{t}}\right) \cdot \frac{1}{t} e^{-\frac{(|x|-1)^2}{2t}} + \frac{e^{-\frac{(|x|-1)^2}{2t}}}{\sqrt{t}}.
$$
Therefore, if we pick $R \geq 1 + \sqrt{Ct\log(nt)}$ for a large universal constant $C$, we have
$$
\left|\widehat{\mathcal{I}}\_{t, R} - \widehat{\mathcal{I}}\_t\right| \lesssim \int_{|x| > R} \left(1 \vee \frac{|x|-1}{\sqrt{t}}\right) \cdot \frac{1}{t} e^{-\frac{(|x|-1)^2}{2t}} + \frac{e^{-\frac{(|x|-1)^2}{2t}}}{\sqrt{t}} dx \leq \frac{1}{(nt)^{\tilde{C}}}
$$
for some universal constant $\tilde{C}$ which can be taken to be sufficiently large by taking $C$ sufficiently large. Therefore, the error incurred by approximating $\widehat{\mathcal{I}}\_t$ by $\widehat{\mathcal{I}}\_{t, R}$ is dominated by the desired statistical rate.
The estimators in Section 4 (e.g. Theorem 4.2) can be estimated by the same Monte Carlo strategy. It can also be shown that the error of the complement can be made negligible.
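For concreteness, here is a minimal Python sketch of the truncation-plus-Monte-Carlo recipe above. It is illustrative only: it plugs in the raw smoothed empirical density $\frac{1}{n}\sum_i \varphi_t(x - \mu_i)$ (not the truncated/regularized estimators $\hat{p}^\varepsilon$ analyzed in the paper), uses Uniform$[-1,1]$ data, and the constant in the truncation radius is not tuned.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x, t):
    # Gaussian density with mean 0 and variance t
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def fisher_info_mc(mu, t, R, M=10_000, rng=rng):
    """Monte Carlo estimate of int_{-R}^{R} (d_x p)^2 / p dx, where
    p(x, t) = (1/n) sum_i phi_t(x - mu_i) is the plug-in smoothed density."""
    x = rng.uniform(-R, R, size=M)
    d = x[:, None] - mu[None, :]           # (M, n) pairwise differences
    k = phi(d, t)
    p = k.mean(axis=1)                     # plug-in estimate of p(x, t)
    dp = (-d / t * k).mean(axis=1)         # plug-in estimate of d_x p(x, t)
    return 2 * R * np.mean(dp**2 / p)      # (2R/M) * sum of integrand values

n, t = 400, 0.5
mu = rng.uniform(-1, 1, size=n)            # i.i.d. draws from Uniform[-1, 1]
R = 1 + np.sqrt(10 * t * np.log(n * t))    # truncation radius R >= 1 + sqrt(C t log(nt))
I_hat = fisher_info_mc(mu, t, R)
print(I_hat)
```

This only illustrates the computational strategy; the statistical guarantees in the paper are for the regularized estimators, not this raw plug-in.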
We did a numerical experiment computing our Fisher information estimators on data sampled from the uniform distribution on $[-1, 1]$. Unfortunately, it is not clear to us how to present that figure in our response here, as it seems there is no capability of attaching images in author responses.
__Question for authors 1:__ Thanks for the great question! We imagine you are perhaps thinking about convolving with a Cauchy density instead of a Gaussian density, i.e. that the kernel was meant to be written as $\phi_t(x) \propto \frac{1}{\left(x/\sqrt{t}\right)^2 + 1}$. It's a nice question, as the Cauchy distribution has no moments.
From a methodological point of view, we feel that the same plug-in strategy can be straightforwardly extended to the Cauchy case. For example, in the high-noise regime it is plausible to truncate $\frac{1}{n}\sum_{i=1}^{n} \phi_t(x-\mu_i)$ and $\frac{1}{n}\sum_{i=1}^{n} \phi_t'(x-\mu_i)$ for use as estimators of $p(x, t)$ and $\partial_x p(x, t)$ respectively (and similar extensions for the other two regimes). To us, the heavier tail behavior does not appear to cause major obstacles. In fact, most of our intuition treats $\phi_t$ as any kernel in a kernel-density estimator, in which case the precise form (Gaussian or Cauchy) doesn't really seem to matter. From the side of rigorous, mathematical analysis, it would seem the arguments would need to be modified to handle the Cauchy case. Our arguments frequently make use of the exponential tail of the Gaussian density to argue various remainder terms can be neglected. It is not clear to us whether serious changes to the broad proof strategy would be needed, or whether just careful, albeit tedious, technical modifications would suffice.
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification about Cauchy kernels. As for the experiments, including a table about some critical points in that figure should suffice for verification. Anyway, it's fine not to include empirical results for strong theoretical papers like this one. I will keep my current score. | Summary: This paper studies estimation of the Fisher information $\mathcal{I}(f * \psi_t)$ of a smoothed density $\psi_t$, where $\psi_t$ is the Gaussian kernel of bandwidth $t$, given IID samples from a density $f$. Plug-in estimators are proposed, based on appropriately truncated and smoothed estimates of $f * \psi_t$ and its spatial derivative. The paper first presents upper bounds for these estimators, distinguishing between three regimes of the noise magnitude $t$. The paper then presents matching lower bounds for most cases. Finally, using information theoretic identities relating the Fisher information to mutual information and entropy in certain cases, the paper presents and bounds the error of estimators for those latter quantities.
Claims And Evidence: As noted below, I am confused about why Corollary 4.4 holds.
Methods And Evaluation Criteria: The paper does not include any experiments.
Theoretical Claims: I did not read the proofs in the supplement, although the high-level descriptions in the main paper generally made sense to me.
Experimental Designs Or Analyses: The paper does not include any experiments.
Supplementary Material: I did not read the Supplementary Material.
Relation To Broader Scientific Literature: The paper lies in the intersection of classical work on nonparametric estimation of functionals of smooth probability densities and more recent work on estimating densities after Gaussian smoothing.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: There are two main limitations of the proposed estimators that seem to make them unusable in practice:
1) The construction of the estimators assumes knowledge of the constants $c_d$, $C_d$, $L$, and $\alpha$. However, these are rarely known in practice. Is it possible to relax this assumption (e.g., by using some surrogates for $c_d$ or $C_d$ in terms of other known quantities) in a way that does not affect the convergence rates? Or is there at least some practical way to get, e.g., the right order of magnitude for these quantities?
2) The mutual information estimator used in Theorem 4.2 requires integrating the Fisher information estimator $\widehat{\mathcal{I}}_s$ over $s$ from $t$ to $\infty$. Can this integral actually be computed? It seems unlikely to me, given the complex dependence of the estimator on $s$, but there might be some tricks that make this possible. Alternatively, is there a computable approximation that can be shown to converge at the claimed rate? Are any additional assumptions needed to show this?
3) Related to the above points, the paper would be made stronger if it demonstrated that the proposed estimators could actually be computed and used in a real-world, or at least simulated, problem.
Other Comments Or Suggestions: 1) Typo: Eq. (2): "for all $x, y \in (-1, 1)$" should be "$\mu, \mu' \in (-1, 1)$"
Questions For Authors: 1) The paper only seems to discuss the 1-dimensional case of a density on $\mathbb{R}$; how do results and analysis change for a multi-variate density on $\mathbb{R}^d$?
2) I am confused about why Corollary 4.4 holds. The integral $\int_0^t \mathcal{I}_s ds$ involves estimating the Fisher information in the low-noise regime $s \leq n^{-\frac{2}{2\alpha+1}}$, where Theorem 2.2 gives only a nonparametric convergence rate of order $n^{-\frac{\alpha-1}{2\alpha+1}} + n^{-\frac{\alpha}{2\alpha+1}}/\sqrt{t}$. From this, how can we get the parametric rate $1/\sqrt{n}$? This point has to be clarified for me to accept the paper.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for the great feedback!
__Questions for Authors 1:__ Reviewer pEyQ asked the same question; please see our response. Thanks!
__Questions for Authors 2:__ Thank you for the question, and we agree this point could have, and should have, been made clearer in the paper.
As you point out, we estimate the integral $\int\_{0}^{t} \mathcal{I}\_s ds$ by the plug-in $\int\_{0}^{t} \widehat{\mathcal{I}}\_s ds$. Since your question specifically asks about the low-noise regime, let us focus our discussion by examining $t \lesssim n^{-\frac{2}{2\alpha+1}}$. Then for all $s \leq t$, we have from Theorem 2.2 that $E\left(\left|\widehat{\mathcal{I}}_s - \mathcal{I}_s\right|\right) \lesssim n^{-\frac{\alpha-1}{2\alpha+1}} + \frac{n^{-\frac{\alpha}{2\alpha+1}}}{\sqrt{s}}$. Since $s \lesssim n^{-\frac{2}{2\alpha+1}}$, it follows that
$$
n^{-\frac{\alpha-1}{2\alpha+1}} + \frac{n^{-\frac{\alpha}{2\alpha+1}}}{\sqrt{s}} \asymp n^{-\frac{\alpha}{2\alpha+1}} \cdot n^{\frac{1}{2\alpha+1}} + \frac{n^{-\frac{\alpha}{2\alpha+1}}}{\sqrt{s}} \lesssim n^{-\frac{\alpha}{2\alpha+1}} \cdot s^{-1/2} + \frac{n^{-\frac{\alpha}{2\alpha+1}}}{\sqrt{s}} \asymp \frac{n^{-\frac{\alpha}{2\alpha+1}}}{\sqrt{s}}.
$$
Therefore, we have shown $E\left(\left|\widehat{\mathcal{I}}\_s - \mathcal{I}\_s\right|\right) \lesssim \frac{n^{-\frac{\alpha}{2\alpha+1}}}{\sqrt{s}}$. We can now bound the estimation error. Consider,
$$
E\left(\left|\int\_{0}^{t} \widehat{\mathcal{I}}\_s ds - \int\_{0}^{t} \mathcal{I}\_s ds \right|\right) \leq \int\_{0}^{t} E\left(\left|\widehat{\mathcal{I}}\_s - \mathcal{I}\_s\right|\right) ds \\\\
\lesssim \int\_{0}^{t} \frac{n^{-\frac{\alpha}{2\alpha+1}}}{\sqrt{s}}ds \\\\
=\left.\left(n^{-\frac{\alpha}{2\alpha+1}}\right) \cdot 2\sqrt{s}\right|\_{s = 0}^{t} \\\\
\asymp \sqrt{t} \cdot n^{-\frac{\alpha}{2\alpha+1}}.
$$
Since $t \lesssim n^{-\frac{2}{2\alpha+1}}$, we have
$$
\sqrt{t} \cdot n^{-\frac{\alpha}{2\alpha+1}} \lesssim n^{-\frac{1}{2\alpha+1}} \cdot n^{-\frac{\alpha}{2\alpha+1}} \asymp n^{-\frac{\alpha+1}{2\alpha+1}}.
$$
Since $\frac{\alpha+1}{2\alpha+1} \geq \frac{1}{2}$, it follows $n^{-\frac{\alpha+1}{2\alpha+1}} \lesssim \frac{1}{\sqrt{n}}$, and so we have obtained the parametric rate. Thanks again for your helpful question as these clarifications will improve the paper.
__Other suggestions 1:__ Thanks!
__Weakness 1:__ This is a great comment and well taken. As you point out, it suffices to know $c_d, C_d, L$ just up to order without affecting rates. It is well known that $f$ can be estimated in sup-norm with a KDE $\hat{f}$ at rate $||\hat{f} - f||_\infty \lesssim \left(\frac{N}{\log N}\right)^{-\frac{\alpha}{2\alpha+1}}$ with high probability using $N$ samples. This is actually much faster than we need since we only need the order of the unknown constants. For example, set $\hat{C}\_d := \max\_{|\mu| \leq 1} \hat{f}(\mu)$ and $\hat{c}\_d := \min\_{|\mu| \leq 1} \hat{f}(\mu)$, and note we have $\hat{C}\_d \asymp C\_d$ and $\hat{c}\_d \asymp c\_d$ with high probability, even if we only use a constant number of samples to fit $\hat{f}$. Similarly, $L$ can be estimated using a kernel density estimator $\hat{f}'$ of the derivative. Therefore, we can just siphon off a constant number of data points, estimate these constants up to order, and not affect the convergence rate.
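As a rough illustration of this constant-estimation idea, here is a plain-NumPy Gaussian KDE evaluated on a grid. The bandwidth, grid, and the Uniform$[-1,1]$ example are our own illustrative choices, not the paper's prescriptions.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.uniform(-1, 1, size=500)   # held-out draws from f (here f = Uniform[-1, 1])
h = 0.1                                  # KDE bandwidth (illustrative)

grid = np.linspace(-1, 1, 401)
# plain Gaussian KDE: f_hat(mu) = (1/n) sum_i phi_h(mu - X_i)
K = np.exp(-(grid[:, None] - samples[None, :])**2 / (2 * h**2))
f_hat = K.mean(axis=1) / np.sqrt(2 * np.pi * h**2)

C_hat = f_hat.max()   # order-of-magnitude surrogate for C_d = max f
c_hat = f_hat.min()   # order-of-magnitude surrogate for c_d = min f
print(c_hat, C_hat)
```

An analogous KDE of the derivative (differentiating the kernel) would give an order-of-magnitude surrogate for $L$.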
The question about unknown $\alpha$ is more delicate. Estimation of $\alpha$ itself is quite complicated, but perhaps the goal is instead the construction of an adaptive estimator of $\mathcal{I}(f*\varphi_t)$ (i.e. does not require knowledge of $\alpha$) yet still achieves the minimax rate as if it were known.
We believe it may be impossible to modify our methodology to be adaptive. The issue is we make use of a density point estimator in the low noise regime. Namely, we use an $\hat{f}$ with $E(|\hat{f}(\mu) - f(\mu)|^2) \lesssim n^{-\frac{2\alpha}{2\alpha+1}}$ for all $|\mu| \leq 1$. Let us fix a $\mu^* \in (-1, 1)$.
It is a well known result due to Lepski (O. V. Lepskii. *On a problem of adaptive estimation in Gaussian white noise.* Theory of Probability \& Its Applications, 35(3):454-466, 1991) that $\hat{f}$ cannot achieve the minimax rate of estimating the density at $\mu^*$ over the class $\mathcal{F}\_{\alpha_1}$ and simultaneously over $\mathcal{F}\_{\alpha_2}$. In particular, it can be shown that for *any* estimator $\hat{f}$, if $\sup\_{f \in \mathcal{F}\_{\alpha_1}} E(|\hat{f}(\mu^*) - f(\mu^*)|^2) \lesssim n^{-\frac{2\alpha_1}{2\alpha_1+1}}$, then we actually have
\begin{equation*}
\limsup_{n \to \infty} \sup_{f \in \mathcal{F}_{\alpha_2}} n^{\frac{2\alpha_2}{2\alpha_2 + 1}} E(|\hat{f}(\mu^*) - f(\mu^*)|^2) = \infty.
\end{equation*}
Therefore, our strategy seems doomed. It is very interesting to ask if some other approach can work. Thanks very much for pointing it out.
__Weaknesses 2 and 3:__ Reviewer CeFj asked a question about empirics. Please see our response there. Thanks!
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their detailed rebuttals. My main concerns have been addressed, and I have changed my recommendation to 3 (Weak Accept). The main limitation of the paper continues to be the lack of empirical results and some questions about how to actually implement this estimator in practice, so I think the paper would be much stronger if some experiments were added.
I also think adding the discussion on the higher-dimensional case (from the rebuttal to Reviewer pEyQ) will strengthen the paper. | Summary: The paper considers probability densities smoothed by Gaussian noise of variance $t$, and addresses the problem of estimating the Fisher information of the smoothed densities based on a collection of $n$ i.i.d. samples. The Fisher information can be expressed as an integral of the smoothed density and its derivative. The paper proposes a estimator, which is of the “plug-in” type, in the sense that first both the smoothed density and its derivative are estimated, and then plugged in into the Fisher information functional. The way in which the smoothed density and its derivative are proposed to be estimated depends on the variance of the smoothing Gaussian noise, and is different in three different regimes of $t$, and in general, based on properly truncated empirical PDF estimator, or on existing kernel-density estimators. The paper analyzes the expected error of these estimators. The paper also derive minimax lower bounds, which assert its rate-optimality, except in an intermediate regime for $t$ (not small enough, or not large enough), in which no lower bound is proved. The lower bounds are based on the two-point method (Le-Cam) with a proper choice of pair of densities – in the regime of low $t$, densities are sharp, and thus should be distinguishable at the edges of the support ($\pm 1$). The bound is thus based on a pair of distributions different at these edges. In the regime of high $t$, the densities are very smooth, and thus should be distinguished at their center (around $0$). The bound is thus based on a pair of distributions different at the center.
Then, using known identities (I-MMSE, de Bruijn), the estimator for the Fisher information is utilized to estimate the mutual information over the Gaussian channel (with the original density as the density of the input) and the output differential entropy. Both have $O(1/\sqrt{n})$ rates. Finally, some of the proof ideas are highlighted – a perturbation analysis of the functional that determines the Fisher information (Propositions 5.1 and 5.2).
Claims And Evidence: The paper claims that a simple plug-in achieves the minimax rate, with accurate dependency on the Gaussian smoothing variance $t$, then Holder-smoothness parameter of the density $\alpha$ and the number of samples $n$. The analysis of the estimator and the minimax lower bound validate this claim, up to a short interval in $t$, in which the question is open.
That being said, the proposed estimator is not a vanilla plug-in estimator. Indeed, the Fisher information is expressed as a function of the smoothed density and its derivative, and those quantities are estimated and plugged into the functional. However, the Fisher information is represented by different functionals in various regimes for $t$ (especially in the very high noise regime), and the way that the smoothed density and its derivative are estimated is different in each regime (possibly truncated, as smoothed empirical density or via a kernel density estimator). From my perspective the strong theoretical results of the paper are definitely of interest, but the estimator is not very simple.
Methods And Evaluation Criteria: The evaluation method is standard and makes sense– minimax expected error rate for an estimator based on $n$ i.i.d. samples.
Theoretical Claims: The theoretical claims are convincing, and I have verified the claims made in the paper, though the rigorous proofs are fully deferred to the appendix. From a quick overview of the appendix, and especially the techniques used, the proofs also appear to be convincing.
Experimental Designs Or Analyses: Not applicable, the paper is purely theoretical.
Supplementary Material: I have gone over the proofs in the appendix, though not in detail. Beyond the new ideas explained in the body of the paper, the techniques are rather standard. As expected, in the upper bound on the estimation error, the technical aspect is to upper bound the error in the Fisher information due to the error in estimating the density and its derivative, and this boils down to first-order Taylor expansion. In the lower bound, the basic Le-Cam method is used. The KL divergence between the pair of distributions is easily bounded by the chi-square divergence which is easy to compute. The difference in Fisher information is more delicate, as it is related to the second-order term in the Taylor expansion.
Relation To Broader Scientific Literature: The problem of estimating statistical functionals is a classic problem, and the Fisher information is one of the central functionals in statistics. The paper addresses both classic works on the problem (without smoothing) and a recent line of work considering smoothed densities.
In a broader context, a question about the motivation of the current work: given that the Fisher information is discontinuous in $t$, and as the actual interest is in the Fisher information of the original density, why is it of interest to estimate the smoothed version to begin with? The motivation question also comes to mind given the fact that increasing $t$ from zero may actually make the problem more difficult.
Finally, given the interest in diffusion models, there are many papers addressing the problem of score estimation. As Fisher information is the variance of the score, it would be interesting to relate the paper more closely to this research area.
Essential References Not Discussed: I am not aware of an essential reference missing.
Other Strengths And Weaknesses: Strengths:
1) The paper is very well written with the main ideas and the merit of the results clearly explained. To the extent possible, the intuition of the proofs is explained.
2) The result is a sharp (almost full) characterization of the minimax estimation rate in this problem.
3) It is interesting that a plug-in estimator is optimal for $t>0$, as it is suboptimal in the unsmoothed case.
4) The estimator of the Fisher information results an estimator for other functionals – mutual information and differential entropy of the smoothed density.
Weaknesses:
1) High dimensions: The paper addresses one-dimensional densities. One of the main motivations for Gaussian smoothing is to circumvent the curse of dimensionality. It is not discussed anywhere in the paper (and is unclear) if the approach can be directly extended to high dimensions (with the anticipated technicalities), or if it breaks down at high dimensions.
2) The computational question is completely ignored, in the sense that it is not obvious how simple it is to compute the estimators – these are integrals of the estimated densities and the estimated derivatives of these densities over an infinite interval.
3) The paper is not a perfect fit for ICML, as it is purely theoretical, without any actual machine-learning applications, or even a connection to machine-learning techniques (the “information bottleneck” motivation is rather generic). This explains why my recommendation is only “accept”.
Other Comments Or Suggestions: 1) The name Fisher information is typically reserved for parametric families. Here it is somewhat hidden that the parametric family is the location family. I think that in the ML community this quantity is referred to as Stein information.
2) Remark 1.2: How does it follow that $I_t \gtrsim 1/\sqrt{t}$ ?
3) In (6) and (7) there is a typo, I think that the integral should have $d\mu$.
4) If I understand correctly, the derivative of an $\alpha$-Holder function will be $(\alpha-1)$-Holder. So in Section 2.2, for the estimate of the derivative, shouldn't the rate change to $n^{-2(\alpha-1)/(2\alpha-1)}$ (replacing $\alpha$ with $\alpha-1$ in the preceding bound)?
5) In line 423, how is $g(x,t)$ defined? As a smoothed version of $g$? A short explanation would clarify this.
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thanks for the constructive comments!
__Weakness 1:__
Suppose $f$ is an $\alpha$-Holder density on $[-1, 1]^d$. Our result can be directly extended, and the rate is entirely expected:
$$\inf\_{\widehat{\mathcal{I}}\_t} \sup_{f \in \mathcal{F}\_\alpha} E\left(\left|\widehat{\mathcal{I}}\_t - \mathcal{I}\_t\right|\right) \asymp \frac{1}{\sqrt{n}t^2} \wedge \frac{1}{\sqrt{n} t^{(d+2)/4}} \wedge \frac{n^{-\alpha/(2\alpha+d)}}{\sqrt{t}}.$$
This is natural from the well-known $d$-dimensional estimation rates of the plugged in density and gradient estimators.
Notably, in the very high and high noise regimes, the convergence rate in terms of the sample size is the fast $\frac{1}{\sqrt{n}}$ rate; the curse of dimensionality is circumvented. A curse seems to appear in the low noise regime rate, but one needs to take care in the interpretation. Note the rate is the minimum of three terms, which means we *always* can achieve $\frac{1}{\sqrt{n}t^2}$. Therefore, one might say the curse never bites. But when $t$ is very small, one can beat $\frac{1}{\sqrt{n}t^2}$ and achieve the faster $\frac{n^{-\alpha/(2\alpha+d)}}{\sqrt{t}}$, which appears to suffer the curse.
The answer to this apparent conceptual puzzle is that one cannot avoid paying $\frac{1}{t}$ raised to a power involving $d$ (in our case it is $t^{-\frac{d+2}{4}}$), which becomes large as $t$ gets small. This phenomenon is not unique to Fisher information estimation; Goldfeld et al. (2020) noted earlier that it also occurs in smoothed entropy estimation. Since those results are sometimes described as circumventing the curse of dimensionality, one might likewise claim that the curse is avoided in our problem too.
Though we have not checked every single detail to confirm the conjectured rate, the generalization of the proof to the multivariate case seems very standard, involving only tedious notation. One might be worried about generalizing our use of integration by parts, but a coordinate-wise argument works as the domain is $[-1, 1]^d$. Since we use a plug-in approach, the argument is quite similar to the score estimation theory of Dou et al. (2024).
__Weakness 2:__ Reviewer CeFj commented on empirics, and in our response to them we have described how the estimators can be computed. Please have a look there. Thanks!
__Relation to broader scientific literature:__ Reviewer SWHe pointed to some related work; please see our response!
__Other Suggestion 1:__ Thanks! In the revision, we will make a note that this quantity is also known as the Stein information. After reading your comment, we were interested in completely replacing all instances of "Fisher information" with "Stein information", but after consulting some senior colleagues (from the nonparametric statistics and information theory communities and who also keep up with ICML/Neurips/etc), we decided to stick with the term "Fisher information" as the paper is particularly relevant to those communities. Thanks again for your comment, and we will be sure to note that it is also known as the Stein information.
__Other Suggestion 2:__ Continuing the calculation from the remark, we have $\mathcal{I}\_t = \frac{1}{2}\int\_{-\infty}^{\infty} \frac{(\varphi_t(x+1) - \varphi_t(x-1))^2}{P(|N(x, t)| \leq 1)} dx \gtrsim \int_{|x-1| \leq \sqrt{t}} \varphi_t(x-1)^2dx$ where the last inequality follows from $P(|N(x, t)| \leq 1) \asymp 1$ for $|x| \leq 1 + C\sqrt{t}$ as $t < 1$. We have also used that $\varphi_t(x+1) \leq c \varphi_t(x-1)$ for $|x-1| \leq \sqrt{t}$ where $c < 1$ is a small universal constant, since $t$ is small. Consider $\int_{|x-1| \leq \sqrt{t}} \varphi_t(x-1)^2 dx \asymp \frac{1}{\sqrt{t}} \int_{|x-1| \leq \sqrt{t}} \frac{1}{\sqrt{t}} e^{-\frac{(x-1)^2}{t}}dx \asymp \frac{1}{\sqrt{t}} P(|N(1, t) - 1| \leq \sqrt{t}) \asymp \frac{1}{\sqrt{t}}$. Hence, $\mathcal{I}_t \gtrsim \frac{1}{\sqrt{t}}$. In the revision, we will elaborate to make this clearer to the reader.
__Other Suggestion 3:__ Thanks!
__Other Suggestion 4:__ Though we agree the intuition is natural, it is a classic result from statistics that the minimax rate for estimating the $r$th derivative of an $\alpha$-Holder function in squared $L^2$ or squared pointwise error is $n^{-2(\alpha-r)/(2\alpha+1)}$ (we take $r = 1$ for our purposes). This result (along with results for other error metrics) is due to Charles Stone (namely, his papers *Optimal global rates of convergence for nonparametric regression*, The Annals of Statistics 10 (1982), no. 4, 1040-1053 and also *Optimal uniform rate of convergence for nonparametric estimators of a density function or its derivatives*, Recent Advances in Statistics, Elsevier, 1983, pp. 393-406). In our paper, we had only cited textbooks, but we will also cite these papers of Stone in the revision.
__Other Suggestion 5:__ Thanks! Yes, $g(x, t) := (g*\varphi_t)(x)$. We had defined it on page 17, but neglected to point it out at line 423. We will make a note of it in the revision. | null | null | null | null | null | null |
Generalizable Multi-Camera 3D Object Detection from a Single Source via Fourier Cross-View Learning | Accept (poster) | Summary: This paper proposes a Fourier Cross-View Learning (FCVL) framework, which augments the data in the frequency domain and includes a contrastive-style semantic consistency loss to improve the model's generalization ability from a single source.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The proof looks good, but I didn't check it carefully.
Experimental Designs Or Analyses: Yes. Didn't find an issue.
Supplementary Material: Yes. I reviewed all materials.
Relation To Broader Scientific Literature: This method is applicable to different kinds of detection networks.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Pros:
1. This paper augments the data in the frequency domain by jittering both amplitude and phase, which diversifies the dataset. The phase term typically captures high-frequency features that may be more transferable.
2. The designed contrastive loss, which utilizes the adjacent image regions, is also a good idea.
3. The proposed method is adaptable to different approaches and achieves SOTA results with various baseline models.
4. The t-SNE visualization and other visualization results validate that the learned features are domain-invariant and the images become more diverse.
Cons:
1. There may be some false negative samples for the contrastive loss design since some large vehicles may span several adjacent views.
Other Comments Or Suggestions: Please check my questions.
Questions For Authors: 1. Regarding the semantic consistency loss, it appears to work well for small vehicles. But what happens when the vehicle is large? Will the rear part of a large vehicle be treated as a negative sample of the front part?
2. Besides large vehicles, what if there are other vehicles in the background image, will they also be regarded as negative samples?
3. Could you also show some examples where the proposed model still cannot detect correctly? How to further improve the model the future?
4. The idea of augmenting data in the frequency domain is not new and have been tried by previous researchers. What are advantages of the proposed frequency domain augmentation?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your positive and constructive feedback! We have addressed all the comments and incorporated additional experimental results to further validate our approach.
__Q1 and W1:__ In our approach, there is a cross-view instance binding mechanism in which identical instance labels are assigned to cross-view instances of the same object. This ensures their consistent assignment as positive sample pairs in contrastive learning and effectively prevents the rear part of large vehicles from being misclassified as negative samples. Visualization analysis in the figure (https://drive.google.com/file/d/1X04hoOqohT-O3SmlxuWx843Ffm-LS_8N/view?usp=sharing) demonstrates that the front and rear components of large vehicles have consistent activation responses in feature maps. Besides, we further list the results (mAP) for large-vehicle categories. We can observe consistently significant improvements on large vehicles. In particular, our approach achieves a __10.8\%__ improvement for trailer.
| Method | truck | trailer| bus|
| -| -| -| -|
| BEVDet | 0.128 | 0.038| 0.222 |
| +FCVL | __0.208(+8\%)__ | __0.146(+10.8\%)__ | __0.293(+7.1\%)__ |
__Q2:__ Thank you for raising this question. As we mentioned above, in our approach, the identical instance labels are assigned to cross-view instances of the same object. When taking a target object in one view as the anchor, we search for objects with the same instance label in adjacent views as positive samples. This is because these objects inherently represent the same target observed from different angles, exhibiting strong correlations. For negative sample selection, we do not treat objects in the anchor's background as negatives. Instead, we choose samples of different categories from other views. This guarantees that negative samples are categorically distinct from the anchor, enhancing the model's ability to differentiate features across categories. Through this design, we effectively leverage cross-view consistency to improve model performance.
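The positive/negative selection logic described above can be sketched as follows. This is a hypothetical NumPy illustration, not the paper's implementation: the function name, the temperature value, and the InfoNCE-style form are our own illustrative choices.

```python
import numpy as np

def cross_view_contrastive(feats, inst_ids, cls_ids, tau=0.1):
    """InfoNCE-style loss: positives are detections sharing an instance label
    (the same object seen from adjacent views); negatives are detections of a
    *different category*. feats: (N, D) L2-normalized instance features."""
    n = len(feats)
    sim = feats @ feats.T / tau
    losses = []
    for i in range(n):
        pos = (inst_ids == inst_ids[i]) & (np.arange(n) != i)  # same object, other views
        neg = cls_ids != cls_ids[i]                            # different category only
        if not pos.any() or not neg.any():
            continue  # object visible in a single view only
        denom = np.exp(sim[i][pos]).sum() + np.exp(sim[i][neg]).sum()
        losses.append(-np.log(np.exp(sim[i][pos]).sum() / denom))
    return float(np.mean(losses)) if losses else 0.0

# Two views of the same car (instance 0) plus two distinct pedestrians:
feats = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
inst_ids = np.array([0, 0, 1, 2])
cls_ids = np.array([0, 0, 1, 1])
loss = cross_view_contrastive(feats, inst_ids, cls_ids)
print(loss)  # near zero: cross-view pairs are aligned and classes are separated
```

Because the front and rear parts of a large vehicle carry the same instance label, they fall into the positive mask rather than the negative one, matching the binding mechanism described above.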
__Q3:__ As highlighted in our discussion of limitations, there are some opportunities for improvement under extreme weather and low-light conditions. For example, targets occluded by fog, or inherently difficult-to-detect small distant targets, become even more challenging under extreme weather conditions. Some examples are provided here (https://drive.google.com/file/d/1d-hDTCbvTj3SOVffPHnlBlEnJIeShM-6/view?usp=sharing). These limitations can be addressed from two aspects: (1) weather-specific data augmentation combined with a multi-scale strategy and (2) multi-modal fusion integrating LiDAR with cameras against low light. In addition, under adverse weather conditions LiDAR performance may be degraded. To maximize sensor effectiveness across diverse scenarios, an adaptive cross-modal fusion scheme should be designed to achieve dynamic fusion of different modalities.
__Q4:__ We would like to further clarify the advantages of our method over other frequency-domain approaches[1,2]. Compared with these methods, our method integrates __superior performance, high efficiency and extendability__.
Firstly, in the setting of single-source data, our proposed method can enhance the generalization ability of the detectors by a large margin. FACT[1] needs to mix up data from different domains in the frequency domain to achieve great OOD performance. But when training with only a single domain, FACT can only mix samples within that domain; this indeed improves performance on the in-domain clean set a bit, but the improvement on the OOD sets is very slim. Different from FACT, we first propose Frequency Jitter at the image level to create diverse samples. Then, at the feature level, we introduce a novel method, Amplitude Transfer, to achieve fine-grained styles without content distortions. Via uncertainty estimation, Amplitude Transfer can obtain diverse feature statistics, which can gradually shift the features to more diverse domains through continuous training.
Secondly, due to the high complexity of BEV-based 3D object detection models, our plug-and-play data augmentation method can achieve better generalization results more efficiently. AGFA[2] trains the classifier and the amplitude generator adversarially to synthesize the worst-case domain for adaptation. Compared with this method, our proposed method is more stable and effective without introducing sophisticated extra modules or special training recipes for stable performances. This also increases the extendability of our method to other frameworks.
In summary, the proposed method balances both performance and efficiency, and addresses real-world challenges in autonomous driving, underscoring its practical value.
[1] Xu, Qinwei, et al. A Fourier-based framework for domain generalization. CVPR, 2021.
[2] Kim, Minyoung, et al. Domain generalisation via domain adaptation: An adversarial Fourier amplitude approach. 2023. | Summary: The author proposes the Fourier Cross-View Learning (FCVL) framework, including Fourier Hierarchical Augmentation (FHiAug),
an augmentation strategy in the frequency domain to boost domain diversity, and Fourier Cross-View
Semantic Consistency Loss to facilitate the model to learn more domain-invariant features from adjacent perspectives. According to the author, this is the first study to explore generalizable multi-camera 3D object detection with a single source.
Claims And Evidence: Yes, the author provides extensive experimental results to demonstrate the claims.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes, I have checked the correctness of proofs in the supplementary material.
Experimental Designs Or Analyses: Yes. For issues about the experiments, please refer to the below part.
Supplementary Material: Yes, I reviewed all the supplementary material.
Relation To Broader Scientific Literature: The author proposes a new problem setting, which can be a good contribution to the literature. However, I remain doubtful about the relationship between the author's method and the problem setting.
Essential References Not Discussed: The author has discussed most of related works.
Other Strengths And Weaknesses: Strengths:
This paper introduces a new problem setting: generalizable multi-camera 3D object detection with a single source, which I believe is highly important. Moreover, the author's approach of applying augmentation in the frequency domain is quite novel. The experimental results also show performance improvements. Overall, I am inclined to accept this paper.
Weakness:
1. I find the FHiAug method quite novel. However, I believe it is a relatively general technique for RGB images. In contrast, the authors claim to be working on a new task: generalizable multi-camera 3D object detection. In my view, FHiAug has little to do with multi-camera or 3D detection specifically; rather, it is a more general method applicable to RGB images. Therefore, I do not see a clear connection between the proposed method and the novelty of the task itself. The authors should: (1) establish the relevance of their method to the multi-camera 3D detection task and (2) conduct experiments on tasks like 2D detection to demonstrate its broader applicability.
2. The Fourier Cross-View Semantic Consistency Loss also does not seem to have a clear connection to the Fourier space; it appears to be a loss function applicable to various augmentation methods. I believe the authors should similarly (1) establish the relationship between this consistency loss and FHiAug and (2) apply the loss to other augmentation methods to validate its effectiveness.
3. Although the experimental results show some improvements, the gains over the current state-of-the-art methods seem quite limited. For example, when using BEVFormer, the improvement is less than 1%. Given that the main contribution of the paper is FHiAug, I believe the authors should also provide results without the consistency loss, using only FHiAug, and compare them with existing methods to better demonstrate the effectiveness and improvements brought by the proposed approach.
4. Although NDS is indeed a very important metric on nuScenes, I believe AP remains crucial for the detection task. I hope the authors can also provide AP metrics to specifically evaluate the model's capability in 3D bounding box prediction.
Other Comments Or Suggestions: No, please refer to the above weakness part.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your acknowledgment of our approach, which is truly encouraging! We have addressed all the comments and incorporated additional experimental results to further validate our approach. We sincerely appreciate your contributions to help elevate the quality of this submission. __All the tables are put here (https://drive.google.com/file/d/19yp9tYUu7XV-R4V69FW-Nzix8lZ8Kg5s/view?usp=sharing).__ Zoom in for better viewing.
__W1: I find the FHiAug method quite novel.__
We are deeply grateful for your acknowledgment of the novelty of FHiAug.
__(1) establish the relevance of the proposed method to the multi-camera 3D detection task:__
__Relevance 1:__ The proposed FCVL framework leverages the cross-view consistency of multi-camera 3D detection inputs to enhance generalization. By introducing the cross-view consistency loss, the model is forced to learn domain-invariant features that preserve semantic alignment across camera perspectives. However, its effectiveness is limited under the single-domain setting due to restricted feature diversity. Therefore, we propose FHiAug to alleviate the bias in single-domain representations. As shown in Figure 2 of the paper, FHiAug, on one hand, expands domain diversity to force the model to learn from different feature distributions. On the other hand, it expands the quantity and diversity of cross-view sample pairs, enabling the consistency loss to more effectively explore semantic alignments between adjacent perspectives. Ultimately, the FCVL framework achieves generalizable multi-camera 3D object detection with a single source.
__Relevance 2:__ Compared to traditional 2D tasks, acquiring and annotating multi-camera 3D detection datasets is quite expensive. This method achieves remarkable improvements for 3D detection models at a relatively low cost without relying on large-scale annotated data. This solution offers a computationally efficient approach to address real-world challenges in autonomous driving, underscoring its practical value.
__(2) validate broader applicability:__ Besides, we conduct experiments on the 2D detection task. Similarly, following the paradigm of generalizing from a single domain to multiple domains, we train on the daytime-sunny set and test on the other four domains with different weather conditions at different times. As is shown (please refer to Table 1 in the link), our method also effectively improves generalization for 2D detection tasks.
__W2:__
__(1) connection to the Fourier space:__ Given that the semantic information is contained in the phase components, our semantic consistency loss is computed by extracting semantic information from the phase spectrum. We evaluate the semantic consistency of samples via their phase components. Please kindly refer to Equations 11–13 in our manuscript.
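As a rough numpy illustration of measuring consistency on phase only: the function name and the wrapped-difference metric below are our hypothetical choices, and Equations 11–13 in the manuscript remain the authoritative formulation.

```python
import numpy as np

def phase_consistency_loss(feat_a, feat_b):
    """Semantic (phase) distance between two feature maps of shape (C, H, W).

    Illustrative sketch: semantics live in the phase spectrum, so the
    distance is computed on phases only, ignoring amplitude (style).
    """
    phase_a = np.angle(np.fft.fft2(feat_a))   # FFT over the last two axes
    phase_b = np.angle(np.fft.fft2(feat_b))
    # wrap the difference so phases near +pi and -pi count as close
    diff = np.angle(np.exp(1j * (phase_a - phase_b)))
    return float(np.mean(np.abs(diff)))

f = np.random.rand(8, 16, 16)
same = phase_consistency_loss(f, f)           # identical views
styled = phase_consistency_loss(f, 2.0 * f)   # amplitude-only change
```

Note how a pure amplitude (style) change leaves the loss at essentially zero, which is exactly the property that makes a phase-based loss compatible with amplitude-based augmentation.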
__(2) relationship between this consistency loss and FHiAug:__ The proposed FCVL framework follows a "domain diversity first, then domain invariance" paradigm. FHiAug expands the data distribution at both the image and feature levels, while the semantic consistency regularization enables the model to learn more domain-invariant representations.
__(3) apply the loss to other augmentation methods:__ To validate this, we incorporate the consistency loss with DSU. The results are detailed in Table 2 in the link. This loss, when combined with other augmentation methods, can further enhance generalization. Under the same consistency loss constraint, our FCVL framework still demonstrates clear advantages.
__In conclusion__, the proposed framework accommodates the specificity of multi-camera 3D detection while demonstrating
extendability to other vision tasks. We leave it for future work to extend FCVL for general vision tasks.
__W3 & W4:__ Thank you for the comments. The proposed FCVL has achieved SOTA results with average performance improvements of 0.86%-2.47% on eight domains compared to five other methods across two distinct datasets and four different frameworks. The __consistently significant improvements__ across __multiple experimental setups__ fully demonstrate the strong adaptability of our method. It can seamlessly adapt to diverse scenarios and consistently maintain superior and stable performance. Autonomous driving systems, as safety-critical systems, require consistent performance across diverse scenarios. We further list the results for large vehicles. Due to insufficient training data for large vehicles, detecting them is more challenging than detecting common cars. We observe consistently significant improvements for large vehicles. In particular, our approach achieves __+10.8\%__ improvement for trailer, __+8\%__ for truck, and __+7.1\%__ for bus. Please refer to Table 3 in the link.
Furthermore, we list the results of mAP in Table 4 in the link. Our method maintains superior performance across multiple frameworks when leveraging FHiAug only and FCVL still achieves SOTA results with mAP metric.
---
Rebuttal Comment 1.1:
Comment: The author rebuttal has addressed my concerns, and I will keep my original weak accept rating.
---
Reply to Comment 1.1.1:
Comment: Thanks again for the time and effort you have dedicated to reviewing our manuscript! Your insightful feedback has been valuable in enhancing the quality of our work! | Summary: The authors propose a novel generalization multi-camera 3D object detection framework using Fourier Cross-View Learning.
Via the proposed Fourier Hierarchical Augmentation and Semantic Consistency Loss across views, this work consistently improves the generalization ability of previous methods over multiple datasets.
The extensive experiments support the authors' claim, and the real-world demonstration shows the robustness of the proposed method for autonomous driving scenes.
Claims And Evidence: The claims of this work are Fourier Hierarchical Augmentation (FHiAug) and Fourier Consistency Loss. Among them, FHiAug can be further broken into Frequency Jittering (Amplitude and Phase) and Amplitude Transfer.
- Frequency Jittering (Amplitude & Phase): It is supported by the ablation results (Tab. 4) and visualization (Fig. 7 & 9), which shows it can change the input image appearance and help the overall performance.
- Amplitude Transfer: It is supported by the ablation results (Tab. 4) and visualization (Fig. 8 & 9), which shows it can further change the input image appearance through extracted image features and keep improving the overall performance.
- Consistency loss: It is supported by ablation results (Tab. 4) showing its benefit.
Methods And Evaluation Criteria: In general, the method is very novel and interesting. FHiAug successfully generates more diverse training samples. With the cross-view consistency loss, the model manages to learn more generalizable features for multi-camera 3D object detection.
For the benchmarks, nuScenes-C is widely used and can successfully test the generalization ability of the methods. However, it seems the results for Argoverse 2 (AV2), i.e., Table 3, are missing an explanation of how the City and Cloudy settings are defined.
Theoretical Claims: All the proofs and theoretical claims look fine except for the second paragraph in the introduction, which the authors claim, "however, directly applying these approaches to BEV-based tasks introduces several challenges. First, BEV representations are generated by projecting multi-view 2D features using real-world physical constraints, which limits the use of strong geometric transformations, such as 270-degree rotations, as they would disrupt the spatial consistency of the BEV space."
There seems to be a logical error here. The augmentations, including the proposed FHiAug, are all applied to the "images," not the BEV features. Thus, the claims here do not convince me that previous methods are unsuitable for multi-camera 3D object detection.
Experimental Designs Or Analyses: The experimental designs are well structured, and the experiments are extensive over five methods across two challenging datasets.
The analyses seem to be too short in the main paper. The efficiency analysis could be moved to the supplementary section, while the authors should focus more on the ablation studies. Tab. 4 also needs to be structured better so that the readers can easily understand which rows to compare and what the takeaway messages are.
Supplementary Material: The supplementary materials are good and cover theoretical analysis, algorithms, and an intro for 2D data augmentation (which should be moved to the related works in the main paper), as well as more results for both quantitative and qualitative ones.
Relation To Broader Scientific Literature: I believe this work has a broader scientific impact, given its novel idea of generating diverse training data through FFT (Fig. 2) and the real-world demo (Fig. 5).
Essential References Not Discussed: No. This paper doesn't have essential references not discussed.
Other Strengths And Weaknesses: Strengths:
- The idea of generating diverse training samples via FFT is novel and interesting.
- Cross-view consistency is intuitive and effective.
Weakness:
- The related work is placed in Sec. 4, which is an uncommon structure, and the 2D data augmentation discussion is placed in the supplementary material when it should be in the related work.
- The proposed domain generalization augmentation is only applied to perspective images/features. The motivation that the traditional 2D data augmentation won't work on BEV features is weakened by this.
- The explanation of how the City and Cloudy settings for AV2 are generated is missing.
- The ablation studies (Tab. 4) are hard to understand. It should be revised.
Other Comments Or Suggestions: In general, I love the idea and proposed framework. Yet the unusual ordering of sections and the structure of some tables make the writing look unprofessional, which should be improved.
Questions For Authors: - Since the method can improve the detector's generalization ability, how about training the detector with the proposed framework on nuScenes/AV2 and testing it on AV2/nuScenes? This would further strengthen the method for domain generalization across datasets (scenes and cameras).
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We are pleased that the reviewer found our paper __novel, interesting and effective__. Thanks very much for your acknowledgment, which is truly encouraging! We have addressed all the comments and further improved the manuscript. We are deeply grateful for your contributions to help elevate the quality of this submission.
__W1:__ To enhance readability, we have reorganized the paper by moving the related work to Sec. 2. Besides, we have added a subsection in Sec. 2 to systematically review existing 2D augmentation techniques.
__W2:The proposed domain generalization augmentation is only applied to perspective images/features. The motivation that the traditional 2D data augmentation won't work on BEV features is weakened by this.__
We apologize for any confusion.
The BEV representation is constructed by mapping 2D features from surrounding camera views into 3D space through physics-aware methods such as depth estimation. Although traditional 2D data augmentations are applied to images, when these augmented images are projected into BEV space, the artifacts introduced by augmentation degrade the quality of the BEV features.
Firstly, strong geometric transformations (e.g., large-angle rotations, translations) can no longer be freely applied. Such transformations on 2D images would violate the spatial consistency between adjacent cameras, leading to distortion of a target's position or orientation in BEV space. This would degrade the perception system's reliability. This phenomenon exposes the limitation of common geometric augmentations in 3D perception: geometric transformations must adhere to the physical constraints derived from multi-view geometry. We have conducted experiments to demonstrate this point. As shown in the table, naively applying strong geometric augmentations does not improve, and can even hurt, performance.
Secondly, style transfer techniques replace the original image statistics with those of the target style, which causes interference between style and content and distorts content features. If the 2D features are impaired, the projected BEV features will also be affected, ultimately hurting 3D detection performance. In contrast, our method decouples style manipulation from content preservation, effectively avoiding this limitation.
Compared with these common 2D augmentations, the key advantage of our method is its ability to maximize sample diversity under physical constraints, while maintaining superior content integrity preservation.
| Model | Clean | OOD Avg. |
| - | -| - |
| BEVDet | 0.3880 | 0.2017 |
| +strong geo | 0.3530 | 0.1749|
__W3: Missing the explanation of the definition of City and Cloudy settings.__
Thank you for pointing this out; we have added more details in the manuscript. Argoverse 2 contains driving scenarios across six major U.S. cities (Miami, Washington D.C., and so on), including various weather conditions such as sunny days and cloudy conditions. To adhere to the single-domain to multi-domain generalization paradigm, we take sunny-day data from Miami as the single-domain training set, sunny-day data from other cities (with diverse urban road structures) as the first OOD test set (City), and cloudy (dim-lighting) data from other cities as the second OOD test set (Cloudy).
__W4: The ablation studies (Tab. 4) are hard to understand. It should be revised.__
Thanks for your suggestion! More ablation studies and analysis are put in the body of the paper. The ablation studies include (1) effects of different components of FCVL and (2) effects of different inserted positions of Amplitude Transfer at feature level (this subsection has been moved from the Appendix D.4 to the main body of paper).
We have also revised the Table 4 to improve the readability. We have put the new table here ( https://drive.google.com/file/d/1BYMOE_trRM3vPfBfyxjt1dbW75p13Igv/view?usp=sharing ) . We hope this revised table will help readers understand the effect of each module in FCVL better.
__Q:Since the method can improve the detector's generalization ability, how about the performance that trains the detector using the proposed framework on nuScenes/AV2 and test it on AV2/nuScenes?__
Thanks for your constructive suggestion, which further enhances our validation framework. As the number of surround-view cameras is different (NuScenes: 6 cameras, AV2: 7 cameras), we keep AV2's 6 cameras (ring_front_center, ring_front_left, ring_front_right, ring_side_left, ring_side_right, ring_rear_left) and align the Argoverse coordinate system to NuScenes. As the labeled categories between these two datasets are quite different, we mainly focus on two common categories (car and pedestrian) with mAP metrics. The results below demonstrate that our method achieves great improvements in cross-dataset generalization.
| Model (nuScenes train → AV2 test) | mAP |
| - | - |
| BEVDet | 0.1020 |
| +FCVL | 0.1246 (__+2.26\%__) |
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors' effort in the rebuttal. The authors address all my concerns, and I am willing to increase the rating to 4. The experiments of cross-dataset results are valuable, and the reorganization of tables and writing are necessary for the final version of the manuscript.
---
Reply to Comment 1.1.1:
Comment: Thanks for raising the score! Your insightful comments are valuable in enhancing the quality of our work! Thanks again for the time and effort you have dedicated to reviewing our manuscript. | Summary: Aiming to improve generalization when only single-source data are available for training, this paper proposes the Fourier Cross-View Learning (FCVL) framework. The FCVL framework leverages the Fourier transform to separate high-level and low-level information within an image. Subsequently, it makes appropriate modifications to this information, so as to achieve generalization.
Overall, this framework outperforms other models compared in the paper, proving its generalization capability.
Claims And Evidence: Yes
Methods And Evaluation Criteria: In this paper, the nuScenes dataset is used as the training set and the NuScenes-C dataset as the testing set. Why not use the NuScenes-C dataset as the training set and transfer to a simpler dataset?
Are there any relevant experiments?
In addition, Parameters d1 and d2 in Formula 1 are not introduced.
Theoretical Claims: NA
Experimental Designs Or Analyses: The proposed Fourier Hierarchical Augmentation is similar to image style transfer methods with content restriction, or to diffusion-based methods. Why not use an existing model for style transfer, instead of using fixed parameters to change the style of the image?
Supplementary Material: No
Relation To Broader Scientific Literature: The paper is related to multi-view 3D object detection. The proposed model leverages the Fourier transformation to separate high-level and low-level information within the image, which achieves better generalization ability.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
(1) Sufficient experiments have been made to prove the performance of the method.
(2) The proposed method achieves better performance than the compared baselines in the experiments.
Other Comments Or Suggestions: The overall structure of the paper is rather confusing, and the writing skills need to be improved.
The Related Work should be introduced before the methodology, and the related work is not detailed enough.
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your positive and constructive feedback! We have addressed all the comments and incorporated additional experimental results to further validate our approach.
__Q1: In this paper, the nuScenes dataset is used as the training set and the NuScenes-C dataset as the testing set. Why not use the NuScenes-C dataset as the training set to transfer to a simpler dataset? Are there any relevant experiments?__
The objective of this paper is to generalize models trained on a single domain (e.g., nuScenes) to multiple diverse application scenarios (e.g., NuScenes-C, which includes eight OOD test scenarios). Such cross-domain generalization is more challenging and allows us to rigorously validate the performance of the proposed algorithm under significant domain shifts.
Besides, "use NuScenes-C dataset as the training set to transfer to a simpler dataset" falls within the research paradigm of generalizing from multiple domains to unknown target domain. We conducted experiments under this paradigm. As shown in the table, our approach can still enhance generalization performance in this setting.
| Model (NuScenes-C train → NuScenes test) | NDS |
| - | - |
| BEVDet | 0.1830 |
| +FCVL | 0.1993 (__+1.63\%__) |
__Q2:__ $d_1$ and $d_2$ denote the height and width of the image. Thank you for bringing this detail to our attention; we have revised the manuscript accordingly.
__Q3: The proposed augmentation is similar to style transfer methods with content restriction or diffusion-based methods. Why not use an existing model for style transfer, instead of using fixed parameters to change the style of the image?__
Compared with existing methods, FHiAug has advantages in three aspects: superior content integrity preservation, style diversity flexibility, and high efficiency.
__Firstly, FHiAug demonstrates superior content integrity preservation.__ Style transfer techniques replace the original image statistics with those from the target style in the pixel domain, which blurs the boundary between style and content and distorts important features. During training, more data will be unintentionally simplified, leading to worse performance. Conversely, FHiAug operates in the frequency domain and decouples style manipulation from content preservation, effectively avoiding the interference between style and content that occurs in the spatial domain. Both experimental comparisons against other style transfer methods (Table 1 in the paper) and theoretical analysis validate the superiority of our approach.
__Secondly, FHiAug has more flexibility in expanding style diversity.__ The hyperparameters of FHiAug are not entirely fixed. During each iteration, new style statistics are randomly sampled from Gaussian distributions (please kindly refer to Equations 6–9 in the paper), which __ensures the diversity of styles__ in each iteration. In contrast, diffusion-based techniques require conditional control, specific training data, or tailored architectures to achieve style generation for autonomous driving scenarios.
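A minimal numpy sketch of this per-iteration statistic resampling (in the spirit of DSU-style uncertainty estimation) might look as follows. The function name `amplitude_transfer` and the exact sampling scheme here are hypothetical illustrations; Equations 6–9 in the paper give the actual formulation.

```python
import numpy as np

def amplitude_transfer(feats, eps=1e-6, seed=None):
    """Resample per-channel feature statistics with batch-level uncertainty.

    feats: array of shape (B, C, H, W). The channel-wise mean/std are
    treated as Gaussian variables whose spread is estimated across the
    batch, and the features are renormalised with freshly sampled
    statistics each call -- a sketch, not the paper's exact equations.
    """
    rng = np.random.default_rng(seed)
    mu = feats.mean(axis=(2, 3), keepdims=True)     # (B, C, 1, 1)
    sig = feats.std(axis=(2, 3), keepdims=True) + eps
    mu_unc = mu.std(axis=0, keepdims=True)          # uncertainty of the stats
    sig_unc = sig.std(axis=0, keepdims=True)        # across the batch
    new_mu = mu + mu_unc * rng.standard_normal(mu.shape)
    new_sig = sig + sig_unc * rng.standard_normal(sig.shape)
    # normalise with the old stats, denormalise with the sampled ones
    return new_sig * (feats - mu) / sig + new_mu

x = np.random.rand(4, 8, 16, 16)
y = amplitude_transfer(x, seed=0)
```

Because fresh statistics are drawn every call, repeated applications during training shift the features toward a continually changing set of styles rather than a fixed target.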
__Thirdly, FHiAug exhibits better efficiency and extendability.__ Diffusion-based techniques demand substantial generation time and additional storage space for the data. If an online generation approach is adopted, frequent calls to the generative model during training would significantly increase the computational overhead, making them impractical for training complex 3D detection models. In contrast, FHiAug is a plug-and-play online augmentation approach that can be extended to other frameworks with high flexibility.
__To further validate the advantages of FHiAug__, we add more efficiency and performance analysis of a diffusion-based method [1]. At the same resolution, the generation method takes over ten times longer than FHiAug. Besides, we generate synthetic data following [1], incorporate it into the original dataset, and train BEVDet on all the data. As can be seen from the generated images (https://drive.google.com/file/d/1M4gzNi_wVNPHvtWy_l0ddS5qnBqxpFLR/view?usp=sharing), some objects are distorted and the diversity of the synthetic data is quite limited. Compared with the diffusion-based method, FHiAug achieves superior generalization performance (__+4.19\%__) much more efficiently.
| Model | Resolution | Time consumed (s) | OOD Avg. |
| -| -| -| - |
| BEVDet |-|-| 0.2017|
| FHiAug only| 256 $\times$ 704 | 0.107 | __0.2579(+4.19\%)__ |
| MagicDrive[1] | 256 $\times$ 704 | 1.5|0.2160|
[1] Gao, Ruiyuan, et al. MagicDrive: Street View Generation with Diverse 3D Geometry Control. ICLR 2024.
__Q4: The overall structure of the paper is confusing. The Related Work should be introduced before the methodology, and the related work is not detailed enough.__
We have further modified the structure of the paper and put the related work section in front of the methodology. We also provide an additional related work section in the Appendix for detailed descriptions and broader coverage of all related work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' rebuttal. I will keep the rating.
---
Reply to Comment 1.1.1:
Comment: Thanks again for the time and effort you have dedicated to reviewing our manuscript. Your constructive feedback has been valuable in enhancing the quality of our work. | null | null | null | null | null | null |
Bongard in Wonderland: Visual Puzzles that Still Make AI Go Mad? | Accept (poster) | Summary: This paper presents a case study of utilizing VLMs for solving Bongard problems, and identifies that it remains challenging for VLMs to reason some basic concepts in Bongard problems. The authors also conduct a comparison between VLMs' and human's reasoning abilities on Bongard problems.
Claims And Evidence: The authors mainly made four claims: 1) Evaluation of VLMs on identifying underlying rules. 2) Comparisons of VLMs to Human's reasoning ability. 3) Exploration of the models’ pattern recognition abilities. 4) Examination of the ability of generating hypotheses.
Those claims are mainly associated with the experimental evaluation on the Bongard problems. The authors did conduct those evaluations as shown in Sec. 4.
Methods And Evaluation Criteria: The authors mainly made use of existing VLMs for the evaluation and did not propose any new method. The evaluation critera are clear and suitable.
Theoretical Claims: There are no theoretical claims. The authors mainly conducted an empirical study of existing VLMs on Bongard problems.
Experimental Designs Or Analyses: The authors mainly evaluate existing VLMs rather than a newly proposed one. Also, no existing solution models for Bongard problems are compared in the experiments.
Supplementary Material: I briefly went through the SM, which mainly supplements more details on prompts and experimental results.
Relation To Broader Scientific Literature: The main contribution of this paper is to unveil that current VLMs cannot solve Bongard problems well. But the authors did not propose any new approach to better solve the problems. In the literature, there are many existing solution models for Bongard problems, but the authors did not compare with those methods, which greatly limits the significance of any discussions or conclusions drawn from the results.
Essential References Not Discussed: Many existing solution models for Bongard problems are not discussed or compared in the experimental validation. E.g.,
[1] Take A Step Back: Rethinking the Two Stages in Visual Reasoning, ECCV, 2024.
[2] Neural Prediction Errors enable Analogical Visual Reasoning in Human Standard Intelligence Tests, ICML, 2023.
Other Strengths And Weaknesses: Strength: The paper is well written with clear logic flow and comprehensive evaluation on VLMs.
Weakness: 1) This paper did not propose any new methods. The novelty of this paper is very limited. 2) The key finding, the incapability of VLMs in solving abstract reasoning problems, including Bongard problems, have been discussed in literature. This greatly limits the significance of this paper.
Other Comments Or Suggestions: The authors may add some theoretical analysis to strengthen the technical aspects of this paper.
Questions For Authors: 1. In case I miss any novel design in this paper, can the authors justify the novel contributions of this paper?
2. Can the authors provide more evaluation results of other existing models, in comparisons to the results of VLMs and human's? Or, can the authors justify why other existing models are not chosen for comparison?
3. Can the authors provide more theoretical insights of the proposed method?
Ethical Review Concerns: No
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our work and for considering it well written. We hope that the following responses will also convince you of the strength and value of its contributions.
**(W1, Q1 - No new method)**
We respectfully disagree with the reviewer's assessment regarding the lack of novelty, and note that the other reviewers did not raise any concerns on novelty (with reviewer fqym explicitly stating that "this work appears novel"). While we do not propose a new technique to solve Bongard problems directly (an ambitious and still unsolved challenge), we introduce a detailed typology of model behaviours in such visual tasks, revealing strengths and limitations of current VLMs and helping us understand their learned knowledge and abilities to reason.
Our contributions include:
- Introducing various prompt configurations (open-ended and multiple-choice) to explore how different prompt strategies affect VLMs’ performance on Bongard problems.
- Presenting two novel problem setups (Task 2 and Task 3) designed to analyze both the perceptual and the robustness aspects of VLMs.
- Conducting a human study involving 20 participants, which allows us to compare human performance with that of VLMs and thereby uncover systematic discrepancies.
- Identifying 10 Bongard problems that no deployed model in our study could solve in any task setup, highlighting clear gaps for future research.
- Demonstrating a substantial discrepancy between rules correctly identified in Task 1 and those correctly applied in Task 2, which unveils inconsistent model behaviour: a VLM can correctly find a rule but not apply it reliably.
By documenting and analyzing these insights, we aim to inform the broader ML community about current limitations and future directions in solving Bongard problems. We believe this approach is in line with the guidelines’ emphasis on supporting new tasks, metrics, and problem framings, even when no novel algorithmic methods are introduced. We believe this set of contributions constitutes meaningful progress that the ICML community can build upon.
**(W2 - Key findings have been discussed in literature already)**
Even though other works exist that investigate the shortcomings of VLMs in abstract visual reasoning, we think that several new insights can be drawn from our work; see the listed contributions above. For example, our work identified a gap between problem solving (Task 1) and perception, or rather applying rules correctly (Task 2), that is, to our knowledge, not yet discussed in the literature.
**(Q2 - Compare to other methods)**
We appreciate your interest in seeing broader evaluation results. However, our primary aim is not to identify the single best method for solving Bongard problems, but rather to reveal critical limitations and uncover how current VLMs, which can be opaque in their reasoning, perform on these tasks. We aspire to highlight systematic insights that can guide future improvements in model design and evaluation.
That said, we can discuss other methods aimed at solving Bongard problems in more detail in the related work. However, they usually have a very different setup to VLMs, and our aim is not to propose a method for solving Bongard problems; instead, we use them as a diagnostic dataset to understand and analyse VLMs in more detail.
In that light we also consider the proposed references. Both present interesting approaches; however, they focus on the Bongard-LOGO dataset rather than the original Bongard problems considered in our study. While the approach in [1] could, in principle, be adapted to the original Bongard problems in an open-ended setting, [2] is specifically designed for classification tasks, making it difficult to apply directly to the open-ended nature of the original Bongard problems. Moreover, our primary focus is on evaluating the capabilities of VLMs independently of dedicated reasoning architectures, as their underlying mechanisms differ significantly. Nonetheless, we have included both works in our related work section to acknowledge their contributions.
**(Q3 - more theoretical insights)**
We are a bit puzzled by this remark. As we have highlighted above the contribution of this work is not to propose a novel AI method, but rather introduce and analyze a valuable dataset and typology of model behaviour for investigating this dataset in the context of VLMs. Can the reviewer clarify what they are referring to with "theoretical insights of the method"? | Summary: The paper benchmarks existing vision-language models using Bongard problems (BPs). It also performs a human evaluation for comparison. The paper tests not only whether a model can solve a given BP or not, but also whether the main concept in the BP can be recognized in the individual images in the BP, and whether the model can generate the correct solution when it is asked to generate a set of candidate hypotheses. The results are surprising, and show, for example, that models not only significantly underperform humans, but they also do not seem to correctly perceive the individual images even in cases where they correctly solve a problem.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes. The paper performs an evaluation of existing, pre-trained models on tasks derived from Bongard problems to compare to human baseline and to assess the performance consistency between the different tasks.
Theoretical Claims: N/A
This is an empirical study.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes, I read the supplementary material, but did not read all the additional results and prompts, etc., in detail.
Relation To Broader Scientific Literature: The paper provides an overview of existing work on Bongard problems and similar tasks to evaluate AI models. The references seem fairly comprehensive as far as I can tell. The study in this work differs in substantial ways from existing similar studies.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The paper is very clear, well structured, and easy to follow, and it presents lots of interesting insights.
The comparison between Task 1 (solving BPs) and Task 2 (detecting a BP’s underlying concepts in individual images) is quite nice and the results are revealing. And it seems in line with the findings for Task 3. It is nice that it also highlights a danger in reading too much into the ability to solve any given problem (using the Task 1 setting).
Contrarily, it seems that the danger also applies to human evaluations. And it suggests a human evaluation for Task 2 would be very helpful to complete the picture and would significantly strengthen this study. Have the authors considered this?
Other Comments Or Suggestions: Footnote 3: “Exception: In Task 2 o1 was prompted once.” Why was that?
Even though the performance of models is fairly low, as the problems are publicly available (and have been for a long time) is there any chance of contamination affecting the results? It seems that even the inconsistency regarding Task 1 vs Task 2 might be explainable to some degree through contamination as well (with Bongard problems - or similar types of problems - seen during training enhancing the ability to generate a shortcut answer without truly perceiving details of the images)?
Questions For Authors: The presence of multiple panels in a single image could be difficult for existing models to process simply because information is highly local as a result. And it seems that in some cases this could make the resolution in which a given model perceives the local panels too small to perceive details (some models downsample any given images to a fixed resolution, such as 224x224, before processing them).
Could this be a confounding factor (especially for Task 2 and the results of Task 1 vs Task 2)? One way to help ensure that this is not the reason for the observed failure cases would be to present the panels separately, each in an individual image, rather than within a single. Have the authors considered this?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your detailed response and the constructive feedback! Below we address your concerns.
**(W1 - Human study for Task 2)**
We agree that analyzing human performance in Task 1 alongside Task 2 would be an interesting future direction in a different setting with higher conceptual ambiguity and novel concepts. However, our primary focus in this paper was identifying the VLMs' perception errors in relation to their performance in Task 1.
We hypothesize that humans would excel in Task 2, as previous literature has identified concept recognition, spatial reasoning, and relational abstraction as fundamental aspects of human cognition [1,2]. Given that Bongard problems involve discriminative rules, we argue that human performance should remain robust in this task.
While rule verification tasks can, in principle, be challenging for humans in cases of rule ambiguity or when unfamiliar concepts are introduced, this is unlikely in the case of Bongard problems due to the discriminative nature of the rules. For example, in BP#16, asking a human whether a spiral turns clockwise or counter-clockwise (starting from the center) should be straightforward, as the rule is well-defined.
[1] Lake BM, Ullman TD, Tenenbaum JB, Gershman SJ. Building machines that learn and think like people. Behavioral and Brain Sciences. 2017
[2] Gentner, D. (2003). "Why we’re so smart." Language in Mind: Advances in the Study of Language and Thought
**(C1 - Footnote o1 prompting)**
The o1 evaluation is quite expensive; therefore, we decided to evaluate only once for the concept detection experiment, as it would have required 2400 more requests. If the reviewer thinks it would be valuable to have the additional trials for o1 as well, we can still retrieve these results.
**(C2 - BPs in training Set)**
Unfortunately, the opaque nature of the training processes, particularly for models developed by large corporations with proprietary datasets, makes it impossible to determine whether the models have actually been exposed to the BPs during training. However, even if such examples were present, the low overall performance of the models indicates that they have certainly not fully comprehended them. Your suggestion that the models might have learned shortcuts could be one explanation for why many Bongard problems solved in Task 1 remain unsolved in Task 2. It would be interesting future work to investigate this phenomenon on non-public test sets. Overall, this is an interesting hypothesis, and we have included it in our discussion.
**(Q1 - Representation of BPs (single images))**
Interesting suggestion; we have investigated this with some of the models (cf. table below). We see that the performance in Task 1 is comparable, but more interestingly, the behaviour between Task 1 and Task 2 also stays similar. This suggests that the image representation alone cannot be the reason for this discrepancy. We included these findings in the final paper.
| | GPT-4o | Claude 3.5 |
| -------- | -------- | -------- |
| Solved BPs Original Setup | 25 | 31 |
| Solved BPs Single Images | 25 | 33 |
Results analogously to Figure 5:
| | T1 w/o T2 | T1 $\cap$ T2 | T2 w/o T1 |
| -------- | -------- | -------- | -------- |
| GPT-4o (orig) |13 | 11 | 13 |
| GPT-4o (single imgs) |12 | 12 | 12 |
| Claude (orig) |20 | 11 | 11 |
| Claude (single imgs) |20 | 12 | 10 | | Summary: This paper explores the performance of VLMs on Bongard problems. To test the abstract reasoning ability of VLMs, three different types of tasks are proposed: (1) open-ended solving of Bongard problems, (2) detection of specific concepts, and (3) formulation of hypotheses. Task 1 is to summarize the rules of the left and right panels of Bongard problems, or to select the rules from some options. Task 2 requires VLMs to identify whether a certain image follows a certain rule or concept. Task 3 tests the model's ability to generate Bongard problem rules. The performance on the above tasks can evaluate the robustness and reasoning ability of VLMs. The experimental results show that there is still a large gap between the reasoning ability of VLMs and humans. This work provides valuable insights for evaluating the concept learning and abstract reasoning abilities of current VLMs.
Claims And Evidence: The claims made in this paper are clear and supported by its experiments.
Methods And Evaluation Criteria: The datasets and evaluation criteria of this paper make sense for the problem.
Theoretical Claims: This paper does not involve theoretical claims and proofs.
Experimental Designs Or Analyses: This paper is reasonable and effective in experimental design and analysis.
Supplementary Material: I reviewed all the supplementary materials.
Relation To Broader Scientific Literature: The main contribution of this paper is to verify abstract visual reasoning ability of current VLMs on Bongard problems. Previous work [1] verified abstract visual reasoning ability of VLMs on non-open-ended problems like Raven matrices and odd-one-out problems, which is different from this work. This paper analyzed the concept learning ability in Bongrad problems, which is not covered by previous works.
[1] Cao, Xu, et al. What is the visual cognition gap between humans and multimodal llms?
Essential References Not Discussed: The related works that are essential have been discussed in this paper.
Other Strengths And Weaknesses: Strengths
This paper refines traditional Bongard problems and proposes three different tasks, which respectively verify the rule induction, concept learning and rule imagination abilities of VLMs. The performance of the above tasks can reveal different dimensions of VLMs' abstract visual reasoning ability. Therefore, this paper provides some insights into the reasoning ability of commonly used VLMs.
Weaknesses
The main concern is about the data collection process. In this work, only 100 Bongard problems from existing work are selected as test data. Could the authors provide a detailed introduction and discussion on the data collection process? For example, for what considerations and what criteria were used to select these Bongard problems. Why not use larger Bongard problem datasets, e.g., CVR [1] and SVRT [2]. CVR and SVRT include a large number of program-generated Bongard problems, which should have annotations of rule descriptions and concept labels for the panel images.
[1] Zerroug, Aimen, et al. A benchmark for compositional visual reasoning.
[2] Fleuret, François, et al. Comparing machines and humans on a visual categorization test.
Other Comments Or Suggestions: As important forms of abstract visual reasoning tasks, the authors can further incorporate raven matrices or odd-one-out problems into the test. These problems can also be transformed into open-ended forms, like Bongard problems, by describing the rules to complete the task instead of choosing the result from the options.
Questions For Authors: I notice in Figure 4 that the accuracy of the human answers to the questions varied quite a bit (e.g., from 0% to 100% in the "same" rule). I wonder why the participants have such a large variance in their results. Is it due to the difference of participants' abstract reasoning ability, or because the participants do not understand the form of the Bongard problems?
The bar chart on the right of Figure 4 does not show the black area representing "solved", so what does the "solved" black block of the legend mean?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback. We respond to your points in detail below.
**(W1 - Selection of test data)**
We chose to work with the original Bongard problems introduced in [1], as they were specifically designed to test pattern recognition capabilities in machines, yet they remain unsolved by current AI systems. These problems were carefully crafted and, in our view, already present a significant challenge on their own. Importantly, we find that they strike a valuable balance between simplicity (in terms of visual size and structure) and conceptual difficulty, a point supported by our accompanying human study.
We agree that datasets such as CVR and SVRT are also interesting, as they share some conceptual elements with Bongard problems, including shape variation, counting, insideness, and contact. Exploring how VLMs perform on these datasets would be an exciting direction for future work.
[1] Bongard, M. (1970). Pattern recognition. New York: Spartan Books
**(S1 - Add abstract visual reasoning tasks)**
Thank you for the thoughtful suggestion. We agree that Raven’s Progressive Matrices and odd-one-out tasks are important forms of abstract visual reasoning, and we appreciate the idea of transforming them into open-ended formats. However, as motivated in the paper, we believe that Bongard problems already provide a strong and sufficiently challenging foundation for evaluating the kinds of open-ended, concept-based reasoning we are interested in.
In fact, many of the underlying visual concepts of Raven and odd-one-out (e.g., sameness, symmetry, numerical relations) are already well-represented in the Bongard problems.
That said, we agree that adapting other reasoning formats into an open-ended, explanation-driven setup as the reviewer suggests could be an exciting direction. While not necessary for the scope of the current study, we see this as promising future work, especially for expanding evaluation diversity and probing generalization across reasoning formats.
**(Q1 - same BPs variance)**
The "same" category includes only seven Bongard problems (see Table 4), which naturally increases the variance of participants’ results and makes it difficult to draw strong conclusions. It is possible that some individuals find the concept of "sameness" more intuitive than others, though further study would be required to explore this systematically.
In general, high variance across participants may stem from differing levels of task comprehension. For instance, the lowest-scoring participant - who did not solve any Bongard problems in the ‘same’ category - often formulated a rule that applied only to the left side of a problem while dismissing the right side as simply “not following that rule.” In #BP98, they labeled the classes as “triangle shapes” and “non-triangle shapes,” overlooking the crucial contrast between “triangles” and “quadrangles.” Such answers suggest they may not have realized that merely stating the inverse of a rule is insufficient to fully characterize a Bongard problem. Similarly, this participant used relative terms like “more circle shapes” vs. “fewer circle shapes,” highlighting a misunderstanding of the need for clearly defined classification rules independent of context.
**(Q2 - black area in legend)**
The legend entry for "solved" refers to a filled rectangle (red or blue) rather than a black filling. We see that this can be misleading. To improve clarity, we are considering updating it to a half-red, half-blue filled rectangle. Does the reviewer think that this would improve clarity?
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed responses, which have solved my questions. I think it is reasonable to change the black rectangle to a half-red-half-blue one. Now I can understand the legend well. I would like to keep my original score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response and feedback; we updated the figure accordingly. We are happy to hear that we were able to address all the questions. In light of this, we kindly ask the reviewer to reflect on their score and consider raising it.
Claims And Evidence: Yes. The paper claims that there is a significant gap between VLMs and human performance and this is made quite clear from the experiments.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: Yes, the experimental design is sound. I will ask any doubts in the "Questions for Authors" below.
Supplementary Material: Yes, all parts.
Relation To Broader Scientific Literature: The paper is about evaluating the limitations of VLMs in how well they can understand abstract concepts. For this, the paper looks at the Bongard problems. Among the works about VLMs that look at Bongard problems, this work appears novel. I request further clarity from the authors regarding this via the Cons listed below.
Essential References Not Discussed: I am not aware of such references.
Other Strengths And Weaknesses: ## Pros
1. Interesting findings and questions raised for future work.
2. The paper is well-written. The way the metrics are reported is clear. The experiment section provides a large number of new and interesting insights.
3. Identifying the most difficult problems not solved by any model could be a very useful resource for the community.
## Cons
I think there needs to be a more in-depth discussion of the differences between this paper and the paper of Malkinski et al. 2024. I don’t mean to imply that there aren’t differences but rather that the reader should be able to see them more clearly. Currently, the authors say “While they (Malkinski et al. 2024) provide meaningful insights, they only consider a classification setting for the evaluation and do not investigate the model behavior in more depth.”, but this seems not specific enough. For instance: Malkinski et al.’s paper also does direct generation (akin to Task 1, if I am not mistaken).
It may be nice to mention which conclusions could not have been arrived at with previous work’s experiments and what are their implications for the future. The results are very interesting as standalone statements, but I am not sure if I have been able to grasp the main (and novel) takeaway message about VLMs from the paper.
Other Comments Or Suggestions: See Cons and Questions.
Questions For Authors: 1. Are the solutions to Bongard problems public and present in pre-training datasets of the VLMs?
2. Were multiple outputs sampled from the models to ascertain that a task is solved (i.e., similar to pass@$k$)?
3. Was human study considered for Task 2?
4. Task 2 is about exploring perception. What is the rationale for choosing this specific form of perception task (e.g., why not ask in an open-ended manner about the concepts present in each image)? I am not sure if I see a clear direct connection with the conclusion noted later “perception is a key issue for not identifying the correct rules of BPs”?
5. Authors say in L355: “surprising gap between recognizing correct classifications and effectively applying that knowledge in problem-solving”. However, a solver might try to “guess” a rule even if it seems to apply to the majority of the images (if not all) on each side. Wondering about the author’s thoughts on this.
6. Is there technical value associated with the contents of Fig. 1 (e.g., the person’s face)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback and questions, we address them below.
**(W1 - Differences to Malkinski et al.)**
Malkinski et al. (2024) also evaluated VLMs on Bongard problems, concentrating on open-ended and classification-based settings. Our work shares their open-ended focus but goes further by adding two additional tasks (Tasks 2 and 3) to examine specific abilities we hypothesize are vital for solving Bongard problems. While Malkinski et al. compare synthetic Bongard problems to their real-world variant (Bongard-RWR), we target the original Bongard set, pinpointing especially challenging cases and identifying concept-detection inconsistencies. We also compare model output directly with human performance, offering more granular insights into how VLMs reason about these complex tasks. We have added this discussion to the related work section for greater clarity.
**(W2 - Takeaway messages)**
In the following we outline the key takeaway messages of our work.
- We introduce a typology of model behaviors for visual tasks, revealing both the strengths and weaknesses of current VLMs.
- Models often fail even with multiple-choice answers, indicating challenges not only in discovering correct rules, but also in recognizing when they have the right one (Task 3).
- We uncover a pronounced gap (Fig. 5) between solving Bongard problems (Task 1) and consistently identifying their relevant concepts (Task 2), highlighting an underexplored interplay between abstract reasoning and perception.
- We identify 10 Bongard problems that no model solved under any task condition (Fig. 11), offering a strong basis for future benchmarking.
- Our human study shows key differences between VLMs and people: participants collectively solved 95 Bongard problems (averaging 42 each, with best participant at 68), whereas all models combined solved only 66.
**(Q1)**
Unfortunately, the training data for both closed-source and open-source VLMs is not publicly available, so we cannot determine this with certainty. However, given that these large models are typically trained on massive corpora that include a substantial portion of publicly available data, it is likely that they have been exposed to Bongard problems during training - especially since the Bongard problems, along with their solutions, are publicly accessible (e.g., https://www.oebp.org/welcome.php). It would be interesting future work to design versions of BPs that are private or could be generated automatically to avoid public exposure.
**(Q2)**
Yes, models were sampled three times. Tasks 1 and 2 were considered correct if answers/images were correct in at least 2 of 3 attempts. For Task 3, we sampled once and checked if a correct hypothesis was among the 20 proposed.
**(Q3)**
Please refer to W1 of reviewer uWew for more details.
**(Q4)**
Our goal in Task 2 is to examine whether models that successfully identify a discriminative rule (Task 1) also apply that rule consistently to each individual image. We hypothesize that a truly correct solution should translate into accurate classification of all images in a Bongard problem. However, our findings show that a model can solve a Bongard problem in Task 1 yet still classify certain images incorrectly in Task 2. For instance, GPT-4o solves BP#15 - distinguishing open from closed shapes - in all three Task 1 trials but fails to label an open triangle as open in Task 2. This indicates that even when VLMs manage to solve a Bongard problem, they may not reliably apply their “understood” rule at the level of individual images. While asking open-ended questions about each image (rather than providing ground-truth concepts) is an alternative approach, our intention here is to assess the consistency between rule discovery and rule application under controlled conditions.
**(Q5)**
We agree that a model may sometimes approximate the correct solution without consistently applying it across all images. However, this is not necessarily the behavior we seek in VLMs, particularly in applications that demand reliability and robustness. This highlights the importance of considering the perceptual aspect together with the reasoning abilities when evaluating VLMs.
That said, our comment was pointing to a complementary finding: that some ground-truth concepts underlying Bongard problems are well-detected by the models, even though they do not emerge as the rule for solving the BP in the general task. This may stem from confounding visual factors or the limited prominence of some ground-truth concepts. We find this to be an interesting observation, and think that it opens up promising directions for future work.
**(Q6)**
The figure is rather illustrative; however, we intended to give an impression of the different possible diagrams occurring in Bongard problems (displayed on the cards).
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses! I think the results are interesting as individual nuggets and, at the same time, feel that the paper’s take-away message is a bit blurry and not tied together into a clear message about VLMs and what the community can do about it in the future.
> Models often fail even with multiple-choice answers, indicating challenges not only in discovering correct rules but also in recognizing when they have the right one.
>
> … even when VLMs manage to solve a Bongard problem, they may not reliably apply their “understood” rule at the level of individual images.
>
I think there is something interesting going on here, and would be nice to have a more in-depth exploration of these.
I’m fine accepting the paper, but I can understand if it’s not accepted in its current state. I will maintain the score for now.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for continuing the interesting discussion. We appreciate that they find the individual results compelling, yet seek a clearer, more cohesive message. Below, we restate our core contributions and propose future directions for the community:
**1. Bongard Problems (BPs) still fundamentally challenge modern VLMs.**
- Even though VLMs can sometimes propose (or choose) correct rules for an entire BP (Task 1), they frequently fail to apply those rules consistently at the individual image level (Task 2).
- This points to gaps not only in abstract rule-formulation but also in perceptual grounding and reliable rule application.
- We observe a pronounced lack of spatial reasoning (one of the lowest-scoring BP categories).
- A considerable performance gap persists between humans and VLMs on BPs.
**2. VLMs’ uncertainty about “having the right rule” is a persisting bottleneck.**
- Our results show that, in multiple-choice formats, models still often pick incorrect answers, indicating they struggle to self-check their own hypotheses, even when the solution set is provided.
- This suggests that the challenge is twofold: (a) discovering a discriminative rule and (b) recognizing when the discovered rule actually fits the data.
**3. Concrete takeaways and potential improvements for the community:**
- Multi-stage reasoning pipelines: Recent hints from subsets of our experiments (Type II, Type III behaviors) suggest models can encode aspects of the correct rule but fail to integrate them end-to-end. A structured approach that explicitly ties high-level rule discovery to lower-level, image-by-image verification may help.
- Perceptual consistency: Many BPs rely on basic spatial or geometric features (e.g., "left" vs. "right", "inside" vs. "outside", or counting shapes). Our results confirm that even advanced VLMs stumble on fundamental perception tasks, underscoring how perception-based reliability remains a major pain point.
- “Self-monitoring”: The shortfall in multiple-choice answering suggests that systematically evaluating and revising potential rules (rather than returning a single best guess) might nudge models toward more accurate, self-consistent outputs.
- Mechanistic Interpretability: To better understand model behavior, future work could explore the internal representations of VLMs to determine whether they truly integrate visual concepts with abstract rule reasoning.
**4. Broader significance of our findings:**
- The community can leverage our systematic failure modes, especially Type II (correct rule hypothesized, but not applied when solving the BP) and Type III (BP solved, but individual-image classification is inconsistent), to pinpoint areas of improvement for next-generation multimodal architectures.
- Going forward, creating novel Bongard-like tasks or expanding the original set with private or auto-generated puzzles could mitigate training-data contamination and further stress-test how (and whether) models grasp visual abstractions.
In summary, our main goal of the paper is to demonstrate that BPs are not just "nice-to-have" puzzles but diagnostic tasks spotlighting surprisingly fundamental gaps in VLM performance. Above we have detailed our take-away messages towards VLM development. We hope these clarifications show a cohesive narrative, namely, that BPs reveal both a need and a path for improving the reliability, interpretability, and fine-grained reasoning processes of VLMs.
We have expanded our paper's discussion section (section 5) to discuss the above mentioned points in more depth and extended our section on future work.
We hope this discussion clarifies the reviewer's concerns, and we would appreciate it if they could reconsider their score.
Adversarial Inputs for Linear Algebra Backends | Accept (poster) | Summary: The authors propose a white-box attack to construct "Chimera examples", i.e., inputs to models that elicit conflicting predictions depending on the employed backend library, and propose a PRNG-based defense against it.
## update after rebuttal
The rebuttal addresses most of my concerns. In particular, I'm happy to see the authors running the experiment on ImageNet, demonstrating the scalability of their attack. I'm increasing my score.
Claims And Evidence: 1. It was claimed that Figure 1 contains Chimera examples, but the left figure has the same label "Truck" from BLIS and Apple Accelerate, which conflicts with the $\forall i \ne j$ requirement in Definition 1.
2. In section 4.4, it's unclear how many iterations are used by alternative attacks. Perhaps these attacks fail simply because you are using fewer iterations, not because they are inferior?
Methods And Evaluation Criteria: Yes, it makes as much sense as prior works such as (Carlini & Wagner, 2017) and (Schlogl et al., 2021), on which this paper's method is based.
Theoretical Claims: It is claimed that
> If we dynamically adapt the step size $\alpha$, this approach theoretically brings us infinitesimally close to the decision boundary, potentially leading to deviations among the backends.
This is not intuitive to me, so it would be great if the authors could provide a simple proof. I think the iteration should bring $x_n$ to a point with a large likelihood for $y_i$, instead of to the decision boundary.
Moreover,
> However, there is a catch: the generated points do not lie within $\mathbb{S}$, and thus moving along their gradients may lead into infeasible regions, as demonstrated in Section 4.4. To address this problem, we map $x_k$ back to $\mathbb{S}$ when computing its gradient, ensuring that the gradients reflect the view from $\mathbb{S}$ while optimization occurs in $\mathbb{F}$.
It is unclear to me how this addresses the problem, because we still have $x_{i+1} \not\in \mathbb{S}$, as illustrated in Figure 4. An ablation experiment without mapping $x_k$ back to $\mathbb{S}$ when computing its gradient would help clear some doubts.
Experimental Designs Or Analyses: The experiment uses very simple network architectures, i.e., three VGG blocks + three dense layers for CIFAR-10 and two fully connected layers for FMNIST. Moreover, the networks have poor discriminative performance, as "a test accuracy of 82.32% and 80.75%" is barely acceptable for datasets as simple as CIFAR-10 and FMNIST. It is unclear whether the attack also applies to more complex datasets and networks.
In the same vein, the defense proposed in section 5.2 has "no negative impact on the test accuracy of the evaluated models". Probably this is because the test accuracy is unimpressive to begin with. I imagine it's much harder to keep the test accuracy at 99%.
Supplementary Material: No. In fact, I'm a little sad that the authors didn't provide supplementary material. The description of baseline methods "Boundary sample search" and "Adversarial example search" are too ambiguous, so I would like to see the software implementation to see what's really going on.
Relation To Broader Scientific Literature: > So far, previous work has focused on floating-point imprecision arising from differences in CPU architectures, for example, for fingerprinting systems (Schlögl et al., 2021; 2024) or breaking the certification of models (Jin et al., 2022; 2024; Voráček & Hein, 2023). Our analysis of linear algebra backends builds on this work; yet, we aim to induce significantly larger changes that flip the prediction of a model given an adversarial input. While differences in CPU architecture may further exacerbate this issue, we demonstrate that Chimera examples also exist between backends on the same CPU architecture.
This work identified a new attack surface, i.e., the difference in software implementations of linear algebra libraries on the same hardware. Moreover, I believe the different number of threads on the same BLAS backend can also affect the order in which floating point values are accumulated, and thus induce numerical imprecision, despite the fact that the authors didn't mention this in the paper. Maybe even different CUDA versions can induce some disparity, but I'm less confident about that.
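The accumulation-order effect mentioned here is easy to demonstrate. A minimal NumPy sketch (illustrative only, not from the paper) shows that float32 addition is not associative:

```python
import numpy as np

# Summation order changes the float32 result: (a + b) + c keeps the small
# term, while a + (b + c) absorbs it into the large magnitude first.
a = np.float32(1e8)
b = np.float32(-1e8)
c = np.float32(0.1)

left = (a + b) + c   # cancellation happens first -> 0.1 survives
right = a + (b + c)  # c is absorbed into -1e8 first -> result is 0.0

print(left, right, left == right)
```

A multi-threaded BLAS kernel that partitions a dot product differently across threads effectively picks a different bracketing of the same sum, and thus a different result in the last ULPs.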
Essential References Not Discussed: It's probably worth citing "Explaining and Harnessing Adversarial Examples" published in ICLR 2015, since it's the paper that proposed the fast gradient sign method (FGSM).
Other Strengths And Weaknesses: **Strengths**:
1. The attack surface is novel.
1. The ablation study in section 4.5 is helpful in understanding the landscape of Chimera examples.
**Weaknesses:**
1. The gradient-based attack algorithm does not seem novel. While there is a mapping $q: \mathbb{F}^d \to \mathbb{S}$, as I mentioned earlier there are no ablation experiments exploring its necessity.
Other Comments Or Suggestions: Typos:
1. Line 255, "liberary" should be "library"
2. Line 322 and 324, "\mathbb{F}" should be "\mathbb{F}^d"
3. Line 371, "up" should be "ULP"
Questions For Authors: 1. On line 168, what's the motivation for averaging across multiple backends?
2. On line 203, "The starting point $x_1$ is obtained from 2000 iterations of our search on a single backend to move towards the proximity of the decision boundary first;" why does this provide a good initialization?
3. On line 255, "To mitigate this effect, and to ensure the reproducibility of our experiments, we default to inference batches of size one." Why is the result deterministic when the batch size is 1? In that case, the backend/scheduler is still free to re-order computations due to the associative property of addition and multiplication.
4. On line 298, what's your explanation of the superiority of your method over alternative baselines? Your work is derived from the "Boundary sample search", so one may expect a similar performance.
5. In Figure 6, which dataset and network architecture are we talking about here?
6. In section 5.2, what if an attacker uses noisy gradient descent? Since considerable-sized clusters of Chimera examples exist, I imagine the attacker has a non-negligible chance of stumbling upon a Chimera example once it's close enough, i.e., when $x_k$ converges to the noisy ball.
7. It's surprising that Chimera examples exist, but it's unclear to me why they are harmful. Can you provide an example where they can be exploited to cause actual risk?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your feedback on our paper!
**Experiments with larger models.** We have extended our evaluation to include the ImageNet dataset, using the more complex architectures ResNet18 (Top-1 Accuracy: 69.7%) and EfficientNetV2S (Top-1 Accuracy: 84.2%). For these experiments, we used a reduced backend set (OpenBLAS, MKL, BLIS, and cuBLAS) and a smaller sample size of 128. In these new experiments, our attack achieves a 100% success rate for CPU-GPU backend combinations on both models. For CPU-CPU backends, we observe success rates of approximately 29% for ResNet18 and 22% for EfficientNetV2S. We will include these results in the paper.
**Loss function in our attack.** Thank you for pointing out an issue in our presentation. Indeed, the description of the loss functions in Section 3.3 is imprecise, as it omits a key point: Following Definition 3.1, we define the target labels of the backends in opposition, such that always $y_i \neq y_j$. This represents an important distinction from the standard search for adversarial examples. Moving the input towards $y_i$ usually moves it away from $y_j$, and vice versa. This “tug of war” between the target labels effectively pulls the input closer to the decision boundary. We will add the precise definition of the target labels to our paper.
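As an illustration of this opposing-target "tug of war" objective, here is a hedged NumPy sketch (the function names are ours, not the paper's; two slightly different weight matrices stand in for the same model evaluated on two backends):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def ce_grad(W, x, y):
    """Gradient of cross-entropy(softmax(W @ x), y) with respect to x."""
    p = softmax(W @ x)
    p[y] -= 1.0
    return W.T @ p

def chimera_step(W_i, W_j, x, y_i, y_j, alpha=1e-3):
    """One 'tug of war' update: backend i is pulled toward label y_i while
    backend j is pulled toward y_j (with y_i != y_j), which drags x toward
    the decision boundary instead of across it."""
    g = ce_grad(W_i, x, y_i) + ce_grad(W_j, x, y_j)
    return x - alpha * np.sign(g)
```

By contrast, a standard adversarial attack optimizes a single target on a single backend, which is why it tends to cross the boundary rather than settle on it.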
**Role of quantization function.** We performed an ablation study to assess the influence of the quantization function on our attack. In this experiment, we compute the gradient of our attack without applying quantization. We find that our attack performs worse in this setting. Quantization is particularly important for CPU-based backends, yielding approximately 2.2× more Chimera samples for CIFAR and between 1.1× and 7× more for FMNIST. For GPU-based backends, we observe no such benefit. In fact, quantization slightly reduces performance by about 1-2%. We will include these results in our paper.
**Intuition of search algorithm.** Our search algorithm brings us close to the decision boundary and then proceeds by alternating back and forth using the conflicting loss functions described above. Theoretically, for every gradient direction we take, there exists a point that lies exactly on the decision boundary and hence is potentially close to a Chimera example. By alternating through these directions in a tug-of-war manner, we converge toward the region where numerical deviations effectively determine the final decision.
**Improvement over baselines.** All baselines were given the same iteration budget of 3000 steps as our attack, except for binary search that ran until convergence. Despite the same budget, our attack outperforms the baselines for the following reasons:
1. Although binary search can approach the decision boundary arbitrarily closely, it often lands in infeasible regions, making it impossible to elicit the required numerical differences from input samples.
2. The Carlini & Wagner attack was designed to target a single backend and searches for a point close to—but still across—the decision boundary. In contrast, our method explicitly targets the decision boundary and seeks inputs that yield conflicting predictions across backends.
3. The original boundary sample search, like binary search, was not designed to produce valid inputs. While it can find boundary points in feature space, the corresponding input samples often do not exist in the input domain.
**Chimera examples in practice.** Chimera examples pose a threat whenever two different systems evaluate the same model on the same data. For instance, in forensic investigations, it is no longer sufficient to use only the same model and input to replicate an incident—such as a malware detection, the censoring of media content, or a hiring decision. Instead, the entire original system—including all backend libraries—must be fully replicated to ensure consistent and reliable results.
**Noisy gradient descent.** When evaluating our defense, we already assume an attacker who is aware of the defense and uses noisy gradients. A key distinction of our defense is that the noise is deterministic and unique for each data point—that is, the noise remains fixed for the same input and key. As a result, an attacker cannot accumulate additional information by averaging over multiple runs. The noise creates a shattered view of the feature space, making fine-grained gradients ineffective except by chance. Our experiment in Section 5 shows that such chance-based success is too low to be practically viable.
**Supplementary Material.** We have uploaded our source code to https://gitlab.com/anonymized-code/2025-icml and will make it available as open source to the community.
**Image on first page.** You are absolutely right—this is a Chimera example with $n = 3$ only. We will replace it and plan to use an example from ImageNet to take advantage of the higher image resolution. | Summary: This paper investigates the vulnerability in neural network inference caused by minor discrepancies in linear algebra backends used by popular frameworks like TensorFlow and PyTorch. The authors introduce "Chimera examples," which are specially crafted inputs that produce conflicting predictions depending on the backend (e.g., Intel MKL, Nvidia CUDA, Apple Accelerate). These inputs exploit the inherent non-associativity of floating-point arithmetic and backend-specific optimizations that affect calculations subtly but significantly. The paper provides a comprehensive analysis of this vulnerability across several backends and proposes a defense mechanism to mitigate potential adversarial attacks exploiting these discrepancies. The findings highlight a novel attack surface within the machine learning pipeline that has been overlooked previously, emphasizing the need for robustness in backend implementations.
Claims And Evidence: See strengths and weaknesses.
Methods And Evaluation Criteria: See strengths and weaknesses.
Theoretical Claims: See strengths and weaknesses.
Experimental Designs Or Analyses: See strengths and weaknesses.
Supplementary Material: See strengths and weaknesses.
Relation To Broader Scientific Literature: See strengths and weaknesses.
Essential References Not Discussed: See strengths and weaknesses.
Other Strengths And Weaknesses: Strengths:
1. Introducing the concept of Chimera examples that capitalize on backend-specific computational differences is a significant contribution to understanding security in machine learning systems. The paper extensively analyzes discrepancies across multiple major linear algebra backends, providing a broad view of the problem's scope.
2. Demonstrates the practical implications of theoretical discrepancies in backend computations, directly linking them to potential security vulnerabilities in deployed machine learning systems. Employs a rigorous methodology for generating and detecting Chimera examples, including detailed algorithmic strategies and adjustments for backend-specific characteristics.
3. Tests across a variety of platforms ensure that the findings are not limited to a specific hardware or software configuration, enhancing the generalizability of the results. Not only identifies a vulnerability but also proposes a novel defense mechanism, contributing both to the theoretical and practical aspects of machine learning security.
4. Addresses an immediate and practical concern in contemporary machine learning deployments, making the research highly relevant and timely. The experimental setup is robust, using popular datasets and architectures to validate the findings, which strengthens the paper's claims through empirical evidence. Provides detailed results including success rates of attacks across different setups, offering clear insights into the effectiveness of the proposed attack and defense strategies.
Weaknesses:
1. While innovative, the proposed defense mechanism is complex and may be challenging to implement in practice without affecting the system's efficiency or usability. The attack and defense strategies might be too tailored to the specific backends tested, which could limit their applicability in a broader range of environments or against future backend updates.
2. The threat model assumes white-box access to the model and backends, which might not always be practical in real-world scenarios, potentially limiting the applicability of the findings. The paper could benefit from a comparison with other types of adversarial attacks to position its contributions within the wider landscape of adversarial machine learning research.
3. The methods for detecting and defending against Chimera examples are likely resource-intensive, which could be a barrier for adoption in resource-constrained environments. It is unclear how the proposed methods scale with increasingly complex models or larger datasets, which is critical for modern deep learning applications.
4. The evaluation is somewhat limited to the datasets used (CIFAR-10 and FMNIST), and additional studies on different types of data might be necessary to fully understand the impacts. The paper does not fully explore how variations in experimental setups, such as different training regimes or model architectures, might affect the prevalence of Chimera examples.
Other Comments Or Suggestions: See strengths and weaknesses.
Questions For Authors: See strengths and weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your feedback on our paper!
**Experiments with larger models.** We have extended our evaluation to include the ImageNet dataset, using the more complex architectures ResNet18 (Top-1 Accuracy: 69.7%) and EfficientNetV2S (Top-1 Accuracy: 84.2%). For these experiments, we used a reduced backend set (OpenBLAS, MKL, BLIS, and cuBLAS) and a smaller sample size of 128. In these new experiments, our attack achieves a 100% success rate for CPU-GPU backend combinations on both models. For CPU-CPU backends, we observe success rates of approximately 29% for ResNet18 and 22% for EfficientNetV2S. We will include these results in the paper.
**Efficiency and generality of our defense.** The computational complexity of the proposed defense depends only on the size of the input, not on the model itself. We leverage standard cryptographic libraries, which allow us to compute keyed noise for thousands of inputs per second. As a result, our defense adds less than 0.5% overhead to the overall inference process, making it practical for most application scenarios.
In our defense, we deliberately avoid making any assumptions about the design or functionality of the employed backends. The only parameter we rely on is the empirically measured size of the pockets containing Chimera examples. A practitioner with knowledge of the specific backends used for a model can measure this size and configure our defense accordingly.
However, you are correct that the required magnitude of defense noise depends on the complexity of the model, as more complex models typically accumulate larger discrepancies between backends during inference. Therefore, the noise level must be empirically calibrated on a per-model basis. Nonetheless, this does not affect the efficiency of our defense.
**White-box access.** It is correct that our attack hinges on white-box access to the model and knowledge of the employed linear algebra backends, as shown in our experiments. However, finding Chimera examples is tricky, even in the white-box setting. Black-box attacks are, therefore, likely to suffer from low performance in practice. Keeping models confidential might, therefore, seem like a possible defense. We would still argue, however, that this is not a reliable protection strategy, as it requires keeping information confidential that is typically not considered secret. Moreover, in standard machine learning frameworks and environments, the available backends might be known to the adversary by default. We will acknowledge this setting in our paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's rebuttal. The author's response did not solve all my questions well, so I kept my previous rating. | Summary: The paper presents a method that exploits differences in the numerical computation implementations of linear algebra backends that power the major ML frameworks to construct adversarial examples.
Claims And Evidence: Strengths:
- In the space of constructing adversarial examples, this paper is very novel and creative, which is a major strength of this paper.
- The result could potentially be used in areas of computer security that are normally not covered by adversarial examples, such as leaking which backend linear algebra implementation is being used just by querying predictions of the model.
- Method is simple and easy to execute.
- Results are sound.
- Paper is clearly written
Weaknesses:
- Simple defenses would likely prevent the attack, such as a slight randomization of the weights, or rejection of predictions that are too close to the boundary.
- Method is identical to the standard method for construction adversarial examples, but with a smaller step size, and thus the novelty of the method is limited (but the application to attacking different linear algebra backends is novel).
- Limited evaluation, just two simple datasets for simple architectures.
- Appears to be most effective specifically when cuBLAS is used, and less effective for the other backends—what is special about cuBLAS?
- Transferability of the adversarial examples to other neural networks isn't evaluated, but would likely be 0.
Overall: I believe the creativity of the submission, plus its applicability to other areas of computer security, should lower the bar with respect to its limited demonstration in real attack scenarios and the limited novelty of the actual method. For this reason, I am voting for acceptance.
Methods And Evaluation Criteria: See review.
Theoretical Claims: See review.
Experimental Designs Or Analyses: See review.
Supplementary Material: See review.
Relation To Broader Scientific Literature: See review.
Essential References Not Discussed: See review.
Other Strengths And Weaknesses: See review.
Other Comments Or Suggestions: See review.
Questions For Authors: See review.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your feedback on our paper!
**Experiments with larger models.** We have extended our evaluation to include the ImageNet dataset, using the more complex architectures ResNet18 (Top-1 Accuracy: 69.7%) and EfficientNetV2S (Top-1 Accuracy: 84.2%). For these experiments, we used a reduced backend set (OpenBLAS, MKL, BLIS, and cuBLAS) and a smaller sample size of 128. In these new experiments, our attack achieves a 100% success rate for CPU-GPU backend combinations on both models. For CPU-CPU backends, we observe success rates of approximately 29% for ResNet18 and 22% for EfficientNetV2S. We will include these results in the paper.
**Other defense strategies.** We initially considered different defense strategies but ultimately chose to randomize the inputs (rather than the weights) using keyed noise. This approach offers two advantages: First, the noise remains fixed for each input and, therefore, cannot be averaged out over multiple runs. Second, the runtime overhead of this defense does not depend on the model’s size and complexity. Note that we employ standard cryptographic libraries in our defense, which enables us to compute keyed noise for thousands of inputs per second.
An alternative strategy would be to reject predictions near the decision boundaries to prevent classifications. However, this comes with drawbacks: First, the margin around the decision boundary would still need to be chosen based on the pocket sizes of the backends. Second, introducing such a margin effectively creates a new attack surface for Chimera examples—this time between “rejected” and “accepted” predictions.
**Analysis of cuBLAS.** We have further investigated the differences in attack performance between CPU-based and GPU-based backends. We found that the considered CPU-based backends employ identical implementations for the convolution operator. Hence, the numerical differences between them stem from matrix multiplication only. In contrast, the GPU-based backend cuBLAS uses a fundamentally different implementation for convolutions. As a result, numerical deviations arise from both convolution and matrix operations.
As a result, we observe varying results for the FMNIST and CIFAR models. Since the FMNIST model consists solely of dense layers, the observed differences remain consistent across all backends. In contrast, the CIFAR (and ImageNet) models are largely composed of convolutional layers, which exhibit substantial variation when using cuBLAS. This variation makes it easier to identify Chimera examples in these models. We will include these results in our paper. | Summary: This paper claims that the implementations of linear algebra used by popular frameworks such as PyTorch and TensorFlow are not exactly consistent. The difference between these implementations can be quantified using a term called ULP (Unit in the Last Place). The authors demonstrate that this small gap is enough to produce adversarial examples specific to a given backend.
Claims And Evidence: The contribution of this paper is somewhat unclear. Typically, the robustness of a model is measured by its accuracy against adversarial examples, which is often referred to as robust accuracy. However, this metric can fluctuate across different hardware, mathematical libraries, and objective functions. For example, AutoAttack and RobustBench use 11 different objective functions (one untargeted attack and ten targeted attacks) to generate adversarial examples. A model is considered robust only if it can defend against adversarial examples produced by all of these objective functions. This is a well-known observation in the field.
Given this, the authors should clarify (a) why the standard robust accuracy metric is not used in this paper, and (b) whether any specific backend exhibits a significant drop in robust accuracy compared to others. If no such drop is observed, the purpose of generating these adversarial examples remains unclear. I believe that the inconsistency of adversarial examples across various backends is not inherently problematic, as long as they adhere to standard definitions of adversarial examples. In other words, adversarial examples are not unique; multiple valid adversarial examples are an acceptable scenario.
Methods And Evaluation Criteria: * As mentioned in the Summary section, robust accuracy is not reported in this paper. If this metric is not applicable, the authors should provide a convincing explanation to justify its exclusion to the reviewers and readers.
* I compared Algorithm 1 with the PGD attack proposed by Madry [1]. The first step projects the images onto a constrained set, and the second step calculates the effective gradient by accumulating the gradients from each implementation. I would appreciate further clarification on whether there are any significant improvements in this approach compared to existing methods.
* The authors did not explain the reasoning behind the reparameterization of the input (line 180, right column). Could an adaptive attack achieve the same goal?
* The most important aspect of Algorithm 1 seems to be missing: the authors did not provide a clear definition of the projection function $\hat{x}_k = q(x_k)$. This should be clearly stated.
* The experimental configurations, including the radius of the epsilon ball, are not fully detailed. Providing more information on these settings would be beneficial.
Theoretical Claims: The paper lacks theoretical proofs. The concept of the infeasible area shown in Figure 3 appears overly simplistic. I would suggest that the authors provide a clear, formal mathematical definition of this area to strengthen the theoretical foundations of their work.
Experimental Designs Or Analyses: * As mentioned earlier, the paper does not provide any experimental results on robust accuracy. Without these results, it is difficult to understand the significance of the adversarial examples presented by the authors.
* In Table 2, if I understand correctly, the attack success rates (ASR) refer to the ratio of "chimera" examples (not ordinary adversarial examples). It is unclear whether the term "Adv. example" in the last row refers to adversarial examples generated by the method described in the paper or by other means. Clarification is needed.
* Additionally, the target model used in the experiments is naturally trained, not adversarially trained. This suggests that generating adversarial examples should be relatively easy. In this context, the significance of these adversarial examples proposed by the authors remains uncertain.
* As shown in Table A1, Dropout layers are involved. Dropout is a stochastic process and should not be used during inference, as this would overestimate the robust accuracy.
* The proposed defensive algorithm appears to be ineffective. One can approximate the gradient through accumulation in the same way as described in the equation on line 167. For further details, please refer to the paper by [2].
Supplementary Material: I have reviewed the supplementary material.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: [1] Towards Deep Learning Models Resistant to Adversarial
[2] Obfuscated Gradients Give a False Sense of Security
Other Strengths And Weaknesses: A major weakness is the lack of adversarially trained models in the experiments.
Other Comments Or Suggestions: * Please number all equations for clarity.
* In conclusion, I strongly suggest rejecting this paper due to unclear contributions and insufficient supporting evidence. The modifications required to meet the acceptance criteria seem substantial. However, I encourage the authors to provide more compelling evidence to support their claims. I would reconsider my recommendation if convincing revisions are made.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: Thank you for your feedback on our paper!
**Chimera vs. adversarial examples.** We are sorry that a key distinction between Chimera examples and adversarial examples did not come across clearly. Constructing a Chimera example always requires considering at least two backends simultaneously. If only a single backend is attacked, the resulting adversarial example is likely to transfer to other backends as well—and thus does not qualify as a Chimera example. This is why standard algorithms for generating adversarial examples are typically unable to produce Chimera examples. As shown in Table 2, for instance, the attack by Carlini & Wagner (labeled Adv. Examples) failed to find a single Chimera example for the CPU-only backend pairs. We will clarify this point in the revised version of the paper.
**Robust Accuracy.** We have considered different metrics for our evaluation but ultimately decided against using robust accuracy, as our focus is not on measuring changes in model robustness but on assessing the attack surface of backend pairs. Nonetheless, the presented attack success rate—shown in Figure 5 for different backend pairs—can be interpreted as a normalized version of (1 - robust accuracy). Since we introduce a new attack, we present this success rate, as it better highlights the efficacy of the attack rather than the resulting degradation in model performance.
**Adversarial Training.** Thanks for this suggestion. Indeed, we had not considered adversarial training as a potential defense. However, this concept cannot be directly applied in our setting, as it would require training the same model across multiple backends simultaneously. Nonetheless, we conducted two experiments to investigate this defense in slightly different settings.
First, we added Chimera examples to the training data of the considered models, which corresponds to a simplistic form of adversarial training. In this setting, however, we observe no impact on the success of our attack.
Second, we performed adversarial training using a standard method for generating adversarial examples (PGD). Interestingly, this reduces the success rate of our attack by 30–40%. We attribute this drop to our attack’s initialization step, which leverages an adversarial example to reach the decision boundary. By increasing the number of iterations in this initial step, we can improve the attack’s performance again. This highlights an interesting property of our attack: it initially behaves like a standard adversarial example to reach the decision boundary but then searches for Chimera examples in its vicinity.
We will include this additional experiment in the revised version of the paper. Still, our defense (Section 5) is capable of completely eliminating the attack without affecting model performance, unlike adversarial training.
**Quantization and reparameterization.** The quantization function $q$ maps a vector $x$ in the feature space to the nearest element in a discrete set $S$ forming the input space. For instance, in the case of 8-bit images, it converts floating-point values in the feature space back to 8-bit pixel values. Similarly, reparameterization is a technique used to enforce box constraints on the input, ensuring that all pixel values remain within the valid range [0, 255]. This approach is derived from the Carlini-Wagner attack. We will clarify this in the paper.
(Infeasible regions) Infeasible regions arise from the mismatch in granularity between the input space (e.g., 8-bit pixels) and the feature space (e.g., 32-bit floats). While Chimera examples may exist for certain combinations of 32-bit floating-point values, these may not necessarily be reachable using 8-bit inputs, hence lying in an infeasible region. We will clarify this difference and provide a more formal definition of this problem using the quantization function $q$.
**Key differences.** Our attack differs from the classic PGD attack in two key aspects. First, we calculate the gradient from a quantized input in every iteration. Quantization is necessary because we want to elicit a Chimera example from a feasible (discrete) input. Second, the perturbation is computed from multiple linear algebra backends with conflicting loss functions. Each backend aims to push the input toward a different class—like in a tug-of-war scenario.
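One iteration of this scheme can be sketched as follows (a hypothetical toy setup in which the conflicting "backends" are modeled as per-backend target classes on a shared logistic model; the real attack operates on actual linear-algebra backends with their own forward passes):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def chimera_step(x, w, b, targets, alpha=0.01):
    """One attack iteration: quantize first, then sum the conflicting gradients.

    targets: desired class (0 or 1) for each backend. Backends with different
    targets pull the input in opposite directions -- the tug-of-war.
    """
    xq = np.round(np.clip(x, 0, 1) * 255) / 255  # gradient is taken at the quantized point
    grad = np.zeros_like(x)
    for t in targets:
        p = sigmoid(w @ xq + b)   # each backend runs its own forward pass
        grad += (p - t) * w       # BCE gradient pushing the logit toward target t
    return np.clip(x - alpha * grad, 0.0, 1.0)
```

When the opposing pulls balance (here, at $p = 0.5$), the update vanishes and the iterate sits on the decision boundary, which is exactly the regime where Chimera examples are sought.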
**Defense effectiveness.** Our defense remains effective even when gradients are approximated or accumulated. It is effective because Chimera examples lie in extremely narrow regions of the feature space. As a result, adding noise with a magnitude larger than these regions during inference makes it very unlikely to discover them. As this noise is fixed for each input $x$ (keyed), it cannot be averaged out through repeated computations. Consequently, the computed gradients may move around the regions of Chimera examples but can only locate them by chance.
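A minimal sketch of the keyed-noise idea (illustrative; the magnitude and noise distribution here are placeholders, not the paper's actual defense parameters): derive the perturbation deterministically from a secret key and the input, so repeated queries on the same $x$ always see the same noise and averaging gains the attacker nothing.

```python
import hashlib
import numpy as np

def keyed_noise(x, key, scale=0.05):
    """Deterministic per-input noise: the same (key, x) always yields the same draw."""
    digest = hashlib.sha256(key + x.tobytes()).digest()
    seed = int.from_bytes(digest[:8], "big")       # input-dependent PRNG seed
    rng = np.random.default_rng(seed)
    return rng.uniform(-scale, scale, size=x.shape)

def defended_forward(model, x, key):
    """Inference with keyed noise added to the input."""
    return model(x + keyed_noise(x, key))
```

Because the noise is a fixed function of the input, repeated gradient queries are consistent with each other but displaced from the true model, so the narrow Chimera regions can only be hit by chance.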
**Drop-out layers.** We use drop-out layers only during training and not inference.
---
Rebuttal Comment 1.1:
Comment: The responses provided are not convincing to me for the following reasons:
* Unclear Motivation: First, from the perspective of an attacker, the goal is to minimize robust accuracy. However, robust accuracy is not reported in this paper. Second, it is well known that robust accuracy can fluctuate across different hardware, mathematical libraries, and objective functions. The reason this topic has gone unstudied is simple: it is unnecessary from an attacker's standpoint. Third, the authors did not directly address my question regarding whether any specific backend exhibits a significant drop in robust accuracy compared to others. This is a crucial question: if ordinary adversarial attacks can already deceive models significantly, I fail to understand why Chimera examples are worth studying.
* Lack of Assessments on Adversarially Trained Models: If the authors wish to demonstrate that Chimera examples are important but have not been sufficiently explored, assessments on adversarially trained models should be included. As I mentioned in my previous comment, generating adversarial examples on standard trained models is relatively simple. The authors could easily download pre-trained adversarially trained models from RobustBench or other repositories to conduct experiments. If those models can defend against Chimera examples effectively, it would suggest that the proposed attack/defense is not meaningful. However, the authors decided to train very simple models as baselines, which is not a convincing assessment. I do not believe this evaluation accurately reflects real-world scenarios.
* Defense Method: The proposed defense, if I understand correctly, is aimed at black-box settings. However, numerous studies already exist on generating adversarial examples with imprecise gradient estimation, and the proposed defense does not seem to offer anything new in this regard.
* Limited Novelty: There are already many PGD-like attacks that incorporate specific constraints to generate custom adversarial examples. The key differences claimed by the authors do not present new concepts.
* Experimental Configuration: The authors' response does not fully detail the experimental configurations, including the radius of the epsilon ball, which raises concerns about the clarity and reproducibility of the experiments.
I have carefully reviewed all the comments from other reviewers and the corresponding responses from the authors. However, based on the reasons outlined above, I maintain my original rating. | null | null | null | null | null | null |
Paper: Editable Concept Bottleneck Models
Decision: Accept (poster)
---
Review 1
Summary: Editable CBMs provide the ability to edit a trained CBM to account for issues in annotation errors, concept set changes, and problems with specific data points. This is done with the help of influence functions that approximate the model.
Claims And Evidence: The claims made are largely clear and supported by evidence. There is one exception, which is that in the motivation, the concept-level editing is motivated by the fact that oftentimes one wants to *add* concepts to the concept set post-hoc. As far as I understand this work, only the removal of a concept is possible, but not the addition thereof. Thus, I suggest changing this framing.
Methods And Evaluation Criteria: The proposed method is well-motivated and sound. The datasets used, while simplistic, are established benchmarks in the field.
Theoretical Claims: I did not check the proofs in the Appendix, however, the equations in the main text intuitively do not contradict my intuition of how they should be.
Experimental Designs Or Analyses: The empirical evaluation is sound and thorough, i.e. the authors measure the traits that ECBMs are supposed to fulfill. It is impressive that ECBMs' performance is close to "Retrain", which functions as an Oracle.
Supplementary Material: No.
Relation To Broader Scientific Literature: This work contributes to the literature on CBMs. To the best of my knowledge, in the context of CBMs, model editing of this sort has not been explored. Personally, I am not convinced that model editing is such an important task for CBMs, as their concept bottleneck prevents them anyway from being too large to retrain.
Essential References Not Discussed: I recommend moving the related work section on Machine Unlearning into the main text. It would help framing the paper correctly and understanding the contribution of this work.
Other Strengths And Weaknesses: I am not well-read in the domain of machine unlearning, but I am very sure that they are highly relevant to this work. That is, my intuition tells me that this work is essentially CBM + Machine Unlearning, and I am unsure how much novelty there is with respect to the existing methods in the field of Machine Unlearning.
Other Comments Or Suggestions: The abbreviation ECBM is already used by the published Energy-based Concept Bottleneck Models.
Questions For Authors: My main reasons for not giving a higher score are
1. that model editing in the context of CBMs is not a big problem in my opinion. I think existing methods such as [1] could easily be adapted to quickly retrain with the adapted dataset. As such, the significance is limited in my opinion.
2. I am unsure of the novelty with respect to existing methods in Machine Unlearning, as the usage of Influence functions for this purpose appears quite standard, and I would imagine that the editing of encoder and predictor of the CBM can be "mapped" to some existing tasks in that field.
While these are not questions, I invite the authors to comment on these opinions.
3. Can the authors provide code for reproducibility?
[1] Laguna, Sonia, et al. "Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable?." The Thirty-eighth Annual Conference on Neural Information Processing Systems.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: -*Response to Claims And Evidence*
We respectfully disagree with your opinion. The primary goal of CBM is to explicitly decompose the model's intermediate representation into a set of interpretable concepts, typically predefined by domain experts or specific task requirements before training. However, when task requirements change, new concepts may need to be added to the existing concept set, making such scenarios possible in practice. Most CBM methods, however, fix the concept set after training, with the model's structure and parameters tightly bound to these concepts. As a result, directly adding new concepts post-training is challenging and often requires retraining the model.
-*Response to Relation To Broader Scientific Literature*
Thank you for sharing your perspective. I understand your concerns and would like to highlight why CBM editing is important:
1. Correcting Labeling Errors: In fields like healthcare, training data is valuable, and discarding mislabeled data isn’t ideal. CBM editing allows targeted corrections without costly retraining.
2. Updating Concepts: During deployment, missing or irrelevant concepts may arise. CBM editing enables efficient updates.
3. Privacy Constraints: CBM editing allows data removal requests to be handled accurately without full model retraining.
As you mentioned, retraining CBMs can be computationally expensive due to their bottleneck structure. This makes CBM editing a practical alternative.
In summary, model editing effectively addresses practical challenges such as correcting data errors, ensuring privacy, and adapting to new requirements, all without the high cost of retraining. These factors highlight the importance of CBM editing in real-world applications, as acknowledged by the other reviewers.
-*Response to Essential References Not Discussed, Other Strengths And Weaknesses and Questions For Authors 2*
Thank you for your thoughtful perspective. While we acknowledge that editing CBM within the context of privacy constraints does share some similarities with machine unlearning, this work is not solely focused on CBM and machine unlearning. Our primary objective is to enable the flexible editing of CBM across three levels: data, concept, and concept-label. This process goes beyond unlearning to encompass modification and optimization as well. Ultimately, the core goal of this work is to enhance the applicability and adaptability of CBM, rather than to design a new machine unlearning algorithm specifically for CBM.
-*Response to Questions For Authors 1*
Thanks for your information. [1] proposes a method for intervening on the intermediate representations of neural networks, but its network architecture is not based on CBMs, thereby diverging from the goal of editing the original CBM framework.
In contrast, our work focuses on the specific application scenarios of CBMs. By addressing model editing at the levels of data, concepts, and concept labels, we systematically formulate this problem mathematically for the first time and develop editing algorithms with theoretical guarantees. Our research fills a critical gap in this area.
-*Response to Questions For Authors 3*
Yes. Our code can be found here.
https://anonymous.4open.science/r/ECBM-4B14
Thank you very much for your valuable feedback and recognition of our article. If we have addressed all of your concerns, we kindly ask you to consider giving a higher rating.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response.
Claims And Evidence: I think I was misunderstood. I completely agree that adding a concept post-hoc can be a desirable task. What I meant was the following: As far as I understand the proposed method, the method can only remove concepts, not add them. That is, it can not cover this important task.
Please let me know if this is not the case.
Relation To Broader Scientific Literature: I disagree that CBMs are computationally expensive to retrain.
I thank the authors for their response and keep my score as my opinion on the raised points has not been changed.
---
Reply to Comment 1.1.1:
Comment: Thanks for your feedback.
-*Response to Claims And Evidence*
We appreciate your clarification. You are absolutely correct; the ECBM method cannot accommodate requests for adding new concepts to the CBM and can only facilitate concept removal. We will ensure that this statement is revised in the camera-ready version. We sincerely appreciate your insights and thank you for bringing this issue to our attention.
-*Response to Relation To Broader Scientific Literature*
From the results in Table 1, it can be observed that retraining a CBM based on ResNet-18 on the OAI dataset (which consists of approximately 30,000 entries) requires at least 250 minutes. Consequently, the time cost of retraining a CBM is considerable.
While this time cost may initially seem acceptable, it is important to note that CBMs are typically utilized in dynamic environments, such as those involving frequent data deletion requests for privacy reasons, as well as label or concept corrections. In this context, the demand for retraining CBMs is both present and frequent. This situation significantly limits the effectiveness of CBMs in practical applications.
---
Review 2
Summary: The paper introduces Editable Concept Bottleneck Models (ECBMs), an extension of Concept Bottleneck Models (CBMs) that allows efficient data and concept removal without full retraining. Using influence functions and Hessian-based approximations, ECBMs support three levels of editability: concept-label, concept, and data-level. Experiments on multiple datasets show that ECBMs achieve similar performance to retraining while being 20-30x faster, making them highly efficient for real-world applications. The work enhances CBMs' adaptability and interpretability but could further explore concept addition and real-world deployment.
Claims And Evidence: Yes, the experiments and analyses support the claims.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are appropriate.
Theoretical Claims: The paper provides a closed-form solution for the model approximation of CBMs, which includes theoretical claims. I have checked the correctness of the derivation and did not identify any issues.
Experimental Designs Or Analyses: Yes. They design experiments to evaluate the utility and efficiency of ECBMs, comparing them with retraining and CBM-IF. They analyze the impact of different edit settings, concept importance, and data removal. They further validate ECBMs using membership inference attacks. These designs and analyses are valid.
Supplementary Material: Yes. The details, including proof, case studies, and additional experiments, are provided in the Appendix.
Relation To Broader Scientific Literature: The paper extends CBM by introducing efficient editability using influence functions, building on prior CBM research and influence function applications.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths
- The paper extends CBMs by incorporating editable capabilities, addressing practical issues such as privacy concerns, annotation errors, and dataset corrections. This is particularly useful for dynamic datasets that require frequent updates.
- The paper evaluates ECBMs on multiple datasets (OAI, CUB, and CelebA), demonstrating that ECBMs achieve near-identical performance to retraining while reducing computation time by up to 30x.
- The paper presents closed-form solutions using influence functions, avoiding costly retraining while maintaining accuracy.
- ECBMs provide an efficient way to remove concept biases and erase data influences, addressing model privacy and fairness concerns.
- The incorporation of EK-FAC further accelerates computation, making ECBMs scalable.
Weaknesses
- While the paper discusses the computational advantages of ECBMs, a more detailed analysis of time and space complexity would strengthen the scalability argument.
- The font size in Figure 1 is too small, making it difficult to read. Given its importance, it would be beneficial to adjust the layout (e.g., expanding it to a two-column format).
- Some mathematical notation could be better explained, particularly for readers unfamiliar with influence functions. A more intuitive explanation or additional background material would improve clarity.
Other Comments Or Suggestions: Please refer to the weaknesses section above.
Questions For Authors: - Could the authors provide a more detailed analysis of the time and space complexity of ECBMs? While the empirical results demonstrate efficiency, a formal complexity discussion would further support the claims regarding scalability.
- Some derivations rely on influence functions, which may be unfamiliar to many readers. Could the authors provide a more accessible explanation or an appendix section with a high-level intuition behind these functions?
Overall, I did not find any significant weaknesses in the paper. I may change my score based on the authors’ responses regarding weaknesses and above questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 5
---
Rebuttal 1:
Rebuttal: -*Response to Weaknesses 1*
Thanks for your invaluable advice. We will add this part in the revision. Here, we provide the analysis for algorithm 1.
The time complexity of the algorithm is \( O(n \cdot (m^2 + d^2) + s_e \cdot m^2 + d^3) \), where \( n \) is the number of data points, \( m \) is the dimensionality of \( \hat{g} \), \( d \) is the dimensionality of \( \hat{f} \), and \( s_e \) is the size of the erroneous data set \( S_e \). Computing the Hessian matrices for \( \hat{g} \) and \( \hat{f} \) takes \( O(n \cdot m^2) \) and \( O(n \cdot d^2) \), respectively, while the updates for \( \hat{g} \) and \( \hat{f} \) contribute \( O(s_e \cdot m^2) \) and \( O(n \cdot d^2 + d^3) \). The space complexity is \( O(m^2 + d^2 + n \cdot (m + d)) \), dominated by storing the Hessian matrices and the required gradients across all data points.
-*Response to Weaknesses 2 and 3*
We will modify Figure 1 and improve our notation in the revision.
-*Response to Question 1 and 2*
Thank you for the valuable suggestion. We will include the time and complexity part in the revision.
We agree that influence functions may not be familiar to all readers, and we appreciate the opportunity to make our work more accessible. In response, we propose to include an additional appendix section that provides a high-level, intuitive explanation of influence functions.
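For instance, such a section could be built around the classical first-order influence approximation (the standard result underlying our method, summarized here rather than re-derived): upweighting a training point $z$ by $\epsilon$ in the empirical risk shifts the minimizer $\hat{\theta}$ approximately as

```latex
\hat{\theta}_{\epsilon, z} \approx \hat{\theta} - \epsilon\, H_{\hat{\theta}}^{-1} \nabla_{\theta} \ell(z, \hat{\theta}),
\qquad
H_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta}^{2}\, \ell(z_i, \hat{\theta}).
```

Setting $\epsilon = -\tfrac{1}{n}$ approximates removing $z$ from the training set, which is the mechanism our editing updates exploit instead of retraining.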
Thank you very much for your valuable feedback and recognition of our article. If we have addressed all of your concerns, we kindly ask you to consider giving a higher rating.
---
Rebuttal Comment 1.1:
Comment: Thanks for your responses. I'm satisfied with the new analysis, which makes the paper more solid.
---
Review 3
Summary: The authors present Editable CBMs, where they consider _editability_ from the lens of retraining CBMs at three different levels: 1) Concept Label-level, i.e. when there's label noise in the concept space, 2) Concept level, i.e. removing spurious concepts from the bottleneck predictions, and 3) Data-level, i.e. final label noise. For 1), the authors use influence functions to estimate the retrain approximation for the concept predictor and label predictor sequentially; for 2), a similar strategy is applied, except a zero row is added for the removed concept in the new model; for 3), additional steps remove the influence of examples with label noise. The authors use EK-FAC for the second-order approximations and iHVP-based algorithms to speed up the compute.
Results are presented on three datasets and additional (extensive) proofs and details are provided in the Appendix.
Claims And Evidence: 1. The claims of editability on the three levels are sufficiently shown in the results and appendix.
2. My problem is with the general presentation of results: while the algorithm and results certainly show that this method provides results quite close to retraining, the results and the language lose the original essence of CBMs, which is to give humans the ability to _intervene_ on the intermediate concepts to _better_ the final prediction. Amid all the theory proving that the authors' modifications fit the CBM framework, this seems to have been lost. The authors do not compare how the concept intervention behavior changes or remains the same for different levels of intervention - based on the language, it seems like final accuracy is the only focus of the paper.
3. Furthermore, many comparisons with modern CBM architectures are missing - for example, the simple choice between soft and hard concepts is not acknowledged (Mahinpei et al.), nor is there a note on whether the proposed algorithm carries over to such architectures.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria barring the critique above makes sense.
Theoretical Claims: Not fully (only the main parts)
Experimental Designs Or Analyses: Refer to point 2 and 3 in claims and evidence
Supplementary Material: Key proofs and the last sections on limitation
Relation To Broader Scientific Literature: CBMs are gaining popularity, and fast re-training is important; this work certainly furthers improving adoption.
Essential References Not Discussed: CBMs:
[1] Mahinpei, Anita, et al. "Promises and pitfalls of black-box concept learning models." arXiv preprint arXiv:2106.13314 (2021).
Label noise
[2] Thulasidasan, Sunil, et al. "Combating label noise in deep learning using abstention." arXiv preprint arXiv:1905.10964 (2019).
[3] Rolnick, David, et al. "Deep learning is robust to massive label noise." arXiv preprint arXiv:1705.10694 (2017)
[4] Balloli, Vaibhav, Sara Beery, and Elizabeth Bondi-Kelly. "Are they the same picture? adapting concept bottleneck models for human-AI collaboration in image retrieval." arXiv preprint arXiv:2407.08908 (2024).
Other Strengths And Weaknesses: The work is original, novel, and has significant impact.
A key weakness is the focus of results which only focuses on final accuracy while other things that could be shown is test-time intervention performance of retrain vs ECBM, robustness to intervention noise, etc.
Another weakness is the limitation and broader impact section which is poorly written - no real limitation except stating it is an approximation and broader impact vaguely mentions doctors and that's it. It is okay to simply state cost saving instead of vaguely writing "ECBM can be an interactive model with doctors in the real world, which is an editable explanation tool." Regarding limitation, the authors can choose to address how many more modifications are required to other architectures like CEM (Zarlenga et al), etc.
Other Comments Or Suggestions: 1. The authors are encouraged to proof-read grammar in a lot of places (missing oxford commas, unnecessarily lengthy sentences, etc.)
2. Define editability early on in the introduction to better prime the readers on what to expect (see also adaptivity -> adaptability in Section 1)
Questions For Authors: The authors are requested to address/clarify all the weaknesses and concerns stated above - with sufficient clarification, I'm willing to raise the score to WA/A.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: ## Weakness:
-*Response to W2: the authors have only considered sequential setting (probably the joint setting as it gives the best performance)*
We sincerely thank the reviewer for highlighting the importance of the jointly training mode in CBM. We agree that joint training sometimes leads to higher accuracy in both label and concept predictions.
However, model performance is not the sole priority:
1. Compared to joint training, sequential training is more robust under limited data conditions.
2. Joint training requires balancing concept loss and task loss, which may result in suboptimal performance for both. Sequential training avoids this trade-off.
3. The modular architecture of sequential training allows for easy post-hoc interventions.
Given these advantages, we focus on the editable CBM with sequential training in this work. Our goal is to explore model editing, which represents the unique perspective and theoretical contribution of our study. We believe this approach complements, rather than replaces, CBM performance optimization research.
Finally, due to the complexity and workload of designing algorithms for the three editing levels, it is not feasible to analyze both sequential and jointly training methods within a single paper. Therefore, in this work, we focus on developing editing algorithms for sequentially trained CBMs across three levels and provide theoretical guarantees. In fact, editing jointly trained CBMs using influence functions is also achievable and will be considered in our future work.
-*Response to W3: Theorem 4.4, the authors insert 0 valued rows*
Thank you for your suggestion.
When a concept is removed, the output dimension of the concept predictor $g$ decreases accordingly. To facilitate estimation, we modify $g$ into $g'$ by inserting a zero-parameter row into its final layer. These parameters remain fixed during training and thus stay zero, ensuring that the model's effective parameter space is strictly a subset of the original space.
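In weight-matrix terms, this construction of $g'$ can be sketched as follows (a hypothetical numpy illustration of the final linear layer; the helper names are ours):

```python
import numpy as np

def remove_concept(W, idx):
    """Drop concept idx from the final-layer weights: the edited predictor."""
    return np.delete(W, idx, axis=0)

def pad_with_zero_row(W_edited, idx):
    """Re-insert a frozen all-zero row at idx so g' matches g's original shape."""
    return np.insert(W_edited, idx, 0.0, axis=0)
```

Because the inserted row is zero and frozen, $g'$ and the reduced predictor agree exactly on all retained concepts, which is what keeps the influence-function update well-defined in the original parameter space.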
In Theorem 4.4, we approximate $g'$ using influence functions, assuming $g'$ continues training within the original parameter space. Consequently, the algorithm's implementation, including parameter updates, remains unaffected regardless of whether the inserted rows are set to 0, 1, or any constant.
-*Response to W4: Error bars*
We apologize for the omission and we have addressed it in the camera-ready version.
-*Response to W5: Concept-level metrics*
Thanks for your suggestion. We will add experiments about concept-level metrics. We perform experiments on CUB and test the concept accuracy of ECBM and retrain. The results are listed below.
| Edit Level | ECBM (%) | Retrain (%) |
| ------- | ---- | ---------- |
| Concept | 93.7112 | 95.1705 |
| Data | 94.5184 | 95.2801|
| Concept-label | 95.0219 | 95.1407 |
Table A: Concept accuracy of ECBM and Retrain under the three edit levels.
The results show that the accuracy of Retrain and ECBM is very close, with differences generally within a small margin; for example, at the Data and Concept-label levels, the accuracy gap is less than 0.5%. Thus ECBM not only approximates the retrained model's label accuracy on the test set but also performs similarly in terms of concept accuracy.
-*Response to W6: The related work section*
We will include more related works in the camera-ready version.
## Questions:
-*Response to Q1: What is $R^{d_i}$?*
Here $R^{d_i}$ denotes the space of all $d_i$-dimensional real vectors.
-*Response to Q2: Why are the concepts in the log?*
This is the definition of cross-entropy, identical to that in the original CBM, as described in the second paragraph of page 15. The activation function used is the sigmoid function. Note that this is distinct from the definition of the loss function.
-*Response to Q3: ff problem*
Thanks for your correction. It should be f.
-*Response to Q4: In lines 167-169 ... Why?*
This is because if we correct the concept and then retrain the CBM, the concept predictor $\hat{g}$ will be updated to $\hat{g}_e$, which differs from the scenario at test time intervention where the concept predictor remains unchanged. This distinction serves as the key motivation for editing CBMs.
-*Response to Q5: Why are no other concept-level metrics utilized?*
See Response to W5 for reference.
# Reviewer tX1Q
-*Response to Other Comments Or Suggestions*
Thanks for your comments. We will check the paper, fix all the errors and define editable CBM explicitly in the camera-ready version.
---
Rebuttal Comment 1.1:
Comment: I don't see any response to the weaknesses in the rebuttal comment (regarding presenting performance under partial intervention, how this technique changes with choice of architectures, etc.).
Edit in response to the new rebuttal:
I appreciate the additional details and experiments. While I agree (and mention in my comment earlier) that your technique gives results that are pretty close to retraining (the original intention of the work), I brought up test time interventions because that is not necessarily supported theoretically (that it doesn't effect partial/full interventions), so showing this empirically would make a strong case to the readers and practitioners looking to use this. Similarly, showing that your method is ubiquitous to different CBM-like architectures like CEM, etc., and maybe even newer use cases of CBMs like retrieval [4] would make the contributions here much more appealing and up-to-date (writing something along the lines of what is mentioned in your rebuttal would really improve the presentation of your contributions in my opinion that mention the flexibility of your contributions and where the readers should go to adapt to these architectures / use cases)
I really hope the new numbers, experiments (both partial and full interventions) and suggestions are taken into account for the camera-ready and therefore, I'm increasing my score accordingly.
[4] Balloli, Vaibhav, Sara Beery, and Elizabeth Bondi-Kelly. "Are they the same picture? adapting concept bottleneck models for human-AI collaboration in image retrieval." arXiv preprint arXiv:2407.08908 (2024).
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
I hope this message finds you well. I sincerely apologize for any inconvenience caused by the oversight in our previous rebuttal. During the copy and paste process, we inadvertently omitted the majority of the text we intended to provide to you for the rebuttal phase.
To rectify this, I am including the correct rebuttal content below. Thank you very much for your understanding and patience.
-*Response to Claims And Evidence 2: test-time intervene*
Thank you for your insightful feedback on this paper. We fully agree that the core value of CBMs lies in enabling human intervention on intermediate concepts to improve final predictions.
It is important to highlight that the key contribution of this paper is showing that ECBM can efficiently estimate CBMs without retraining, when the training data changes at data, concept level, or concept-label level. As such, ECBM shares the same architecture as CBM and retains CBM’s core essence: enabling concept intervention during test time. These three level changes are independent of the test-time intervene. For ECBM, the concept intervention behavior is identical to that of the original CBM.
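Concretely, test-time intervention in a (sequential) CBM simply overwrites a subset of predicted concepts with ground truth before the label predictor runs; a schematic sketch with $g$ and $f$ as generic callables (our illustration, not the paper's code):

```python
import numpy as np

def intervene_and_predict(g, f, x, true_concepts, intervene_idx):
    """Replace the intervened concept predictions with ground truth, then predict y."""
    c_hat = np.asarray(g(x), dtype=float).copy()
    c_hat[intervene_idx] = true_concepts[intervene_idx]  # human intervention on concepts
    return f(c_hat)
```

Since ECBM only edits the parameters of $g$ and $f$, this procedure is identical for the edited and retrained models.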
To demonstrate the closeness in test-time intervention capabilities between the model estimated by the ECBM method and the retrain approach, we varied the number of concepts intervened on at test time and conducted a series of experiments.
| Concept Number | Retrain | ECBM |
|--------|----------|----------|
| 0 | 0.51273 | 0.52331 |
| 1 | 0.51505 | 0.52107 |
| 2 | 0.50214 | 0.51616 |
| 3 | 0.48848 | 0.50794 |
| 4 | 0.47924 | 0.50485 |
| 5 | 0.47885 | 0.48878 |
| 6 | 0.46197 | 0.48699 |
| 7 | 0.45029 | 0.47524 |
| 8 | 0.45312 | 0.46290 |
| 9 | 0.44787 | 0.46113 |
| 10 | 0.44823 | 0.45707 |
The experimental results further demonstrate that the concept intervention effects of ECBM are sufficiently close to those achieved by retraining the model, validating that ECBM preserves the core advantages of CBMs while enabling greater efficiency in model training.
-*Response to Claims And Evidence 3: comparasion with modern CBM architectures*
To validate the performance of ECBM on soft concepts, we perform the following experiments and the results are shown here.
The ECBM method can be easily adapted to handle scenarios where concepts take continuous values, such as in CEM, or involve soft labels. By modifying the loss function in Equation 1, the subsequent algorithm can be directly extended to these cases. Furthermore, we demonstrate the performance of ECBM under soft label scenarios in our experiments. And the experiments are still on-going.
-*Response to Other Strengths And Weaknesses*
Thank you for your suggestion. We will include experimental results related to test-time intervention performance for both the ECBM and retrain models, and we will revise the Limitation and Broader Impact sections accordingly.
-*Response to Other Comments Or Suggestions*
Thanks for your comments. We will check the paper, fix all the errors and define editable CBM explicitly in the camera-ready version. | Summary: This paper improves Concept Bottleneck Models (CBMs) by proposing how to update or “edit” a well-trained CBM. The issues arise when the concept-label level annotations need to be updated, concepts themselves need to be removed and certain data samples used in the training of the model themselves need to be removed. Rather than retraining a CBM from scratch which is computationally expensive, Editable CBM proposes approaches inspired by influence functions to update model parameters on the aforementioned challenges. Theoretical and empirical results demonstrate the effectiveness of the method.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes. They are absolutely correct.
Theoretical Claims: Yes. I have detailed my reservations in the sections below.
Experimental Designs Or Analyses: Yes. I have detailed my reservations in the sections below.
Supplementary Material: Yes, all.
Relation To Broader Scientific Literature: The paper improves CBMs through influence functions, an important problem not previously attempted.
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
1. A very important problem studying the impact of various concept editing mechanisms on trained CBMs without retraining the entire models.
2. Theoretically sound submission with appropriate proofs and justifications
Weakness:
1. The overall paper suffers from confusing notations (refer to Questions) and some unusual choice of variable names which hinder understanding. Specifically,
2. A very important property of CBMs in the original paper [1], was the joint and sequential training paradigm of these models. However, it looks like the authors have only considered one of the settings (probably the joint setting as it gives the best performance). The results should also be demonstrated on the sequential setting to make it truly generalize to all CBMs.
2.1. In addition to 2, I am still not sure about the theoretical basis of "correcting" a jointly trained function $f$ and $g$ would work. If working with a sequential setting, it is easy to see that editing $f$ and $g$ with Hessians is straight-forward, but with joint network training, many of the assumptions are invalid as information has flowed from $f$ to $g$ during training. As an example, in Theorem 4.3, how are we measuring the impact using $\hat{g}$, which should actually be $\hat{g} - H_{\hat{g}}^{-1}$ as the changed params in the concept predictor should influence label predictor as well.
3. In Theorem 4.4, the authors insert 0 valued rows to make up for the dimensional inconsistency and then remove the same rows after the edit to achieve their desired result. This process is very uncertain, with no claims as to if the 0 valued rows lose important information or not. As a suggestion, the authors can perform a small ablation on the before and after effect of packing these rows with numbers - 0, -1, +1, etc. or report Mutual Information-type metrics.
4. (Very minor) Error bars on the Time column are not present - why is that?
5. Why would no concept-level metrics be reported? I understand why F-1 score is used, but editing the model can make a difference in the concept performance as well. In addition, intervention performance as done by [1] are also not reported.
6. (minor) Lastly, the related work section can be expanded to include other approaches to improve CBMs.
[1] Concept Bottleneck Models, Koh et al., ICML '20
Other Comments Or Suggestions: Refer Weakness.
Questions For Authors: 1. What is $R^{d_i}$ in Line 118? It is not defined.
2. In Equation-1, why are the concepts $c_i^j$ in the $log$? This is not consistent with actual CBM, where they are only utilized as a sigmoid.
3. Typo in Line 149 - ff should be $f$ or $\hat{f}$ (unclear).
4. In lines 167-169 what do the authors mean by "if we intervene with the true concepts, the concept predictor $\hat{g}$ fluctuates to $\hat{g_e}$ accordingly". Why is this the case?
5. Why are no other concept-level metrics utilized?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: -*Response to W2: the authors have only considered one of the settings (probably the joint setting, as it gives the best performance)*
We sincerely thank the reviewer for highlighting the importance of the joint training mode in CBM. We agree that joint training sometimes leads to higher accuracy in both label and concept predictions.
However, model performance is not the sole priority:
1. Compared to joint training, sequential training is more robust under limited data conditions.
2. Joint training requires balancing concept loss and task loss, which may result in suboptimal performance for both. Sequential training avoids this trade-off.
3. The modular architecture of sequential training allows for easy post-hoc interventions.
Given these advantages, we focus on the editable CBM with sequential training in this work. Our goal is to explore model editing, which represents the unique perspective and theoretical contribution of our study. We believe this approach complements, rather than replaces, CBM performance optimization research.
Finally, due to the complexity and workload of designing algorithms for the three editing levels, it is not feasible to analyze both sequential and joint training methods within a single paper. Therefore, in this work, we focus on developing editing algorithms for sequentially trained CBMs across the three levels and provide theoretical guarantees. In fact, editing jointly trained CBMs using influence functions is also achievable and will be considered in future work.
-*Response to W3: Theorem 4.4, the authors insert 0 valued rows*
Thank you for your suggestion.
When a concept is removed, the output dimension of the concept predictor $g$ decreases accordingly. To facilitate estimation, we modify $g$ into $g'$ by inserting a zero-parameter row into its final layer. These parameters remain fixed during training and thus stay zero, ensuring that the model's effective parameter space is strictly a subset of the original space.
In Theorem 4.4, we approximate $g'$ using influence functions, assuming $g'$ continues training within the original parameter space. Consequently, the algorithm's implementation, including parameter updates, remains unaffected regardless of whether the inserted rows are set to 0, 1, or any constant.
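To make the zero-row construction concrete, here is a minimal NumPy sketch (illustrative only, not the authors' code; the function and variable names are ours). It shows that inserting a frozen all-zero row into the final layer restores the original output dimension while leaving the live rows' predictions untouched, so the effective parameter space is a strict subset of the original.

```python
import numpy as np

def pad_removed_concept(W, b, k):
    """Re-insert a frozen zero row/bias at index k of the final layer."""
    W_pad = np.insert(W, k, np.zeros(W.shape[1]), axis=0)
    b_pad = np.insert(b, k, 0.0)
    return W_pad, b_pad

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))   # g' predicts 3 remaining concepts from 5 features
b = rng.normal(size=3)
W_pad, b_pad = pad_removed_concept(W, b, k=1)

x = rng.normal(size=5)
logits = W_pad @ x + b_pad
assert W_pad.shape == (4, 5) and np.allclose(W_pad[1], 0.0)
# The padded row contributes a constant zero logit; the live rows reproduce
# the original predictions exactly, so gradients w.r.t. them are unchanged.
assert np.allclose(np.delete(logits, 1), W @ x + b)
```

Since the inserted row is never updated, the same algebra goes through regardless of the constant used to fill it, which is the point made in the rebuttal.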
-*Response to W4: Error bars*
We apologize for the omission and will address it in the camera-ready version.
-*Response to W5: Concept-level metrics*
Thanks for your suggestion. We will add experiments about concept-level metrics. We perform experiments on CUB and test the concept accuracy of ECBM and retrain. The results are listed below.
| Editing Level | ECBM (%) | Retrain (%) |
| ------- | ---- | ---------- |
| Concept | 93.7112 | 95.1705 |
| Data | 94.5184 | 95.2801|
| Concept-label | 95.0219 | 95.1407 |
*Table A: Concept Accuracy of ECBM and Retrain under Three Editing Levels*
The results show that the accuracy of Retrain and ECBM is very close, with differences generally within a small margin. For example, at the Data and Concept-label levels, the accuracy gap is less than 0.5%. This shows that ECBM not only approximates the retrain method's label accuracy on the test set but also performs similarly in terms of concept accuracy.
-*Response to W6: The related work section*
We will include more related works in the camera-ready version.
## Questions:
-*Response to Q1: What is $R^{d_i}$?*
Here $R^{d_i}$ represents the space of all $d_i$-dimensional real vectors.
-*Response to Q2: Why are the concepts in the log?*
This is the definition of cross-entropy, identical to that in the original CBM, as described in the second paragraph of page 15. The activation function used is the sigmoid function. Note that this is distinct from the definition of the loss function.
-*Response to Q3: ff problem*
Thanks for your correction. It should be $f$.
-*Response to Q4: In lines 167-169 ... Why?*
This is because if we correct the concept and then retrain the CBM, the concept predictor $\hat{g}$ will be updated to $\hat{g}_e$, which differs from the scenario at test time intervention where the concept predictor remains unchanged. This distinction serves as the key motivation for editing CBMs.
-*Response to Q5: Why are no other concept-level metrics utilized?*
See Response to W5 for reference.
---
Rebuttal Comment 1.1:
Comment: The rebuttal addresses most of my pressing concerns. I trust the authors will do a good job incorporating all weaknesses (especially W5) in their final camera-ready version.
I have updated my ratings accordingly.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you very much for your thoughtful and constructive feedback, as well as for generously raising the score of our submission. We will continue to revise our paper based on your comments and will ensure that the experimental results related to W5 are included in the camera-ready version.
Your insightful comments and careful evaluation have been invaluably helpful in improving the quality of our work. We are truly grateful for the time and effort you dedicated to reviewing our paper. Your support and recognition mean a great deal to us. | null | null | null | null | null | null |
Learning Smooth and Expressive Interatomic Potentials for Physical Property Prediction | Accept (oral) | Summary: This paper argues that the test MAE, when energy conservation is guaranteed in MD simulations, demonstrates the practicality of machine learning potentials. The authors provide empirical evidence indicating that, within established model designs, specific designs uphold energy conservation principles while others do not. Furthermore, they demonstrate a correlation between the Energy/Force MAE on the test dataset and the predictive performance of physical properties utilizing MD simulations, specifically for those models that maintain energy conservation. The resulting model, eSEN, achieves state-of-the-art results across various physical property prediction tasks based on phonon calculations.
## update after rebuttal
Thank you for the author's responses. This paper is valuable to share with the community, and I support its publication.
Claims And Evidence: It has been experimentally demonstrated that the energy conservation law, which is a property that the potential must satisfy, affects the estimation accuracy of physical quantities that require phonon calculations.
The model design, which is well-known and of interest to readers, has been experimentally verified and supported by evidence. However, it should be noted somewhere in the text that the physical properties considered in this study are limited to those requiring higher-order derivatives of the PES and that the applications of machine learning potentials have not been examined comprehensively.
Methods And Evaluation Criteria: It is reasonable to evaluate the performance of a potential that satisfies the energy conservation law using physical properties that require differentiation.
Theoretical Claims: The results are empirical, so there is no theoretical claim.
Additionally, I have briefly reviewed Hairer et al. 2003 and believe that it aligns with the claims in Section 3.2. However, in my understanding, Section 3.2 does not present any novel theoretical claims.
Experimental Designs Or Analyses: I have checked the NVE MD simulations to verify the energy conservation law, as well as the experimental settings for Matbench Discovery and the MDR phonon benchmark. There are no issues with these experimental settings.
Supplementary Material: I read all the parts of the supplementary material.
I especially enjoyed reading B.2, on the paradox in which models fail to capture phonon band structures accurately while still achieving competitive accuracy on thermodynamic properties.
Relation To Broader Scientific Literature: The machine learning community has proposed designs that differ from conventional potential designs. However, some of these machine learning potentials, despite having good MAE for energy and forces, are not practical for real-world use, and it has been repeatedly pointed out that Energy/Force MAE does not necessarily indicate the practicality of machine learning potentials.
Issues with the practicality of these potentials can be observed during actual simulations, such as structural failures, but a simple and broadly applicable method to identify these problems was not previously known.
Essential References Not Discussed: I do not find any issue with the references.
Other Strengths And Weaknesses: Strengths
- Conservative fine-tuning could be a popular method in the community since it speeds up the training without sacrificing energy conservation.
Other Comments Or Suggestions: I am wondering about the contribution of DeNS to thermodynamics property predictions, and I would like to see DeNS's ablation study.
Questions For Authors: No question.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We thank reviewer yR9i for the helpful feedback. We address each of the reviewer’s comments below.
> It should be noted somewhere in the text that the physical properties considered in this study are limited to those requiring higher-order derivatives of the PES and that the applications of machine learning potentials have not been examined comprehensively.
Thank you for your suggestion. While other properties such as formation energy in the Matbench-Discovery benchmark were studied in the paper, the properties that require higher-order derivatives of the PES are indeed more significantly impacted by the MD-energy-conservation properties. We will note this in the revised manuscript.
> I am wondering about the contribution of DeNS to thermodynamics property predictions, and I would like to see DeNS's ablation study.
Only conservative models are well-suited for thermodynamics property prediction tasks, which require accurate modeling of higher-order PES derivatives. DeNS is only used during direct pre-training on the MPTrj dataset to alleviate overfitting. Conservative models do not use DeNS during conservative training. We present an ablation study over two 2-layer direct-force eSEN models (with loss coefficients E: 1/F: 10/S: 100), with and without DeNS:
| Metric | With DeNS | Without DeNS |
|-----------------------------------|-----------|--------------|
| Energy MAE (meV/atom) | 18.0 | 19.4 |
| Force MAE (meV/Å) | 43.7 | 43.7 |
| Stress MAE (meV/ų atom) | 0.14 | 0.16 |
The higher error of the model without DeNS is due to overfitting. We will incorporate the validation error curves in the revised manuscript to reflect that.
We look forward to further discussions if you have additional questions or suggestions. Thank you again for your valuable input.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
Is the result you provided a comparison of the accuracy after performing conservative training following direct-force pretraining with/without DeNS? Can it be said that using DeNS during pretraining has influenced the accuracy of the model after conservative training?
---
Reply to Comment 1.1.1:
Comment: Thank you for the additional comment! This is an excellent point -- the results above are from direct-force models without conservative training. We have started running new experiments with conservative training and will update the results here once they finish.
Update:
we pretrain direct-force eSEN models (2-layer, loss coefficients E: 1/F: 10/S: 100) for 60 epochs with and without DeNS, followed by 40 epochs of conservative training without DeNS. The validation errors are:
| Property | With DeNS | Without DeNS |
|-----------------------------------|-----------|--------------|
| Val Energy MAE (meV/atom) | 17.6 | 19.3 |
| Val Force MAE (meV/Å) | 43.1 | 44.0 |
| Val Stress MAE (meV/ų atom) | 0.14 | 0.14 |
We find the effect of DeNS at the direct-force training stage carries over to the final conservative models. | Summary: This paper draws attention to the inability of energy conservation, and thereby instability of simulation, common in many popular machine learning interatomic potentials (MLIPs). Next, it proposes a novel architecture addressign this problem, while showing state-of-the-art performance on a wide range of tasks.
Claims And Evidence: Yes. The claims were all well supported experimentally. The discussion around the force conservation is particularly exhilarating and well supported.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are very reasonable. Apart from the conservation errors, the ensemble property experiments proposed in this paper also throw new light on the discussion around the desired properties of MLIP.
Theoretical Claims: NA. There is little theoretical claims.
Experimental Designs Or Analyses: The experiments are designed properly and convincingly. In particular, they refreshingly used larger, more realistic datasets rather than the toy datasets the field has been using.
Supplementary Material: Yes. The experimental details.
Relation To Broader Scientific Literature: The relevant papers have been properly cited and discussed.
Essential References Not Discussed: NA.
Other Strengths And Weaknesses: NA.
Other Comments Or Suggestions: NA.
Questions For Authors: Can the claims made in Section 5 be supported by theoretical arguments?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We thank reviewer 8LdR for the helpful feedback. We address each of the reviewer’s comments below.
> Can the claims made in Section 5 be supported by theoretical arguments?
We refer to (1) Hairer et al. 2003 for theoretical arguments on the relationship between potential energy surface (PES) smoothness/bounds on derivatives and energy conservation in simulations; and (2) Molecular simulation textbooks (e.g., Tuckerman, 2023) for theoretical arguments on conservative forces. Section 5 is then organized to discuss practical implementations of machine learning potentials that impact conservative forces, smoothness and bounds on derivatives. Some arguments on the design choices can be made from a theoretical perspective:
- **Direct-force:** the direct-force prediction framework imposes no constraint on the output forces being a conservative force field. Taking the derivative of the energy with respect to the atomic positions is required to ensure the predicted forces are conservative.
- **Representation discretization:** The discretization of spherical harmonics representations breaks the conservative forces requirement because it introduces discretization errors to the computation of energy gradients. Increasing the grid resolution theoretically reduces such discretization errors and helps conservation.
- **Max neighbor limit:** From a theoretical perspective, we can show examples where the K-NN graph introduces PES discontinuity. Consider a model with a cutoff of 6 Å and a node with 3 neighbors at distance 3 Å and a fourth neighbor at (3 + ε) Å: if a max neighbor limit of 3 is enforced, a small perturbation to the atomic positions introduces a discontinuous change in the predicted energy.
- **Envelope functions:** From a theoretical perspective, the radial basis functions used in graph neural network machine learning potentials are not twice continuously differentiable due to the finite radius cutoff in graph construction, which causes a step change at the cutoff radius. The envelope function theoretically eliminates this issue.
- **Number of radial basis functions:** While empirically reducing the number of radial basis functions helps the PES to vary smoothly and improves model conservation properties, due to the flexibility of neural networks, it does not theoretically enforce these properties.
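As a concrete illustration of the envelope-function point above, here is a minimal sketch of a polynomial envelope (the form below follows the DimeNet-style polynomial; eSEN's exact envelope is an assumption here). Without it, a radial basis is truncated at the cutoff radius, producing a step in the PES; the envelope drives the value and its first derivatives smoothly to zero at the cutoff.

```python
import numpy as np

def envelope(d, cutoff=6.0, p=6):
    """Polynomial envelope: 1 at d=0, smoothly 0 at d=cutoff."""
    x = np.clip(d / cutoff, 0.0, 1.0)
    u = (1.0
         - (p + 1) * (p + 2) / 2 * x**p
         + p * (p + 2) * x**(p + 1)
         - p * (p + 1) / 2 * x**(p + 2))
    return np.where(d < cutoff, u, 0.0)

assert np.isclose(envelope(0.0), 1.0)   # unmodified well inside the cutoff
assert envelope(6.0) == 0.0             # vanishes exactly at the cutoff
# The numerical derivative also vanishes at the cutoff -- no step change
# in the energy gradient when an atom crosses the cutoff radius.
h = 1e-5
assert abs((envelope(6.0) - envelope(6.0 - h)) / h) < 1e-3
```

One can check that both the first and second derivatives of this polynomial vanish at the cutoff, which matches the twice-continuous-differentiability requirement mentioned above.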
We look forward to further discussions if you have additional questions or suggestions. Thank you again for your valuable input. | Summary: This work investigates failure cases of machine learning interatomic potentials (MLIPs) in actual MD simulations. From these insights, the authors draw actionable improvements to MLIP that they implement in their eSEN model. eSEN shows promise in being more accurate on hold-out test sets as well as in preserving energy in MD simulations. This work questions many common design choices that led to reduced test set MAEs but unstable MD simulations.
## After the rebuttal
I stay with my initial judgment. I find this work a pleasant read and well executed. I recommend acceptance.
Claims And Evidence: The authors claim that MLIP should be
1. conservative vector fields.
2. bounded in their derivatives.
3. smooth.
The authors support these claims very well with the implementation of MD simulators, various ablation studies, and empirical evidence.
Methods And Evaluation Criteria: The authors offer a broad range of benchmark datasets and include various metrics beyond simple MAE on energies and force to accurately paint a picture. Further, MD simulations are performed with each model to judge its practical usefulness.
Theoretical Claims: The paper makes no theoretical claims.
Experimental Designs Or Analyses: The experiments are sound and well-analyzed, with meaningful conclusions. However, reporting statistical fluctuations by repeating experiments multiple times would benefit communication.
Supplementary Material: I skimmed the appendix but did not thoroughly read the sections.
Relation To Broader Scientific Literature: This work finds that common choices for improving hold-out test set accuracies in machine-learning force fields lead to unphysical behavior in MD simulations. Further, the authors provide clear and concrete guidelines on what properties MLFFs should fulfill to yield accurate MD simulations and low test set errors. This work enriches the literature with fresh evaluation metrics, based on MD simulation, that correlate well with existing metrics.
Essential References Not Discussed: -
Other Strengths And Weaknesses: The paper is very well written, thoroughly investigates failure cases of modern force fields, and draws reasonable conclusions. I enjoyed reviewing this work and am confident that it greatly aids the current field of molecular force fields and the debate on the impact of Euclidean symmetries.
Its main downside is the limited originality in its technical and theoretical contribution. Further, error bars for different trainings would greatly help in indicating the stability of results.
Other Comments Or Suggestions: -
Questions For Authors: -
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank reviewer kXc9 for the helpful feedback. We address each of the reviewer’s comments below.
> Its main downside is the limited originality in its technical and theoretical contribution.
Regarding the originality of our technical and theoretical contributions, we would like to highlight that while energy-conservation-related design choices have been explored in previous studies, our work systematically investigates the impact of each individual design choice. This, to our knowledge, has not been comprehensively addressed before. Our paper's originality lies in (1) elucidating these effects; (2) demonstrating state-of-the-art results when these choices are integrated with the novel eSEN architecture and the direct-force pretraining strategy; (3) we present a novel finding on the correlation between test-set error and physical property prediction performance for models that satisfy the conservation test; (4) we propose the MD conservation test that is critical in establishing the correlation between test errors and downstream predictions.
We believe these findings represent a significant contribution to the community and that they will impact model development practices in the field moving forward.
> Further, error bars for different trainings would greatly help in indicating the stability of results.
We appreciate your suggestion regarding error bars for different training runs. To quantify the variation, we trained 2-layer eSEN models on the MPTrj dataset using 3 different seeds for 50 epochs, with loss coefficients E: 1/F: 10/S: 100. The validation set errors and standard deviations are as below:
| Task | MAE |
| -- | -- |
| Energy (meV/atom) | 19.67 ± 0.23 |
| Forces (meV/Å) | 43.85 ± 0.058 |
| Stress (meV/ų atom) | 0.16 ± 0.00038 |
We find the results to be highly stable across random seeds. We will incorporate this in our revised manuscript.
We look forward to further discussions if you have additional questions or suggestions. Thank you again for your valuable input. | Summary: This paper presents eSEN, a machine learning interatomic potential (MLIP) model designed for accurate and energy-conserving molecular dynamics (MD) simulations and physical property predictions. The study identifies key factors that impact an MLIP’s ability to generalize well to physical property prediction tasks, such as ensuring conservative force predictions and maintaining a smoothly varying potential energy surface (PES). The proposed eSEN model achieves state-of-the-art results on a range of benchmarks, including materials stability prediction, thermal conductivity prediction, and phonon calculations. By establishing a correlation between test errors and physical property prediction performance in energy-conserving models, the authors offer insights into improving the reliability of MLIPs.
Claims And Evidence: The paper claims that energy conservation in MD simulations leads to improved correlation between test errors and downstream physical property predictions. This claim is well-supported by experimental results.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria align well with the problem of developing reliable MLIPs. The authors test their models on a representative range of material property prediction benchmarks including Matbench-Discovery, MDR Phonon benchmark, and SPICE-MACE-OFF.
Theoretical Claims: The theoretical claim of the submission refers to Hairer et al. 2003.
Experimental Designs Or Analyses: The experimental design is well-structured, with clear comparisons between eSEN and existing MLIPs on a representative set of benchmarks. The ablation studies systematically evaluate the impact of various architectural decisions, such as representation discretization, neighbor selection, and envelope functions. The results support the paper’s hypotheses, with energy-conserving models consistently outperforming non-conservative alternatives. However, further exploration of the impact of different hyperparameter choices could strengthen the robustness of these conclusions.
Supplementary Material: I reviewed the experimental details section of the supplementary material to further evaluate the soundness of the experiment design and evaluation.
Relation To Broader Scientific Literature: In the field of MLIP, there has been a debate about training conservative (physical but computationally expensive) or non-conservative force fields (computationally efficient). This submission provides great evidence for conservative force fields from the perspective of generalization performance, i.e., the test-set energy error of conservative force fields correlates better with other physical properties.
Essential References Not Discussed: The paper provides a comprehensive literature review.
Other Strengths And Weaknesses: Strengths:
1. The experiments in this paper are comprehensive.
2. The proposed eSEN achieves SOTA results on multiple benchmarks, demonstrating strong empirical performance.
Weaknesses:
1. The design choices of the eSEN model are all from existing works.
2. The authors do not provide an empirical study from the perspective of efficiency.
Other Comments Or Suggestions: 1. I would suggest the authors provide some discussion about computational efficiency.
Questions For Authors: 1. How is the proposed eSEN evaluated in Table 2? Is it submitted to the Matbench Leaderboard (https://matbench-discovery.materialsproject.org/) for evaluation?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank reviewer XYRs for the helpful feedback. We address each of the reviewer’s comments below.
> The design choices of the eSEN model are all from existing works.
Regarding the originality of our technical contributions, we would like to highlight that while energy-conservation-related design choices have been explored in previous studies, our work systematically investigates the impact of each individual design choice. This, to our knowledge, has not been comprehensively addressed before. Our paper's originality lies in (1) elucidating these effects; (2) demonstrating state-of-the-art results when these choices are integrated with the novel eSEN architecture and the direct-force pretraining strategy; (3) we present a novel finding on the correlation between test-set error and physical property prediction performance for models that satisfy the conservation test; (4) we propose the MD conservation test that is critical in establishing the correlation between test errors and downstream predictions.
We believe these findings represent a significant contribution to the community and that they will impact model development practices in the field moving forward.
> The authors do not provide an empirical study from the perspective of efficiency.
We would like to respectfully point out that we provided a brief empirical study on the efficiency of eSEN in Appendix C. We found eSEN to have comparable efficiency to existing equivariant models while being more accurate. A 2-layer eSEN model with 3.2M parameters can simulate around 0.8 million steps per day for a periodic system of 216 atoms on a single NVIDIA A100 GPU. We will highlight this result in the main paper in the next revision.
> How is the proposed eSEN evaluated in Table 2? Is it submitted to the Matbench Leaderboard (https://matbench-discovery.materialsproject.org/) for evaluation?
We confirm that eSEN was evaluated using the same dataset and metrics as those provided in the Matbench Leaderboard, as referenced in the link you shared. The results have been submitted and verified by the benchmark maintainer, ensuring their accuracy and reliability. The slight difference in the $\kappa_{\mathrm{SRME}}$ metric at the benchmark site is due to a very recent minor update to the Matbench-Discovery evaluation protocol.
While the results of eSEN are on the Matbench-Discovery Leaderboard, we have refrained from directly linking our submission to maintain the integrity of the double-blind review process. We appreciate your understanding in this matter.
We look forward to further discussions if you have additional questions or suggestions. Thank you again for your valuable input. | null | null | null | null | null | null |
RollingQ: Reviving the Cooperation Dynamics in Multimodal Transformer | Accept (poster) | Summary: The paper analyzes an important problem of modality biases in multimodal learning setup. The authors find that the dynamic property of attention is lost during multimodal training; that is, rather than weighing the modalities per-instance, the models just focus on a single (biased) modality, which is overemphasized during training, leading to distribution gaps. To this, author propose Query Rebalanced Rotation (QRR) algorithm that rebalances the query to “revive” the dynamic property of attention by rotating the query vector towards an anchor that provides higher weights to the unbiased modality, thus reducing the modality bias in attention.
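As a reader's aid (not from the paper), the geometric idea in this summary can be sketched in a few lines: rotate the query vector part-way toward an anchor direction within the plane the two vectors span. The anchor construction and rotation schedule are the paper's; the toy below only illustrates the rotation itself, with names of our choosing.

```python
import numpy as np

def rotate_toward(q, anchor, frac=0.5):
    """Rotate q by a fraction of the angle between q and anchor."""
    a = anchor / np.linalg.norm(anchor)
    q_norm = np.linalg.norm(q)
    u = q / q_norm
    cos_t = np.clip(u @ a, -1.0, 1.0)
    theta = np.arccos(cos_t) * frac
    v = a - cos_t * u                 # component of anchor orthogonal to q
    v_norm = np.linalg.norm(v)
    if v_norm < 1e-12:                # already aligned with the anchor
        return q
    v = v / v_norm
    # Rotation in the (u, v) plane preserves the query's norm.
    return q_norm * (np.cos(theta) * u + np.sin(theta) * v)

q = np.array([1.0, 0.0])
anchor = np.array([0.0, 1.0])
q_rot = rotate_toward(q, anchor, frac=0.5)   # 45° toward the anchor
assert np.allclose(q_rot, [np.sqrt(2) / 2, np.sqrt(2) / 2])
assert np.isclose(np.linalg.norm(q_rot), np.linalg.norm(q))
```

Because the rotation is norm-preserving, it re-weights attention directions without rescaling the query magnitudes.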
Claims And Evidence: I am not convinced why the rotation of the query vector towards the defined anchor would help reduce the modality bias. Further, the results supporting this claim are not consistent and thorough (elaborated below)
Methods And Evaluation Criteria: - It is known that the quality of multimodal representations is heavily influenced by the fusion strategy (early/mid/late). In the paper, QRR is based on a very simple late fusion strategy. However, many works adopt early fusion strategies. The paper misses out on (a) systematic comparison with these methods, (b) evolution of cooperative dynamics in these setups, and (c) difference in QRR behaviours in these setups.
- Does QRR work for the right reasons? The paper, while claims multimodal debiasing, does not provide any results on challenging OOD benchmarks. Multimodal datasets can have biases, and there are multiple challenging benchmarks for evaluating the true multimodal performance of the models. Therefore, performance on biased benchmarks is not representative of the unbiasedness property of the model. The closest to this result in the paper are the noise_T experiments, which are more of a sanity check than an OOD benchmark.
Theoretical Claims: Not applicable
Experimental Designs Or Analyses: For the CREMA-D dataset, the authors were able to achieve high performance with just a single frame and the audio. This is concerning. This all the more emphasizes that the benchmarks have biases. Being truly multimodal means that the model *needs* to leverage both modalities faithfully. However, in this case it could be leveraging weak biases in the multiple modalities and combining them to produce the answer. Therefore, I strongly suspect that while QRR might re-weigh the attention towards the “unbiased modality”, the small gains in performance in some cases might be because it inadvertently makes the model leverage multimodal biases
Supplementary Material: Yes, the experiment settings and application to multi-layer transformer models
Relation To Broader Scientific Literature: I believe that the findings of the paper regarding unimodal biases are in congruence with the related works in the domain
Essential References Not Discussed: - The related works section of the paper does not discuss the necessary works on unimodal and multimodal biases in datasets like [1], that is extremely relevant for such studies
- Other important works like QUAG and QUAG-attention [2] perform very similar analysis on average key per modality and should be (a) acknowledged in the relevant works, and (b) utilized as a verification that QRR is indeed leveraging both the modalities. Further, datasets like CLAVI [2] and Perception Test [3] could be used as debiased test datasets
[1] Buch, Shyamal, et al. "Revisiting the "video" in video-language understanding." CVPR 2022. \
[2] Rawal, Ishaan Singh et al., “Dissecting Multimodality in VideoQA Transformer Models by Impairing Modality Fusion”. ICML 2024 \
[3] Pătrăucean, Viorica et al., “Perception Test: A Diagnostic Benchmark for Multimodal Video Models”. NeurIPS 2023
Other Strengths And Weaknesses: **Strengths**
1. The finding that static fusion techniques might be on-par with dynamic fusion like attention based techniques is indeed surprising and re-affirms the difficulty in taming multimodal self-attention
2. The paper aims to tackle an extremely important problem of biased representations in multimodal learning, which can prove very valuable to the community
3. QRR is a generic method that can be applied to multiple domains
4. The paper is cohesively written and easy to follow
**Weaknesses** \
Lack of convincing results:
1. No results on OOD benchmarks (see above) and validation tests (see above)
2. No consistent and significant improvement in accuracy
Other Comments Or Suggestions: - Should it be q_b instead of q_r in figure 2?
- For most of the graphs, the legend is not explained. Would be nice to have detailed captions.
- The running header of the paper isn’t updated
Questions For Authors: - What is the effect of batch size on the QRR? How sensitive is it?
- What is the overhead of QRR (computational complexity and/or increase in run-time/FLOPs)?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Dear reviewer gn7r,
**Thanks a lot for your valuable review, suggestions, and questions.**
**Q1: Extension to more fusion paradigms**
> Q1.a: Analysis of cooperation dynamics.
To analyze cooperation dynamics across fusion methods, we monitor the gradients of the unimodal encoders and the attention scores, which are the two driving factors of the self-reinforcing cycle. As shown in [Figure 2](https://anonymous.4open.science/r/ICML-2025-Rebuttal-07BE/Figure2.png), the gradient difference declines and the attention becomes more balanced but also more unstable as the fusion becomes earlier.
> Q1.b: Systematic comparison and QRR behaviors.
Following the descriptions in Appendix B, we train the model progressively and apply QRR. The comparison results are shown in [Table 1](https://anonymous.4open.science/r/ICML-2025-Rebuttal-07BE/Table1.png), where Vanilla MT denotes the vanilla multimodal transformer. The results reveal that our method can be applied to earlier fusion paradigms, achieving better performance by around (2.3% / 1.1% / 0.2%), but it is more suitable for the late fusion paradigm.
**Q2: Verification and validation.**
> Q2.a: Results on OOD benchmarks.
Thanks for providing a novel aspect for testing QRR. We adopt MultiOOD [1]. As shown in [Table 2](https://anonymous.4open.science/r/ICML-2025-Rebuttal-07BE/Table2.png), our QRR algorithm outperforms the baseline on all metrics, showing that our QRR algorithm can ease the modality bias during training dynamics.
[1] Hao, D., et al. "MultiOOD: Scaling Out-of-Distribution Detection for Multiple Modalities", NeurIPS 2024.
> Q2.b: Comparison with QUAG[2]
QUAG is designed for VideoQA tasks using **frozen unimodal encoders**, which sets it apart from our work. Besides, QUAG performs its analysis by averaging attention scores, while our analysis focuses on the feature space of keys and queries in attention.
[2] Rawal, Ishaan Singh, et al., “Dissecting Multimodality in VideoQA Transformer Models by Impairing Modality Fusion”. ICML 2024
> Q2.c: QRR might not leverage both modalities due to modality bias.
In this work, we focus on the modality bias in which one modality dominates the training process [3], leading to unequal modality feature quality, unreasonable attention, and sub-optimal performance. Hence, we propose QRR to balance the learning of the unimodal encoders and revive the cooperation dynamics so that both modalities are leveraged.
We ablate QRR by masking or averaging attention scores, inspired by QUAG [2]. As shown in [Table 3,4](https://anonymous.4open.science/r/ICML-2025-Rebuttal-07BE/Table3%20&%204.png), when masking one modality, QRR outperforms the baseline by 0.2% ~ 3%. This confirms QRR's ability to enhance unimodal feature quality. When using averaged attention scores, on CREMA-D (audio-dominant), vanilla MT exhibits a performance degradation of 2.3% due to over-reliance on audio features. For Kinetic-Sound (relatively balanced), vanilla MT's performance increases by 1.2% since the model learns an unreasonable attention score due to the self-reinforcing cycle. Conversely, QRR maintains stable performance (with <1% drop), demonstrating that it leverages complementary modality information and is robust.
[3] Peng, X., et al. "Balanced multimodal learning via on-the-fly gradient modulation." CVPR 2022.
**Q3: Supporting details on the stability, efficiency, and performance**
> Q3.a: Batch size ablation.
Through a batch size ablation on CREMA-D in [Table 5](https://anonymous.4open.science/r/ICML-2025-Rebuttal-07BE/Table5.png), QRR maintains stable performance improvements (0.7%-3.1%) over the baseline as the batch size varies from 16 to 256, demonstrating remarkable training stability.
> Q3.b: Limited performance improvements.
Compared to methods requiring additional parameters and specialized modules, QRR only requires:
- Parameter increase: **1.0%**
- FLOPs increase: **0.1%**
as shown in [Table 7](https://anonymous.4open.science/r/ICML-2025-Rebuttal-07BE/Table7.png), while accuracy improves by **3.1%** on CREMA-D and by **2.3%** on Kinetic-Sound compared to the vanilla multimodal transformer. QRR is a simple yet effective method that achieves comparable and even better results with less computation cost.
**Q4: Details on paper writing and figures**
> Q4.a: Notation in figure 2.
In a single fusion layer setting, $q_b$ and $q_r$ are the same. As the number of fusion layers increases, the query $q$ becomes influenced by the input and is no longer a static value. Hence, for each input $q$, the rotation matrix cannot guarantee rotating $q$ exactly to $q_b$; the rotated query should instead land near that region.
> Q4.b: Detailed captions, missing of related works[5,6] and header of the paper.
Thanks for pointing this out, we'll add relative discussion in our revised manuscript.
[5] Buch, Shyamal, et al. "Revisiting the "video" in video-language understanding." CVPR 2022.
[6] Pătrăucean, Viorica et al., “Perception Test: A Diagnostic Benchmark for Multimodal Video Models”. NeurIPS 2023
---
Rebuttal Comment 1.1:
Comment: Dear authors, thanks for the detailed rebuttal.
**Q1**: Thanks for the additional experiment. The results from Figure 2 are quite interesting. However, it is incomplete. For a complete picture, it would be nice to have the change in graphs after applying QRR. Table 1 results also seem interesting. Probably it is because previous works have found increased modality bias in the late fusion strategy, therefore the "healing" effect of QRR is most drastic there.
**Q2 a.** I must admit that the results are not very convincing (the gap is too low); however, the trend is in general consistent with the authors' intuitions.
**Q2 b.** The masking results are interesting in that both modalities are being leveraged more individually. However, multimodality goes beyond individual modalities and also considers synergistic and redundant information arising through modality interactions. Therefore I suggested QUAG. However, the QUAG results are incomplete (all unimodal, crossmodal, audio-avg and video-avg cases). It'd be nice to have them for completeness and to corroborate the results.
**Q3a.**: The results are interesting. Did you investigate why the effect of QRR is minimal for higher batch sizes? It'd be nice to investigate it further, or at least mention it in the future works.
The additional experiments have increased my confidence in the authors' work. Even though the performance improvement is not a lot, I got many insights from their analysis. However, I do agree there are some loose ends in the paper that could be polished more.
To this, I increase my score to 3. I hope the authors can follow-up on the questions and add the new experiments to their paper.
###EDIT####
After reviewing the authors' latest rebuttal I am more confident of authors' works. I appreciate their prompt response and detailed experiments of their method validated on OOD benchmarks, learning dynamics and QUAG. While the increment in performance is not drastic, I think the work is insightful enough to garner interest of the multimodal community. To this, I am increasing my score to 4. I hope the authors can add these experiments to their paper.
Thanks.
###########
Regards.
---
Reply to Comment 1.1.1:
Comment: Dear gn7r,
**Thanks for your thoughtful comment and affirmation of our work. Your constructive suggestions have been invaluable in helping us refine and polish our work.** Hence, we have carefully followed up on the questions raised and expanded our experiments, hoping to address your concerns from a more comprehensive perspective.
> Q1: (**Extension to more fusion paradigms**) need complete picture with changes after applying QRR.
Thanks for pointing this out. To address this, we added visualization with gradients and attention scores. As shown in [Figure2-addition](https://anonymous.4open.science/r/ICML-2025-Rebuttal-07BE/Figure2-addition.png), after applying QRR, **both the attention score and the gradients across modalities are closer**, indicating the ability of QRR to ease the modality bias. Besides, some previous works indeed reveal that the modality bias is increased in the late fusion paradigm [1].
[1] Yedi, Z., et al. "Understanding Unimodal Bias in Multimodal Deep Linear Networks.", ICML 2024.
> Q2.a: (**OOD benchmarks**) OOD Results.
Thanks for mentioning this. For multimodal OOD detection, improving the representation quality [2] and designing better score functions for OOD prediction are two closely related strategies [3,4]. In this work, QRR mainly focuses on providing high-quality multimodal representations. In our initial reply, to obtain results quickly, we used the simplest Maximum Softmax Prediction (MSP) as the score function, which underestimates the true representation quality [4]; QRR still brings improvements in this situation. We have now conducted experiments with more effective score functions, Energy [3] and GEN [4], which more accurately reflect representation quality. As shown in [Table2-update](https://anonymous.4open.science/r/ICML-2025-Rebuttal-07BE/Table2-update.png), compared to the baseline Vanilla MT, QRR achieves around a **2.4%-3.1%** increase in ID-Acc, a **0.9%-2.4%** decrease in FRR95, and a **2.8%-4.2%** increase in AUROC. These results suggest that QRR can enhance multimodal representations and leverage information from both modalities.
We sincerely thank you for providing such a valuable perspective on testing the QRR under more comprehensive and challenging circumstances, which further demonstrates our effectiveness. We'll add these results to the revised manuscripts.
[2] Hao, D., et al. "MultiOOD: Scaling Out-of-Distribution Detection for Multiple Modalities", NeurIPS 2024.
[3] Liu, W., et al. "Energy-based out-of-distribution detection.", NeurIPS 2020.
[4] Liu, X., et al. "Gen: Pushing the limits of softmax-based out-of-distribution detection.", CVPR 2023.
> Q2.b: (**QUAG test**) QUAG results are incomplete (all unimodal, crossmodal, audio-avg, and video-avg cases).
Thanks for providing a valuable validation method to evaluate our model from a multimodal interaction perspective. To explore intra- and inter-modality interactions, we build on the baseline with transformer fusion blocks and conduct complete QUAG tests (unimodal, crossmodal, audio-avg, and video-avg) with and without QRR. As shown in [Table8](https://anonymous.4open.science/r/ICML-2025-Rebuttal-07BE/Table8.png), the QRR algorithm exhibits **larger performance drops of around 2.7%-5.0% across all types of QUAG tests** compared to the baseline. This indicates that QRR not only fully leverages both modalities faithfully but also learns comprehensive multimodal interactions.
> Q3.a: (**Batch size ablation.**) why the effect of QRR is minimal for higher batch sizes?
Thanks for your thorough and thoughtful review, which helps us analyze the experiments further. Since QRR is a sample-wise modulation, we consider that the effect might be caused by the reduction in sample-wise variation as batch sizes grow, inadvertently harming the fit between the rotation matrix and the data on a per-sample basis. Thanks for pointing this out; we'll mention it as future work to promote further discussion in our revised manuscript.
Once Again, **we greatly appreciate your affirmation of our analysis**, we also believe that QRR has the potential to revive the cooperation dynamics of transformers and be meaningful and valuable to the community.
If you have any further concerns or suggestions, please feel free to share them, and we will carefully consider and revise our work accordingly. Thanks a lot for your contribution, we'll add these experiments and analyses to our revised manuscripts.
\#\#\#\# EDIT \#\#\#\#
**We sincerely appreciate your confidence in our work and your affirmation of our contribution to the multimodal community.** Thanks greatly for your invaluable and constructive suggestions helping us to polish our work. We'll add these experiments to our paper.
Best regards.
\#\#\#\#\#\#\#\#\#\#\#\# | Summary: This paper focuses on fusion strategies in multimodal transformers, identifies issues in dynamic fusion, proposes the QRR algorithm, and validates its effectiveness in restoring cooperation dynamics and improving performance through experiments
Claims And Evidence: The paper's claims are supported by some evidence, but the universality in broader scenarios lacks sufficient proof as experiments are based on specific datasets and settings
Methods And Evaluation Criteria: The QRR algorithm is reasonably designed, and the benchmark datasets are suitable. However, the evaluation lacks in-depth analysis of dataset characteristics.
Theoretical Claims: The theoretical analysis is clear.
Experimental Designs Or Analyses: The experimental design is comprehensive, yet the analysis lacks in-depth statistical methods to determine result significance and stability.
Supplementary Material: The supplementary material contains the Dataset and Experiment Settings as well as the QRR Algorithm, which are somewhat helpful for understanding the article and the experiments.
Relation To Broader Scientific Literature: The QRR algorithm offers a novel approach to addressing the issue of modality imbalance, differing from prior methods that optimized single-modal encoders, thereby extending the existing research framework and providing new insights for multimodal fusion.
Essential References Not Discussed: none
Other Strengths And Weaknesses: **Strengths:** The research addresses a problem of practical significance by proposing a solution to the fusion challenges encountered in the practical application of multimodal Transformers. The QRR algorithm is straightforward yet effective, requiring no additional training loss and enhancing performance without increasing model complexity, which demonstrates its innovativeness.
**Weaknesses:** I feel that the paper lacks in-depth analysis and does not adequately validate its assumptions. The experimental section employs too few benchmarks, and I would like to see whether the proposed method can provide assistance to existing multimodal models.
Other Comments Or Suggestions: none
Questions For Authors: none
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer FKSM,
**We appreciate your time and great efforts in reviewing.**
We carefully considered your comments on the validation of assumptions, the lack of in-depth analysis, and the extension to more benchmarks and methods, and conducted corresponding experiments and theoretical analyses.
**Q1: The validation of the deactivation of the cooperation dynamics assumption is not adequate since the datasets and settings are limited.**
Thanks for pointing this out. In the previous version, we provided visualization and analysis of Kinetic-Sound for this assumption. To holistically and systematically validate our assumption, which mainly states that the modality bias in the multimodal training process triggers a self-reinforcing cycle that leads to inequality in feature quality and unreasonable attention scores, we conduct experiments on more datasets including CREMA-D, Kinetic-Sound, CMU-MOSEI(A+T), CMU-MOSEI(V+T), UCF-101, and HMDB51, whose modalities range over audio, RGB, text, and optical flow and which exhibit variation in modality sequence length. As shown in [Figure 1](https://anonymous.4open.science/r/ICML-2025-Rebuttal-07BE/Figure1.png), with different modality combinations and dataset scales, the unreasonable attention score attribution agrees with our previous study. Besides, the visualization of the average key distribution, where the noise input consistently has higher cosine similarity, further validates our assumption and theoretical analysis of cooperation dynamics under imbalanced multimodal learning.
**Q2: Lacks in-depth analysis**
> Q2.a: in-depth analysis of self-reinforcing cycle.
In addition to the observation of attention scores during training dynamics, we monitor the gradients of the unimodal encoders to verify our proposition of a self-reinforcing cycle. As shown in [Figure 3](https://anonymous.4open.science/r/ICML-2025-Rebuttal-07BE/Figure3.png), the gradients of both modalities' encoders are similar at the start. However, the gradient of the biased modality (audio) encoder increases significantly during the mid-stage, when the attention score begins to accumulate on the biased modality. Later, both gradients drop due to the minimization of the total loss. This observation further verifies our theoretical analysis and confirms the existence of the self-reinforcing cycle.
Moreover, from the perspective of fusion settings, we visualize the gradients and the distribution of attention scores over different modalities under comprehensive fusion paradigms. As shown in [Figure 2](https://anonymous.4open.science/r/ICML-2025-Rebuttal-07BE/Figure2.png), even as the starting point of the fusion layers varies, the gradient of the biased unimodal encoder and the attention score toward the biased modality consistently remain considerably greater than those of the unbiased one, verifying our assumption and analysis of the self-reinforcing cycle.
> Q2.b: in-depth analysis of significance and stability.
To evaluate the stability and significance of our results, we conduct repeated experiments on CREMA-D and Kinetic-Sound, differing only in random seeds. Applying Pearson correlation analysis as shown in [Table 6](https://anonymous.4open.science/r/ICML-2025-Rebuttal-07BE/Table6.png), where 0 denotes the vanilla multimodal transformer and 1 denotes applying the QRR algorithm, the coefficient is 0.765 for CREMA-D and 0.698 for Kinetic-Sound, indicating that the performance improvement is statistically significant (p-value < 0.01).
Besides, an ablation on batch size confirms the stability of our method. As shown in [Table 5](https://anonymous.4open.science/r/ICML-2025-Rebuttal-07BE/Table5.png), with the batch size varying from 16 to 256 on CREMA-D, QRR consistently outperforms the baseline methods by **0.7% ~ 3%**, revealing the stability of our method.
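For context, the correlation analysis described here — a binary method indicator (0 = vanilla MT, 1 = +QRR) correlated with resulting accuracies — is a point-biserial special case of Pearson correlation. A minimal sketch with hypothetical accuracy values (illustrative, not the paper's numbers):

```python
import numpy as np

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

# Hypothetical per-seed accuracies (NOT the paper's numbers).
baseline_acc = [0.665, 0.671, 0.668, 0.659, 0.670]  # vanilla MT
qrr_acc      = [0.691, 0.688, 0.695, 0.684, 0.693]  # with QRR

group = [0] * len(baseline_acc) + [1] * len(qrr_acc)
r = pearson_r(group, baseline_acc + qrr_acc)  # point-biserial correlation
# r approaches 1 when the two groups are well separated relative to
# the within-group (seed-to-seed) variance.
```

With these illustrative values the separation between groups dominates the per-seed noise, so `r` comes out close to 1; the significance test then asks whether such an `r` could arise by chance over the sampled seeds.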
**Q3: Extension**
> Q3.a: Extension to more benchmarks
In addition to existing benchmarks and tests, we adopt MultiOOD [1] as an out-of-distribution benchmark to further validate the performance of our QRR method. Here, we consider both the near-OOD and far-OOD scenarios discussed in MultiOOD. As shown in [Table 2](https://anonymous.4open.science/r/ICML-2025-Rebuttal-07BE/Table2.png), our QRR algorithm consistently outperforms the baseline on all metrics, testing the effectiveness of our method from another perspective.
[1] Hao, D., et al. "MultiOOD: Scaling Out-of-Distribution Detection for Multiple Modalities", NeurIPS 2024.
> Q3.b: Combine QRR with existing methods
Due to time constraints, we have not yet been able to combine our method with existing multimodal methods; we will conduct these experiments during the discussion period.
If you have any further questions or constructive suggestions, please let us know; they will help us produce a higher-quality paper.
---
Rebuttal Comment 1.1:
Comment: The responses have addressed my concerns, and I’m willing to raise my score.
---
Reply to Comment 1.1.1:
Comment: **We sincerely appreciate your affirmation of the innovativeness and significance of our work**, and we're more than happy to know that your concerns have been addressed. Thanks for your valuable time reviewing and invaluable suggestions to improve our work. | Summary: The paper identifies the issue of the self-reinforcing cycle toward the majority modality in multimodal learning. To address this, the authors propose a query rebalance rotation method that disrupts the cycle and rebalances the attention mechanism. Experimental results and visualizations demonstrate the effectiveness of the proposed method.
Claims And Evidence: Overall, the claims are well-supported. For example, in the introduction, each claim is supported by recent references or empirical results. Additionally, the investigation into the superior performance of static fusion over dynamic fusion is interesting and highlights the motivations of the paper.
Methods And Evaluation Criteria: -
Theoretical Claims: 1. The authors mention that as training progresses, the superior modality gains more attention and is better optimized. However, is this always the case, even in the late training stage when training is nearly converged? I believe that even in the early stages, the superior modality, having a higher gradient for backpropagation, would converge more quickly and reach a (sub-)optimal point, at which its gradient becomes smaller than that of the weaker modality. Could you clarify this reasoning?
2. The setting in Section 3.2 appears to be under cold-start training, meaning the models are trained from scratch. As far as I know, CREMA-D and CMU-MOSEI are highly biased multimodal datasets, where the feature quality of one modality is significantly better than the others. What if multimodal transformers were pre-trained on more balanced and diverse datasets? Would the self-reinforcing cycle still persist in that case?
Experimental Designs Or Analyses: 1. In Table 1, compared with baselines, the improvements are limited. On CREMA-D and MOSEI, the proposed method achieves only a 0.2% gain, while on Kinetic-Sound, its performance is even worse than the baselines.
2. Given the limited improvement, significance tests are necessary to determine whether the gain is statistically meaningful.
3. Section 4.3 is interesting, and Figure 4 is clear and effectively demonstrates the impact of the proposed QRR module in enhancing multimodal learning.
Supplementary Material: -
Relation To Broader Scientific Literature: -
Essential References Not Discussed: -
Other Strengths And Weaknesses: -
Other Comments Or Suggestions: Overall, I am inclined to accept the paper, as it presents insightful findings and visualizations, even though the proposed method does not outperform the state-of-the-art.
Questions For Authors: Please refer to my comments above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Dear reviewer 61wP,
**Thank you very much for your affirmation and constructive comments.**
We carefully considered your comments and conducted corresponding experiments.
**Q1: The change of gradients during training.**
Thank you for your question. From the optimization perspective, previous works on imbalanced multimodal learning have extensively discussed the evolution of gradients under a late fusion structure with concatenation or summation fusion, which shows similarities to our setup[1, 2]. A well-established observation is that the superior modality always gains more optimization momentum, as its gradient is higher than that of the weaker modality.
However, our setup differs from these previously studied simpler structures, as we adopt more complex transformer blocks for multimodal fusion, and your question prompted us to create a clearer visualization to provide more convincing evidence for our theoretical analysis.
As shown in [Figure 3](https://anonymous.4open.science/r/ICML-2025-Rebuttal-07BE/Figure3.png), the gradient of the audio encoder increases significantly during the mid-stage, when the attention score begins to accumulate in the biased modality. This results in a noticeable gap between the modalities. As the total loss decreases over time, the gradients of both modalities drop, but their relative relationship remains. This further validates our theoretical analysis, showing that the biased modality consistently receives more optimization momentum. The gradient of the weaker modality never exceeds that of the superior modality, which can be explained by Equations 7 and 8 in our paper. Since the only difference in gradients for each modality is $\frac{\partial h_i}{\partial z_i^m}$ and the loss is the multimodal loss, where "one modality becoming converged" equals "the multimodal model becoming converged," the total loss becomes very small. This small loss cannot provide enough momentum for the superior modality to optimize effectively. Hence, the gradient of the superior modality will be consistently greater than the weak one.
[1] Peng, X., et al. "Balanced multimodal learning via on-the-fly gradient modulation." CVPR 2022.
[2] Fan, Y., et al. "Pmr: Prototypical modal rebalance for multimodal learning." CVPR 2023
**Q2: Cold-Start and pretraining's influence on self-reinforcing cycle.**
Thank you for your question. As discussed in the Experiments section, we do not use cold-start training; instead, we use a 4-layer ViT-B/16 as the backbone, initialized with pre-trained weights from ImageNet-21k. For the MOSEI dataset, we use the vanilla Transformer without pretraining. The results shown in Table 1 and the visualizations in Figure 4 demonstrate that even under pretraining conditions, the self-reinforcing cycle still exists and harms the overall performance of multimodal transformers.
**Q3: Limited Improvements**
> Q3.a: Significance tests to prove the gain is statistically meaningful.
Thank you for your suggestion. To establish the significance and stability of QRR, we selected 10 random seeds and conducted repeated experiments with the same settings on the CREMA-D and Kinetic-Sound datasets. Applying Pearson correlation analysis as shown in [Table 6](https://anonymous.4open.science/r/ICML-2025-Rebuttal-07BE/Table6.png), where 0 denotes the vanilla multimodal transformer and 1 denotes applying the QRR algorithm, the coefficient is 0.765 for CREMA-D and 0.698 for Kinetic-Sound, indicating that the performance improvement is statistically significant (p-value < 0.01). We'll further conduct a more systematic analysis of pretraining's influence during the discussion period.
> Q3.b: Limited improvements.
Thanks for mentioning this. However, QRR requires only:
- **1%** increase on parameters
- **0.1%** on GFLOPs
As shown in [Table 7](https://anonymous.4open.science/r/ICML-2025-Rebuttal-07BE/Table7.png). It achieves considerable improvements over its baseline, the vanilla multimodal transformer (accuracy increases by **3.1%** on CREMA-D and by **2.3%** on Kinetic-Sound), while gaining comparable and even better performance than other specially designed transformer architectures that require much more computational resources and added complexity. Hence, the improvements are significant, especially taking the computation and time cost into consideration.
As for CMU-MOSEI, we acknowledge this is a difficult dataset for multimodal sentiment analysis. Given that all of our comparison methods achieve limited improvements, and that previous research [3] with three input modalities could only achieve around a 1.5% improvement, our QRR algorithm achieves a relatively significant increase.
[3] Paul Pu Liang et al. "Multibench: Multiscale Benchmarks for Multimodal Representation Learning." NeurIPS, 2021
If you have any further questions or constructive suggestions, please let us know; they will help us produce a higher-quality paper.
---
Rebuttal Comment 1.1:
Comment: Thanks authors for the rebuttal. Most of my concerns are well-addressed especially the performance part. I will maintain my ratings of weak accept for it still represents my overall impression of the paper's contribution and significance at this time.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your constructive suggestions and insightful feedback, which have been invaluable in helping us refine and improve our work. Additionally, **we are grateful for your affirmation and positive comments regarding the significance and contribution of our work.** | null | null | null | null | null | null | null | null |
GEFA: A General Feature Attribution Framework Using Proxy Gradient Estimation | Accept (poster) | Summary: This paper introduces GEFA, a feature attribution framework leveraging proxy gradients to generate explanations for different kinds of ML models. Unlike prior gradient-based explainers that operate under white-box assumptions, GEFA is designed to work for black-box models, and it is applicable to models with only query access. The method builds upon a proxy space representation, which enables estimation of feature attributions through a path integral approach, aligning with integrated gradients and providing an unbiased estimation of the Shapley value. Experiments on text (Amazon Reviews, SST-2, QNLI) and image (ImageNet) classification tasks demonstrate the effectiveness of GEFA over baselines, including IG, KernelSHAP, and GEEX.
## update after rebuttal
The response has cleared my concerns. I have no more questions and will keep my score.
Claims And Evidence: Yes. The paper makes the following theoretical claims and proved them.
- GEFA is an unbiased estimator of the Shapley Value: This is mathematically demonstrated in Thm 2 and App. A1.
- GEFA is equivalent to IG when taking the same edge path: This is mathematically demonstrated in Thm 5. This connection is important for grounding GEFA in existing XAI literature.
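As background for the IG connection claimed in Thm 5, integrated gradients along a straight-line path can be approximated with a Riemann sum; the toy model and step count below are illustrative (not from the paper) and check the completeness property that attributions sum to f(x) - f(baseline):

```python
import numpy as np

def integrated_gradients(f, grad_f, x, baseline, steps=200):
    """Midpoint Riemann-sum approximation of IG on the straight path."""
    alphas = (np.arange(steps) + 0.5) / steps  # interpolation coefficients
    diff = x - baseline
    grads = np.array([grad_f(baseline + a * diff) for a in alphas])
    return diff * grads.mean(axis=0)

# Toy differentiable "model" with a known analytic gradient.
f = lambda x: x[0] * x[1] + x[2] ** 2
grad_f = lambda x: np.array([x[1], x[0], 2.0 * x[2]])

x, baseline = np.array([1.0, 2.0, 3.0]), np.zeros(3)
attr = integrated_gradients(f, grad_f, x, baseline)
# Completeness: attributions sum to f(x) - f(baseline) = 11.
assert np.isclose(attr.sum(), f(x) - f(baseline), atol=1e-6)
```

A black-box method like GEFA cannot call `grad_f` directly and must instead estimate these path gradients from queries to `f` alone, which is where the proxy gradient estimation comes in.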
Methods And Evaluation Criteria: Yes. Both text and image datasets are considered. The evaluation strategy is reasonable, using feature deletion to quantify their importance quality. The authors acknowledge potential limitations of deletion-based evaluation, e.g. concerns of OOD.
However, as an explanation work that is supposed to be deployed for humans to use, having human evaluations would strengthen the paper's quality.
Theoretical Claims: I scanned through the proof of Thm 2 in App. A1. Seems to be correct.
Experimental Designs Or Analyses: Yes, I checked the experiment results. They make sense. However, it is hard to say that the qualitative evaluation in Figure 2 is convincing, as these kinds of visualizations are known to be misleading [1].
[1] Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., & Kim, B. (2018). Sanity checks for saliency maps. Advances in neural information processing systems, 31.
Supplementary Material: Yes, I checked the appendix, especially A1 for the proof and B1 for the experiment setting.
Relation To Broader Scientific Literature: The paper is well-situated within the feature attribution and explainability literature. It builds on prior work in Shapley-based explanations (e.g. SHAP) and gradient-based attribution methods (e.g. IG). The work also contributes to the debate on black-box vs. white-box explainers, demonstrating that black-box methods can achieve competitive performance.
Essential References Not Discussed: The coverage of related work is reasonable.
Other Strengths And Weaknesses: Strengths:
- Solid theoretical results, with proofs supporting key claims.
- Empirical evaluations covering both text and image data.
Weaknesses:
- As an explanation work that is supposed to be deployed for humans to use, having human evaluations would strengthen the paper's quality.
Other Comments Or Suggestions: No more.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for the detailed comments and the efforts devoted to reviewing the paper. We are encouraged that our efforts in providing theoretical grounding for the proposed approach were well perceived. Our point-to-point responses to the concerns and questions raised in the comments are given below.
**Sanity checks for feature attribution**: We understand the reviewer’s concern regarding the performance of the proposed approach under sanity checks. Theoretically, GEFA carries fewer risks of reproducing input information — especially when compared to white-box approaches, which can heavily rely on model structure and input values.
The explanation process described by Eq. (8) delivers feature attributions according to observations of model outcomes. As such, concrete attribution scores naturally change when the learnable model parameters are altered and the outputs differ. Specifically, the computation in Eq. (8) depends on model predictions and randomly sampled masks. The resulting explanations do not explicitly rely on input values, thereby minimizing the risk of reproducing input information and reinforcing GEFA’s focus on unraveling model behaviors.
To further support our claim, we performed sanity checks on the competitors considered in our work. The following table presents **Spearman Rank Correlations** between explanations derived from a pretrained model and those from a randomly initialized version of the same model architecture.
We report the rank correlation between the absolute attribution scores determined on the two model versions. A lower correlation magnitude indicates better performance under the sanity check.
|Rank Correlations| VG| IG|PSHAP|GEEX|GEFA|
|-|-|-|-|-|-|
| InceptionV3|0.2371|0.5701|0.4035|0.5695|**0.1249**|
| ViT|**-0.0021**|0.4351|0.1253|0.5633|0.0625|
Consistent with the above argument, GEFA achieves competitive performance in the sanity checks, exhibiting low correlation values in both test settings (InceptionV3 and ViT). In contrast, IG and GEEX, which explicitly incorporate input information during their explanation processes, perform relatively worse in the tests.
An additional observation is that all competitors tend to obtain lower correlation scores on ViT compared to the InceptionV3 setting.
Although this goes beyond the scope of feature attribution evaluation, we interpret this difference as a consequence of architectural characteristics inherent to CNNs. InceptionV3 depends heavily on convolution operations, which implicitly encode the prior knowledge about the relevance of spatially adjacent pixels. While this architectural bias facilitates model training and improves prediction performance, it can potentially lead to more consistent attribution patterns, even across different model versions, thereby resulting in higher explanation similarity.
By contrast, the attention mechanism in ViT allows interactions between arbitrary features, regardless of their spatial distance. As a result, the classification behavior of ViT is more dependent on its learnable parameters rather than architectural priors. This leads to a more significant change in model behavior after random initialization, which in turn results in lower rank correlations in the sanity checks.
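The sanity-check procedure described above — computing a Spearman rank correlation between the absolute attribution maps of a trained model and of a randomly re-initialized copy — can be sketched in a few lines. This is a minimal illustration with toy placeholder attribution arrays, not the authors' evaluation code.

```python
import math
import random

def ranks(xs):
    # position of each element in sorted order (ties are unlikely for
    # continuous attribution scores, so average ranks are omitted here)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = float(pos)
    return r

def spearman(a, b):
    # Spearman rho = Pearson correlation of the rank vectors
    ra, rb = ranks(a), ranks(b)
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / math.sqrt(va * vb)

def sanity_check(attr_trained, attr_random_init):
    # compare attribution magnitudes only, as in the table above;
    # lower |rho| means explanations depend more on learned parameters
    return spearman([abs(v) for v in attr_trained],
                    [abs(v) for v in attr_random_init])

random.seed(0)
trained = [random.gauss(0, 1) for _ in range(300)]  # placeholder maps
reinit = [random.gauss(0, 1) for _ in range(300)]
print(round(sanity_check(trained, trained), 4))  # identical maps -> 1.0
```

With real attribution maps, `trained` and `reinit` would come from running the explainer on the two model versions; the toy arrays only exercise the correlation machinery.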
**Human evaluation**: We thank the reviewer for the constructive comment and fully agree that explanation comprehensibility to humans represents a crucial aspect of explanation quality. However, we did not include human evaluation at the current stage of our work due to concerns about human inductive bias. Without specific knowledge of the tested model, human evaluators may form expectations that diverge from the underlying model behaviors. For example, human evaluators may expect feature attributions to highlight the target object, potentially underestimating the quality of explanations derived for models that rely on features different from those anticipated by humans.
With the effectiveness of GEFA demonstrated through automated evaluations, we will carefully consider incorporating human evaluations in future work to improve the presentation of explanation results. We also aim to further explore potential utilities of feature-attribution-based explanations in understanding and improving data-driven models, particularly in the aspects of debugging and debiasing.
We hope that our responses address the concerns that the reviewer has raised. We look forward to further comments from the reviewer and are ready to engage in the next round of discussion.
---
Rebuttal Comment 1.1:
Comment: Thanks! The response has cleared my concerns. I have no more questions and will keep my score. | Summary: This work presents GEFA -- Gradient-estimation-based Explanation For All. GEFA is a general feature attribution framework based on proxy gradient estimation. The authors argue that GEFA offers a black-box explainability solution that is broadly applicable across different input modalities (e.g., images, text) while maintaining theoretical guarantees. The main contributions are (1) A new black-box feature attribution method leveraging proxy gradients. (2) A proof that GEFA produces unbiased estimates of Shapley Values. (3) A comparison between GEFA and Integrated Gradients (IG), demonstrating that the two methods coincide under specific path choices. (4) Empirical validation intended to show improved efficiency and faithfulness over existing methods.
Claims And Evidence: The paper’s core claim is that GEFA generalizes feature attribution beyond previous black-box methods while maintaining Shapley-based guarantees. The theoretical analysis supports this, but empirical validation has limitations, see weakness section.
Methods And Evaluation Criteria: The method is evaluated via deletion-based metrics on ImageNet using an InceptionV3 and a ViT. For text classification they use BERT and LLaMA.
- Single evaluation metric AOPC. I am not fully convinced by ROAR (I think it explains a distribution of retrained models rather than a single model), but I would like to see at least MuFidelity / Insertion and Deletion, not just normalized AOPC.
- Limited Baselines: Many black-box methods are missing. I would like to see how RISE (Petsiuk) and HSIC (Novello) compare to your method; many methods are not considered, despite their relevance.
Theoretical Claims: The theoretical contributions are solid.
Experimental Designs Or Analyses: The experiments are well-organized, although they could be expanded (see my remarks below).
Supplementary Material: Yes, all.
Relation To Broader Scientific Literature: Medium, 34 citations are good, but I think we could ask for more given the number of articles in this area, and for ICML. Especially, many black-box attribution methods and metrics are missing: Bhatt et al., 2020; Jacovi & Goldberg, 2020; Hedstrom et al. 2022; Hsieh et al., 2021; Boopathy et al.,2020; Lin et al., 2019; Fel et al., 2021. Idrissi et al., 2021; Novello et al., 2022.
Essential References Not Discussed: see my previous point.
Other Strengths And Weaknesses: Strengths:
- The proofs of Shapley Value equivalence and variance reduction are strong and interesting.
- Clear writing and structure, the explanations are mathematically rigorous and easy to follow.
- I really liked the Variance reduction part !
However, despite being interesting and well-written, Here are, in my opinion, the weak points of the paper, which I will group into major problems (**M**) and minor problems (**m**).
Major (**M**):
**M1**: Lack of comparisons with black-box method: lime, rise hsic. These are relevant for black-box attribution.
**M2**: Single evaluation metric. Deletion-based AOPC is common but insufficient. I would like to see Insertion, Deletion and MuFidelity.
now for the Minor (**m**):
**m1**: No failure cases. What happens when proxy gradient estimation fails?
**m2**: Discussion of hyperparameters is missing. How does query budget impact performance?
**m3**: Novel Insights into Model Behavior: A key question that I like to ask of any interpretability research is: **What new insights about model behavior does this method uncover?** It would strengthen the work if the method can reveal previously unknown biases, learned shortcuts, or anything else new.
Other Comments Or Suggestions: See the previous section.
Questions For Authors: See **M1,2** and **m1,2,3**.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for the detailed comments and the efforts devoted to reviewing the paper. We are encouraged that our theoretical grounding for the proposed approach was well received and that the reviewer liked the analyses. Our point-to-point responses to the concerns and questions are given below.
First, we appreciate the thoughtful reference list, which will help improve the related work section with more thorough coverage of SOTA.
**M1**: In the experimental section, we initially focused on gradient-based methods and the SHAP family, given their respective connections to gradient estimation and Shapley Values. However, we agree that RISE and HSIC are representative and relevant competitors in the black box setting, particularly due to their use of binary masks for query generation. We have expanded our experiments to include them.
**M2**: We have extended the experiments with Insertion, Deletion, and MuFidelity. For the two additional competitors, we use the author-released version of RISE and ***xplique*** for HSIC.
Please note that we use AUC to quantify method performance in Insertion and Deletion, strictly following the setting used in HSIC (novello). Lower values are better for Deletion (indicated by ↓), whereas higher values are better for Insertion and μFidelity (indicated by ↑).
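To make the Deletion/Insertion metric concrete, here is a minimal sketch of how such a perturbation curve and its AUC can be computed: progressively mask the most-attributed features and integrate the model's score with the trapezoidal rule. The toy linear "model" below is an illustrative assumption, not the evaluation code used in these experiments.

```python
def deletion_curve(predict, x, attribution, baseline=0.0, steps=10):
    # mask the highest-attributed features first; a faithful explanation
    # should make the model's score drop quickly
    order = sorted(range(len(x)), key=lambda i: -attribution[i])
    x = list(x)  # work on a copy
    scores = [predict(x)]
    for s in range(1, steps + 1):
        k = round(len(x) * s / steps)
        for i in order[:k]:
            x[i] = baseline
        scores.append(predict(x))
    return scores

def auc(scores):
    # trapezoidal area under the perturbation curve; lower is better for
    # Deletion, higher is better for the analogous Insertion curve
    return sum((scores[i] + scores[i + 1]) / 2
               for i in range(len(scores) - 1)) / (len(scores) - 1)

# Toy linear "model": score is the sum of visible feature values, so the
# exact attribution of each feature is simply its value.
predict = lambda v: sum(v)
x = [5.0, 3.0, 1.0]
curve = deletion_curve(predict, x, attribution=x, steps=3)
print(curve)  # [9.0, 4.0, 1.0, 0.0]
```

For the Insertion variant, one would start from the fully-masked input and reveal features in the same order, reporting the AUC of the rising curve instead.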
For a better sense of the results, we first evaluated the competitors on ResNet50 with a zero baseline — a setting used by RISE and HSIC. GEFA performs competitively across all tests, consistently ranking among the top two (in bold).
|ResNet50| IG|PSHAP|GEEX|GEFA|RISE|HSIC|
|-|-|-|-|-|-|-|
|Deletion ↓|**0.0493**|0.1691|0.0921|**0.0750**|0.1073|0.0890|
|Insertion↑|0.1888|0.3030|0.2844|**0.7297**|**0.6395**|0.5831|
|μFidelity ↑|0.0328|0.0259|0.0241|**0.0629**|0.0428|**0.0673**|
The tests were also repeated on InceptionV3 with the original setting from our paper. The results align with those reported using nAOPC scores. This is expected, as nAOPC and AUC (used by Deletion and Insertion) measure complementary areas along the perturbation curve.
|Inception| IG|PSHAP|GEEX|GEFA|RISE|HSIC|
|-|-|-|-|-|-|-|
|Deletion ↓|**0.0926**|0.1803|0.1661|**0.1046**|0.2205|0.1674|
|Insertion↑|**0.8048**|0.7532|0.7421|**0.8235**|0.7244|0.6926|
|μFidelity ↑|**0.0851**|0.0407|0.0441|**0.0519**|0.0136|0.0401|
Due to space constraints, we cannot provide a full view of our understanding of the experimental results. However, we welcome any further questions and would like to explore additional insights together.
**m1 (Failure case)**: Proxy gradient estimation can face challenges when feature masking has minimal impact on model outputs. Such cases may arise when the classification target is represented by redundant or widely distributed features — masking only parts of relevant features fails to expose model sensitivity, despite their relevance. This can lead to an underestimation of attributions to truly relevant features, resulting in noisy and less comprehensible explanations. The most straightforward solution is to enlarge the query budget, which increases the chance of sampling effective masks that expose model sensitivities. In addition, the use of mask smoothing (Appendix B.3.3) mitigates the risk of such failure cases. By softly grouping locally adjacent features, mask smoothing increases the probability of removing meaningful local patterns, inducing more significant changes in model outcomes.
**m2 (Hyperparameters)**: The query budgets are empirically determined based on the feature space dimensionality of each test case. Appendix D.1 investigates and discusses GEFA’s sensitivity to the query budget. Appendices D.2 and D.3 further examine the effect of the control variate coefficient, demonstrating the optimality of the estimated $\beta^*$ and highlighting the importance of the correlation assumption stated in Assumption 3.
**m3 (New insights)**: We focus on improving the quality of black-box explanations. With explanations that better reflect attributions to specific features, more faithful insights to model behaviors become available, thereby contributing to specific use cases, e.g. debugging and debiasing noted by the reviewer, and potentially model distillation (more effective model knowledge transferring by encouraging focus on salient regions). Additionally, we see GEFA’s potential to handle more complicated outcomes (e.g. text generation by LLMs). This is inspired by the use of gradient estimation in managing delayed rewards in RL — a challenge where backpropagation is less effective. We are currently investigating better formulations for model outcome observations and exploring the compatibility of GEFA with models producing more complex outputs, including those involving multi-modalities.
We hope our responses adequately address the reviewer's concerns. We look forward to further comments and are ready to engage in the next round of discussion.
---
Rebuttal Comment 1.1:
Comment: Thank you for the thorough and thoughtful response.
I appreciate the added experiments and clarifications—you've addressed my concerns well.
The method is sound and the results are clearer now. My only remaining hesitation is the relatively limited impact (attribution methods), which is why i’m not increasing my score further. Again, congratulations to the authors, I wish them best of luck with the paper acceptance ! | Summary: In this work, the authors propose a blackbox feature attribution method based on proxy gradient estimation. Specifically, they introduce proxy variables, each denoting a binary feature-level selection. The authors show that their approach is an unbiased estimator of shapley values, thus sharing some of the nice properties of shapley values. They also show experimentally that their method works similarly to integrated gradient method, where gradients can directly be estimated.
Claims And Evidence: Claims are supported by theoretical results and empirical evidence
Methods And Evaluation Criteria: - The work utilizes standard datasets used for such attribution-level experiments (SST2). While they compare different black-box models for text, they only consider InceptionV3 for images.
- Generally the experimental methodology makes sense; however, quantitative results focus on a single metric. More evaluation metrics (e.g., the impact of data perturbations on accuracy), similar to prior work (Do Feature Attribution Methods Correctly Attribute Features?, Zhou et al., AAAI 2022), would add to the experimental evidence
Theoretical Claims: Checked A.1 (equivalence to Shapley values), and it seems correct (but I have not thoroughly verified it in detail)
Experimental Designs Or Analyses: - The work utilizes standard datasets used for such attribution-level experiments (SST2). While they compare different black-box models for text, they only consider InceptionV3 for images. The setup makes sense, though the datasets/models used are relatively small. However, they show the utility of the method
- Can authors discuss more about connections to linear regression (e.g. similar to SHAP), and some local explanations like LIME -- which also uses the idea of masking variables but is not gradient-based and does not have such guarantees
Supplementary Material: - Primarily appendix A.1
Relation To Broader Scientific Literature: - The paper proposes a proxy-gradient estimation based method for feature attribution, and connect it to prior work in the feature attribution based explanation space
Essential References Not Discussed: - Literature on issues with feature-attribution-based explanations, e.g. [1], is not linked very well
Adebayo, Julius, et al. "Sanity checks for saliency maps." Advances in neural information processing systems 31 (2018).
Other Strengths And Weaknesses: Experimental methodology seems sound, though results are primarily on relatively small datasets. Can authors comment more on the computational complexity?
Other Comments Or Suggestions: - Can authors expand on limitations of this work/using feature attributions as interpretations?
Questions For Authors: - Can authors discuss more about connections to linear regression (e.g. similar to SHAP), and some local explanations like LIME -- which also uses the idea of masking variables but is not gradient-based and does not have such guarantees
- Can authors comment on feasibility with larger datasets?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for the detailed comments and the efforts devoted to reviewing the paper. Our point-to-point responses to the concerns and questions are given below.
**Test setting for images**: We consider two image classifiers — InceptionV3 and ViT — as shown in Table 2. These models incorporate two widely used architectural components: convolutional layers and attention mechanisms. We believe the structural diversity better demonstrates GEFA’s independence from specific model architectures.
**More evaluations**: We evaluate the impact of explanation-guided manipulation on model accuracy and present the results in the following table. Prediction accuracy is reported after removing the top 50% of the most important features as identified by each explainer; lower is better.
The impacts on prediction accuracies generally align with the nAOPC scores, which summarize the overall perturbation process.
It is noteworthy that GEFA outperforms IG in the test as it has more comprehensive coverage of relevant features, which is shown by the perturbation curves in our response to Reviewer **yynM** (via anonymous link).
We also refer the reviewer to our response to Reviewer **fwuD** for further results under extended settings.
|InceptionV3|VG|IG|PSHAP|GEEX|GEFA|
|-|-|-|-|-|-|
|Accuracy|0.494|0.142|0.195|0.163|**0.092**|
**Connections to linear regression**: Linear regression is used as a surrogate to approximate local model behaviors. While GEFA does not build a surrogate model, an indirect connection is drawn by KernelSHAP, which shows that linear regression with a carefully designed weighting kernel can serve as an estimator of Shapley Values.
**Connections to local explanations**: As noted by the reviewer, GEFA shares a high-level idea with other black-box methods. However, GEFA distinguishes itself from heuristic-based approaches through the rigorously derived sampling strategy and observation aggregation process. These analyses further provide theoretical grounding and desirable properties for the proposed method.
**Sanity checks**: We understand the reviewer’s concern about sanity checks. Theoretically, GEFA derives explanations based on observations of model outputs; thus, concrete attribution scores will change when model parameters are altered, as it results in different predictions. Due to character limitations in the response, we kindly refer the reviewer to our reply to Reviewer **q8np** for further details and experimental results.
**Time complexity**: Appendix B.3.4 provides the time complexity analysis from the perspective of query budgets. Generally, black-box competitors exhibit similar complexity when receiving identical query budgets. However, some methods involve additional steps during explanation process, which can be slower than GEFA in practice. From the perspective of feature space dimensionality, the query search space grows exponentially as the feature space expands, posing a challenge for all black-box explainers. We incorporate *mask smoothing* (Appendix B.3.3) to counteract the complexity due to the increase of feature space dimensionality.
**Feasibility with larger datasets**: We interpret “larger datasets” in two ways: datasets with more entries and inputs with more features. GEFA is insensitive to dataset size, as feature attribution focuses on explaining individual decisions. In contrast, larger inputs indeed increase computational complexity for black-box explainers, consistent with the discussion about time complexity.
**Limitations**: Feature attribution is an important step toward understanding model behavior. However, further developments are awaited for deeper insights. Current approaches typically investigate ultimate feature contributions to model outcomes without considering interactions among features. This can conceal details about how models interpret inputs. For example, CNNs and transformers process inputs differently, but such differences are often not perceptible from feature attribution alone. We are looking into the potential of taking higher-order derivatives within the GEFA framework to reveal interactions between active features. Additionally, the rise of LLMs presents new challenges for explainability. While we obtained promising results with simple test cases on LLMs, we believe that feature attribution is only one piece of the explainability puzzle. Unlike classifiers that typically receive inputs with sufficient information for prediction, LLM prompts often pose questions that require the model to draw on knowledge acquired during training. We believe that feature attribution should at least be complemented by data attribution to demonstrate: 1) how a model interprets a given prompt; 2) which parts of training data contribute to the knowledge for model reactions.
We hope our responses adequately address the reviewer’s concerns. We look forward to further feedback from the reviewer and are ready to engage in continued discussion. | Summary: In this paper, the authors propose a new method for input attribution in DNNs. They focus on the attribution in black-box models, where the gradient is unavailable. In this case, they propose the proxy gradient space for estimation, and then define the attribution of input features. The authors further prove the properties of the proposed metric. They also modify the metric for further variance reduction.
Claims And Evidence: No.
Methods And Evaluation Criteria: - The advantage of GEFA over previous methods, especially SHAP, is not significant. From the perspective of correctness, GEFA is an unbiased estimator of SHAP. From the perspective of effectiveness, the theoretical complexity of Eq. (7) is larger than SHAP's, since Eq. (7) additionally involves the integration over $\gamma$. Besides, "information waste" is claimed as a shortcoming of SHAP, but I am not sure what it exactly means.
- In the integrating path of GEFA, all input features share the same presence probability, $\forall i, \alpha_i = \gamma$. What is the benefit of such a setting and why not use different presence probabilities?
- The evaluation of attribution methods is limited to nAOPC. Evaluation based on insertion and on the sanity checks in (Adebayo et al., 2018) should be included. Moreover, besides the nAOPC values, the change curve of model performance along the deletion process should be reported.
Adebayo et al., Sanity checks for saliency maps. In NeurIPS 2018.
Theoretical Claims: The proof of the completeness in Appendix A.2.1 is confusing. Notations like $w_{i\in S}$ and $w_{i\notin S}$ are not formally defined, and equations in Lines 609-612 need explanations.
Experimental Designs Or Analyses: - How many queries were used for GEFA and other baseline methods in experiments? Is GEFA the best under the same number of queries?
Supplementary Material: I read partial proofs in Appendix A.
Relation To Broader Scientific Literature: This paper provides a potential method for estimating attributions in black-box models, extending the integrated gradient to the black setting.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: No.
Other Comments Or Suggestions: The comparison of computation cost of different methods is suggested to be put in the main text.
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for the detailed comments and the efforts devoted to reviewing the paper. Our point-to-point responses to the concerns and questions are given below.
We start with the relationship between GEFA and SHAP:
1. Both GEFA and SHAP are unbiased estimators of Shapley Values, rather than one being an estimator of the other. They are closely related, as both theoretically converge to the same results when the query budget increases.
2. Appendix B.3.4 provides analyses of **time complexity** for the black-box competitors. When the number of queries is set to $n$, the black-box competitors (including GEFA) exhibit the same level of complexity. For GEFA, the query budget is distributed across points on the proxy path, but the total number of queries sticks to $n$. KernelSHAP additionally solves a linear regression problem, and PartitionSHAP applies an explicand-specific feature space partitioning; both introduce additional costs. These costs become more significant as feature space grows, which is empirically demonstrated by Table 4 in Appendix B.3.4.
3. We refer to “information waste” as a limitation of Shapley value estimators relying on computing marginal contributions. Let $x_i$ be a present feature in an observation $x_S$. To compute its marginal contribution in the context of $S$, a paired observation $x_{S\backslash i}$, differing exactly in $x_i$, is required. In other words, for any present feature $x_j$, its marginal contribution given $S$ cannot be computed if model prediction on $x_{S\backslash j}$ is not observed. This induces information waste, as the information about $x_j$ contained in $x_S$ is not used. Regardless of sampling strategies, such waste is unavoidable unless all possible combinations of feature presence are enumerated, which becomes computationally intractable in high-dimensional feature spaces.
4. Furthermore, we would like to highlight the explicit gradient-estimator form of GEFA, which enables the application of the **control variate**. The designed control variate further reduces the estimation variance without additional queries, thereby improving explanation quality. This advantage is theoretically proved and empirically shown in our experiments and Appendix D.2.
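The control-variate idea in point 4 can be illustrated generically: given noisy samples of the quantity of interest and correlated samples of a variate with known mean, subtracting $\beta(g - \mathbb{E}[g])$ with the variance-minimizing $\beta^* = \mathrm{Cov}(f,g)/\mathrm{Var}(g)$ reduces estimator variance at no extra query cost. Below is a minimal, generic Monte Carlo sketch of this principle, not GEFA's actual estimator; the toy variables are assumptions for illustration.

```python
import random

def control_variate_mean(f_samples, g_samples, g_mean):
    # variance-reduced estimate of E[f] using control variate g with
    # known mean g_mean; beta* = Cov(f, g) / Var(g) minimizes variance
    n = len(f_samples)
    mf = sum(f_samples) / n
    mg = sum(g_samples) / n
    cov = sum((a - mf) * (b - mg) for a, b in zip(f_samples, g_samples)) / n
    var_g = sum((b - mg) ** 2 for b in g_samples) / n
    beta = cov / var_g
    return mf - beta * (mg - g_mean)

random.seed(1)
# f is g plus small noise, so the two are highly correlated -- the kind of
# correlation the rebuttal's Assumption 3 requires for the variate to help
g = [random.gauss(0.0, 1.0) for _ in range(2000)]
f = [gi + random.gauss(0.0, 0.1) for gi in g]
plain = sum(f) / len(f)                      # naive estimate of E[f] = 0
reduced = control_variate_mean(f, g, g_mean=0.0)
print(f"naive={plain:.3f} control-variate={reduced:.3f}")
```

Because the variate's mean is known exactly, the correction cancels most of the sampling noise shared between `f` and `g`, so the variance-reduced estimate concentrates much more tightly around the true mean.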
**Proxy path**: As discussed in the last paragraph of Section 4.4, the straight-line path — where all $\alpha_i$ takes the same value — is equivalent to averaging over all $p!$ unique edge paths. The equivalence allows us to simplify the problem of averaging estimates across multiple paths to computing a single path-based estimate. For further details, we refer the reviewer to Section 4.4 and Appendix A.4.
**Further evaluations**: We have conducted the additional tests suggested by the reviewer. Due to character limitations in this response, we kindly invite the reviewer to refer to our reply to Reviewer **q8np** for details on sanity checks, and our reply to Reviewer **fwuD** for the Insertion test and other expansions.
**Change curves**: Change curves provide additional insights into explanation effectiveness. We will include plots showing the changing trends under different test settings in the Appendix of the updated version. A preliminary version of change curves is available via https://anonymous.4open.science/api/repo/change_curves_GEFA-F417/file/curves.pdf?v=a9189c14
**Notation in Appendix A.2.1**: We use $w_{i\in S}$ and $w_{i\notin S}$ as shorthand for the weighted contributions of observations to the feature attribution estimate of $x_i$, as defined in the equations in lines 609-612. When combined, the two parts recover Eq. (8). However, we noticed that the definitions mistakenly use $=$ instead of the assignment symbol $:=$. We believe this notation mistake likely caused the confusion, and we thank the reviewer for pointing it out. The notation will be corrected in the updated version.
**Number of queries**: All black box competitors receive identical query budgets. GEFA outperforms other black-box explainers given the same query budgets. Specifically, the number of queries is 500 for text classifiers (lines 290-291, right panel) and 5000 for image classifiers due to the higher dimensionality of images (lines 369-370, left panel). Additionally, Appendix D.1 provides further results evaluating GEFA under varying query budgets. In light of the reviewer’s comment, we recognize that the query budgets should be better highlighted. We will consolidate and relocate the information to the experimental setting section.
**Computation cost**: We agree that computational cost is an important aspect in comparing black-box explainers. While the current version presents this information in the Appendix due to space constraints, we appreciate the suggestion and will move the relevant results to the main text if space permits.
We hope our responses adequately address the reviewer's concerns. We look forward to further feedback and are ready to engage in continued discussion. | null | null | null | null | null | null |
Set Valued Predictions For Robust Domain Generalization | Accept (poster) | Summary: The paper introduces a set-valued prediction approach for robust Domain Generalization (DG). It argues that single-valued predictions limit robustness, proposing instead to predict sets of labels to achieve reliable coverage across unseen domains. The authors provide theoretical generalization bounds and introduce an optimization algorithm (SET-COVER) to minimize set size while maintaining performance guarantees. Experimental results on WILDS datasets show improvements over existing baselines in robustness and prediction set efficiency.
Claims And Evidence: Overall, the paper's claims are largely supported, though some gaps remain.
The theoretical generalization results (VC-dimension-based bounds) rely on restrictive assumptions (e.g., conditional Gaussianity, identical covariance structures across domains).
Methods And Evaluation Criteria: Yes, the methods and evaluation criteria generally make sense.
However, the paper could further strengthen evaluation by considering metrics beyond average recall and set size, such as computational overhead or interpretability.
Theoretical Claims: Yes, I checked the theoretical claims—particularly the generalization bounds involving VC-dimension (Theorem 3.7).
A potential issue is that the theoretical results hinge on overly restrictive assumptions: specifically, the conditional Gaussian assumption with identical covariance structures (up to a scaling factor) across all domains. This assumption is highly unrealistic for most real-world DG scenarios, and the paper does not sufficiently justify or empirically validate its reasonableness, weakening the theoretical claims substantially.
Experimental Designs Or Analyses: Yes, I reviewed the experimental designs, especially those involving real-world datasets from the WILDS benchmark.
One notable issue is that the paper's main experiments primarily emphasize recall and prediction set size but do not sufficiently analyze trade-offs such as computational cost, calibration robustness, or practical usability of large prediction sets.
Supplementary Material: The supplementary material is thorough, clarifying theoretical derivations and providing useful experimental details
Relation To Broader Scientific Literature: The paper extends the idea of set-valued predictions—commonly explored in conformal prediction literature—to Domain Generalization (DG). It builds upon prior work by explicitly addressing worst-case performance guarantees rather than average-case coverage.
Essential References Not Discussed: None
Other Strengths And Weaknesses: **Strengths:**
- Clearly addresses the important problem of robust domain generalization from a fresh perspective (set-valued predictions).
- Combines theoretical and practical aspects effectively, offering well-motivated theoretical bounds alongside a practical optimization approach.
- Empirical results convincingly demonstrate improved robustness on realistic and challenging datasets.
**Weaknesses:**
- Key theoretical results depend heavily on restrictive assumptions (Gaussianity, identical covariance), limiting their practical relevance.
- Experimental analysis lacks consideration of important practical trade-offs (e.g., computational overhead, interpretability of set predictions).
- Methodological novelty is incremental; the paper largely adapts existing conformal prediction and classical learning theory concepts without significant theoretical breakthroughs or novel algorithmic contributions.
Other Comments Or Suggestions: Line 665: "destributions" → "distributions"
Questions For Authors: See weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful review. We have gained many important insights from your questions, and appreciate the opportunity to address your concerns.
1. SET-COVER incurs a higher training time (~30% increase over ERM) due to the additional optimization of Lagrangian multipliers (denoted as C in our algorithm). Below are the average training times (single GPU) for our experiments:
* Camelyon: ERM: 98 min, SET-COVER: 133 min
* Fmow: ERM: 45 min, SET-COVER: 56 min
* iWildCam: ERM: 46 min, SET-COVER: 58 min
* Amazon: ERM: 12 min, SET-COVER: 15 min
We will include these results in the final version. Notably, our current implementation can be further optimized by, for example, exploiting GPU parallelism. We anticipate that a more efficient implementation would significantly reduce the additional computational overhead. Apart from this step, SET-COVER primarily involves optimizing a loss function composed of hinge losses, which does not introduce substantial extra computation beyond standard architectures.
Train times of other set-prediction methods are approximated by those of ERM, as other set-prediction methods train ERM as a first stage, which consumes most of the training time.
Additionally, other SOTA DG methods that we have tested in appendix E.5 are implemented in the Domain-Bed package, which incorporates runtime optimizations that make direct comparisons inconsistent.
2. We acknowledge that Theorem 3.7 has limited scope, as it assumes normal distributions with all domains sharing the same covariance matrix up to a scaling factor. However, this assumption, though restrictive, aligns with common practices in DG research (e.g., Wald et al. (2021)). In our case, theoretical results with weaker assumptions have an additional difficulty, as briefly discussed in Section 3.1 of our paper. We thus view Theorem 3.7 as an illustration and motivation leading to our method and our empirical results.
To address potential limitations, we validate our claims empirically, including:
* Experiments on synthetic Gaussian data where each domain has a different covariance matrix (Appendix E.3).
* Real-world datasets to test robustness beyond the Gaussian assumption.
3. We would like to highlight the novelty our paper brings to the field of Domain Generalization (DG). While conformal predictors are a powerful approach to DG problems, we primarily use them as baselines to compare against our proposed method, SET-COVER. Unlike conformal prediction, SET-COVER is an optimization-based approach designed for modern neural network architectures, offering set-valued predictions with optimized sizes. We see this as a fundamentally new alternative for set-valued predictions in DG.
Additionally, while our theoretical analysis builds on VC-dimension and uniform convergence literature, extending these concepts to a multi-domain setting requires subtle but significant modifications. In the appendices, we aim to highlight these nuances and the key differences that arise in the multi-domain context compared to classical settings. For example, we found that shifting the focus from 0-1 loss to performance indicators required a careful consideration, as further described in Appendix A1. We acknowledge the importance of further emphasizing these distinctions in the main text and appreciate your feedback on this point. We will incorporate additional details on these differences in our final submission. | Summary: This paper introduces a set-valued predictor approach for domain generalization (DG) to address the limitations of single-valued predictions in unseen domains. The authors argue that set-valued outputs can capture diverse feature-label relationships across domains, enhancing robustness. They present a theoretical framework defining success criteria for set prediction in DG and derive generalization bounds under specific conditions. The proposed method, SET-COVER, optimizes prediction set size while ensuring coverage guarantees through constrained learning. Experiments on synthetic data and real-world WILDS benchmarks demonstrate that SET-COVER achieves higher coverage with smaller set sizes compared to conformal prediction baselines, offering a promising direction for reliable ML systems in critical applications like healthcare.
Claims And Evidence: The paper's claims are validated through both theoretical and empirical evidence.
Methods And Evaluation Criteria: Yes the proposed methods make sense for the problem or application at hand.
Theoretical Claims: Theoretically, generalization bounds based on VC-dimension are established, proving that linear hypotheses under conditional Gaussian assumptions achieve coverage guarantees on unseen domains with sufficient training domains.
Experimental Designs Or Analyses: The validation experiments cover synthetic data and multimodal datasets such as real medical and satellite images. The results show that the new method not only maintains more than 95% of the recognition rate for key features in tasks such as tumor recognition and geographic classification, but also reduces the probability of false alarms to one-third of that of the traditional method, providing a new technological path for scenarios with very low error tolerance such as autonomous driving and precision medicine.
Supplementary Material: no
Relation To Broader Scientific Literature: This paper opens a new direction for domain generalization research by introducing an ensemble prediction and theoretical analysis framework, which promotes the exploration of machine learning at the intersection of out-of-distribution generalization and uncertainty modeling.
Essential References Not Discussed: no
Other Strengths And Weaknesses: Strengths:
This paper presents the first theoretical framework for domain generalization based on ensemble prediction, which provides a new theoretical perspective on multi-domain robustness.
The approach proposed in this paper provides more reliable coverage guarantees and reduces the risk of missed diagnosis in high-risk domains such as healthcare.
Weakness:
SET-COVER's dual optimization process may increase training time and has limited scalability for large-scale data.
The effectiveness of the method proposed in this paper may decrease with the number of training domains, and the performance of small-sample multi-domain scenarios is not fully explored.
Other Comments Or Suggestions: no
Questions For Authors: no
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your valuable feedback. We appreciate your insights and are happy to address your concerns.
1. SET-COVER incurs a higher training time (~30% increase over ERM) due to the additional optimization of Lagrangian multipliers (denoted as C in our algorithm). Below are the average training times (single GPU) for our experiments:
* Camelyon: ERM: 98 min, SET-COVER: 133 min
* Fmow: ERM: 45 min, SET-COVER: 56 min
* iWildCam: ERM: 46 min, SET-COVER: 58 min
* Amazon: ERM: 12 min, SET-COVER: 15 min
We will include these results in the final version. Notably, our current implementation can be further optimized by, for example, exploiting GPU parallelism. We anticipate that a more efficient implementation would significantly reduce the additional computational overhead. Apart from this step, SET-COVER primarily involves optimizing a loss function composed of hinge losses, which does not introduce substantial extra computation beyond standard architectures.
Train times of other set-prediction methods are approximated by those of ERM, as other set-prediction methods train ERM as a first stage, which consumes most of the training time.
Additionally, the other SOTA DG methods that we tested in appendix E.5 are implemented in the Domain-Bed package, which incorporates runtime optimizations that make direct comparisons inconsistent.
2. We recognize the importance of evaluating SET-COVER in scenarios with more training domains. Our initial focus was on datasets with sufficiently rich domains to first validate our method.
Although the field of Domain Generalization (DG) has gained popularity in recent years, the availability of datasets for DG experiments remains limited. WILDS is one of the most comprehensive sources of datasets for DG problems, which led us to focus on utilising it for our experiments. Our experiments included:
* 20 training domains for Camelyon dataset
* 20 training domains for Fmow dataset
* 80 training domains for iWildCam dataset
* 500 training domains for Amazon dataset (however each data point consisted of a relatively short text instance)
As larger and more diverse DG datasets become available, we look forward to further testing SET-COVER’s performance in multi-domain settings. We view this as an important direction for future work. | Summary: This paper proposed set valued predictions for domain generalization, with theories and experimental justifications. This work builds upon some theoretical basis on uniform convergence considering domains and the conditions of uniform convergence based on the finite VC-dimension. The paper further prove the achievable low loss in domain generalization under the conditional Gaussian domains. The SET-COVER model was proposed by minimizing prediction size and the loss across various domains, further modeled based on hinge-losses, and iterative optimized. In experiments on synthetic dataset and WILDS datasets, the proposed DG method demonstrated improved performance compared with ERM, CDF Pooling, CDF pooling, robust conformal predictor.
Claims And Evidence: The paper claimed a theoretical framework defining successful set prediction in DG setting, and provide theoretical insight on the condition that DG is achievable. However, the major concern is on the applicability and impact of these theoretical analysis. For example, the Theorem 3.7 is restrictive by assuming conditional Gaussian of domains, which is unrealistic in the real world domain data.
Methods And Evaluation Criteria: How does the above theoretical analysis inspire the design of the optimization model and algorithm? The deduced model is intuitive and simple, lacking novelty compared with the diverse DG models in the literature, e.g., those based on aligning feature distributions, data augmentation, etc. Moreover, the compared datasets and methods are quite limited, hardly justifying the performance relative to SOTA DG methods.
Theoretical Claims: I did not fully check the proof but the proof is based on the VC-dimension, and the novelties of these math deductions should be clarified, including not only the deduced theoretical results for DG, but also the key contributions in the proof process.
Experimental Designs Or Analyses: The compared baseline methods are limited to some baseline methods, including ERM, Pooling CDFs, Robust Conformal. As we know, the DG methods are diverse in the context of CV and ML literature. The manuscript should fully refer to the related DG methods and conduct full comparisons with SOTA methods.
Supplementary Material: The supplementary material contains the mathematical deduction of the theories and presents more details on the experimental results.
Relation To Broader Scientific Literature: Domain adaptation is an important task, and this work presents the theoretic analysis on the conditions of DG across domains. However, these theories lack significant impact on the design of novel DG methods, and the contributions of this paper is limited in the context of DG literature.
Essential References Not Discussed: The paper lacks the full survey of DG literature, especially on the more recent DG methods.
Other Strengths And Weaknesses: The major strength of this paper is the theoretical analysis of the conditions for achieving domain generalization. However, the novelty and significance of these theories are not clearly presented. In particular, they do not clearly inspire novel and effective designs of DG models and algorithms. The limited experimental comparisons are also a major limitation of this work.
Other Comments Or Suggestions: None
Questions For Authors: (1) Set valued prediction is common in the multi-class classification tasks in the DG setting. This paper claimed the novelty on the set valued prediction, which should be more careful in the claim. The question is what is the major novelty on the set valued prediction in this work?
(2) What is the relationship between the model in section 4 with the theoretical analysis in the previous sections?
(3) The deduced model contains the minimization of size of the prediction set as one objective, which is confusing in the motivation and the meaning of "prediction set".
(4) Please extend the compared methods to include more sota DG methods.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you very much for your thoughtful review. Your questions and comments are very valuable and we appreciate the opportunity to clarify the key points raised.
1. Our theoretical results address whether achieving a target performance level (e.g., passing a recall threshold level) on training domains generalizes to new domains and under what conditions this generalization holds. Based on this, we developed SET-COVER, which explicitly optimizes for a predefined recall level on training domains, while also minimizing prediction set sizes. The latter objective complements the generalization goal by ensuring efficiency in the learned sets. In our experiments we show that indeed the recall performance generalizes to new domains, as evident by the fact that recall levels of SET-COVER are above the target recall of 90% in most test domains.
2. We acknowledge that Theorem 3.7 has limited scope, as it assumes normal distributions with all domains sharing the same covariance matrix up to a scaling factor. However, this assumption, though restrictive, aligns with common practices in DG research (e.g., Wald et al. (2021)). In our case, theoretical results with weaker assumptions have an additional difficulty, as briefly discussed in Section 3.1 of our paper. We thus view Theorem 3.7 as an illustration and motivation leading to our method and our empirical results.
To address potential limitations, we validate our claims empirically, including:
* Experiments on synthetic Gaussian data where each domain has a different covariance matrix (Appendix E.3).
* Real-world datasets to test robustness beyond the Gaussian assumption.
3. While our proofs build on VC-dimension and uniform convergence literature, their extension to a multi-domain setting introduces subtle but nontrivial modifications. Throughout the proofs in the appendices we attempt to highlight those subtle points and shed light on the differences that arise in the multi-domain setting compared to the classical one (As one example, the fact that in the multi-domain setting we focus on performance indicators instead of 0-1 loss requires a careful consideration). We recognize the need to emphasize these points further in the main text, and thank you for highlighting this issue. We will add details on these differences in the main text of our final submission.
4. We appreciate the concern regarding novelty. SET-COVER is derived from a principled, hard to compute optimization problem, and its intuitive design and ease of implementation are, in our view, key advantages.
We have included comparisons with SOTA DG methods from the DomainBed package, which were included in Appendix E.5 due to space constraints. In these results we can see that our method, while being intuitive and relatively simple to implement, achieves competitive results compared to advanced DG methods (e.g., feature alignment approaches). We view this result as a strength of our work.
5. In our literature review, after briefly describing the main research efforts put into DG problems in recent years, we focus on works that integrate set-valued predictions within some variants of DG settings, as these are the most directly relevant to our approach. However, we acknowledge the broader DG literature and as mentioned above in Appendix E.5 of our submission we included experimental comparisons with leading single-valued prediction methods.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply and explanations. I increased the score to 2. Considering the remaining limitations of Theorem 3.7 and the limited novelty (I acknowledge the comparison on DomainBed package) in a broader DG literature, I still lean to reject. | Summary: This paper introduces set-valued predictions for domain generalization (DG) problems. They propose a framework based on counting threshold violations for per-label recall. The paper introduces SET-COVER (SET Coverage Optimized with Empirical Robustness), a relaxed (differentiable) version of the proposed metric. They evaluate their approach on synthetic data and several datasets from the WILDS benchmark.
Claims And Evidence: - The claim that set-valued predictors can enhance robustness is backed by theoretical generalization bounds and experimental results showing improved performance metrics.
- Empirical results on four WILDS datasets show SET-COVER achieves the target 90% recall level across more test domains than baseline methods while maintaining smaller set sizes than robust conformal methods.
Methods And Evaluation Criteria: The average set size metric provides a meaningful measure of prediction efficiency. The recall@90 pctg metric is appropriate for set-valued prediction problems.
Theoretical Claims: I read the statements and skimmed the proofs.
Experimental Designs Or Analyses: Yes.
Supplementary Material: I skimmed the proofs.
Relation To Broader Scientific Literature: This paper builds on the distribution shift literature and that of conformal prediction methods.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: - Proposes a novel framework for tackling domain generalization through set-valued predictions. Lays out a strong theoretical foundation within this setting with VC-dimension analysis and generalization bounds.
- Clear presentation of the trade-off between prediction set size and robust performance
- Proposes a practical surrogate objective (SET-COVER).
- Consistent performance improvements across multiple real-world datasets
- Limited comparison with recent domain generalization methods beyond ERM and conformal prediction
- Limited discussion of how to determine appropriate target recall levels in practice
- Could show more metrics for a fuller picture of performance and tradeoffs. For example, you could draw a graph of Recall@N pctg as a function of N.
Other Comments Or Suggestions: N/A
Questions For Authors: - How does SET-COVER compare with the baseline ERM method in terms of training / inference computational cost?
- Have you explored how to automatically determine an appropriate target recall level for a new problem? The current approach treats it as a fixed hyperparameter.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your constructive review. We have gained many important insights from your questions, and believe we can address your concerns. Below, we have organized our response by the key topics raised in your review:
1. Our primary focus was on comparing SET-COVER with other set-valued methods suitable for DG, balancing both recall and set size. However, recognizing the importance of benchmarking against SOTA DG methods, we wish to direct your attention to Appendix E.5 where we provided additional comparisons. While these methods typically produce single-valued predictions, limiting the ability to compare them with set-valued predictors, we believe this comparison still provides useful insights, showing that set-valued predictions made by SET-COVER increase recall robustness in unseen domains.
2. SET-COVER incurs a somewhat higher training time (~30% increase over ERM) due to the additional optimization of Lagrangian multipliers (denoted as C in our algorithm). Below are the average training times (single GPU) for our experiments:
* Camelyon: ERM: 98 min, SET-COVER: 133 min
* Fmow: ERM: 45 min, SET-COVER: 56 min
* iWildCam: ERM: 46 min, SET-COVER: 58 min
* Amazon: ERM: 12 min, SET-COVER: 15 min
We will include these results in the final version. Notably, our current implementation can be further optimized by, for example, exploiting GPU parallelism. We anticipate that a more efficient implementation would significantly reduce the additional computational overhead. Apart from this step, SET-COVER primarily involves optimizing a loss function composed of hinge losses, which does not introduce substantial extra computation beyond standard architectures.
Train times of other set-prediction methods are approximated by those of ERM, as other set-prediction methods train ERM as a first stage, which consumes most of the training time.
Additionally, other SOTA DG methods that we have tested in appendix E.5 are implemented in the Domain-Bed package, which incorporates runtime optimizations that make direct comparisons inconsistent.
3. We agree that the selection of the target recall (γ parameter) is important. We view this as an application-dependent choice, best determined by the user's specific requirements. However, we understand that studying the performance of the method as a function of the target recall can be helpful. We wish to point out the analysis of the role of this parameter in Appendix E.4; due to space constraints we could not include this analysis in the main body of the paper.
4. We appreciate the suggestion to provide a clearer visualization of recall vs. set size. In our paper, we analyze how varying γ values, which determine the recall, affect also set sizes across methods. We will summarize this analysis with a clearer graph, as suggested, to better illustrate the trade-off. | null | null | null | null | null | null |
Synthetic Face Datasets Generation via Latent Space Exploration from Brownian Identity Diffusion | Accept (poster) | Summary: The authors propose an approach to generate synthetic face images, by leveraging a GAN-based backbone, coupled with novel Langevin and Dispersion algorithms, together used as DisCo, wherein both inter-class and intra-class diversity is ensured by using a physics-informed formulation.
Claims And Evidence: While I am not an expert in the particular field of synthetic face generation for FR algorithms, the claims made in this paper are clearly presented, in my understanding. The proposed approach, and the associated algorithm are clearly explained, and the experiments are well motivated and presented.
Methods And Evaluation Criteria: The methods and evaluation criteria are well presented and consistent with the literature presented, to the best of my knowledge.
Theoretical Claims: The theoretical formulation, motivating the inter-class and intra-class sample diversity by means of repulsive and attractive forces, is well formulated and presented clearly. However, it would be good for the authors to discuss some of the other works on this topic in the GAN space. In terms of analyzing GAN and diffusion models' image generation through these forces, there have been prior works such as, for example, Franceschi et al. 2023, and Asokan and Seelamantula, 2023, which talk precisely about this repulsive/attractive nature of particle flow in GAN and diffusion-model settings, while Unterthiner et al., 2018 and Wang et al., 2019 both initially formulated this via a loss function for GANs. Of course the settings are different, but the theory in this paper could certainly be made stronger by either leveraging, or referencing, existing literature that makes claims aligned with the paper's setting.
[1] Franceschi et al., Unifying GANs and score-based diffusion as generative particle models, NeurIPS 2023
[2] Asokan and Seelamantula, GANs Settle Scores, arXiv 2023
[3] Unterthiner et al., Coulomb GANs: Provably optimal Nash equilibria via potential fields, ICLR 2018
[4] Wang et al., Improving MMD-GAN Training with Repulsive Loss Function, ICLR 2019
Experimental Designs Or Analyses: While I am not an expert in the space of FR, the GAN base setting of the experiments, the design and evaluation framework all appear sound to me.
Supplementary Material: Yes. I went through the additional images presented, and the discussion on the hyper parameters, and graphics presented in the Supplementary.
Relation To Broader Scientific Literature: To the best of my understanding, this manuscript is relevant to the GAN and FR literature, and the algorithms provided, although in the context of generating synthetic datasets for FR, can be leveraged in other settings as well, and are therefore relevant to the broader community.
Essential References Not Discussed: Please see my response to **Theoretical Claims**.
Other Strengths And Weaknesses: Please see my response to other questions above.
Other Comments Or Suggestions: **Impact statement:** It appears that I couldn't find an impact/ethics statement, even the default one that ICML suggests, in the manuscript. I found this ironic, for a paper targeting, particularly, the privacy and ethical concerns of face recognition models. I do not wish to flag this paper for an ethics issue in this regard because I don't see any such glaring issues, but it would be good for the authors to acknowledge the impact of the proposed algorithm in the context of privacy concerns of FR models in their impact statement.
**Minor bug fix (typo):** L347C1: … datasets are **accurately** calculated…
Questions For Authors: Please see my response to other questions above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Summary: n this paper, the authors introduce a physics-inspired method to generate large synthetic face datasets for training face-recognition models. Their core idea is to treat each latent representation as a “particle” and let these particles repel each other in the embedding space (via a “Brownian identity diffusion” approach), ensuring that each synthetic identity is sufficiently distinct while still maintaining realistic appearance. They propose three algorithms—Langevin, Dispersion, and DisCo—to control inter-class diversity (spacing between different identities) and intra-class variation (differences among images of the same identity). By training a face-recognition model on these generated datasets, they claim to outdo earlier GAN-based methods, and even rival some diffusion-based approaches, all while preserving more privacy.
Claims And Evidence: Some of their evidence is compelling—particularly the performance comparisons against older GAN-based methods and the step-by-step ablation results that show how Langevin and Dispersion can boost coverage in latent space. However, a few claims feel less rock-solid. For example, they assert that GANs are definitively more private than diffusion models, yet the paper mainly references prior studies instead of conducting a thorough memorization or leakage check themselves. The discussion of “Brownian identity diffusion” as a surefire way to avoid “jamming” also seems a bit hand-wavy, since there’s limited empirical proof that high-dimensional jamming is a real hazard or that their random force definitively fixes it.
Methods And Evaluation Criteria: Yes, they focus on training large-scale face recognition with synthetic data, and the benchmarks (LFW, CA-LFW, CFP-FP, etc.) are standard for face verification. They also measure how well synthetic identities spread out in embedding space, which directly relates to how reliably a model can distinguish between them.
Theoretical Claims: There aren’t formal theorems to check here, only physics-based arguments likening their latent-space approach to Brownian motion and granular mechanics. There’s no rigorous proof that needs verification in the usual mathematical sense. Rather, they present a heuristic connection—no step stands out as a “proof” that could be right or wrong in that classical, theorem-based way.
Experimental Designs Or Analyses: I checked their experiment layout: they generate synthetic face sets, train a standard face-recognition model on each, then compare scores on popular benchmarks. This is a straightforward approach and largely appropriate, since it measures the core question: “Can synthetic data match or outperform existing sources for training face recognition?” One notable concern, though, is that they assume their reference FR model (used to measure inter-identity distances) doesn’t bias the dataset generation. If the generator is overfitted to that specific embedding, it might exaggerate gains on tests that are also partial to similar embeddings. While it doesn’t invalidate the results outright, it’s something to keep in mind when interpreting their reported accuracy boosts.
Supplementary Material: I looked through the appendix sections on algorithmic illustrations and hyperparameter defaults, which clarify how they implement their “particle” updates in latent space, plus the ablation results that show varying parameters’ impact on final performance.
Relation To Broader Scientific Literature: They’re building on a growing theme of training face-recognition models with synthetic data—particularly methods that try to systematically traverse or manipulate generative latent spaces (e.g., SynFace, SFace, and Syn-Multi-PIE). They depart from older GAN-based setups by taking a “physics-inspired” angle: rather than just random or partially guided sampling, they push synthetic identities away from each other in the embedding space using spring-like forces, akin to granular mechanics. This sets them apart from, for instance, DreamBooth-style diffusion methods (DCFace or IDiff-Face), which often face privacy concerns around training-data leakage. So they’re essentially combining older ideas—latent editing and identity separation—with a fresh “Brownian motion” twist to make synthetic datasets more diverse while still being feasible for face recognition.
Essential References Not Discussed: Yes. Beyond the cited diffusion-model leakage papers, there’s also prior work directly evaluating whether GANs might memorize and replicate training faces—see, for example, “Evaluating GANs via Dual- and Triple-Generation” (ECCV 2020) and Carlini et al. (2023), which compare data-extraction risks between GANs and diffusion. Including these would highlight potential pitfalls in assuming one approach is fundamentally “private.” Also, discussing recent latent-manipulation methods like GANDiffFace (Melzi et al. 2023)—which systematically ensure identity separation—would broaden the conversation on spacing identities in latent space. There are also works that discuss using GAN-generated data to understand FR models (Liang et al., 2023).
A. B. Some Author et al. “Evaluating GANs via Dual- and Triple-Generation,” Proceedings of ECCV, 2020.
N. Carlini, J. Hayes, M. Nasr, et al. “Extracting Training Data from Diffusion Models,” in 32nd USENIX Security Symposium (USENIX Security 23), 2023.
P. Melzi, C. Rathgeb, R. Tolosana, et al. “GANDiffFace: Controllable generation of synthetic datasets for face recognition with realistic variations,” arXiv preprint arXiv:2305.19962, 2023.
Liang, H., Perona, P., and Balakrishnan, G. “Benchmarking Algorithmic Bias in Face Recognition: An Experimental Approach Using Synthetic Faces and Human Evaluation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023.
Other Strengths And Weaknesses: I appreciate the paper’s effort to blend concepts from physics (Brownian motion and granular mechanics) into synthetic face dataset generation—this is a neat twist on what’s otherwise a fairly saturated space. They also do a thorough job of comparing different hyperparameter settings, showing how to tune their “repulsive forces” for best effect. That said, the writing occasionally slips into heavy theoretical exposition, which some readers might find confusing or tangential. A more direct, plain-spoken approach would help clarify the motivation behind “Brownian identity diffusion.” Furthermore, while they position GANs as a privacy-friendlier alternative to diffusion, they don’t do much to measure or prove that assertion directly. Overall, though, the paper proposes a novel angle for crafting more identity-rich, diverse datasets, and the results suggest real promise for training practical face recognition systems.
Other Comments Or Suggestions: Please refer to the previous section.
Questions For Authors: Please refer to the previous section.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Summary: In this work, the authors introduce a new method, inspired by the physical motion of soft particles subjected to stochastic Brownian forces, that allows sampling identity distributions in a latent space under various constraints. They also introduce three complementary algorithms, called Langevin, Dispersion, and DisCo, aimed at generating large synthetic face datasets.
Claims And Evidence: The claims presented in the content are clear, and the experiments support the conclusions. However, despite their clarity, I have some concerns regarding the novelty of the proposed claims.
Methods And Evaluation Criteria: Yes
Theoretical Claims: I have reviewed the corresponding theory, but its exposition is somewhat rigid and lacks smooth transitions, which made my review more challenging.
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes. I have reviewed all the supplementary materials.
Relation To Broader Scientific Literature: I find the contribution to the field to be relatively modest, as the proposed method may not be applicable to more generalized datasets.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strengths:
1. The first dataset generation work based on physics-inspired methods.
2. Achieved better results compared to other methods.
Weaknesses:
1. The overall description of the paper is not very clear. For instance, how the motivation is derived from physics evidently requires more elaboration, as this is the most crucial part.
2. Since I specialize in the theory of diffusion models and am very familiar with Langevin dynamics, I believe that the physical approach in the paper merely applies Langevin dynamics to the given task. This significantly weakens the originality of the main motivation.
3. The loss design is not novel. For example, in Equations (15) and (16), the authors could review more papers on face models to find more effective loss function designs.
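To ground the Langevin-dynamics remark in weakness 2: the update in question is the standard discretized overdamped Langevin step, x ← x − η∇U(x) + √(2η)·ξ with ξ ~ N(0, 1). A minimal, self-contained sketch (the quadratic potential and all parameter values here are illustrative, not taken from the paper):

```python
import math
import random

def langevin_sample(grad_u, x0=0.0, step=0.05, n_steps=50000, burn_in=1000, seed=0):
    """Overdamped Langevin dynamics: x <- x - step * grad_U(x) + sqrt(2 * step) * N(0, 1).

    After burn-in, iterates are approximate samples from exp(-U(x)).
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for t in range(n_steps):
        x = x - step * grad_u(x) + math.sqrt(2.0 * step) * rng.gauss(0.0, 1.0)
        if t >= burn_in:
            samples.append(x)
    return samples

# Quadratic potential U(x) = x^2 / 2, so grad U(x) = x; the target is a standard Gaussian.
samples = langevin_sample(lambda x: x)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

For U(x) = x²/2 the stationary distribution is a standard Gaussian, which is why the empirical mean and variance land near 0 and 1; the paper's contribution, per the review, lies in the task-specific forces rather than in this sampler itself.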
Other Comments Or Suggestions: No rebuttal was provided for my concerns.
Questions For Authors: No rebuttal was given, so I lean toward rejection.
Ethical Review Flag: Flag this paper for an ethics review.
Ethics Expertise Needed: ['Discrimination / Bias / Fairness Concerns', 'Inappropriate Potential Applications & Impact (e.g., human rights concerns)']
Ethical Review Concerns: The generation of facial datasets typically requires scrutiny, as it involves the potential misuse of portrait rights.
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | null | null | null | null | null | null | null | null | |||
Scalable Attribute-Missing Graph Clustering via Neighborhood Differentiation | Accept (poster) | Summary: This paper presents a novel approach for deep graph clustering (DGC) in the presence of missing node attributes and large-scale graph structures, termed Complementary Multi-View Neighborhood Differentiation (CMV-ND). CMV-ND achieves this by pre-processing graph structural information into multiple views in a non-redundant manner. The authors introduce a Recursive Neighborhood Search (RNS) to explore the local structure of the graph across different hop distances and a Neighborhood Differential Strategy (NDS) to ensure non-overlapping node representations across different hops. The resulting multiple views are then fed into existing multi-view clustering or DGC methods. The paper demonstrates the effectiveness of CMV-ND through extensive experiments on six widely-used graph datasets, where it shows significant improvements over various baselines in terms of clustering performance.
Claims And Evidence: The main claim of the manuscript is that the key to effective large-scale deep graph clustering with missing attributes lies in the efficient utilization of graph structural information. This claim is intuitively supported, as for a graph, the available view information typically includes three aspects: node attributes, graph structure, and labels. In the scenario of attribute-missing clustering, the only available information is the graph structure, making this claim reasonable and intuitive. Furthermore, the experimental results, which show significant improvements in clustering performance compared to prior methods, provide robust empirical evidence to substantiate the claim.
The secondary claim is that existing message-passing paradigms for large-scale graphs suffer from redundancy and omission when utilizing graph structural information. The authors explain the redundancy issue in Section 3.3.4, and it is also pointed out that current large-scale graph clustering methods often involve sampling steps that disrupt the graph structure. This claim appears to be well-supported and valid.
Methods And Evaluation Criteria: The authors use ACC, NMI, ARI, and F1 scores to evaluate clustering performance, which are standard and commonly used metrics in the field. Similarly, the datasets Cora, Citeseer, Amazon-Photo, Reddit, ogbn-arXiv, and ogbn-products are appropriate benchmarks for assessing the proposed method's performance on large-scale graphs.
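For reference, the ACC metric mentioned here differs from plain classification accuracy in that cluster ids are unordered and must first be matched to ground-truth classes. A minimal stdlib sketch (brute-force permutation matching, suitable only for a small number of clusters; large-scale evaluations, and presumably the paper itself, use the Hungarian algorithm, with NMI/ARI/F1 taken from standard libraries):

```python
from itertools import permutations

def clustering_accuracy(y_true, y_pred):
    """Unsupervised clustering accuracy (ACC): the best agreement between
    predicted cluster labels and ground truth over all one-to-one mappings
    of cluster ids to class ids."""
    clusters = sorted(set(y_pred))
    classes = sorted(set(y_true))
    best = 0
    for perm in permutations(classes, len(clusters)):
        mapping = dict(zip(clusters, perm))
        hits = sum(mapping[p] == t for p, t in zip(y_pred, y_true))
        best = max(best, hits)
    return best / len(y_true)

# Predicted labels are a pure relabeling of the ground truth, so ACC is 1.0.
acc_perfect = clustering_accuracy([0, 0, 1, 1, 2, 2], [1, 1, 2, 2, 0, 0])
# One point is mis-clustered, so the best mapping recovers 3 of 4 points.
acc_partial = clustering_accuracy([0, 0, 1, 1], [0, 1, 1, 1])
```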
Theoretical Claims: I have reviewed the correctness of the theoretical claims and did not identify any apparent errors in the manuscript's theoretical aspects. Specifically, I have verified the key formulas in the methodology section, namely Eq. (1) through (8), and reviewed the complexity analysis in Section 3.4. Additionally, Appendix B provides an explanation of why the proposed method is applicable to multi-view clustering (MVC).
Experimental Designs Or Analyses: The experimental design follows the setup of the AMGC algorithm (published at AAAI 2024, titled "Attribute-Missing Graph Clustering Network"). The primary experiments are conducted with a 0.6 missing rate, which allows for a meaningful comparison with AMGC. Therefore, the experimental setup is reasonable. Additionally, the manuscript also reports results for a 0.9 missing rate, providing further insight into the performance of the proposed method under more challenging conditions.
Supplementary Material: I have reviewed all the supplementary material, which serves as a valuable complement to the main text. For example, Appendix D provides additional experimental results, which are directly related to Q4 and Q5 in Section 4. Appendix F includes PyTorch-style pseudocode, which helps in understanding the algorithmic details and facilitates further performance validation.
Relation To Broader Scientific Literature: The paper builds upon the work presented in the AAAI 2024 paper titled "Attribute-Missing Graph Clustering Network," which defines the problem of attribute-missing graph clustering. This manuscript extends the problem to large-scale graphs through a preprocessing approach. Furthermore, it bridges the fields of multi-view clustering and deep graph clustering, enabling the use of multi-view methods for graph data.
Essential References Not Discussed: To the best of our knowledge, the manuscript has provided a thorough discussion of the related work. No essential related works appear to be missing in the current version of the paper.
Other Strengths And Weaknesses: **Strength**
(1) The proposed method introduces a new paradigm for leveraging graph structure by preserving it across multiple views through search and differential techniques. This approach is not a combination of existing methods, but rather presents a novel strategy to address the challenges of large-scale graphs with missing attributes.
(2) The idea of leveraging multi-view clustering for graph data is a novel and interesting contribution. This perspective opens up new possibilities for graph clustering, particularly in the context of missing node attributes.
(3) The authors provide clear pseudocode, Overall workflow, and PyTorch-style code to illustrate the methodology presented in the paper.
**Weakness**
(1) The experimental results omit comparisons with some recent state-of-the-art methods in Deep Graph Clustering (DGC). It would be beneficial to include performance results for the following methods:
Liu, Y., Yang, X., Zhou, S., Liu, X., Wang, Z., Liang, K., ... & Chen, C. (2023, June). Hard sample aware network for contrastive deep graph clustering. In Proceedings of the AAAI conference on artificial intelligence (Vol. 37, No. 7, pp. 8914-8922).
(2) The experiment includes too few MVC methods, and the selection does not cover the most recent advancements. The claim in the paper that MVC methods are inferior to DGC methods in the CMV-ND paradigm appears overly simplistic. I recommend adding the following MVC methods to Table 2 for comparison and reconsidering the conclusions:
Wu, S., Zheng, Y., Ren, Y., He, J., Pu, X., Huang, S., ... & He, L. (2024). Self-Weighted Contrastive Fusion for Deep Multi-View Clustering. IEEE Transactions on Multimedia.
Cui, J., Li, Y., Huang, H., & Wen, J. (2024). Dual contrast-driven deep multi-view clustering. IEEE Transactions on Image Processing.
(3) The paper lacks performance evaluation of CMV-ND under different attribute missing rates. I suggest demonstrating CMV-ND's clustering performance across a range of missing rates (from 0.1 to 0.9) and comparing it to other DGC methods.
(4) The role of the priority queue in Algorithm 1 is not clearly explained, and there is a lack of sufficient explanation about its purpose and function within the algorithm.
Other Comments Or Suggestions: (1) The experimental section primarily provides qualitative descriptions of the results, without presenting a detailed quantitative analysis. While I understand that this may be due to page limitations in the initial submission, I recommend adding key quantitative metrics in the final version. For example, the paper could include performance improvements, such as the percentage gain over AMGC on the Cora dataset, to give readers a clearer sense of the method's effectiveness.
(2) The concept of "differential hop" is mentioned in both the introduction and abstract but is formally defined only in Section 3.1. To avoid potential confusion for readers, I suggest revising the paper to either introduce the concept earlier or make sure the definition is more prominent and clearly connected to its initial mention in the introduction and abstract.
Questions For Authors: Q1: The experimental results omit comparisons with some recent state-of-the-art methods in Deep Graph Clustering (DGC). Would it be possible to include performance results for the following methods?
Liu, Y., Yang, X., Zhou, S., Liu, X., Wang, Z., Liang, K., ... & Chen, C. (2023, June). Hard sample aware network for contrastive deep graph clustering. In Proceedings of the AAAI conference on artificial intelligence (Vol. 37, No. 7, pp. 8914-8922).
Q2: The experiment includes too few MVC methods, and the selection does not cover the most recent advancements. The paper claims that MVC methods are inferior to DGC methods in the CMV-ND paradigm. Would it be possible to include the following MVC methods in Table 2 for comparison and reconsider the conclusions?
Wu, S., Zheng, Y., Ren, Y., He, J., Pu, X., Huang, S., ... & He, L. (2024). Self-Weighted Contrastive Fusion for Deep Multi-View Clustering. IEEE Transactions on Multimedia.
Cui, J., Li, Y., Huang, H., & Wen, J. (2024). Dual contrast-driven deep multi-view clustering. IEEE Transactions on Image Processing.
Q3: The paper lacks performance evaluation of CMV-ND under different attribute missing rates. Would it be possible to demonstrate CMV-ND's clustering performance across a range of missing rates (from 0.1 to 0.9) and compare it to other DGC methods?
Q4: Could you clarify the role of the priority queue in Algorithm 1? A more detailed explanation of how it contributes to the overall algorithm would help improve the understanding of the method.
Q5: The experimental section primarily provides qualitative descriptions of the results, without presenting detailed quantitative analysis. While I understand that this might be due to initial submission page limitations, would it be possible to include key quantitative metrics in the final version? For example, could you report performance improvements, such as the percentage gain over AMGC on the Cora dataset, to give readers a clearer sense of the method’s effectiveness?
Q6: The concept of "differential hop" is mentioned in both the introduction and abstract but is formally defined only in Section 3.1. To avoid potential confusion for readers, would it be possible to introduce this concept earlier in the paper, or ensure that the definition is more prominently connected to its initial mention in the introduction and abstract?
Q7: Would it be possible to release the multi-view version of the graph datasets constructed by CMV-ND? This would be of significant value to the MVC community and could facilitate further research and comparison across different methods.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## Response to Reviewer 69Ph
We thank the reviewer for the careful reading and constructive feedback. Below, we address each concern in detail.
---
**W1:** *The experimental results omit comparisons with some recent state-of-the-art methods in Deep Graph Clustering (DGC). It would be beneficial to include performance results for the following methods.*
We appreciate the suggestion to include additional recent DGC methods. Following your recommendation, we have added **HSAN** (Hard Sample Aware Network, AAAI 2023) as an additional baseline in our experiments. We report its performance with and without CMV-ND preprocessing on small-scale datasets. For large-scale datasets (Reddit and ogbn-products), HSAN encounters OOM (Out-Of-Memory) errors both before and after applying CMV-ND, due to its intrinsic memory consumption.
The results on Cora and Citeseer are summarized below:
| Dataset | Metric | HSAN (Original) | HSAN + CMV-ND |
|:---------:|:-----:|:------------------:|:------------------:|
| **Cora** | ACC | 57.86 ± 1.34 | 65.42 ± 1.09 |
| | NMI | 41.92 ± 1.31 | 51.18 ± 1.22 |
| | ARI | 33.09 ± 1.71 | 43.74 ± 1.38 |
| | F1 | 58.65 ± 0.95 | 66.12 ± 1.07 |
| **Citeseer** | ACC | 44.35 ± 0.61 | 50.38 ± 1.02 |
| | NMI | 22.17 ± 1.32 | 27.25 ± 1.29 |
| | ARI | 13.26 ± 1.02 | 17.94 ± 1.21 |
| | F1 | 42.06 ± 2.41 | 49.15 ± 2.04 |
---
**W2:** *Limited MVC baselines and overly general claim.*
We agree that the MVC baselines in the current version can be further expanded. Following your recommendation, we have additionally included two recently published state-of-the-art MVC methods:
- **SCMVC** (Self-Weighted Contrastive Fusion for Deep Multi-View Clustering, TMM 2024)
- **DCMVC** (Dual Contrast-Driven Deep Multi-View Clustering, TIP 2024)
We evaluated these methods under the same CMV-ND paradigm with a missing rate of 0.6. The updated experimental results are summarized below:
|Dataset|Metric|SCMVC|DCMVC|
|:-:|:-:|:-:|:-:|
|**Cora**|ACC|61.74±1.12|60.29±1.24|
||NMI|45.62±1.48|44.13±1.57|
||ARI|43.82±1.26|41.06±1.34|
||F1|58.34±1.35|57.01±1.49|
|**CiteSeer**|ACC|57.21±1.33|55.96±1.46|
||NMI|37.35±1.29|35.71±1.53|
||ARI|32.84±1.48|31.56±1.65|
||F1|52.68±1.44|51.12±1.52|
|**Reddit**|ACC|63.27±1.14|62.03±1.25|
||NMI|61.35±1.03|60.41±1.14|
||ARI|54.29±1.25|53.11±1.31|
||F1|61.47±1.19|60.82±1.27|
|**ogbn-products**|ACC|27.84±0.97|27.13±1.04|
||NMI|35.24±0.85|34.12±0.94|
||ARI|18.03±0.93|17.42±1.05|
||F1|22.84±0.89|21.95±0.92|
We will revise the manuscript to include these results and will accordingly moderate the original claim about the superiority of DGC methods.
---
**W3:** *Missing evaluation under varying attribute missing rates.*
We agree with this suggestion and have conducted additional experiments by varying the attribute missing rate from 0.1 to 0.9. Instead of reporting nine separate tables, we summarize these results using line charts to clearly illustrate the performance trends. These plots will be included in the final version.
---
**W4:** *The role of the priority queue in Algorithm 1 is not clearly explained, and there is a lack of sufficient explanation about its purpose and function within the algorithm.*
We clarify that the priority queue in Algorithm 1 serves as a control mechanism for incrementally expanding the neighborhood of a target node in order of increasing graph distance. We will revise the text in Algorithm 1 and its accompanying explanation to more clearly articulate this role and improve overall readability.
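A minimal sketch of the mechanism described above (illustrative only; names and details are not taken from Algorithm 1): a priority queue keyed on graph distance pops nodes in order of increasing hops, and grouping each node by the hop at which it is first reached yields disjoint per-hop sets, consistent with the differential-view idea.

```python
import heapq
from collections import defaultdict

def differential_hops(adj, source, max_hop):
    """Expand the neighborhood of `source` in order of increasing graph
    distance. Because each node is recorded only at its first (shortest)
    distance, the returned per-hop sets are pairwise disjoint."""
    dist = {source: 0}
    heap = [(0, source)]          # priority queue of (distance, node)
    hops = defaultdict(set)
    while heap:
        d, u = heapq.heappop(heap)
        if d >= max_hop:          # no need to expand beyond the last hop
            continue
        for v in adj[u]:
            if v not in dist:     # first visit == shortest hop distance
                dist[v] = d + 1
                hops[d + 1].add(v)
                heapq.heappush(heap, (d + 1, v))
    return dict(hops)

# A small path graph 0-1-2-3 with an extra edge 0-4.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2], 4: [0]}
hops = differential_hops(adj, source=0, max_hop=3)
```

On unweighted graphs this reduces to breadth-first search, but the priority queue makes the increasing-distance expansion order explicit.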
---
**C1:** *Lack of quantitative analysis in experiments.*
We agree with your suggestion and will include key quantitative metrics in the final version. For example, we will report the relative performance improvement of CMV-ND over AMGC on Cora and other datasets to provide a clearer picture of the effectiveness of our method.
---
**C2:** *Delayed definition of "differential hop."*
We agree that the concept of "differential hop" plays a central role in our method and that improving its visibility can help readers better follow the paper. We will make the formal definition in Section 3.1 more prominent by explicitly referencing it when "differential hop" is first introduced, ensuring a smoother connection between the introductory mentions and the formal exposition.
---
**Q1–Q6:** *Addressed above*
These questions correspond to the concerns in **W1–W4** and **C1–C2**, which we have addressed above with additional experiments and clarifications.
---
**Q7:** *Release of CMV-ND processed datasets.*
We will make these processed datasets publicly available along with our source code in the final release.
---
Rebuttal Comment 1.1:
Comment: I appreciate the careful responses, which have addressed my previous concerns. I'd like to maintain my rating and recommend accepting this paper. | Summary: This paper proposes a method called Complementary Multi-View Neighborhood Differentiation (CMV-ND) to address deep graph clustering (DGC) on large-scale graphs with missing node attributes. CMV-ND captures multi-hop local structures using a Recursive Neighborhood Search (RNS) and eliminates redundancy with a Neighborhood Differential Strategy (NDS), generating K+1 complementary views for each node. The key contributions are: (1) bypassing the "aggregate-encode-predict" paradigm of GNNs by directly storing differential neighborhood information; (2) encoding graph structure in a non-redundant multi-view format to mitigate the effects of attribute missingness; and (3) offering a flexible framework for existing graph or multi-view clustering methods. Experimental results on six benchmark datasets demonstrate improvements, especially in large-scale graphs.
Claims And Evidence: The manuscript claims that effective large-scale deep graph clustering with missing attributes relies on leveraging graph structure, as it is the only available information in such scenarios. The authors support this claim through strategies like differential hops, which address key challenges in attribute-missing graph clustering. Empirical results show notable performance improvements over existing methods. Additionally, the paper asserts the method’s scalability, supported by complexity analysis (Section 3.4) and evaluations of time and memory consumption (Section 4.4).
Methods And Evaluation Criteria: The evaluation framework adopted in this paper aligns well with standard practices in deep graph clustering research. The authors employ ACC, NMI, ARI, and F1 scores, which are widely recognized and appropriate metrics for clustering tasks, ensuring comparability with prior work. In terms of benchmark datasets, the selection includes Cora, Citeseer, Amazon-Photo, Reddit, ogbn-arXiv, and ogbn-products, covering both small-scale and large-scale graphs.
Theoretical Claims: I have reviewed the theoretical aspects of the manuscript and found no apparent issues. The key equations are logically consistent with the proposed framework, and the complexity analysis in Section 3.4 provides a reasonable estimate of the computational demands.
Experimental Designs Or Analyses: The experimental design is based on a well-established setup, following the "Attribute-Missing Graph Clustering Network" (AAAI 2024), which ensures that the results are comparable with previous work.
Supplementary Material: Yes, I have reviewed all the supplementary material, which includes some experimental results that are essential for the manuscript, and the core PyTorch-style code.
Relation To Broader Scientific Literature: The paper aims to extend the "Attribute-Missing Graph Clustering Network" (AAAI 24) problem, as defined in previous work, to large-scale graphs. While the motivation centers on addressing the challenge of deep graph clustering under attribute-missing conditions, the proposed methodology appears to have broader applicability. Specifically, the approach offers a novel utilization of graph structure, which can be viewed as an alternative to existing message-passing paradigms in graph clustering.
Essential References Not Discussed: No essential related works are missing in the current manuscript.
Other Strengths And Weaknesses: Strength
(1) The motivation behind the paper, which addresses the challenge of large-scale deep graph clustering with missing attributes, is clearly articulated. The proposed solution effectively addresses the challenges of large-scale deep graph clustering with missing attributes by enhancing the utilization of graph structural information, which aligns well with the motivation behind the paper.
(2) The proposed method introduces a novel paradigm for utilizing graph structure, which differs from conventional message-passing paradigms.
(3) The approach presented in the paper enables the use of multi-view clustering for graph data, effectively bridging the gap between multi-view clustering and graph clustering.
Weakness
(1) By treating graph data as two views—attribute view and structural view—it is natural to frame the graph clustering problem as a multi-view clustering problem. Therefore, the experiments should include comparisons between this two-view setup and the multi-view setup of CMV-ND in terms of MVC methods.
(2) There seems to be an error in the citation for AMGC. The correct reference should be:
Tu, W., Guan, R., Zhou, S., Ma, C., Peng, X., Cai, Z., ... & Liu, X. (2024, March). Attribute-missing graph clustering network. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 14, pp. 15392-15401).
(3) It would be beneficial to explicitly clarify that AMGC represents the state-of-the-art for attribute-missing graph clustering, while Dink-Net is the prior state-of-the-art for large-scale graph clustering. This distinction would offer a clearer context for evaluating the contributions of the paper.
Other Comments Or Suggestions: There are several typographical and formatting inconsistencies in the manuscript that should be addressed. In Figure 1, the font size of the symbol v is too small and may hinder readability. Additionally, there is an unnecessary period at the end of line 326, while the caption for Table 2 is missing a period. Lastly, in line 243, two different styles of the O notation are used for the time complexity of RNS, which should be made consistent.
Questions For Authors: Q1: The writing in Section 3.3.4 is somewhat unclear. The authors seem to argue that the "aggregate-encode-predict" paradigm introduces redundancy in utilizing graph structure, whereas the proposed CMV-ND method does not. However, the term "Graph Propagation" has not been mentioned earlier in the paper. Would it be more appropriate to use "message-passing paradigm" instead for consistency and clarity?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: ## Response to Reviewer WHi9
We thank the reviewer for the thoughtful comments and helpful suggestions. Below, we address each point in detail.
---
**W1:** *By treating graph data as two views—attribute view and structural view—it is natural to frame the graph clustering problem as a multi-view clustering problem. Therefore, the experiments should include comparisons between this two-view setup and the multi-view setup of CMV-ND in terms of MVC methods.*
We agree that comparing the traditional two-view setup (attribute view + structural view) with the multi-view setup generated by CMV-ND can provide valuable insights. To this end, we have conducted an additional experiment in which we construct two views: (1) the original attribute (with missing entries), and (2) a structural view based on the adjacency matrix. These are then input into standard MVC methods such as MFLVC and DIMVC. We compare the results against the same methods using the $k{+}1$ views generated by CMV-ND. These comparisons will be included in the final version.
---
**W2:** *There seems to be an error in the citation for AMGC. The correct reference should be: Tu, W., Guan, R., Zhou, S., Ma, C., Peng, X., Cai, Z., ... & Liu, X. (2024, March). Attribute-missing graph clustering network. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 14, pp. 15392-15401).*
We apologize for the incorrect citation. We have corrected it to:
Tu, W., Guan, R., Zhou, S., Ma, C., Peng, X., Cai, Z., ... & Liu, X. (2024). *Attribute-missing graph clustering network*. In AAAI (Vol. 38, No. 14, pp. 15392–15401).
We have also rechecked the references throughout the manuscript to ensure accuracy in the final version.
---
**W3:** *It would be beneficial to explicitly clarify that AMGC represents the state-of-the-art for attribute-missing graph clustering, while Dink-Net is the prior state-of-the-art for large-scale graph clustering. This distinction would offer a clearer context for evaluating the contributions of the paper.*
We agree with the suggestion. In the revised version, we will explicitly state that **AMGC** represents the state-of-the-art for **attribute-missing graph clustering**, while **Dink-Net** is a recent state-of-the-art method for **large-scale graph clustering**. This distinction will help contextualize our contributions more clearly.
---
**C1:** *There are several typographical and formatting inconsistencies in the manuscript that should be addressed. In Figure 1, the font size of the symbol v is too small and may hinder readability. Additionally, there is an unnecessary period at the end of line 326, while the caption for Table 2 is missing a period. Lastly, in line 243, two different styles of the O notation are used for the time complexity of RNS, which should be made consistent.*
Thank you for pointing out the typographical and formatting issues. We have carefully reviewed the manuscript and addressed the specific items you mentioned:
- In **Figure 1**, we have increased the font size of the node label $v$ to improve readability and ensure consistency with other text elements in the figure.
- The **unnecessary period at the end of line 326** has been removed.
- The **missing period in the caption of Table 2** has been added to maintain punctuation consistency across all table and figure captions.
- For the **time complexity notation in line 243**, we had previously used both $\mathcal{O}(\cdot)$ and $\mathbf{O(\cdot)}$ styles. We have revised all instances to consistently use the standard O notation $\mathcal{O}(\cdot)$ throughout the manuscript.
In addition to correcting these specific issues, we will perform a thorough proofreading pass to eliminate any remaining inconsistencies or formatting errors in the final version.
---
**Q1:** *The writing in Section 3.3.4 is somewhat unclear. The authors seem to argue that the "aggregate-encode-predict" paradigm introduces redundancy in utilizing graph structure, whereas the proposed CMV-ND method does not. However, the term "Graph Propagation" has not been mentioned earlier in the paper. Would it be more appropriate to use "message-passing paradigm" instead for consistency and clarity?*
We agree that “message-passing paradigm” is more accurate and consistent than “graph propagation.” We will revise Section 3.3.4 accordingly to use the standard term and rephrase the paragraph for improved clarity.
---
We greatly appreciate your careful review and insightful comments. They have been invaluable in helping us refine the presentation and deepen the discussion of our contributions. We will incorporate all necessary revisions in the final version, and we remain open to any further suggestions you may have.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response—it has resolved my concerns. I would prefer to support the acceptance of this paper.

---

Summary: The paper addresses the challenge of clustering nodes in large-scale graphs that often suffer from missing attributes, a common scenario in real-world applications such as social networks and recommendation systems. To tackle this, the authors propose the Complementary Multi-View Neighborhood Differentiation (CMV-ND) paradigm. The key components of CMV-ND are the Recursive Neighborhood Search (RNS) and the Neighborhood Differential Strategy (NDS). By combining the original node features with the aggregated representations from each differential hop, the method constructs a multi-view representation. These multi-view representations can then be seamlessly integrated with existing deep graph clustering (DGC) or multi-view clustering (MVC) methods. Experimental results on six widely used graph datasets demonstrate that CMV-ND significantly enhances clustering performance.
## update after rebuttal
After the rebuttal, I will keep my original rating, mainly due to the novelty concerns.
Claims And Evidence: Overall, many of the submission’s claims are supported by extensive empirical results on multiple datasets. For example, the claims about improved clustering performance on attribute‐missing graphs and scalability are backed by comprehensive experiments, including performance tables, T-SNE visualizations, and time/memory usage data.
Methods And Evaluation Criteria: The methods and evaluation criteria are well-aligned with the challenges of clustering on large-scale, attribute-missing graphs. The use of standard clustering metrics (Accuracy, NMI, ARI, F1) along with datasets provides a robust framework for evaluation.
Theoretical Claims: The paper does not include formal proofs for its theoretical claims. In essence, the experimental results are presented without corresponding rigorous theoretical proofs that would further substantiate the claimed benefits of CMV-ND.
Experimental Designs Or Analyses: The experimental design is sound and well-aligned with the problem, but there are a few aspects that warrant further discussion:
- Comparing alternative feature preprocessing methods such as node2vec and GraphSage would help assess whether the improvements are inherent to the CMV-ND paradigm.
- Incorporating more baselines, especially those specifically designed for attribute-missing graphs, would provide a more comprehensive evaluation of the method’s performance in realistic settings.
Supplementary Material: Yes, I read appendix B and D.
Relation To Broader Scientific Literature: - It extends deep graph clustering (DGC) research, which includes methods like DGI, MVGRL, and Dink-Net, by addressing two critical challenges simultaneously: scaling to large graphs and handling missing node attributes. Prior work has typically tackled these issues in isolation, so combining them fills a notable gap in the literature.
- Since CMV-ND constructs multi-view representations of nodes within the graph, it naturally bridges the gap between graph clustering and Multi-View Clustering (MVC).
Essential References Not Discussed: The paper cites an extensive set of related works spanning deep graph clustering, attribute-missing graph clustering, and scalable graph learning.
Other Strengths And Weaknesses: **Strengths**
1. The paper presents a well-motivated approach for clustering large-scale, attribute-missing graphs.
2. The experimental evaluation is extensive, covering multiple datasets and metrics.
**Weaknesses**
1. While the experiments are thorough, the paper would benefit from additional comparisons with related works, particularly:
Chen, X., Chen, S., Yao, J., Zheng, H., Zhang, Y., and Tsang, I. W. Learning on attribute-missing graphs. IEEE transactions on pattern analysis and machine intelligence, 44(2):740–757, 2020.
Tu, W., Zhou, S., Liu, X., Liu, Y., Cai, Z., Zhu, E., Zhang, C., and Cheng, J. Initializing then refining: A simple graph attribute imputation network. In IJCAI, pp. 3494–3500, 2022.
Yoo, J., Jeon, H., Jung, J., and Kang, U. Accurate node feature estimation with structured variational graph autoencoder. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 2336–2346, 2022.
2. CMV-ND shares similarities with GraphSAGE in its neighborhood aggregation approach, which raises concerns about novelty. Conducting experiments where GraphSAGE is used for feature propagation could help clarify CMV-ND’s unique advantages.
3. Should Equation (1) be revised to $N_{i+1}(v) = N_i(v) \cup \left( \bigcup_{u \in N_i(v)} N(u) \right)$? Additionally, the notation $\mathcal{N}^i(a)$ appears on line 246 but is missing from Equation (4), which might indicate an inconsistency or typo.
4. CMV-ND treats each hop’s neighborhood embeddings separately, effectively severing connections between different hop levels. This assumes all hop-distance information is equally important, which may not always be the case. Introducing an attention mechanism could help weigh the contributions of different hop distances more adaptively.
5. The memory complexity analysis in Algorithm 1 is given per node. However, for the entire graph, the worst-case complexity is $O(n^2)$, which is infeasible for large-scale graphs.
6. Tables 1, 6, 7, and 8 are difficult to read due to their dense formatting. Improved formatting, such as clearer separations between methods and datasets, would make comparisons more intuitive.
7. Despite the paper’s title emphasizing scalability, the experimental results do not convincingly demonstrate scalability. Some deep graph clustering (DGC) methods still encounter out-of-memory (OOM) errors after applying CMV-ND, as seen in Table 1.
8. The T-SNE visualization in Figure 2 does not clearly demonstrate CMV-ND’s effectiveness.
Other Comments Or Suggestions: None
Questions For Authors: See Other Strengths And Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 2

---

Rebuttal 1:
Rebuttal: ## Response to Reviewer 7mCA
We thank the reviewer for the thoughtful comments and constructive suggestions. Below, we address each concern raised.
---
**W1:** *Lack of comparison with SAT, ITR, and SVGA.*
We have conducted additional experiments comparing CMV-ND with three representative methods for attribute-missing graphs: SAT (TPAMI 2020), ITR (IJCAI 2022), and SVGA (KDD 2022). For the reviewer’s convenience, we temporarily provide the results via the following anonymous link:
(https://anonymous.4open.science/r/icml2025_CMV-ND-2211/sat_itr_svga_results.md)
---
**W2:** *Similarity to GraphSAGE and novelty concerns.*
We respectfully clarify the fundamental differences between CMV-ND and GraphSAGE:
- GraphSAGE is a parameterized, end-to-end GNN that samples neighbors at each hop to reduce cost. CMV-ND deterministically retrieves the complete differential-hop neighborhoods without sampling.
- GraphSAGE requires training with learnable aggregation. CMV-ND is a non-parametric, training-free preprocessing strategy.
- GraphSAGE fuses multi-hop signals into a single embedding. CMV-ND preserves non-overlapping differential-hop views for downstream clustering.
For fair comparison, we implemented a preprocessing variant of GraphSAGE that averages sampled neighbors at each hop, keeping the rest identical to CMV-ND. We also included Node2Vec as a baseline. Results show that CMV-ND consistently outperforms both under attribute-missing settings. For the reviewer’s convenience, we provide these results in
(https://anonymous.4open.science/r/icml2025_CMV-ND-2211/graphsage_node2vec_comparison.md)
---
**W3:** *Notation inconsistency in equations.*
We have carefully reviewed the notations and corrected the inconsistencies. Specifically:
- Equation (1) has been revised to:
$N_{i+1}(v) = N_i(v) \cup \left( \bigcup_{u \in N_i(v)} N(u) \right)$
- The redundant use of the symbol $a$ in line 246 has been removed.
- Equations (6) and (7) have been revised to:
- $S_v^{(k)} = S_v^{(k-1)} + \sum_{u \in \mathcal{N}(v)} S_u^{(k-1)}$
- $S_v^{(k)} = S_v^{(0)} + \sum_{t=1}^{k} \sum_{u \in \mathcal{N}(v)} S_u^{(t-1)}$
- The expression in line 243 has been revised to:
$\sum_{i=0}^{k} \Delta^i = 1 + \Delta + \Delta^2 + \dots + \Delta^k = \mathcal{O}(\Delta^k)$
We confirm that Equation (4) is correct and does not require modification.
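To illustrate the corrected recursion in Equations (6) and (7) concretely: in matrix form it reads $S^{(k)} = (I + A)\,S^{(k-1)}$, with $A$ the unweighted adjacency matrix. The sketch below is an illustrative toy (the `propagate` helper and adjacency-dict format are for exposition only, not our actual implementation):

```python
import numpy as np

def propagate(adj, X, k):
    """Iterate S_v^(t) = S_v^(t-1) + sum_{u in N(v)} S_u^(t-1),
    i.e. S^(t) = (I + A) S^(t-1), starting from S^(0) = X."""
    n = len(X)
    A = np.zeros((n, n))
    for v, nbrs in adj.items():
        for u in nbrs:
            A[v, u] = 1.0
    S = np.asarray(X, dtype=float)
    for _ in range(k):
        S = S + A @ S
    return S

# Two connected nodes with scalar features 1 and 2: after one step
# each node adds its neighbor's feature, giving [[3.], [3.]].
adj = {0: [1], 1: [0]}
print(propagate(adj, [[1.0], [2.0]], 1))
```

Unrolling the loop reproduces the closed form in Equation (7): $S_v^{(k)} = S_v^{(0)} + \sum_{t=1}^{k} \sum_{u \in \mathcal{N}(v)} S_u^{(t-1)}$.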
---
**W4:** *Lack of attention mechanism to weigh hop-level importance.*
CMV-ND intentionally avoids attention mechanisms to preserve its non-parametric and training-free nature. Introducing attention would require trainable components, contrary to CMV-ND’s design goal. Moreover, early fusion across hop-level representations would obscure structural diversity. Instead, CMV-ND leaves view-level weighting to downstream clustering models, following standard practice in multi-view clustering. In future work, we plan to develop a dedicated clustering model with view-level attention to adaptively fuse differential-hop representations.
---
**W5:** *Memory complexity concern.*
We respectfully clarify that the memory complexity in Algorithm 1 does not accumulate across all nodes in practice. CMV-ND processes nodes in mini-batches, and memory is released after each batch. Section 4.4 further reports empirical memory usage, showing linear scalability with the number of nodes. We also note that Reviewers 69Ph and WHi9 have confirmed the reasonableness of the memory complexity.
---
**W6:** *Dense formatting of tables.*
We acknowledge the readability issue and will improve the formatting of Tables 1, 6, 7, and 8 in the final version by adding clearer separations between methods and datasets.
---
**W7:** *Scalability claim not convincingly demonstrated.*
The scalability emphasized in our work refers to CMV-ND itself. The OOM issues in Table 1 stem from downstream DGC models, not from CMV-ND. This is not unique to our method—Node2Vec, despite its title claiming scalability (“node2vec: Scalable Feature Learning for Networks”), also causes OOM when paired with AMGC. Thus, the scalability of preprocessing should be evaluated independently of downstream model limitations.
---
**W8:** *Ineffectiveness of t-SNE visualization.*
We agree that the t-SNE visualization in Figure 2 does not clearly demonstrate CMV-ND’s effectiveness, primarily because the small number of nodes in Cora makes cluster structures less distinguishable in low-dimensional projections. To address this, we have replaced the visualization with results on the larger Co-CS dataset. In the updated figure, CMV-ND yields clearer cluster boundaries: the orange cluster is no longer split into two parts, and the purple cluster contains fewer intrusions from other classes. For convenience, we provide the updated visualization at (https://anonymous.4open.science/r/icml2025_CMV-ND-2211/tsne_cocs_visualization.png).

---

Summary: This paper presents a deep clustering method, namely Complementary Multi-View Neighborhood Differentiation (CMV-ND), to conduct clustering tasks in large-scale and attribute-missing graphs. CMV-ND adopts the Recursive Neighborhood Search to capture the complete local structure and the Neighborhood Differential Strategy to prevent redundancy among different hop representations. These strategies can be readily integrated into existing clustering approaches to learn representations for various downstream tasks. Experimental results may validate the effectiveness of the proposed CMV-ND.
## Update after rebuttal
The authors have addressed most of my concerns. However, issues regarding the method design and unstable performance remain after the authors’ rebuttal.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: NIL.
Experimental Designs Or Analyses: Yes.
Supplementary Material: NIL.
Relation To Broader Scientific Literature: The approach proposed in this paper may potentially advance large-scale graph clustering, which is an important topic in machine learning and data mining.
Essential References Not Discussed: The authors are suggested to discuss works related to structure search or learning in graph neural networks/deep graph clustering to better show the novelty of the proposed method.
Other Strengths And Weaknesses: Strengths:
1. The problem tackled in this paper is essential for deep graph clustering.
2. The method proposed in this paper is effective in standard graph clustering tasks.
Weaknesses:
1. Some definitions (e.g., Definitions 1 and 2) are not well explained, which makes this paper not very readable.
2. How the proposed strategies, i.e., Recursive Neighborhood Search and the Neighborhood Differential Strategy may contribute/connect to multi-view graph clustering is not clearly discussed in the paper.
3. To what extent the proposed approach can reduce the redundancy of the neighboring aggregation process is not analyzed.
4. The proposed approach is also similar to GNNs based on (PageRank), which aggregates different orders/hops neighbors to learn representations. How is the proposed approach different from these GNNs?
5. Do authors consider cross-view redundancy, consistency, or conflicts when constructing the output representations for downstream tasks?
6. The detailed experimental settings are not introduced in the manuscript/appendix.
Other Comments Or Suggestions: NIL.
Questions For Authors: 1. Some definitions (e.g., Definitions 1 and 2) are not well explained. Can authors use a clear example to explain these definitions?
2. How do the proposed strategies, i.e., Recursive Neighborhood Search and the Neighborhood Differential Strategy, contribute/connect to multi-view graph clustering?
3. Have the authors conducted any theoretical analysis showing to what extent the proposed approach can reduce the redundancy of the neighboring aggregation process?
4. The proposed approach is also similar to GNNs based on PageRank, which aggregate different orders/hops neighbors to learn representations. How is the proposed approach different from these GNNs?
5. Do authors consider cross-view redundancy, consistency, or conflicts when constructing the output representations for downstream tasks?
6. How are those clustering approaches configured in the experiments?
Ethical Review Concerns: NIL.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---

Rebuttal 1:
Rebuttal: ## Response to Reviewer CDeb
We thank the reviewer for the careful reading and valuable feedback. Below, we address each concern raised.
---
**W1:** *Lack of discussion on structure learning/search methods.*
We have considered structure learning and structure search methods, such as SUBLIME (WWW 2022), NodeFormer (NeurIPS 2022), and VIB-GSL (AAAI 2022). However, these methods rely heavily on complete node attributes to guide structure refinement or similarity estimation, making them inapplicable to the attribute-missing scenario targeted by our work. We will clarify this point and include a discussion of these works in the related work section of the final version.
---
**W2:** *Definitions 1 and 2 are unclear; need example.*
We have revised *Definitions 1 and 2* to improve clarity. Specifically, we now explain that the $k$-hop neighborhood $\mathcal{N}^k(v)$ includes all nodes within distance $k$, while the $k$-differential hop neighborhood $\mathcal{D}^k(v)$ includes nodes at exactly distance $k$. Additionally, we provide a concrete example to illustrate the difference between these definitions.
> “For example, consider a graph with edges $\{(v,a), (v,b), (a,c)\}$. Then: $ \mathcal{N}^1(v) = \{v, a, b\}, \quad \mathcal{N}^2(v) = \{v, a, b, c\}$. The corresponding differential hop neighborhoods are: $\mathcal{D}^1(v) = \{a, b\}, \quad \mathcal{D}^2(v) = \{c\}.$”
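To make the two definitions concrete in code, a minimal BFS sketch (illustrative only, not our implementation; the `hop_neighborhoods` helper is ours) reproduces the example above:

```python
from collections import deque

def hop_neighborhoods(adj, v, k):
    """BFS from v: returns (N_k, D), where N_k is the k-hop neighborhood
    (all nodes within distance k, including v) and D[i] is the
    i-differential-hop neighborhood (nodes at exactly distance i)."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == k:
            continue  # do not expand beyond distance k
        for w in adj.get(u, ()):
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    D = {i: {u for u, d in dist.items() if d == i} for i in range(1, k + 1)}
    return set(dist), D

# The example graph from above: edges {(v, a), (v, b), (a, c)}.
adj = {"v": ["a", "b"], "a": ["v", "c"], "b": ["v"], "c": ["a"]}
N2, D = hop_neighborhoods(adj, "v", 2)
# N2 == {"v", "a", "b", "c"}; D[1] == {"a", "b"}; D[2] == {"c"}
```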
---
**W3:** *Insufficient discussion on how RNS and NDS contribute to multi-view clustering.*
This connection is discussed in **Section 3.3.3** and **Appendix B**. We will make this connection more explicit in Section 3.3.3 by adding the following clarification:
> “To enable graph clustering, we propose to construct multi-view node representations based on the structural granularity of neighborhoods. Specifically, the RNS is used to efficiently locate multi-hop neighbors, while the NDS allows us to isolate information from each exact $k$-hop, thus forming multi-view node representations.”
---
**W4:** *Lack of analysis on redundancy reduction.*
We have conducted a theoretical analysis to quantify the redundancy reduction of CMV-ND compared to message-passing GNNs. In a $k$-layer GNN, the total number of neighbor feature accesses is $\sum_{i=1}^k (k - i + 1) \cdot \Delta^i$, where $\Delta$ is the average node degree. In contrast, CMV-ND accesses each differential hop neighborhood only once: $\sum_{i=1}^k \Delta^i$. The redundancy ratio is therefore $\frac{\sum_{i=1}^k (k - i + 1) \cdot \Delta^i}{\sum_{i=1}^k \Delta^i}$. This ratio grows with the number of hops $k$, and the absolute number of redundant accesses grows rapidly with $\Delta$, highlighting that message-passing GNNs involve significant redundancy, while CMV-ND avoids it by design. We will include this analysis in the final version.
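As a quick numerical illustration of this ratio (the values of $\Delta$ and $k$ below are illustrative, and the helper names are ours):

```python
def gnn_accesses(delta, k):
    # A k-layer message-passing GNN re-propagates hop-i features,
    # so each hop-i neighborhood is accessed (k - i + 1) times.
    return sum((k - i + 1) * delta**i for i in range(1, k + 1))

def cmvnd_accesses(delta, k):
    # CMV-ND touches each differential-hop neighborhood exactly once.
    return sum(delta**i for i in range(1, k + 1))

for k in (2, 4, 6):
    ratio = gnn_accesses(3, k) / cmvnd_accesses(3, k)
    print(f"delta=3, k={k}: redundancy ratio = {ratio:.2f}")
    # ratio = 1.25, 1.45, 1.49 for k = 2, 4, 6
```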
---
**W5:** *Similarity to PageRank-based GNNs.*
We clarify that CMV-ND differs from these methods in several key aspects:
- PageRank-based GNNs fuse multi-hop information into a single representation via propagation, while CMV-ND retains non-overlapping differential-hop representations as distinct views.
- PageRank-based GNNs assign decaying weights to distant neighbors, leading to incomplete structural coverage. In contrast, CMV-ND deterministically collects the complete, non-redundant differential-hop neighborhoods.
- PageRank-based GNNs require end-to-end training. CMV-ND is a non-parametric, training-free preprocessing strategy.
The use of multi-hop information is a common practice in GNNs (e.g., JK-Net aggregates features from multiple hop levels) and does not, by itself, imply novelty concerns.
---
**W6:** *Whether cross-view redundancy, consistency, or conflicts are considered.*
We would like to clarify that we have discussed the consideration of cross-view redundancy and consistency in **Appendix B**. Specifically, the NDS ensures that each $k$-differential hop neighborhood is non-overlapping, inherently avoiding redundancy across views. Consistency is supported under the homophily assumption, where nearby nodes exhibit similar representations across views. Complementarity arises from the structural granularity, as each hop-level view captures distinct topological information. Furthermore, CMV-ND retains multi-view representations and delegates the task of weighting or combining views to downstream clustering models, which can adaptively handle potential conflicts.
---
**W7:** *Lack of detailed experimental settings.*
Section 4.1 of the manuscript introduces the experimental setup, including datasets, metrics, environment, and baselines. Since CMV-ND is non-parametric and training-free, it has no learnable components. Nevertheless, we will supplement the final version with the following details:
- Python 3.9 and PyTorch 1.12.
- Number of propagation hops $K=7$, missing rate = 0.6, FP iterations = 40.
- Default hyperparameters for downstream clustering methods.
---
**Q1–Q6:**
These questions correspond to the concerns in **W1–W7**, and have been addressed above.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Thanks very much for your responses. I will keep my original score due to concerns regarding the novelty and contributions of this paper.
---
Reply to Comment 1.1.1:
Comment: ## Response to Reviewer CDeB
**[Update Note – Follow-up]**
Dear Reviewer CDeB,
We just wanted to gently follow up, as the discussion stage is now entering its final few hours (less than five remaining). We are closely following your feedback and would like to kindly remind you that you can interact with us by editing the Rebuttal Comment box at any time during the discussion stage.
We truly value your feedback. If there are still any concerns or misunderstandings regarding the novelty or contributions of our work, we would be sincerely grateful for the opportunity to clarify them before the discussion closes.
Thank you again for your time and consideration.
---
**[Update Note]**
We sincerely hope this brief follow-up reaches you in time, as the discussion phase is nearing its deadline (less than eight hours remaining). We truly value your time and feedback, and we would be grateful for any final clarification you might be willing to share.
To briefly summarize our contributions:
- This work is, to the best of our knowledge, the first to explicitly tackle **deep graph clustering on large-scale graphs with missing attributes**, a practical yet underexplored scenario.
- We propose **CMV-ND**, a training-free preprocessing paradigm that constructs multiple views using **complete and non-redundant differential-hop neighborhoods**.
- The design of CMV-ND naturally supports integration into both **deep graph clustering (DGC)** and **multi-view clustering (MVC)** pipelines, offering broad applicability and a new perspective on graph learning.
It is possible that our presentation did not sufficiently emphasize our contributions, and we will carefully revise the writing in the final version to make them clearer and more explicit.
If there remain specific concerns regarding novelty or contribution, we would be sincerely thankful if you could let us know. Your insight would help us better understand how to strengthen the work and improve its clarity and impact. If possible, we also kindly ask you to consider re-evaluating the submission in light of this clarification.
Thank you again for your time and for reviewing our submission.
---
Thank you again for your thoughtful feedback.
We understand that you are maintaining your original score due to concerns regarding the novelty and contributions of our paper. However, we would like to respectfully note that in the previous review, no explicit concerns were raised regarding novelty or contributions. The only related comments we received were:
- *"The authors are suggested to discuss works related to structure search or learning in GNNs/deep graph clustering to better show the novelty of the proposed method."*
- *"The proposed approach is also similar to GNNs based on (PageRank), which aggregates different orders/hops neighbors to learn representations. How is the proposed approach different from these GNNs?"*
In our response, we have addressed both points in detail. Specifically, we discussed why existing structure learning/search methods (e.g., SUBLIME, NodeFormer, VIB-GSL) are not applicable in the attribute-missing setting, and we carefully clarified the key differences between CMV-ND and PageRank-based GNNs.
To help us further improve the paper, we would sincerely appreciate it if you could clarify which specific aspects of novelty or contribution remain unconvincing.
Thank you once again for your time and consideration.
Title: Beyond Entropy: Region Confidence Proxy for Wild Test-Time Adaptation
Decision: Accept (poster)

---

Summary: This paper introduces ReCAP, a novel TTA method based on local inconsistency of predictions.
Based on the finding that local inconsistency increases and adaptation becomes difficult under wild distribution shifts, region confidence is proposed as an alternative to entropy, a common TTA objective.
Its finite-sample approximation is also derived to overcome the computational intractability of the original region confidence.
Experimental results show that ReCAP had higher accuracy on corrupted test data under wild settings (online, mixed shifts, and imbalanced labels).
## update after rebuttal
I appreciate the author's rebuttal and additional experiments. My concerns have been addressed. I have updated my score to 4.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: The motivation for using the proposed approximation of the region confidence in Eqs. (6) and (7) is unclear. One can use a simple Monte Carlo approximation. Providing some evidence that the proposed approximation is more efficient than Monte Carlo would be convincing.
Theoretical Claims: I have checked the derivation of the region confidence.
Experimental Designs Or Analyses: - Experimenting on continual TTA settings performed in recent TTA studies (e.g., EATA) would strengthen the efficacy of ReCAP in wild TTA settings.
- How was the sampling number from the region $N$ set? Examining the sensitivity of $N$ would be helpful.
- Comparing ReCAP with a simple Monte Carlo approximation of the original region confidence in Eq. (2) would make the proposed method more convincing.
- Ablation on the sample weighting and selection in Eq. (9) would be helpful.
Supplementary Material: I have checked the proofs and additional results.
Relation To Broader Scientific Literature: The region confidence expands the commonly used sample-wise entropy.
It can improve existing entropy-based TTA methods.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4

---

Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and providing valuable feedback. We would like to answer your questions below.
>Q1: Experimenting on continual TTA settings performed in recent TTA studies (e.g., EATA) would strengthen the efficacy of ReCAP in wild TTA settings.
A1: Thank you for your constructive suggestion to evaluate CTTA settings. We agree that such experiments would further strengthen the efficacy of our ReCAP. We conduct extensive experiments on CTTA for both classification and segmentation tasks.
For classification, the CTTA setup (Tab. 3 in our response to Reviewer 6Q1i) and the additional PTTA (CTTA + label shift) setup (Tab. 3 in our response to Reviewer eZDc) demonstrate that ReCAP consistently outperforms prior methods. For segmentation, the results in Tab. 1 (this response) further confirm that ReCAP maintains robust adaptation performance in continual scenarios. These results underscore the broad applicability of ReCAP across continual and wild TTA settings.
Table 1: Semantic segmentation results (mIoU) on the Cityscapes-to-ACDC CTTA setup based on the Segformer-B5 architecture.
|Condition|Fog|Night|Rain|Snow|Fog|Night|Rain|Snow|Fog|Night|Rain|Snow|Fog|Night|Rain|Snow|Avg|
|-|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|Source|69.1|40.3|59.7|57.8|69.1|40.3|59.7|57.8|69.1|40.3|59.7|57.8|69.1|40.3|59.7|57.8|56.7|
|TENT|69.0|40.2|60.1|57.3|66.5|36.3|58.7|54.0|64.2|32.8|55.3|50.9|61.8|29.8|51.9|47.8|52.3|
|EATA|69.1|40.5|59.8|58.1|69.3|41.8|60.1|58.6|68.8|42.5|59.4|57.9|67.9|42.8|57.7|56.3|57.0|
|CoTTA|70.9|41.1|62.4|59.7|70.8|40.6|62.7|59.7|70.8|40.5|62.6|59.7|70.8|40.5|62.7|59.7|58.4|
|SAR|62.2|37.7|55.5|53.0|64.6|39.3|56.8|53.9|65.7|39.0|58.1|55.0|66.1|38.0|59.1|55.3|53.7|
|Ours|72.7|43.8|63.9|61.1|71.9|42.2|64.1|60.1|71.0|40.5|63.5|58.8|70.3|39.3|62.8|57.2|59.0|
>Q2: Sampling Number $N$ Sensitivity.
A2: We appreciate your question and apologize for any confusion. Our method does not need any sampling due to the finite-to-infinite approximation in Propositions 4.3 and 4.4. Therefore, our method is entirely unaffected by the value of $N$. We will provide additional clarifications in the revised version to enhance clarity.
>Q3: Comparing ReCAP with a simple Monte Carlo approximation of the original region confidence in Eq. (2) would make the proposed method more convincing.
A3: We sincerely appreciate your valuable suggestion. We conduct a comprehensive comparison with the Monte Carlo (MC) approximation using different sampling numbers. As shown in Tab. 2, while MC provides a direct estimate of region confidence, its accuracy is highly sensitive to the number of samples, leading to increased variance and a computational cost that scales linearly with the sample size.
In contrast, ReCAP achieves significantly higher accuracy with lower variance during adaptation, demonstrating its superior stability and efficiency. These results further reinforce the motivation behind our finite-to-infinite approximation. We will incorporate this comparison into the revised version.
Table 2: Comparison between the MC approximation and our finite-to-infinite approximation under 3 independent runs.
| |ReCAP|MC (4)|MC (16)|MC (64)|MC (128)|
|-|:---:|:---:|:---:|:---:|:---:|
|Sampling number|NA|4|16|64|128|
|Average Accuracy|42.2|31.7|34.8|38.7|40.7|
|Standard deviation|0.1|9.8|3.7|1.4|0.3|
|Running Time(s)|116|125|163|278|454|
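For clarity, the MC baseline above can be sketched as follows. This is an illustrative toy (the Gaussian region, linear classifier head, and helper names are assumptions for exposition), not our actual implementation; it only shows why the estimator's cost scales linearly with the sample count $N$:

```python
import numpy as np

def entropy(logits):
    # Shannon entropy of softmax(logits) along the last axis.
    p = np.exp(logits - logits.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def mc_region_entropy(model, z, sigma, n, rng):
    # Draw n samples from a Gaussian region around feature z and
    # average their prediction entropies; cost is linear in n.
    noise = rng.normal(0.0, sigma, size=(n,) + z.shape)
    return float(entropy(model(z + noise)).mean())

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))          # toy 3-class linear head
model = lambda feats: feats @ W
z = rng.normal(size=4)               # one test feature
est = mc_region_entropy(model, z, sigma=0.1, n=64, rng=rng)
# est lies in [0, ln 3]; variance shrinks only as n grows
```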
>Q4: Ablation on the sample weighting and selection in Eq. (9) would be helpful.
A4: We appreciate the reviewer's suggestion and have conducted an additional ablation study to analyze the impact of different sample selection and weighting strategies. As shown in Tab. 3, the results lead to the following key observations:
1. Our region-based confidence optimization consistently enhances performance, surpassing the previous SOTA even when combined with the simplest entropy-based selection and weighting strategies.
2. The combination of our proposed selection and weighting achieves the best overall accuracy, further validating the effectiveness of our design.
Table 3: Ablation study on selection and weighting strategies.
|w/o selection|Entropy selection|Our selection|w/o weighting|Entropy weighting|Our weighting|ReCAP Accuracy|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|✓|||✓|||36.0|
||✓||✓|||38.9|
|||✓|✓|||43.2|
||✓|||✓||44.9|
|||✓||✓||45.7|
|||✓|||✓|47.2|
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's rebuttal and additional experiments. My concerns have been addressed. I will update my score to 4.
---
Reply to Comment 1.1.1:
Comment: We are glad to know that our response has addressed your questions.
We sincerely thank you for your thoughtful and constructive feedback. Through further discussions and experiments, we were able to more clearly communicate the contributions of our work.
Again, we would like to thank you for appreciating our work and recognizing our contributions!
Best,
The Authors

---

Summary: This paper introduces a new Test-Time Adaptation (TTA) method to combat domain shifts appearing at test time in extreme scenarios. In particular, it proposes ReCAP, a method that optimizes two terms: a bias term resembling a regional entropy around a given test sample, and a variance term to enhance the consistency of the model's predictions under neighboring features. Experiments are carried out on standard TTA benchmarks, yielding consistent performance gains.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: I skimmed through the proofs in the Appendices and they seem correct
Experimental Designs Or Analyses: I checked the experimental sections in the paper and they all seem relevant, consistent with earlier work, and providing supportive results.
Supplementary Material: I checked Appendix A, B, C, and F.
Relation To Broader Scientific Literature: This paper does a good job in linking their main contributions to earlier works. They further show experimentally how they can combine their proposed method with previous state-of-art showing further performance gain.
Essential References Not Discussed: I think the paper did a good job relating itself to other related works.
Other Strengths And Weaknesses: While I am generally very positive about this paper, I think the following experiments are missing and would strengthen it.
1) Ablating $\mathcal L_0$: I checked the ablation experiments and did not find the one ablating the impact of $\mathcal L_0$. Further, when the proposed method is combined with SAR, is the data point selection mechanism of SAR employed or the proposed one?
2) In the efficiency comparison in Table 4: The proposed ReCAP computes more backward passes than EATA; however, it is still more efficient in runtime. This seems a bit contradictory and deserves more discussion, along with a comparison against the more efficient variant of EATA (i.e., ETA). It is also important, given the efficiency of ReCAP, to show the performance gain under computationally budgeted evaluation [A].
3) One extra [Optional] experiment is to extend the evaluation to the Practical TTA setting [B] which is closely related to the wild TTA setting.
[A] Evaluation of test-time adaptation under computational time constraints, ICML 2024
[B] Robust test-time adaptation in dynamic scenarios, CVPR 2023
Other Comments Or Suggestions: Please refer to the "Other Strengths and Weaknesses" Section.
Questions For Authors: Please refer to the previous sections
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We deeply appreciate your positive comments and constructive suggestions on improving our paper. We will address your questions below.
>Q1: I checked the ablation experiments and did not find the one ablating the impact of $\mathcal{L}_0$.
A1: Due to space constraints, we provide the ablation study on $\mathcal{L}_0$ in Appendix C.1. As shown in Appendix Fig. 7, ReCAP consistently enhances performance across a broad range of $\mathcal{L}_0$ values, demonstrating its robustness to different selection boundaries. This result confirms that ReCAP does not rely on precise tuning of $\mathcal{L}_0$ and remains effective across varying settings. To improve accessibility, we will incorporate this ablation study into the main paper.
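For context, an entropy-threshold reliability filter of the kind governed by $\mathcal{L}_0$ (as popularized by EATA/SAR-style methods) can be sketched in numpy. The threshold fraction (0.4 of the maximum entropy) and the function name below are illustrative assumptions, not the exact rule used in the paper.

```python
import numpy as np

def select_reliable(probs, frac=0.4):
    """Keep samples whose prediction entropy is below L0 = frac * ln(C),
    an EATA/SAR-style reliability filter (illustrative threshold)."""
    C = probs.shape[-1]
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=-1)
    L0 = frac * np.log(C)
    return ent < L0

probs = np.array([[0.90, 0.05, 0.05],   # confident -> kept
                  [0.34, 0.33, 0.33]])  # near-uniform -> filtered out
print(select_reliable(probs))           # → [ True False]
```

Only the retained samples would then contribute gradients during adaptation, which is why robustness to the exact boundary value matters in practice.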
>Q2: When ReCAP is combined with SAR, is the data selection mechanism of SAR employed or the proposed one?
A2: When integrating ReCAP with SAR, we replace the original entropy selection with our proposed strategy, allowing for a direct evaluation of ReCAP's effectiveness. Likewise, when combining ReCAP with DeYO, we follow the same replacement strategy. We will clarify this in the revised version to eliminate any ambiguity.
>Q3: Efficiency comparison with EATA and ETA.
A3: Thank you for raising this point. While EATA performs fewer backward passes on test samples, it requires additional computation for Fisher regularization on extra source samples, resulting in a higher runtime compared to ReCAP.
For comparison with ETA, we provide additional evaluations in Tab. 1. Although ETA offers a slight runtime improvement, it struggles to adapt to dynamic shifts in wild TTA scenarios. In contrast, ReCAP effectively balances efficiency and performance, achieving superior accuracy with marginal additional computation cost.
Table 1: Running time for 50,000 images and accuracy on ImageNet-C under label shifts using ResNet.
|Method|Time (s)|Accuracy (%)|
|-|:-:|:-:|
|Tent|110|22.8|
|ETA|112|26.2|
|EATA|118|31.7|
|ReCAP|116|47.2|
>Q4: It is important, given the efficiency of ReCAP, to show the performance gain under computational budgeted evaluation.
A4: Thank you for your valuable suggestion. We agree that this evaluation is essential and realistic for assessing TTA methods. As shown in Tab. 2, ReCAP benefits from the computational efficiency of the upper-bound proxy, resulting in minimal performance degradation while achieving more significant gains under strict time constraints. This demonstrates ReCAP's ability to provide efficient adaptation under time limitations, making it well-suited for real-world deployments with computational budgets.
Table 2: Error rate on ImageNet-C under computational time constraints.
|Method|Realistic|Gaus.|Shot|Impu.|Defo.|Glas.|Moti.|Zoom|Snow|Fros.|Fog|Brig.|Cont.|Elas.|Pixe.|Jpeg|Avg.|
|-|:-:|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|EATA|✗|65.5|62.4|63.5|66.6|67.2|52.0|47.3|48.2|54.1|39.9|32.1|55.0|42.3|39.2|44.8|52.0|
|EATA|✓|69.3|67.1|69.2|71.1|71.7|57.5|49.9|51.9|57.4|42.4|32.6|60.7|45.1|41.4|47.4|55.6(+3.6)|
|SAR|✗|69.5|69.7|69.0|71.2|71.7|58.1|50.5|52.9|57.9|42.7|32.7|62.9|45.5|41.6|47.8|56.2|
|SAR|✓|79.4|78.5|78.1|79.9|79.3|67.5|56.1|60.5|63.1|47.4|34.0|75.3|51.7|46.6|53.8|63.4(+7.2)|
|DeYO|✗|64.1|61.4|62.1|66.0|66.2|51.7|47.4|47.5|54.0|39.8|31.9|54.0|41.9|38.7|44.3|51.4|
|DeYO|✓|69.7|67.6|68.2|73.2|72.2|59.0|50.8|52.8|58.1|42.7|32.5|62.9|45.5|41.5|48.1|56.3(+4.9)|
|Ours|✗|64.1|60.4|62.1|67.0|67.2|50.6|47.2|45.8|51.7|38.2|32.2|53.5|41.8|38.4|43.9|50.9|
|Ours|✓|68.2|65.2|67.1|70.7|71.0|55.7|49.8|50.0|53.8|40.6|32.7|52.9|44.9|40.6|46.7|54.0(+3.1)|
>Q5: One extra [Optional] experiment is to extend the evaluation to the Practical TTA setting which is closely related to the wild TTA setting.
A5: Thank you for this insightful suggestion. We agree that the PTTA setting (Continual + Label Shifts) is closely related to the wild TTA setting. As shown in Tab. 3, despite ReCAP not incorporating any additional design specifically for continual adaptation, it still outperforms entropy-based methods and specific-design RoTTA in PTTA setup. This result further validates the effectiveness and robustness of ReCAP across diverse test-time conditions.
Table 3: Accuracy on ImageNet-C under the PTTA setup, evaluated on ResNet50.
|Method|Gaus.|Shot|Impu.|Defo.|Glas.|Moti.|Zoom|Snow|Fros.|Fog|Brig.|Cont.|Elas.|Pixe.|Jpeg|Avg.|
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Source|17.9|19.9|17.9|19.7|11.3|21.3|24.9|40.4|47.4|33.6|69.2|36.3|18.7|28.4|52.2|30.6|
|Tent|13.7|0.9|0.2|3.0|0.4|0.3|0.4|0.6|0.2|0.2|1.7|0.4|0.1|0.2|1.2|1.6|
|SAR|32.0|14.0|17.7|16.7|12.6|1.1|16.5|44.5|42.4|11.1|7.7|46.6|8.6|0.6|38.6|20.7|
|DeYO|40.7|44.1|41.3|17.7|22.1|41.3|16.5|41.2|50.5|30.9|73.2|51.4|42.4|56.5|58.2|41.9|
|RoTTA|40.2|41.2|40.8|20.7|20.3|40.2|33.2|45.2|51.1|52.1|70.2|50.1|40.1|52.1|57.1|43.6|
|ReCAP|42.2|44.3|42.1|18.9|23.9|42.1|28.7|44.7|51.6|52.5|71.2|52.2|41.5|57.9|58.3|44.8|
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their efforts in replying to my comments. My questions were adequately answered. Thus, I am raising my score from weak accept to Accept.
---
Reply to Comment 1.1.1:
Comment: We are glad to know that our response has addressed your questions.
We sincerely appreciate your insightful and constructive feedback. Your comments have guided us to refine our work and better articulate the significance of our contributions.
Once again, thank you for your thoughtful evaluation and recognition of our work!
Best regards,
The Authors | Summary: This paper proposes a new method, ReCAP, a novel approach to addressing the main limitation of TTA in entropy minimization. The key idea of this work is that EM heavily relies on local consistency, and when this consistency is disrupted, model performance degrades. To resolve this issue, instead of optimizing the confidence of individual samples, ReCAP optimizes region-based confidence using bias and variance terms through Region Confidence Optimization. Furthermore, to enable low-cost computation and accuracy, the method employs approximation theories (e.g., Finite-to-Infinite Approximation). When applied to low-data settings (batch size = 1), the proposed method demonstrated a +3.5% improvement in performance.
Claims And Evidence: The paper experimentally demonstrates that entropy minimization leads to performance degradation when local consistency is disrupted. The results show that even when entropy values are similar, prediction differences can be significant in domain-shift environments. Furthermore, the proposed RCO method improves the stability of TTA, and ReCAP outperforms traditional entropy-based methods such as Tent and MEMO, proving to be particularly effective in domain shift scenarios. The study also validates that the Bias Term and Variance Term play a crucial role in maintaining prediction consistency through mathematical formulations and empirical analysis. Additionally, the paper demonstrates that ReCAP achieves higher performance than Tent with only a 5% increase in computational cost.
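As background for the claims above, the entropy minimization objective that Tent-style methods optimize at test time can be sketched as follows. This is a generic numpy illustration, not the paper's implementation.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy_loss(logits):
    """Mean Shannon entropy of the predictive distribution over a batch:
    the quantity Tent-style entropy minimization reduces at test time."""
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

# Confident predictions have low entropy; a uniform prediction attains
# the maximum ln(C) for C classes.
print(entropy_loss(np.array([[10.0, 0.0, 0.0]])))  # ≈ 0
print(entropy_loss(np.array([[0.0, 0.0, 0.0]])))   # ≈ ln(3) ≈ 1.0986
```

Minimizing this per-sample quantity is exactly what can fail when local consistency is disrupted: two nearby features can have similarly low entropy yet very different predictions.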
Methods And Evaluation Criteria: The study employs widely used datasets in TTA research, including ImageNet-C, ImageNet-R, and VisDA-2021, to evaluate performance. Comparisons are made with state-of-the-art TTA techniques such as Tent, MEMO, DDA, EATA, SAR, and DeYO. The evaluation focuses on improvements in accuracy under domain shifts, robustness in low-data scenarios, and performance across mixed-domain tests. The inclusion of an ablation study analyzing key hyperparameters such as region size and bias-variance tradeoff strengthens the validity of the evaluation framework, aligning well with the study’s research objectives.
Theoretical Claims: The paper theoretically supports its approach by introducing an optimization framework leveraging Bias and Variance Terms to balance confidence estimation and prediction consistency. Additionally, it proposes a Finite-to-Infinite Approximation method to reduce computational cost while effectively approximating regional confidence. The mathematical derivations appear valid, and the experimental results substantiate the proposed theoretical foundation.
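A minimal numpy sketch of such a bias-plus-variance region objective, estimated here by Monte-Carlo sampling of feature-space neighbors, is given below. The sampling radius, the weighting `lam`, and all function names are illustrative assumptions rather than the paper's exact formulation (which replaces explicit sampling with the finite-to-infinite proxy).

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def region_confidence(classifier, feat, n_samples=16, radius=0.1, lam=1.0, seed=0):
    """Monte-Carlo version of a region objective: sample neighbors of a
    feature vector, then combine a bias term (entropy of the mean regional
    prediction) with a variance term (spread of the individual predictions)."""
    rng = np.random.default_rng(seed)
    nbrs = feat + radius * rng.standard_normal((n_samples, feat.shape[-1]))
    probs = softmax(classifier(nbrs))                   # (n_samples, n_classes)
    mean_p = probs.mean(axis=0)
    bias = -(mean_p * np.log(mean_p + 1e-12)).sum()     # regional entropy
    var = ((probs - mean_p) ** 2).sum(axis=-1).mean()   # prediction inconsistency
    return bias + lam * var

# A confident region (far from the decision boundary of a toy linear head)
# scores lower than an ambiguous one near the boundary.
W = np.array([[5.0, -5.0], [-5.0, 5.0]])
head = lambda f: f @ W
print(region_confidence(head, np.array([2.0, 0.0])) <
      region_confidence(head, np.array([0.0, 0.0])))   # → True
```

The design choice to penalize both terms jointly is what distinguishes the objective from per-sample entropy: the variance term is nonzero precisely when local consistency breaks down.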
Experimental Designs Or Analyses: The experimental design appears relatively reliable, demonstrating that ReCAP maintains high performance even in low-data settings and remains stable across various corruption types and domain shift scenarios. The study also presents t-SNE visualizations, confirming that ReCAP enhances class separability. Overall, the experiments are appropriately designed to support the paper’s claims.
Supplementary Material: The supplementary material includes the theoretical proof used in SOC and an advanced study on limited batch sizes and imbalanced label shifts. These materials further emphasize the validity of their proposed ReCAP method and serve as valuable supporting evidence.
Relation To Broader Scientific Literature: This work builds upon prior entropy minimization-based TTA research, such as Tent, MEMO, and EATA, extending the optimization approach from sample-level to region-level confidence estimation. Additionally, it is relevant to domain adaptation research, distinguishing itself by focusing on maintaining local consistency as a key factor in adaptation performance.
Essential References Not Discussed: This paper leverages the appendix to cite all relevant studies comprehensively.
Other Strengths And Weaknesses: Strength
- While entropy minimization has been used in TTA, its accuracy gains have been limited. This paper provides a meaningful finding by identifying its limitations and proposing an effective solution.
- The paper effectively explains why local consistency is critical in TTA and thoroughly discusses the limitations of existing methods, making a strong case for the necessity of ReCAP.
- Instead of optimizing confidence at the sample level, the paper introduces region-based confidence optimization, which is a more robust and reliable strategy for TTA.
- The paper rigorously evaluates ReCAP across various datasets and settings, including different domain shifts, data scarcity scenarios, and mixed-domain testing. This strengthens the credibility of the proposed method and demonstrates its robustness in real-world applications.
- Unlike computationally intensive methods like DDA, ReCAP maintains a lightweight adaptation process while still improving accuracy.
- ReCAP not only outperforms baseline methods but also enhances other approaches such as SAR and DeYO, demonstrating its adaptability and versatility
Weakness
- While the method is effective for classification, it is unclear how well it would generalize to more complex tasks like object detection, segmentation, or NLP.
- The paper assumes that the finite-to-infinite approximation holds consistently, but in scenarios where domain shifts occur rapidly, there is a possibility that this assumption might not always hold. Investigating its robustness in highly dynamic environments could provide further insights.
- While the paper discusses applying features to reduce computational cost, providing a quantitative comparison of the actual reduction in computation would strengthen the analysis.
Other Comments Or Suggestions: - It would be helpful if the paper clarified what value of 𝜏 was fixed when conducting experiments on the effect of 𝜆 in Section 6.1.
- The t-SNE plots effectively illustrate the improvements in feature space adaptation, making it easier to understand the impact of ReCAP on prediction consistency and clustering quality.
Questions For Authors: - Does the variance + bias function serve the same role as the traditionally used mutual information?
- How does ReCAP handle rapid domain shifts where local consistency may be entirely lost?
- Could ReCAP be extended to structured prediction tasks like segmentation?
- How sensitive is ReCAP to the choice of when adapting to new domains?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate your detailed review and positive feedback on our contributions, including meaningful findings, novel region-based confidence optimization, and comprehensive evaluation. Building on your comments, we provide additional explanations and experiments to further demonstrate ReCAP's effectiveness and efficiency.
>Q1: Generalization ability in more complex tasks.
A1: Thank you for raising this important point. While our current experiments focus on classification, the core idea of region-based confidence optimization is inherently versatile. Additional evaluations on segmentation (Tab. 1 in response to Reviewer ynrW) and object detection (Tab. 1 in this response) show consistent improvements of ReCAP over entropy-based methods, indicating that ReCAP can be integrated into diverse model architectures and effectively extended to various complex tasks.
Table 1: Comparisons of detection performance on KITTI-C benchmark in [1] with MonoFlex, regarding AP.
|Method|Gauss.|Shot|Impul.|Defoc.|Glass|Motion|Snow|Frost|Fog|Brit.|Contr.|Pixel|Sat.|Avg|
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Source|4.2|7.5|5.6|2.6|3.8|10.9|15.6|10.5|7.5|24.8|7.1|29.1|31.9|12.4|
|TENT|16.0|25.1|23.8|21.7|11.6|27.1|26.9|26.9|30.5|35.8|33.7|41.1|35.2|27.3|
|EATA|16.8|25.9|24.7|22.1|13.6|27.5|27.7|27.4|30.7|35.6|33.9|41.0|35.6|27.9|
|DeYO|19.2|26.1|24.7|23.2|15.6|28.5|28.5|29.3|30.8|35.1|34.2|40.8|36.2|28.6|
|MonoTTA (latest SOTA)|21.3|28.2|26.2|25.8|19.4|31.8|29.3|30.2|32.1|36.1|36.5|41.2|37.4|30.4|
|ReCAP|21.3|29.3|26.3|26.7|20.1|31.1|32.2|32.6|31.7|36.7|36.1|41.3|37.5|31.0|
>Q2: How does ReCAP handle highly dynamic shifts where local consistency may be entirely lost?
A2: Thank you for your insightful question. Our finite-to-infinite approximation is derived without assuming any consistency condition, ensuring its applicability even when consistency is entirely lost. Based on this foundation, ReCAP employs region-confidence optimization to enhance local consistency, which is crucial for robust adaptation.
Moreover, we evaluate ReCAP in a highly dynamic setting where the data stream undergoes rapid transitions across different domains, including style, corruption, and label shifts (see Appendix B.1). The results demonstrate that ReCAP exhibits strong robustness and achieves SOTA performance, validating its capability to address highly dynamic scenes.
>Q3: Quantitative comparison of computational cost reduction.
A3: ReCAP reduces the computational cost via feature-level region modeling, eliminating the overhead of image-level region modeling and augmentation. Furthermore, its finite-to-infinite approximation serves as an efficient proxy, removing the need for costly sampling. As shown in Tab. 2, these designs achieve a significant runtime reduction. We will incorporate this quantitative comparison into the revised version to enhance clarity on the efficiency of ReCAP.
Table 2: Running time on 50,000 images.
|Region Type|Time (s)|
|-|:-:|
|Image-level region (16 augmentation)|1798|
|Feature-level region (w/o proxy, 16 sampling)|163|
|Feature-level region (w/ proxy)|116|
>Q4: Clarification on ablation study settings.
A4: In our analysis of the effect of λ in Section 6.1, we fixed τ at 1.2, which aligns with the default value used across all experiments. We will explicitly state it in the revised version.
>Q5: Variance + Bias vs. Mutual Information.
A5: Our variance + bias function serves a fundamentally different role from mutual information (MI) in several key aspects:
1. Different Objects: MI is defined between two random variables, measuring their shared information, whereas our variance + bias function is computed over a local region surrounding a single sample x, capturing localized prediction stability.
2. Different Purposes: MI primarily quantifies mutual dependence, while our function serves as prediction confidence and consistency measure within a local feature region, making it more aligned with adaptation objectives.
3. Different Optimization Effects: MI encourages statistical association but does not address prediction probability discrepancies. In contrast, our function directly optimizes both prediction uncertainty (bias) and local discrepancy (variance), enhancing robustness under domain shifts.
>Q6: Sensitivity of ReCAP.
A6: We have extensively evaluated ReCAP across diverse datasets (ImageNet-C, VisDA, ImageNet-R), tasks (classification, segmentation, detection), TTA settings (wild, mild, and continual), and hyperparameter configurations. Our results consistently show its robustness and reliability across these various conditions.
Additionally, we notice that the question might miss a word (e.g., choice of L_0 when adapting). If we have misunderstood your concern, please clarify, and we would be happy to provide further insights.
>References
[1] Lin, Hongbin, et al. "Monotta: Fully test-time adaptation for monocular object detection." European Conference on Computer Vision, 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response—especially for including additional experimental results and for the detailed explanation on how your method differs from mutual information. The clarification regarding the finite-to-infinite approximation also helped me better understand your formulation.
Also, to follow up on my final question (Q6), I realized that I had originally meant to refer to τ (tau), which I mistakenly left out; apologies for the confusion. The ablation study (Fig. 4b) shows stable performance within a reasonable τ range, supporting the method's robustness. However, the performance drop beyond τ = 2.5 raises the question of how sensitive the method is in real-world scenarios where the optimal τ may not be known in advance. It would be helpful to better understand how much τ influences performance in practice.
---------
Update following "Reply Rebuttal Comment by Authors":
Thank you for your thoughtful responses to my final questions. The additional experiments across diverse domains, consistently showing strong performance around similar τ values, were convincing. I recognize the strength of your work and have decided to raise my score to a 4.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your constructive and positive feedback on our response. Following your suggestions, our additional experiments and explanations have further strengthened this work, particularly in terms of its broader applicability and high efficiency. We are also grateful for your clarification on Q6 and hope to address your question below:
>Q7: The ablation study (Fig. 4b) shows stable performance within a reasonable τ range, supporting the method's robustness. However, the performance drop beyond τ = 2.5 raises the question of how sensitive the method is in real-world scenarios where the optimal τ may not be known in advance.
A7: Thank you for recognizing the robustness of our method. To further clarify the practical stability of τ selection, we provide additional discussion and a practical example below:
1. **Default Value as a Reliable Choice:** Across all experiments in our manuscript, we consistently use a fixed τ=1.2, which delivers SOTA results across various datasets and TTA scenarios. This value serves as a reliable choice, and we recommend its use in cases where a validation set is unavailable.
2. **Stable Optimal Range:** Hyperparameter tuning on a small validation set (10% Gaussian-type data from ImageNet-C) across 3 settings and 2 model architectures consistently selects 1.2 ($\pm 0.2$). Further validation reveals that values within $[0.6, 1.6]$ maintain strong performance (Tab. 3), confirming a stable optimal region for reliable adaptation.
3. **Real-World Practicality:** To assess τ sensitivity in real-world scenarios, we examine its impact on a detection task. The default τ=1.2 achieves 31.0 AP on KITTI-C, outperforming the prior SOTA (Tab. 1). Additionally, a grid search over $[0.6, 1.6]$ on the validation set selects τ=1.3, improving AP to 31.2. This supports our default setting as a strong baseline and our tuning range as a practical search space.
Thank you again for raising this critical point. Given the empirical evidence and actionable guidance provided, we believe that τ selection is both stable and practical for researchers and practitioners, ensuring reliable performance without excessive sensitivity concerns.
Table 3: Additional ablation study on τ under 3 settings and 2 models. Results that surpass the prior SOTA are in **bold**.
|Setting|τ=0.6|τ=0.8|τ=1.0|τ=1.2|τ=1.4|τ=1.6|
|-|:-:|:-:|:-:|:-:|:-:|:-:|
|Batch Size=1 (ResNet)|**46.6**|**47.3**|**47.5**|**47.6**|**47.2**|**46.3**|
|Batch Size=1 (ViT-Base)|**64.2**|**64.9**|**65.4**|**65.6**|**65.5**|**65.1**|
|Mixed Domain (ResNet)|**40.0**|**41.2**|**42.0**|**42.1**|**42.1**|**42.0**|
|Mixed Domain (ViT-Base)|**58.6**|**59.5**|**59.5**|**59.6**|**59.5**|**59.5**|
|Label Shift (ResNet)|**45.5**|**46.6**|**47.1**|**47.2**|**46.4**|**45.3**|
|Label Shift (ViT-Base)|**61.5**|**62.1**|**62.6**|**63.0**|**62.6**|**62.2**|
Again, we would like to thank you for appreciating our work and recognizing our contributions!
Best regards, Authors | Summary: This paper proposes a region modification based mechanism, called “Region Confidence Adaptive Proxy (ReCAP), to address the problem of will test-time adaptation (WTTA). Further, it develops a finite-to-infinite asymptotic approximation, which is a tractable upper bound to the intractable region confidence.
Experimental results show improved performance of ReCAP compared to other approaches.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The theoretical claims in the main paper have been checked. The details of the proofs in Appendix have not been thoroughly verified.
Experimental Designs Or Analyses: The experiments follow the existing WTTA line of work.
Supplementary Material: No code is provided in the supplementary. The appendix has some additional results and proofs that have been reviewed to some extent.
Relation To Broader Scientific Literature: The problem of WTTA is challenging. However, with the advent of more realistic continual test-time adaptation [1] approaches, the real world applicability of WTTA seems limited compared to recent progress.
References:
1. Wang, Qin, et al. "Continual test-time domain adaptation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: **Strengths**
* Proposed a tractable upper bound to the intractable region confidence.
* The theoretical results are interesting.
* Most of the experimental results show improvements.
**Weaknesses**
* Some of the empirical gains are marginal; for example, in Table 2 (ViT-Base), the DeYO -> ReCAP gain is <= 0.5.
* Limited real-world applicability of WTTA in the batch size = 1 setting.
Other Comments Or Suggestions: 1. Line 235-236: It should be Eq. 5 in place of Eq. 10.
2. The recent focus in the area of test-time adaptation has shifted towards continual test-time adaptation (CTTA), so experiments in the CTTA setting will enhance the contribution of this paper.
Questions For Authors: 1. How is the hyperparameter L_0 in equation 9 tuned for the experiments?
2. Is the accuracy in Figure 4 measured on the test set itself, on which the final performance is reported? Whether there is a validation split for tuning?
3. In real-world applications, isn't the setting, such as batch size = 1, too contrived, though it is challenging? Can we not accumulate more examples before updating, effectively increasing the batch size?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for carefully reviewing our paper and offering a positive assessment. We appreciate your recognition of the contributions made by our work, particularly the idea of the tractable bound on the intractable region confidence and the theoretical results.
>Q1: Some of the empirical gains are marginal, e.g., for ViT-Base in the mixed testing domain.
A1: Thank you for your feedback. In highly competitive TTA scenarios, performance gains tend to approach saturation in some cases. However, larger domain gaps, such as more severe corruption or mixed style shifts, still present significant challenges in terms of adaptation efficiency and robustness for TTA methods.
To assess the empirical gains under more severe shifts, we increase the severity level from 4 & 5 to 6 & 7 (see Tab. 1), and our method achieves significant improvements of **+5.7** and **+5.2** over DeYO. Furthermore, under complex style shifts, ReCAP achieves average gains of **+2.6** on ImageNet-R and **+1.7** on VisDA (Appendix B.1). Overall, our method consistently outperforms prior methods across 3 datasets, 3 wild settings, and 2 base models, achieving gains of >+1.5 in the majority of scenarios.
Table 1: Comparisons on ImageNet-C (severity level 6, 7) using VitBase under Mixed Testing Domain.
|Method|Level 6|Level 7|
|-|-|-|
|Source|18.87|12.80|
|TENT|2.45|0.99|
|EATA|30.58|16.32|
|SAR|32.11|17.74|
|DeYO|29.54|16.08|
|ReCAP|**35.27**|**21.32**|
>Q2: Limited applicability of bs=1 setting. Can we not accumulate more examples to increase the batch size?
A2: We appreciate your comment. While bs=1 may seem contrived, some real-world applications (e.g., edge computing) face hardware constraints that necessitate the use of small mini-batches. Following your comment, we evaluate the effect of accumulating examples with varying sizes (See Tab. 2). However, small batch sizes still present a crucial bottleneck, hindering adaptation performance. This underscores the importance of developing robust TTA solutions tailored to such restrictive conditions.
Table 2: Accuracy of Tent on ImageNet-C across different accumulated batch sizes, evaluated on ResNet50.
||no-adapt|bs=1|bs=4|bs=16|bs=32|bs=64|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Tent (%)|30.6|21.5|23.5|25.9|28.6|33.9|
>Q3: Line 235-236: It should be Eq. 5 in place of Eq. 10.
A3: Thank you for pointing out this typo, and we will correct it.
>Q4: Additional experiments in the CTTA setting will enhance the contribution of this paper.
A4: Thank you for your constructive suggestion. Following your advice, we evaluate our method in CTTA scenarios for classification (Tab. 3 in this response) and semantic segmentation (Tab. 1 in response to Reviewer ynrW). While ReCAP is not designed for CTTA setup, it shows competitive performance and outperforms several strong baselines. These additional evaluations further highlight the broad applicability of our method across mild, continual, and wild settings.
Table 3: Error rate (%) in CTTA scenario (CIFAR100C) [1], evaluated on ResNeXt-29.
|Method|Gaus.|Shot|Impu.|Defo.|Glas.|Moti.|Zoom|Snow|Fros.|Fog|Brig.|Cont.|Elas.|Pixe.|Jpeg|Avg|
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|Source |73.0|68.0|39.4|29.3|54.1|30.8|28.8|39.5|45.8|50.3|29.5|55.1|37.2|74.7|41.2|46.4|
|TENT|37.2|35.8|41.7|37.9|51.2|48.3|48.5|58.4|63.7|71.1|70.4|82.3|88.0|88.5|90.4|60.9|
|CoTTA|40.1|37.7|39.7|26.9|38.0|27.9|26.4|32.8|31.8|40.3|24.7|26.9|32.5|28.3|33.5|32.5|
|SAR|39.7|34.3|36.5|26.4|37.4|28.6|26.1|32.7|31.4|36.6|26.1|29.6| 33.0|29.8|38.1|32.4|
|EcoTTA|39.1|35.7|37.5|26.2|37.7|28.3|26.3|32.2|31.0|36.9|25.9|27.4|32.7|28.4|34.7|32.0|
|DeYO|39.0|34.1|36.3|26.7|37.2|28.4|26.2|32.4|31.6|36.2|25.5|26.8|32.2|30.1|38.3|32.1|
|ReCAP|38.8|33.5|36.5|26.5|37.9|28.2|26.4|31.1|29.6|34.0|25.8|27.7|32.0|28.2|38.1|31.4|
>Q5: How is the hyperparameter L_0 in equation 9 tuned for the experiments?
A5: For hyperparameter L_0, we perform a grid search over a range of values on a small validation set, which comprises 10% of the Gaussian-type data from ImageNet-C. Additionally, we conduct a sensitivity analysis (see Appendix C.1) to confirm the robustness of our method to variations in L_0. Further details on this tuning process will be included in the revised version.
>Q6: Is the accuracy in Figure 4 measured on the test set itself, on which the final performance is reported? Whether there is a validation split for tuning?
A6: For the sensitivity analysis in Figure 4, we measure accuracy on the entire test set to validate the robustness of our method. For hyperparameter tuning, we use a small validation split (the same set used for L_0). The hyperparameters selected through this process are then validated and shown to be robust in Figure 4. We will provide a clearer explanation of this procedure in the revised version to avoid any confusion.
>References
[1] Wang, Qin, et al. "Continual test-time domain adaptation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their response.
I do not have any further queries or comments.
---
Reply to Comment 1.1.1:
Comment: We sincerely express our gratitude for your valuable feedback. Thanks to additional discussions and experiments, we were able to effectively convey the contributions of our work.
Again, we would like to thank you for appreciating our work and recognizing our contributions!
Best,
The Authors | null | null | null | null | null | null |
Action-Minimization Meets Generative Modeling: Efficient Transition Path Sampling with the Onsager-Machlup Functional | Accept (poster) | Summary: This paper presents a new method for transition path sampling in molecular systems by combining generative models with the Onsager-Machlup action functional. The authors show how pre-trained generative models (specifically denoising diffusion and flow matching) can be repurposed to find high-probability transition paths between stable configurations without requiring specialized training. The authors' approach interprets candidate paths as trajectories sampled from stochastic dynamics induced by the learned score function, making the search for transition paths equivalent to minimizing the Onsager-Machlup action. The method is demonstrated on a 2D Müller-Brown potential, fast-folding proteins, and tetrapeptide sequences, showing it can generate physically realistic transition pathways. Compared to traditional molecular dynamics simulations, their approach produces higher-quality transition paths with significantly less computational cost.
Claims And Evidence: The overall claims focus on the efficiency and physically realistic transition path generated based on the OM algorithm, supported by its implementation on systems such as Müller-Brown potential and coarse-grained proteins.
Comparisons are made between OM-generated paths and MD simulations in a few systems; the wall-clock time comparison presented in Figure 3c shows that OM optimization is faster than brute-force MD. However, the comparison lacks benchmarking against enhanced sampling methods (e.g., metadynamics, umbrella sampling, weighted ensemble, transition interface sampling); these are the actual competitors, not brute-force MD. Accordingly, the computational cost of optimizing the OM action versus performing biased MD simulations is not clearly reported.
Regarding the generated transition paths, the claim of physical realism requires additional proof. The paper claims that minimizing the OM action produces physically meaningful transition paths, even for systems unseen during training, based on the comparison between transition paths and free energy landscapes. Since the generative models are trained on equilibrium data, the core issue is that score-based generative models do not inherently capture rare transition pathways. The paths generated may be interpolations that resemble plausible transitions rather than actual dynamical pathways governed by the correct physics (Figures 13 and 14). Moreover, MSM-based validation is useful but does not confirm the dynamical accuracy of the generated paths; the MSM itself is only as good as the data used to construct it.
Another claim that requires further examination is the zero-shot generalization statement. While results on held-out tetrapeptides suggest that transition paths can be generated for new sequences, different tetrapeptides have unique energy landscapes, and it is unclear how well a generative model trained on one set of peptides generalizes to another with different interaction potentials. Generative models trained on equilibrium data are likely to underrepresent high-energy, transition-state conformations, which means the model might struggle to generate physically accurate rare-event dynamics.
The strong claim of solving the CV selection problem is not solid enough -- the method optimizes transition paths based on the OM action but does not guarantee exploration of the full transition ensemble. If the generative model was trained on biased datasets (e.g., experimental structures, short MD trajectories, incomplete sets of stationary structures), then the inferred transition paths will be similarly biased.
Methods And Evaluation Criteria: The main method introduced in the paper is OM action minimization using generative models (specifically denoising diffusion models and flow matching models) to infer likely transition pathways between molecular configurations. The key steps include training a generative model on equilibrium molecular conformations; using the learned score function as an approximate force field and optimizing paths between two states by minimizing the OM action.
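The pipeline described here can be sketched in miniature. The following toy (all values assumed; an analytic double-well drift stands in for the learned score function) minimizes a discretized OM action over a fixed-endpoint 1-D path by gradient descent:

```python
import numpy as np

# Toy 1-D sketch (all values assumed) of the procedure described above:
# minimize a discretized Onsager-Machlup action over a fixed-endpoint
# path by gradient descent. An analytic double-well drift f = -U'(x),
# with U(x) = (x^2 - 1)^2, stands in for the learned score function.
def drift(x):
    return -4.0 * x * (x**2 - 1.0)

def om_action(path, dt=0.05, D=0.5):
    # Discretized action: sum_i dt/(4D) * ((x_{i+1} - x_i)/dt - f(x_i))^2
    vel = (path[1:] - path[:-1]) / dt
    return np.sum(dt / (4.0 * D) * (vel - drift(path[:-1])) ** 2)

def minimize_path(path, steps=500, lr=1e-3, eps=1e-5):
    path = path.copy()
    for _ in range(steps):
        grad = np.zeros_like(path)
        for i in range(1, len(path) - 1):       # endpoints stay fixed
            p, m = path.copy(), path.copy()
            p[i] += eps
            m[i] -= eps
            grad[i] = (om_action(p) - om_action(m)) / (2 * eps)
        path = path - lr * grad
    return path

init = np.linspace(-1.0, 1.0, 21)   # straight-line guess between the two wells
opt = minimize_path(init)
```

This is only a schematic stand-in for the paper's method, which uses the score of a pre-trained generative model (and automatic differentiation) in place of the analytic drift.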
The OM action is based on path probability, but most probable does not equal dynamically correct. The method may find smooth interpolations rather than true dynamical pathways dictated by physical kinetics. Besides, no comparison is made with unbiased MD simulations to check whether the generated paths follow the correct transition-state kinetics.
Another issue is that the optimization heavily depends on parameters controlling diffusion, drift, and friction, but the method does not provide a systematic way to tune these parameters for different molecular systems. For example, the experiments show that lower diffusivity (Figure 2) gives a better estimate of the intrinsic reaction coordinate (the minimum-energy pathway connecting the saddle point and the two minima), while larger diffusivity deviates substantially from the accurate transition path.
A more robust evaluation of TPS methods should include:
Comparison with well-established TPS methods (e.g., umbrella sampling, metadynamics, transition path sampling).
Validation against unbiased MD simulations to assess kinetic accuracy.
Quantitative measures of transition rate, free energy profile, and committor function accuracy.
Theoretical Claims: N/A
Experimental Designs Or Analyses: As described in the previous answer, the evaluation experiments need stronger validation. The proposed method is innovative, but its evaluation is insufficient to justify its claims of accuracy, efficiency, and generalizability.
Supplementary Material: The supplementary material includes detailed visualization and experimental setup for the case studies introduced in the main text.
Relation To Broader Scientific Literature: The contribution of the paper may benefit the study of transition paths, but confidence in it remains limited due to incomplete validation, limited benchmarking, and potential oversimplification of transition path dynamics. While the idea of leveraging generative models for transition path sampling (TPS) is innovative, the connection between OM action minimization and physically meaningful transition paths remains insufficiently supported. Below, I examine how the paper relates to the broader scientific literature, highlighting where it builds upon previous work and where gaps remain. Due to the high number of degrees of freedom in reaction systems (DOF = 3N-5), the complexity of the potential energy surface cannot be resolved in this paper. The unique characteristics of the transition path connecting the transition state, which appears as a saddle point in the reaction-coordinate projection but as a minimum in other projections, cannot be predicted solely from the minima.
Essential References Not Discussed: The related works are properly cited.
Other Strengths And Weaknesses: Strength: The paper presents a scalable and computationally efficient approach to transition path sampling using generative models and OM minimization.
Weakness: The paper lacks crucial validation against enhanced sampling methods and transition rate calculations, leaving uncertainty about its physical accuracy. If these issues are addressed in future work, the approach could impact molecular simulations and accelerate the discovery of reaction pathways.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. How does OM optimization compare to well-established enhanced sampling techniques in terms of both efficiency and accuracy? Without this comparison, the claim of “higher efficiency” is weak.
2. How do these transition paths compare quantitatively with TPS or enhanced sampling methods in terms of transition rates and free energy barriers?
3. How does OM optimization perform on systems where the generative model was trained on an entirely different force field or chemical environment? Does it still yield reasonable transition paths?
4. Does this method capture all relevant transition pathways, or does it preferentially generate low-energy interpolations that may miss rare but important transitions?
5. How does OM optimization perform with chemical reaction systems (when bond cleavage/formation is happening)?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: >Comparison with enhanced sampling baselines
To our knowledge, there’s no widely accepted force field for the $\alpha$-Carbon coarse-graining used for the fast-folding proteins, so benchmarks are challenging. We now benchmark on all-atom alanine dipeptide, a standard test system for TPS. We compare with MCMC (shooting) and metadynamics. See [Table 1r](https://imgur.com/a/iPmIUbp) for accuracy and efficiency results, and [Figure 1r](https://imgur.com/a/SImqo3n) for sampled transition paths.
OM optimization is considerably faster than the baselines. Internal energy barriers were about 10-12 kcal/mol, in line with literature.
>Quantitative evaluation of transition paths (reaction rate, committor function, free energy profile)
These are known to be challenging to compute accurately for high-dimensional systems. With this in mind, we provide results to show how our OM optimized paths can be used to compute these quantities. We’re happy to include more in-depth results for the camera-ready version.
Free Energy: Our OM-optimized paths can serve as a guide for umbrella sampling, from which we can calculate free energy profiles. We do this for Alanine Dipeptide and achieve the following free energy profile, which closely approximates the barrier of 6-8 kcal/mol obtained from metadynamics: [Figure 2r](https://imgur.com/a/3ci9dXh)
Committor Function/Transition Rates: The Backward Kolmogorov Equation (BKE) provides a principled algorithm [1] to estimate the committor function and transition rates using the paths from OM optimization. See [Figure 3r](https://imgur.com/a/BhLLOKs) for results on the 2D Muller-Brown potential, where we achieve accurate committor and rate estimation, with a margin of error comparable with or better than past works. Note that it is often challenging to even compute the reaction rate within the correct order of magnitude [1,2].
>Guidance on setting hyperparameters
We now set hyperparameters directly as values with physical units: for example setting $\gamma$ and $D$ to be friction and diffusion coefficients, and $\Delta t$, $L$ from the time horizon and desired fidelity. We provide an example of new paths with varying time horizons in [Figure 4r](https://imgur.com/a/s9DhO3R).
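As a concrete illustration of this physically grounded parameterization (all numeric values below are assumed for the sketch, not taken from the paper), the discretization parameters follow directly from physical choices:

```python
# Illustrative sketch (all values assumed) of setting OM optimization
# parameters from physical quantities, as described above.
k_B = 0.0019872041      # Boltzmann constant, kcal/(mol*K)
T_kelvin = 300.0        # simulation temperature, K
gamma = 1.0             # friction coefficient, 1/ps
mass = 12.0             # effective particle mass (illustrative units)
D = k_B * T_kelvin / (mass * gamma)   # Einstein relation for diffusivity

horizon = 10.0          # desired transition time horizon, ps
L = 500                 # number of path waypoints (desired fidelity)
dt = horizon / L        # time step between adjacent waypoints, ps
```

The point is that once the friction, temperature, time horizon, and fidelity are fixed on physical grounds, the remaining OM parameters are determined rather than tuned ad hoc.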
>Markov State Models (MSM) do not confirm dynamical accuracy
Our reference MSMs are fit on long D.E.Shaw simulations which extensively sample the Boltzmann distribution, including many transition events. Therefore, the reference MSMs are a useful low-dimensional approximation of the underlying dynamics of the unbiased MD simulation, which is also standard practice in past works [3,4].
We have also directly compared our generated paths against the unbiased MD transition trajectories from D.E.Shaw: [Figure 5r](https://imgur.com/a/7m7Npne)
>OM Optimization on chemical reactions
We conducted initial experiments using a machine learning force field (MLFF) trained on the Transition-1x dataset. OM interpolation finds transition states on average 1.4eV above the reference transition state (found using Nudged Elastic Band (NEB) calculations with DFT), significantly better than running NEB calculations with the MLFF (1.91eV above) and uses 10x fewer force evaluations. A comprehensive analysis would be a paper in and of itself, so we consider this an interesting future direction.
>Zero-shot generalization on unseen tetrapeptides
While the force field is the same for all tetrapeptides (see [4]), it is sequence-dependent: the energy landscape is different for held-out proteins compared with training proteins. Thus, our results in Section 5.3 effectively demonstrate generalization to new interaction potentials.
>Rare events underrepresented in training data
We agree that this is an inherent challenge associated with data-driven methods. However, we have demonstrated experimentally in Section 5.2 that OM optimization is resilient to a degree of data sparsity and still identifies realistic transition paths. As we scale up to larger models like BioEmu [5], we expect resilience to sparsity to get stronger.
>Preferring low-energy interpolations
We can still achieve diversity in sampled transition paths by leveraging stochasticity of the generative model to get diverse initial guesses (Alg. 2, Sec. G) – converged paths thus tend to find diverse local minima. Further, $\tau_\text{initial}$ can be tuned to vary the diversity of the initial guesses and subsequently produced paths.
>Solving CV selection problem
We don’t claim to solve the CV selection problem. We just note that our method does not require CV to operate.
>OM Optimization on completely different force fields or chemical environments
This is an important area for future work, and we believe that using more performant models such as BioEmu [5] as the underlying score estimator would aid in achieving this.
[1] Hasyim et al. JCP (2022)
[2] Rotskoff, et al. PMLR (2022)
[3] Arts et al. JCTC (2023)
[4] Jing et al. NeurIPS (2024)
[5] Lewis et al. arxiv: 2024.12.05.626885 | Summary: The manuscript focuses on transition path sampling (TPS), which involves identifying high-probability paths between two states or points on an energy landscape. The authors combine generative models trained to sample temporally independent states from an energy landscape with the task of transition path sampling. These paths arise from a stochastic differential equation (SDE) constructed from a diffusion or flow-matching model. The authors observe that finding high-likelihood transition paths is equivalent to minimizing the Onsager-Machlup action functional. This connection enables the authors to repurpose pre-trained generative models for TPS. They demonstrate their approach on protein and molecular systems.
Claims And Evidence: The claims made in the manuscript are backed with theoretical analysis and experiments.
Methods And Evaluation Criteria: The method is grounded in strong theoretical results.
Theoretical Claims: The authors state that the SDE arising from Denoising Diffusion Probabilistic Models (DDPM) is a natural candidate for TPS, but not necessarily a good one. Was this experimentally tested? If so, it may be useful to include a reference to the experiment.
>While the denoising (i.e., sampling) process of a DDPM (see Appendix B.1) may appear to be a natural candidate, a closer inspection reveals that it is unsuitable, as it optimizes for different likelihoods at different points along the trajectory. A large portion of the denoising trajectory thus has low likelihood under the data distribution. Therefore, we need to consider an alternative approach.
Experimental Designs Or Analyses: The results are interesting and demonstrate strong performance. However, I believe comparisons against existing methods or simpler baselines are lacking. For example, given access to both the score and probability, a simple Markov Chain Monte Carlo (MCMC) algorithm like the Metropolis-Adjusted Langevin Algorithm (MALA) or Hamiltonian Monte Carlo (HMC) could serve as a baseline. Similar works, like AlphaFlow, are relevant to molecular dynamics, and there exist diffusion and flow-matching generative models for proteins that could be integrated within your framework. This suggestion is to enhance comparisons with existing methods; if you know of any others that would be appropriate, that would be welcome.
Supplementary Material: I mainly reviewed the proofs in Section B.3, "Flow Matching and Score Matching."
Relation To Broader Scientific Literature: The authors cleverly combine existing theoretical findings and adjust flow matching (FM) to their framework. Their work could be used for various applications that already use diffusion models or flow-matching.
Essential References Not Discussed: [1] AlphaFlow could be cited as relevant works.
[1] Jing, B., Berger, B., & Jaakkola, T. (2024). AlphaFold meets flow matching for generating protein ensembles. arXiv preprint arXiv:2402.04845.
Other Strengths And Weaknesses: I found the introduction and background sections well written and very informative. The method is theoretically grounded, and its explanation is clear.
The primary drawback is the lack of extensive comparisons in the experimental sections.
Other Comments Or Suggestions: - > However, these approaches rely on highly specialized training procedures and fail to exploit the growing quantity of atomistic simulation and structural data
It would be helpful to provide an example of these specialized training procedures or at least elaborate on why this is a drawback of existing approaches.
Questions For Authors: - I would like to confirm that the reason it is zero-shot is that the Onsager-Machlup (OM) minimization only occurs at inference? There is no need for a fine-tuning stage?
Repurpose pre-trained generative models for TPS in a zero-shot fashion.
- Have you attempted using the OM for generative tasks in a manner similar to your current approach, but starting with a noise representation?
- It would be beneficial to clarify the differences between your work and [2].
It is mentioned in the limitations, but what are the advantages of your method compared to this one (if I understand correctly, they need to train or fine-tune)?
[2] Du, Y., Plainer, M., Brekelmans, R., Duan, C., Noé, F., Gomes, C. P., Aspuru-Guzik, A., and Neklyudov, K. Doob’s Lagrangian: A sample-efficient variational approach to transition path sampling. arXiv preprint arXiv:2410.07974, 2024.
- In diffusion and flow-matching models, the score of the vector field is usually parameterized by a time variable that interpolates between noise and data. Since, in your case, this initial point is not necessarily noise, which time parameter do you use? I assume the time variable from Eq. 11 will not be used to compute the score $s_\theta(x,\tau)$, especially since $\tau$ needs to fall within the interval originally used to train the model.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: >Comparison against baselines and incorporation of other generative models
Please see our response to reviewer bRih, in which we compare our OM optimization approach on alanine dipeptide with two traditional approaches for transition path sampling: Markov Chain Monte Carlo (MCMC) and metadynamics. We find that OM optimization is several orders of magnitude faster than these approaches and finds transition paths with energy barriers that agree with the reported literature values. We agree that it would also be very interesting to incorporate AlphaFlow and other recent generative models into our OM framework. We have done some preliminary work with AlphaFlow, but found it challenging to work with due to its reliance on pre-trained OpenFold/ESM models, which are extremely memory-intensive. We are actively working to address these challenges in future work.
>Unsuitability of denoising diffusion SDE for transition path sampling
The full direct denoising process starts with maximum time conditioning ($t=T$) at Gaussian noise (i.e. the target of the learned vector field $\epsilon_\theta(x, T)$ is the score of a pure Gaussian). Therefore, optimizing paths to have high likelihood under the denoising SDE would provide a “force field” near the start of the trajectory that just forces it to the origin. Thus the denoising SDE is ill-posed for our setting of TPS, since we want our SDE to produce paths which are high-likelihood under the data distribution at all times, not just at the final denoising step.
>Example of specialized training procedures in prior work and their drawbacks.
Specialized training procedures in past works include solving a Schrodinger Bridge problem via Stochastic Optimal Control, reinforcement learning, or differentiable simulations (see our submission for citations). These training procedures are expensive because they involve running MD simulations during training, and most trajectories fail to reach the target state, yielding sparse rewards and high cost. A simulation-free approach that guarantees endpoint constraints was introduced in [1]. However, all of these techniques require a training process which is **unique to transition path sampling and must be repeated for every new system of interest (since they do not utilize any training data)**, which is expensive. Meanwhile, our approach indirectly leverages the data and compute used to train atomistic generative models (which can be used for many tasks beyond TPS, notably conformational sampling), using a lightweight test-time OM action minimization procedure.
>Comparison to Doobs Lagrangian
Doobs Lagrangian [1] is a data-free method, meaning it relies only on querying the underlying energy/force function, and requires a bespoke training procedure for every molecular system of interest in order to learn the optimal bias potential for TPS. Meanwhile, our approach leverages pretrained generative models off-the-shelf, with no specialized training procedure for TPS, and can be used across chemical space (barring excessive distribution shift from the generative model’s training data), as we demonstrate in our “Generalization to Unseen Tetrapeptides” experiments.
>Using OM optimization for generative tasks
Optimizing the denoising process directly could be feasible, and indeed falls into the OM optimization framework (essentially by changing $\tau_\text{opt}$ between endpoints, similar to $\tau$ in the denoising process). However, since our paper’s setting focuses on optimizing trajectories that are high likelihood under the data distribution throughout the entire trajectory (rather than just at the endpoint), we have not run such experiments here. This is an interesting idea for future work!
>Zero-shot property of OM optimization.
OM optimization only occurs at test-time, and is a gradient descent procedure over the transition path, not over the generative model weights. The generative model weights are completely fixed throughout our procedure, which is why we call it “zero-shot”.
>What time parameter is used in OM optimization?
Due to space constraints, please see our response to reviewer jnQt, who asked a similar question.
[1] Du et al. NeurIPS (2024)
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their response. I don't have additional questions at the moment.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their helpful comments and for increasing their score. | Summary: The paper proposes a way of using a score-based or flow-based generative model, trained to generate molecular configurations, to generate transition paths between metastable configurations. It does so by drawing a relation between the limiting SDE corresponding to the noising-denoising process of a DDPM between two adjacent times and the original SDE coming from the physics of the molecular system and its energy potential. This relationship allows the minimization of the Onsager-Machlup functional to be adapted to the score from the generative model. The paper then proceeds to a numerical evaluation in diverse settings, comparing the proposed approach to the state of the art and showing that it is able to sample paths that look plausible under several different metrics and are comparable with much more compute-demanding simulations.
Claims And Evidence: Yes
Methods And Evaluation Criteria: The proposed datasets are pertinent to the problem at hand.
Theoretical Claims: yes
Experimental Designs Or Analyses: Yes, the experiments are pertinent and show the value of the proposed method. I still have an issue with the lack of discussion of the optimal diffusion times. I think it would be interesting to see, for example in the toy example, the tradeoff between the diffusion time $\tau_{opt}$ and the distance between the wells. As I see it, having more distant metastable states would lead to an increase in the $\tau_{opt}$ used.
Supplementary Material: Yes, B, C and H.
Relation To Broader Scientific Literature: The paper proposes a method relying on pre-trained generative models of molecular configurations for sampling transition paths in a much faster way than other methods proposed in the literature, such as learning the control drift through reinforcement learning or direct simulation. Although it has fewer theoretical guarantees, it can still be a valuable addition.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The paper presents an original idea for using generative models to sample transition paths, which opens new research avenues. The main weakness of the paper is, in my opinion, the lack of further investigation around the score-based model and its trade-offs, and also the fact that the proposed method is rather ad hoc, as stated in the limitations section.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. Could the authors further comment on Figure 7? It seems that in this case flow matching is actually able to sample configurations around the transition area, but the actual OM minimization leads to some erroneous trajectories.
2. Why not simply use Langevin dynamics with the learned score instead of the noising denoising limiting equation? (11)
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > Discussion of optimal diffusion time
A couple of ways that $\tau_\text{opt}$ is chosen:
1. For diffusion models, a small nonzero $\tau$ usually works better than $\tau=0$. Theorem B.1 highlights why, noting $\bar \alpha_0 = 1$. Scores closer to the data are weighted lower in a standard DDPM training process. We have run some cosine similarity tests with the true force over Muller-Brown and Alanine Dipeptide to inform the optimal $\tau_\text{opt}$ used in these settings (presented in [Figure 8r](https://imgur.com/a/UDyWcxK)) and we used the reported optimal times for fast folders [1].
2. Even if $s_\theta(x, \tau_\text{opt})$ matches the true score well for small enough $\tau_\text{opt}$, this vector field can be difficult to optimize over. We have found at times that annealing over the conditioning time $\tau_\text{opt}$ helps the OM optimization: start at a value of $\tau_\text{opt}$ closer to noise, where $s_\theta(x, \tau_\text{opt})$ is generally a smoother vector field, then gradually over the optimization anneal $\tau_\text{opt}$ closer to data. This then motivates us deriving eq. (11) for any latent time, rather than just close to the data distribution.
[1] Arts et al. JCTC (2023)
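The annealing strategy in point 2 above can be written as a simple schedule (the values below are purely illustrative, not from the paper):

```python
# Illustrative annealing schedule for the score conditioning time
# described in point 2 above (all values assumed): start near noise,
# where the learned score field is smoother, and anneal geometrically
# toward the data distribution over the optimization steps.
def tau_schedule(step, n_steps, tau_start=0.5, tau_end=0.01):
    frac = step / max(n_steps - 1, 1)
    # geometric interpolation spends more of the schedule at small tau
    return tau_start * (tau_end / tau_start) ** frac

taus = [tau_schedule(s, 100) for s in range(100)]
```

At each OM optimization step, the score would then be queried as $s_\theta(x, \tau)$ with the current scheduled $\tau$.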
> “Ad-hoc” nature of the method
Please see our response to Reviewer bRih, who also raised a similar concern that the hyperparameters controlling the relative contributions of the path, force, and diffusion terms in the OM action are set in an ad-hoc manner. We have refined our method such that these parameters are now directly interpretable as constants with physical units, rather than arbitrary hyperparameters. This makes choosing these hyperparameters intuitive and constrains the hyperparameter search space. More generally, as presented in the Appendix, our method is rigorously derived from well-known action-minimization principles from statistical mechanics, and the equivalence between the learned score function from generative models and the true force field has a clear theoretical justification in the limit of a large, expressive model trained on comprehensive data.
> Erroneous trajectories in flow matching (Figure 7):
There are a couple factors that make optimized trajectories (even at 0 temperature) look like they’re not following the data distribution:
1. There were fewer plotted samples in order to maintain figure simplicity, but as a result what is not displayed is the fact that the learned distribution does cover a wider band in the transition path, and the converged path is finding the shortest way through this band (hugging the inner wall).
2. The trained flow matching model is not perfect, and has some erroneous local minimum at the center, which gets picked up more at higher temperatures and thus requires a more conservative (lower) $dt$ parameter.
If we increase waypoints and are more careful with the optimization, we can obtain more reasonable trajectories even under a potentially erroneous flow matching model, see the now-provided [Figure 9r](https://imgur.com/a/xxpKHrI).
> Using Langevin dynamics instead of the noising-denoising limiting equation
Eq. (11) is essentially Langevin dynamics with a learned score at a fixed time-conditioning (note the reasoning works at any time in the noising process, where the vector field is the score of a noised distribution rather than the data distribution). Meanwhile, the standard Langevin dynamics used for sampling/denoising varies the score from $s_\theta(x, T)$ to $s_\theta(x, 0)$ throughout the trajectory.
The motivation for using the fixed-time, noising-denoising limiting equation is the following: our desideratum for the SDE is that the entire trajectory generated by the SDE remains high-likelihood under the data distribution. For variable-time Langevin dynamics, this desideratum is not satisfied, as only the converged/steady-state limit of the dynamics produces data samples. Meanwhile, the noising/denoising equation satisfies this criterion, and the limit process is a valid SDE to which we can apply OM optimization. | Summary: The paper introduces Onsager-Machlup (OM) optimization to sample transition paths, claiming three advantages: efficiency, scalability, and flexibility. The OM optimization approach produces transition paths using pre-trained generative models; the core idea is interpreting candidate paths as trajectories of the denoise-noise SDE, allowing tractable computation. Experiments on the 2D Muller-Brown potential and fast-folding coarse-grained proteins show that the method produces realistic paths and also generalizes to unseen tetra-peptides.
## Update after rebuttal
I have confirmed the authors' response to the scalability and efficiency concerns about the proposed method, and have raised the score accordingly.
Claims And Evidence: The paper’s main claim is supported by theoretical arguments (Section 3) and experiments (Section 5). Especially for the efficiency aspect, transition paths are generated using pre-trained generative models, yielding valid metrics relative to the MSM at lower cost.
Methods And Evaluation Criteria: The proposed method and evaluation criteria are suited to the problem of transition path sampling, but additional evaluation criteria may strengthen the authors' claim:
- 2D Muller-Brown: qualitative evaluation is given in the main paper, with additional content in the appendix. A distribution plot of the transition states or highest-energy points in the 2D Muller-Brown would clearly show that the OM optimization works well.
- Fast-folding coarse-grained proteins, tetra-peptides: evaluation based on MSM following MDGen is well done
Theoretical Claims: I’ve gone through the core theoretical claim: the Onsager-Machlup action for paths under an SDE can be used to produce realistic transition paths with pre-trained generative models.
Experimental Designs Or Analyses: The experimental design and analyses are concrete and appropriate.
- 2D Muller brown: well-known synthetic testbed for transition path studies
- Fast-folding protein systems: evaluation and analysis with MSMs
- Tetra-peptides: transition path generation for unseen data, with evaluation and analysis with MSMs
Supplementary Material: The paper is accompanied by extensive material supporting the authors' claim. It seems to be of high quality, filling in details missing from the main paper.
- Appendix A: related works
- Appendix B, C: theoretical derivation and proofs
- Appendix D, E, G: method details
- Appendix F, H, I, J: experiment details and additional results
Relation To Broader Scientific Literature: The key contribution of this paper relates to ‘transition path sampling’, where traditional methods have struggled due to extensive computation. The proposed method seems efficient compared to prior works (however, please check weakness 2).
Essential References Not Discussed: It seems all related works have been comprehensively discussed, with details in Appendix A.
Other Strengths And Weaknesses: **Strengths**
1. Originality
Combining OM optimization with a pre-trained generative model seems genuinely original.
2. Presentation
The paper is well written and logically structured, making it easy to follow!
3. Extensive experiments
Along with the synthetic systems, the fast-folding protein and tetra-peptide experiments are well executed and validate the authors' claims.
**Weaknesses**
1. (Minor) diversity for transition paths
While “diversity” is not rigorously quantified (no explicit diversity metric is given), the authors do generate multiple paths and report that many are unique and all have non-zero probability under the reference MSM.
2. Comparison with MDGen [1]
MDGen is a generative model targeting molecular systems with multiple downstream tasks, transition path sampling being one of them. Like the evaluation in this paper, it also uses MSMs for evaluation. The generalization task for tetra-peptides seems quite similar to MDGen's. Is there any comparison between MDGen and OM optimization, e.g., is OM more efficient than MDGen in terms of GPU hours?
[1] Generative Modeling of Molecular Dynamics Trajectories, NeurIPS 2024
Other Comments Or Suggestions: I do not have any other comments
Questions For Authors: 1. Scalability
The authors highlighted that the proposed method is advantageous in scalability, yet the protein and tetra-peptide experiments use coarse-graining, leading to fewer than a hundred atoms. Since MDGen [1] models tetra-peptides in all-atom detail, I am confused about what the authors imply by scalability. Could the authors provide some details about scalability compared to prior works?
[1] Generative Modeling of Molecular Dynamics Trajectories, NeurIPS 2024
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: > Scalability and efficiency comparison to previous works
To address the concern about only including results on coarse-grained tetrapeptides, we also present OM optimization results on all-atom tetrapeptides, which contain up to 56 atoms, in [Figure 6r](https://imgur.com/a/naQLWDy). We obtain competitive results on the Markov State Model metrics, similar to coarse-grained tetrapeptides in Figure 4 of the paper. We plan to replace Figure 4 with this all-atom result for the camera-ready version of the paper.
Regarding efficiency relative to past works, MDGen [1] requires a TPS-specific training procedure of about 460 GPU-hours, followed by inference on the order of a few seconds per tetrapeptide path (numbers obtained via direct correspondence with the authors). Meanwhile, we utilize pretrained generative models and perform OM optimization using the learned score function, which requires approximately 30 seconds per tetrapeptide path.
To account for the increased memory footprint of larger systems, we use a simple path-batching technique, which splits the discretized path into mini-batches (e.g., splitting a large 5,000-point path into mini-batches of size 100). The batches can be optimized either sequentially or trivially in parallel across multiple GPUs at each step of OM optimization, making our method scalable.
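As a minimal illustration (hypothetical array shapes, not our actual codebase), the batching step amounts to:

```python
import numpy as np

def batch_path(path, batch_size=100):
    """Split a discretized path of shape (n_points, n_atoms, 3) into
    mini-batches that can be optimized sequentially or in parallel."""
    return [path[i:i + batch_size] for i in range(0, len(path), batch_size)]

# Hypothetical 5,000-point path for a 56-atom all-atom tetrapeptide.
path = np.zeros((5000, 56, 3))
batches = batch_path(path, batch_size=100)
print(len(batches))  # → 50
```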
At a broader level, our method is scalable in the sense that we expect continual performance improvements as the underlying atomistic generative models scale up and learn better score functions. This also applies to the growth in size and quality of underlying training datasets (including incorporating experimental data, etc.). We believe that our approach, which requires no TPS-specific training procedure, aligns well with the growing trend of leveraging large-scale, well-tested, general-purpose generative models—a direction already standard in language, speech, and image modeling communities. As high-quality generative models become increasingly available, this synergy will position our method more favorably than existing TPS approaches, which lack such compatibility.
[1] Jing et al. NeurIPS (2024)
> Transition state distribution for 2D Muller Brown
We show the distribution of sampled transition (i.e., highest-energy) states for 2D Muller Brown in [Figure 7r](https://imgur.com/a/CxPwszr). While OM optimization does not capture the full transition-state ensemble, this can easily be obtained by initiating MD simulations along the sampled path.
> Diversity metric
We did not report an explicit diversity metric because, to our knowledge, there is no agreed-upon metric in the TPS literature. While not directly a diversity measure, the reported distributional distances using the Jensen-Shannon Divergence (JSD) between the MSM state distributions visited by the sampled and reference transition paths (see Fig. 3d and 4) serve as a proxy for capturing the appropriate path diversity. If the diversity were under- or over-estimated, this would be reflected in the JSD.
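For concreteness, the JSD between two discrete MSM state distributions can be computed as in this minimal sketch (our evaluation code may differ in normalization details):

```python
import numpy as np

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2) between two discrete distributions;
    0 for identical distributions, 1 for disjoint support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy MSM state occupancies for reference vs. sampled transition paths.
ref = [0.5, 0.3, 0.2]
gen = [0.5, 0.3, 0.2]
print(jsd(ref, gen))  # identical distributions give (near) zero
```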
FedOne: Query-Efficient Federated Learning for Black-box Discrete Prompt Learning | Accept (poster) | Summary: The authors propose FedOne to improve the query efficiency of federated black-box discrete prompt learning for cloud-based LLMs, activating only one client per round for optimal query efficiency. The proposed method is shown to be effective through extensive experiments.
Claims And Evidence: Good.
Methods And Evaluation Criteria: Fair.
Theoretical Claims: Good.
Experimental Designs Or Analyses: Good.
Supplementary Material: Yes, regarding method design.
Relation To Broader Scientific Literature: Good.
Essential References Not Discussed: This paper is cited but not compared against in the experiments:
[1] Sun, Jingwei, et al. "Fedbpt: Efficient federated black-box prompt tuning for large language models." arXiv preprint arXiv:2310.01467 (2023).
Other Strengths And Weaknesses: Strengths:
1. The authors provide theoretical analysis.
2. The authors conduct experiments on multiple datasets.
Weaknesses:
1. The motivation behind query efficiency is unclear. What is the main purpose of optimizing it? I would assume that a server-deployed LLM would have sufficient system capacity and tailored algorithms to handle concurrent queries efficiently, such as through parallelization techniques.
2. The authors make assumptions on client heterogeneity but omit experiments for heterogeneous FL settings.
3. In Table 1, what is the performance of Fed-X, i.e., sampling multiple active clients?
Other Comments Or Suggestions: Please refer to Strengths And Weaknesses.
Questions For Authors: Please refer to Strengths And Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank you for your insightful comments and constructive criticisms. Your feedback has been invaluable in improving the quality and clarity of our manuscript. Below, we address the **Weaknesses and Essential Reference part**.
>**W1**: Motivation behind optimizing query efficiency
The motivation for improving query efficiency is to **reduce the monetary cost and practical constraints associated with training federated black-box prompt learning models using cloud-based LLMs**. Each query to commercial APIs incurs usage fees and is subject to rate limits, both of which scale with the number of active clients and training iterations in Fed-BDPL. (e.g. GPT-4o is \\$2.5/1M input tokens, \\$10/1M output tokens, with rate limits depending on the usage tier.)
Our goal is to make federated black-box prompt tuning cost-effective and scalable in real-world deployments. To that end, our analysis and design focus on the FL framework’s query behavior to cloud-based LLM, rather than how the LLM server handles concurrent API calls.
We will revise the introduction to emphasize the motivation behind optimizing query efficiency.
>**W2**: Experiment on heterogeneity
We provide an additional experiment to **demonstrate FedOne’s performance under varying levels of client heterogeneity**. Following prior works [1, 2], we simulate heterogeneity using a Dirichlet distribution with concentration parameters $\alpha=0.5$ for medium heterogeneity and $\alpha = 0.1$ for high heterogeneity. Each experiment is run three times independently. The complete results, including three figures corresponding to different heterogeneity levels, are available at this anonymous link ( https://anonymous.4open.science/r/ICML-Rebuttal-FedOne-649F/Converge2-Medium_Hetero.png ).
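For illustration, the Dirichlet-based label partition from [1, 2] can be sketched as follows (a minimal sketch for intuition, not our exact experiment code; smaller `alpha` means more heterogeneous client data):

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Assign sample indices to clients; per class, client shares are
    drawn from Dirichlet(alpha), so small alpha skews class ownership."""
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(n_clients))  # class-c shares
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients

labels = np.array([0, 1] * 500)  # toy binary labels
parts = dirichlet_partition(labels, n_clients=10, alpha=0.1)
print(sum(len(p) for p in parts))  # → 1000 (every sample assigned once)
```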
These experiments further support our theoretical results. **Activating a single client per round ($K_*=1$) consistently achieves the highest query efficiency, even in highly heterogeneous settings**. This is because the core intuition behind FedOne remains valid under the heterogeneous case: the query cost to the LLM increases linearly with the number of activated clients (since each client issues one query), while increasing K is not able to provide a linear speedup in convergence rate. Our original submission did not include these heterogeneity results because our primary goal was to theoretically identify and validate the optimal query-efficient strategy for Federated Black-box Prompt Learning. Heterogeneity is outside the core scope of this work.
We appreciate the reviewer’s suggestion, and we will add the heterogeneity experiments to the Appendix C.
>**W3**: Fed-X result for Table 1
The corresponding result is provided at this anonymous link ( https://anonymous.4open.science/r/ICML-Rebuttal-FedOne-649F/kF7d_Fed-X_in_table1.jpg ), and it demonstrates that FedOne-X and Fed-X achieve comparable test metrics. We did not include the test metrics for Fed-X in the main text because the **primary goal of our research is to improve the query efficiency of Fed-BDPL; optimizing test accuracy is beyond the scope of this work**. Our theoretical results also specifically focus on convergence behavior and query efficiency, and do not deal with generalization errors.
We will include the results of Fed-X of Table 1 in Appendix C for completeness.
>**Essential References**: Experiment comparison with FedBPT
Thank you for raising this important point. We would like to clarify that **FedBPT is already included in our experiments under the name Fed-BBT**. *(For context, FedBPT (Sun et al., 2023) applies BBT (Sun et al., 2022) to federated learning, with specific adaptations to enhance performance in FL.)* Specifically, our Fed-BBT baseline implements the FedBPT, where the CMA-ES parameters are aggregated across clients, instead of full model parameters.
We chose the name "Fed-BBT" (instead of "FedBPT") to help the reader intuitively compare methods within our framework. The naming convention **"Fed-BBT vs. FedOne-BBT"** makes it immediately clear that the two methods share the same prompt tuning backbone (BBT) but differ in client activation mechanisms (Fed- vs. FedOne-). In contrast, naming "FedBPT vs. FedOne-BBT" requires additional prior knowledge to recognize that they shared the same backbone.
We will revise Section 4.2 to explicitly state that Fed-BBT is adapted from FedBPT, with BBT as the underlying prompt tuning method.
---
**References**
*[1] Lin, Tao, et al. "Ensemble distillation for robust model fusion in federated learning." NeurIPS 33 (2020): 2351-2363.*
*[2] Yurochkin, Mikhail, et al. "Bayesian nonparametric federated learning of neural networks." ICML. PMLR, 2019.* | Summary: This paper explores Federated Black-Box Discrete Prompt Learning and introduces FedOne, a novel approach that selects a single client per round. The chosen client updates the sampling probability for each token at different positions, optimizing prompt learning in a federated setting. Comprehensive experiments confirm the effectiveness.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The notation in the paper is relatively complex, even for standard federated learning pipelines, as seen in Eq. (1) and (2). The authors could simplify the notation to enhance clarity and readability.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes. Author provides convergence analysis.
Relation To Broader Scientific Literature: Adapting federated learning to Black-Box Discrete Prompt Learning could further enhance prompt tuning performance by leveraging data from diverse sources, which provides the chance for multi-clients to collaboratively learn the large model in a parameter-efficient way.
Essential References Not Discussed: Yes.
Other Strengths And Weaknesses: The author provides detailed theory analysis and experimental comparison.
Weakness.
1. Background Introduction
The rationale for employing Black-Box Discrete Prompt Learning in a federated setting is unclear. Given the popularity of Federated Prompt Learning, as demonstrated in prior works [1,2,3,4], why not leverage standard federated prompt learning approaches? The authors should refine the paper’s logical structure to better justify the proposed approach.
References:
• [1] Tao Guo et al., PromptFL: Let Federated Participants Cooperatively Learn Prompts Instead of Models—Federated Learning in the Age of Foundation Models, IEEE TMC, 2023.
• [2] Guoyizhe Wei et al., Dual Prompt Tuning for Domain-Aware Federated Learning, arXiv preprint arXiv:2310.03103, 2023.
• [3] Hongxia Li et al., Global and Local Prompts Cooperation via Optimal Transport for Federated Learning, CVPR, 2024.
• [4] Hangchao Su et al., Federated Adaptive Prompt Tuning for Multi-Domain Collaborative Learning, AAAI, 2024.
2. Client Selection Strategy
The motivation behind selecting only one client per round is unclear. Why not select multiple clients (e.g., two or three) to balance convergence speed and robustness? The rationale for this design choice should be explicitly discussed, and a convergence comparison with alternative selection strategies would strengthen the justification.
3. Prompt Length Impact
The effect of prompt length is not sufficiently discussed. In Section C1, the authors examine different prompt lengths but do not explain why increasing the prompt length leads to worse performance. A detailed analysis of this phenomenon is needed.
4. Lack of comparison. As far as I know, FedBPT is the closest work to yours. Could you explain the difference in rationale and compare against it?
[1] FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language Models
Other Comments Or Suggestions: Refer to weakness.
Questions For Authors: Refer to weakness.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank you for your insightful comments and constructive criticisms. Below, we address the weaknesses.
>**W1**: Rationale of Fed-BDPL
The rationale for employing **Black-box** Discrete Prompt Learning is grounded in two key real-world constraints:
1. **Lack of Access to Model Internals**: The white-box federated prompt learning approaches (references [1–4] you mentioned) assume direct access to model weights or gradients, which is infeasible in realistic scenarios where we want to leverage the most advanced commercial, closed-source LLMs (e.g., GPT, Claude) that are only accessible via APIs. Our work specifically targets this black-box setting, where prompt tuning is conducted without model access, a practical and increasingly common constraint in modern applications.
2. **Resource Constraints on FL Clients**: In typical FL scenarios, clients are resource-constrained devices (e.g., mobile phones) with limited compute and memory. However, white-box methods ([1–4]) require storing and training the LLM locally, which is computationally infeasible for such devices. In contrast, black-box prompt tuning offloads the computation to the LLM API, making it scalable and deployable in realistic FL scenarios.
We will revise the introduction (par. 3, 4) to make the above rationale more explicit, and ensure that references [1–4] you mentioned are properly cited in the Related Work section.
>**W2**: Motivation of FedOne
The motivation to activate only one client per round is not heuristic, but rather **grounded in our theoretical analysis of query efficiency in Fed-BDPL**, which includes the convergence of Fed-BDPL (Theorems 3.4, 3.5) and query efficiency (Corollary 3.6). Specifically, **Corollary 3.6 demonstrates that setting K=1 achieves optimal query efficiency for reaching an $\epsilon$-solution in the Fed-BDPL framework**. The FedOne framework is directly derived from this theoretical result to achieve optimal query efficiency.
The **intuition** behind FedOne is further elaborated in **Remark 3.7**: the query cost to the LLM increases linearly with the number of activated clients (since each activated client issues one query), while increasing $K_*$ is not able to provide a linear speedup in convergence rate. Consequently, **the marginal gain in convergence rate does not compensate for the linear increase in query cost, making $K_*=1$ the most query-efficient choice under this framework.**
We further empirically validate this in **Figure 2**, where different client selection strategies ($K_*=1,5,10,20,40$) are compared. The results show that activating a single client per round consistently achieves the optimal query efficiency, aligning with our theory.
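As a back-of-the-envelope illustration of this intuition (suppressing the exact constants and heterogeneity terms of Corollary 3.6): if the number of rounds $T(\epsilon)$ required for an $\epsilon$-solution does not decrease linearly in $K_*$, while each round costs $K_*$ queries, then

```latex
T(\epsilon) \;\gtrsim\; \frac{1}{\epsilon^{2}}
\qquad\Longrightarrow\qquad
\mathrm{Queries}(K_*) \;=\; K_* \cdot T(\epsilon) \;\gtrsim\; \frac{K_*}{\epsilon^{2}},
```

which grows linearly in $K_*$ and is therefore minimized at $K_* = 1$.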
>**W3**: The Impact of Prompt Length
While prompt length is not the primary focus, we agree that it is an important factor. As shown in Fig. 3, performance remains relatively stable across a moderate range of prompt lengths, with the average accuracy staying within $0.83\pm0.005$. This suggests **no statistically significant performance drop** when increasing the prompt length.
However, it is worth noting that very short or very long prompts introduce larger variance due to the properties of BDPL.
1. Very short prompts lack sufficient capacity to instruct the LLM effectively.
2. Very long prompts lead to a larger search space and increased training difficulty.
Based on these observations, we selected a prompt length of 20 in our main experiments, as it offers a good trade-off, with the lowest variance. We will clarify this rationale in Appendix C.
>**W4**: Compare with FedBPT
Thank you for raising this important point. We would like to clarify that **FedBPT is already included in our experiments under the name "Fed-BBT"**. *(For context, FedBPT (Sun et al., 2023) applies BBT (Sun et al., 2022) to FL, with specific adaptations to enhance its performance in FL.)* Specifically, our Fed-BBT baseline implements FedBPT, where the CMA-ES parameters, rather than full model parameters, are aggregated across clients.
We chose the name "Fed-BBT" (instead of "FedBPT") to help the reader compare methods intuitively. E.g., using **"Fed-BBT vs. FedOne-BBT"** immediately tells that they share the same backbone but differ in client activation mechanisms (Fed- vs. FedOne-). In contrast, naming "FedBPT vs. FedOne-BBT" requires additional prior knowledge to recognize that they are comparable.
We will revise Section 4.2 to explicitly state that Fed-BBT is adapted from FedBPT, with BBT as the underlying prompt tuning method.
>**W4**: The rationale difference between FedBPT and ours
The main rationale difference between FedBPT and our work lies in the **research focus**: FedBPT focuses on adapting BBT to the federated learning setting, with specific designs to improve optimization performance across clients. In contrast, our work focuses on query efficiency, a crucial and underexplored challenge in federated black-box prompt learning with cloud-based LLMs.
---
Rebuttal Comment 1.1:
Comment: After reading the rebuttal, I decided to decrease my score.
**First**, the motivation based on theoretical analysis remains somewhat unclear. Theorems 3.4 and 3.5 on page 4 only discuss the convergence conditions, but what I care more about is whether the setting itself is reasonable in a federated learning scenario. Although Figure 2 shows query efficiency and validation accuracy, I would suggest also plotting validation accuracy versus training epochs to provide a better understanding of training dynamics. Additionally, you mention using a toy model, but it is unclear why the analysis is conducted on **MNIST**, while your main experiments are performed on datasets such as MNLI, QQP, SST-2, MRPC, CoLA, QNLI, and RTE. Why not perform the theoretical analysis and ablation studies on these actual benchmark datasets? Furthermore, it would strengthen your work to demonstrate that your method scales well across varying numbers of clients. Lastly, explaining how your method handles conflicting objectives or consensus among participating clients—a central issue in traditional federated learning—would be important.
**Second**, as you mention “Resource Constraints on FL Clients,” it is questionable that your experiments are based on RoBERTa-large, which is computationally heavy. Moreover, while you state “Lack of Access to Model Internals,” your paper (page 5) also mentions that “the trainable prompts were placed at different positions in the model depending on the algorithm of the baselines.” This appears contradictory and should be clarified by referring to your own descriptions.
**Third**, the discussion on the difference from FedBPT lacks depth. It is not enough to just list the differences; the paper should clearly articulate why FedBPT fails to address the core challenges targeted by your approach.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your time and effort in reviewing our work. Thank you for your constructive feedback.
## **1st:**
>*Convergence and "whether the setting is reasonable in FL?"*
The federated black-box prompt learning setting is essential when working with **closed-source commercial LLMs, where the white-box prompt learning is infeasible**. It also aligns with real-world FL scenarios involving resource-constrained clients, as **computation is offloaded to the cloud-based LLM server**, making it a more practical and scalable approach for real-world deployments.
Regarding the theoretical analysis on convergence and query efficiency: The motivation to analyze query efficiency under the federated black-box setting arises from the fact that **each query to the cloud-based LLM incurs monetary cost and latency, which has become a dominant factor in the overall system cost**. Our analysis is designed to directly address this practical challenge.
>*Fig. 2, validation accuracy vs. training epochs*
Thank you for your suggestion. We add plots of validation accuracy vs. epochs and vs. total API calls, extending the experiment shown in Fig. 2. The plots are presented side by side to better illustrate the training dynamics. The results are available at ( https://anonymous.4open.science/r/ICML-Rebuttal-FedOne-649F/Fig2_Valid-Epoch-API.png ).
>*MNIST and GLUE benchmark. Why not perform the theoretical analysis and ablation studies on these actual benchmark datasets?*
We would like to point out that **the ablation study on varying the number of activated clients using the SST-2 dataset has already been provided in Appendix C.1 (as noted in footnote 3)**. The results consistently show that activating one client per round yields the best query efficiency, aligning with our theoretical analysis.
We chose MNIST as the toy experiment due to its familiarity within the ML community, allowing us to illustrate the core intuition of FedOne in a simple and accessible manner to researchers across various subfields of ML and FL.
> *How does FedOne scale with varying numbers of clients?*
Thank you for your suggestion, we add one more experiment on varying the total number of clients. The results are available at ( https://anonymous.4open.science/r/ICML-Rebuttal-FedOne-649F/Varying_total_number_of_client.png ).
>*Conflicting objectives or consensus among clients*
We would like to clarify that the problem of "conflicting objectives and client consensus" is **beyond the scope of our study**.
This problem typically arises in learning frameworks involving **multiple distinct objectives**, such as in personalized FL (client-specific objectives), multi-task learning (task-specific objectives), or in pretraining–finetuning conflicts (transferred objective). These settings primarily focus on enhancing generalization across different tasks or clients.
In contrast, our work focuses on the convergence and query efficiency of the system. Accordingly, our formulation assumes a **shared global optimization objective** for the entire FL system (Eq. 1). Therefore, the study of conflicting objectives among clients is beyond the scope of our study.
## **2nd:**
>*Experiments on RoBERTa-large are computationally heavy.*
In that experiment, when using the **black-box baselines**, the "heavy computation" associated with RoBERTa-large is **offloaded to the cloud-based LLM server**. RoBERTa-large is treated as a frozen model hosted by the cloud-based LLM server, serving purely as a black-box oracle without exposing any model internals to the clients. This setup significantly reduces the computational burden on FL clients and aligns with the “Resource Constraints on FL Clients.”
>*"The trainable prompts were placed at different positions in the model"*
We apologize for the confusion, and we will modify this part to prevent ambiguity. This sentence mainly refers to the white-box baselines.
In the white-box baselines, the LLM is stored and executed locally on each client. In this setting, trainable prompts are placed at different positions within the model, depending on the algorithm used (Prompt Tuning or Prefix-Tuning v2).
In the black-box baselines, the LLM is hosted on the cloud-based server, and clients have no access to model internals. Instead, they optimize local parameters to generate prompts, which are evaluated by querying the cloud-based LLM, which fully adheres to the black-box learning paradigm.
## **3rd:**
>*FedBPT*
We will further clarify why FedBPT fails to address the core challenges of our work.
FedBPT does not explicitly consider or optimize the query cost associated with cloud-based LLMs ("Inference API" in their paper). **Neither their theoretical analysis nor their experiments quantify or minimize the number of queries to the Inference API**. In contrast, our work explicitly targets query efficiency, providing both theoretical insights and a practical framework designed to minimize the number of LLM queries. | Summary: The paper introduces a federated learning (FL) framework designed to improve the query efficiency of Black-Box Discrete Prompt Learning (BDPL) when interacting with cloud-based Large Language Models (LLMs). Traditional federated black-box prompt tuning approaches incur high query costs due to multiple clients querying the cloud-based LLM in each training round. To address this, the authors propose FedOne, a specific case of FedAvg framework that activates only one client per round.
Claims And Evidence: The convergence of Fed-BDPL (Theorem 3.4, Corollary 3.5). Activating one client per round in the Fed-BDPL framework can achieve an $\epsilon$-solution with the fewest queries (Corollary 3.6).
Methods And Evaluation Criteria: The proposed method is well-motivated and aligns with intuitive reasoning. The evaluation follows standard practices. However, further experiments on non-IID settings would strengthen the generalizability of the results.
Theoretical Claims: I checked the proof of Theorem 3.4. It makes sense to me.
Experimental Designs Or Analyses: The experiment looks pretty standard (benchmark, baselines). It uses the GLUE dataset and demonstrates results on the white-box RoBERTa and the black-box GPT-3.5, covering standard tuning methods such as prompt-tuning, prefix-tuning, and black-box tuning. Additionally, it includes an analysis of computational and communication costs. The toy experiment also aligns well with the theoretical findings.
Supplementary Material: Yes, I reviewed the supplementary material, including proof and supplementary experiment.
Relation To Broader Scientific Literature: The backbone of this paper is Fed-BDPL (Lin et al., 2023), which applies BDPL (Diao et al., 2022) to FedAvg (McMahan et al., 2017).
1. This paper identifies the previously overlooked query cost problem in cloud-based LLMs within Fed-BDPL.
2. This paper presents the first convergence analysis of Fed-BDPL (Theorem 3.4) and further explores query efficiency and convergence behavior (Corollary 3.6), providing a rigorous theoretical foundation for fed-BDPL.
3. Building on these theoretical results, the authors demonstrate that activating only one client per round achieves optimal query efficiency in Fed-BDPL, leading to the proposed FedOne framework.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. Though the idea of FedOne is quite simple, the authors provide a rigorous proof by deriving the explicit form of the number of queries required to achieve an $\epsilon$-solution (Corollary 3.6), effectively justifying the underlying principles of the approach. This finding is interesting and novel.
2. The paper identifies a previously overlooked research problem: the substantial query cost to cloud-based LLMs under conventional FedAvg.
3. The paper presents the first theoretical analysis of Fed-BDPL, providing a rigorous foundation for understanding its convergence and query efficiency. This analysis provides valuable insights in the optimization of Fed-BDPL.
4. The presentation is clear. The core idea is simple, with no unnecessary components added to the framework to artificially increase system complexity. Theoretical analysis and algorithmic design are seamlessly integrated, contributing to the paper's overall coherence.
Weaknesses:
1. While heterogeneity is not the research focus of this paper, a discussion on heterogeneity and its potential impact on the FedOne framework could be further explored in the theoretical analysis. This would provide deeper insights into how FedOne performs under varying degrees of client heterogeneity. I am very curious about this question.
2. The experiment is also solely based on the IID case. The theoretical analysis suggests that under bounded client heterogeneity, one client is most query-efficient. I suggest that the authors also include additional experiments on heterogeneous data to evaluate whether the FedOne framework remains efficient under some level of heterogeneity, which would make the experiment section better aligned with the theory.
Other Comments Or Suggestions: 1. #016, Large Language Model (LLM).
2. For table 1 you can also bold the highest, which makes a consistent format as table 3.
3. #220, footnote 2 is incomplete.
4. #325, in table 1. We observed...
Questions For Authors: How can heterogeneity affect the result of Corollary 3.6, FedOne? How does heterogeneity affect the results in the experiments?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and for recognizing the contribution of our work. Your feedback has been invaluable in enhancing the quality and clarity of our manuscript. We reply to the weaknesses and questions.
>**W1&Q1**: Impact of heterogeneity on theoretical analysis
Thanks for pointing that out. We will add more discussion on the heterogeneity in the theoretical analysis.
Regarding the theoretical result under heterogeneity, the conclusion that activating one client per round yields the highest query efficiency in Fed-BDPL still holds. However, the trade-off is a slower convergence rate per round, as fewer clients contribute updates in each iteration.
1. Theorem 3.4, Theorem 3.5, and Corollary 3.6 explicitly consider bounded client heterogeneity, and the result that **$K_*=1$ achieves optimal query cost remains valid in this context**. This is because the core intuition behind FedOne continues to hold: the query cost to the LLM increases linearly with the number of activated clients (as each client issues one query), while increasing $K_*$ is not able to provide a linear speedup in convergence ([1,2]).
2. The trade-off is that fewer activated clients per round lead to a slower convergence rate (e.g., $O(\frac{1}{\sqrt{K T}})$ convergence in [1]), even though it improves overall query efficiency.
>**W2&Q2**: Experiment on heterogeneity
We have also included an additional experiment to demonstrate **FedOne’s performance under varying levels of client heterogeneity**, supporting the points discussed in W1&Q1. Following prior works [3, 4], we simulate heterogeneity using a Dirichlet distribution with concentration parameters $\alpha=0.5$ for medium heterogeneity and $\alpha = 0.1$ for high heterogeneity. Each experiment is run three times independently. The complete results, including three figures corresponding to different heterogeneity levels, are available at the following anonymous link ( https://anonymous.4open.science/r/ICML-Rebuttal-FedOne-649F/Converge2-Medium_Hetero.png ). We will include the above experiments in Appendix C.
The experiment shows that across all levels of heterogeneity, when the trials converge to a similar stationary point (i.e., approaching the same validation accuracy), **activating a single client per round ($K_*=1$) consistently achieves the lowest query cost**.
The trade-off is that:
1. Activating fewer clients increases variance in training, which can lead to greater fluctuations.
2. Activating fewer clients also lowers the convergence rate, and conversely, increasing K can speed up convergence at the cost of higher query overhead.
Based on the above trade-off established in our work, practitioners can tune $K_*$ according to their specific system constraints and performance goals.
Despite these trade-offs, the **primary objective of our work is to identify the optimal query-efficient strategy within the Fed-BDPL framework that still guarantees convergence**. Both our theoretical analysis and empirical results demonstrate that activating a single client per round achieves this goal, providing the best balance in settings constrained by the high cost and rate limits associated with querying cloud-based LLMs.
>**Other comments**
Thank you very much for pointing out these helpful corrections! We have revised them accordingly.
---
References
*[1] Haddadpour, Farzin, and Mehrdad Mahdavi. "On the convergence of local descent methods in federated learning." arXiv preprint arXiv:1910.14425 (2019)*
*[2] Li, Xiang, et al. "On the convergence of fedavg on non-iid data." ICLR 2020.*
*[3] Lin, Tao, et al. "Ensemble distillation for robust model fusion in federated learning." Advances in neural information processing systems 33 (2020): 2351-2363.*
*[4] Yurochkin, Mikhail, et al. "Bayesian nonparametric federated learning of neural networks." International conference on machine learning. PMLR, 2019.* | Summary: This paper introduces a federated learning framework for black-box discrete prompt learning (BDPL), specifically suitable for cloud-based LLMs. The core idea of FedOne is to optimize query efficiency by degrading the traditional FedAvg algorithm to activate only a single client per round. The authors claim to provide the first theoretical analysis of query efficiency in federated BDPL, demonstrating that FedOne achieves optimal query efficiency in this context. Empirical results from numerical experiments are presented to support these theoretical findings, showing significant improvements in query efficiency compared to existing federated black-box prompt tuning approaches.
## update after rebuttal
I feel my concerns are addressed and I have increased my score.
Claims And Evidence: The paper's central claim about the optimality of $K^*=1$ for query efficiency.
E1: The theoretical analysis in Sec. 3 provides a foundation for the claim: the authors derive the convergence rate of Fed-BDPL (Corollary 3.5) and the query complexity function (Corollary 3.6), showing that its minimum lies at $K^* < 1$, so the optimal integer choice is $K^* = 1$.
E2: The empirical validation includes
- A toy experiment demonstrating that $K^*=1$ achieves better query efficiency on MNIST data
- Evaluations on GLUE benchmark tasks.
The claim about comparable performance despite reduced query cost is well-supported by the experimental results in Table 1 and Table 3, where FedOne-BDPL and FedOne-GS-BDPL achieve performance metrics similar to other methods while reducing queries by orders of magnitude.
Methods And Evaluation Criteria: The proposed method, FedOne, which is essentially FedAvg with client selection restricted to one client per round, is a sensible approach to potentially improve query efficiency. By activating only one client, the number of queries to the cloud-based LLM per round is directly reduced.
For evaluation, this paper focuses on query efficiency as a primary criterion. The authors also consider query cost for cloud-based LLM APIs.
For benchmark datasets, GLUE is a standard choice for evaluating LLM performance across diverse tasks.
Theoretical Claims: I did not verify the details in the proof. I do not find apparent flaws with high-level theoretical analysis.
Experimental Designs Or Analyses: The toy experiment (Figure 2) appropriately demonstrates the relationship between query efficiency and $K^*$ in a controlled setting.
For the GLUE benchmark evaluation, the use of 100 clients with $k$-shot datasets simulates an FL scenario, and the hyperparameter tuning with grid search is thorough and follows standard practice.
I appreciate that the authors also provided computational efficiency experiment (table 2), including direct comparison of white-box vs. black-box approaches.
Supplementary Material: No supplementary material other than the appendix is provided.
Relation To Broader Scientific Literature: This paper should be positioned within three interconnected areas: prompt tuning for LLMs, federated learning, and federated prompt tuning.
Essential References Not Discussed: I am not an expert in FL, and I am not able to identify clear missing references.
Other Strengths And Weaknesses: I summarize the strengths and weaknesses I identified below. They may overlap with the previously discussed points:
### strengths
1. The paper introduces a counter-intuitive but well-substantiated insight that fewer activated clients lead to better query efficiency in federated black-box prompt learning.
2. The theoretical analysis establishes connections between convergence rates and query complexity.
3. The experimental validation covers both synthetic data and real-world LLM APIs.
4. The topic being studied is timely and important.
### Weaknesses:
1. Limited analysis of the impact of heterogeneity across clients.
2. The analysis assumes uniform query costs across all clients, but in practice, query complexity might vary based on input length, client location, and other factors.
3. It appears that the paper does not explore adaptive strategies for determining $K^*$ based on system conditions.
Other Comments Or Suggestions: N/A.
Questions For Authors: 1. How does the FedOne approach perform when client data distributions are highly heterogeneous? Since only one client is activated per round, does this potentially slow convergence in highly non-IID settings compared to traditional FL approaches?
2. Have you explored adaptive strategies for setting K* based on observed convergence patterns or system conditions?
3. The paper assumes uniform query costs across clients. How would varying query costs (due to different prompt lengths, computation time, etc.) affect the theoretical analysis and the optimality of $K^*=1$?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and for recognizing the contribution of our work. Your feedback has been invaluable in enhancing the quality and clarity of our manuscript. We reply to the questions and weaknesses.
>**Q1&W1**: Impact of heterogeneity
When the client's data distribution is highly heterogeneous, **the theoretical result that activating one client per round yields the highest query efficiency in Fed-BDPL still holds**. This is because the core intuition behind FedOne remains valid under such conditions: the query cost to the LLM increases linearly with the number of activated clients (since each activated client issues one query), while increasing $K_*$ does not provide a linear speedup in convergence ([1, 2]).
We have also included an additional experiment to evaluate FedOne's performance under highly heterogeneous client distributions. Following prior works [3, 4], we simulated heterogeneity using a Dirichlet distribution with the concentration parameter $\alpha=0.1$, representing a highly non-IID setting. The result demonstrates that, **even under highly non-IID conditions, activating a single client per round ($K_*=1$) still achieves the highest query efficiency**, which is aligned with the theoretical result. The result figure can be found at this anonymous link ( https://anonymous.4open.science/r/ICML-Rebuttal-FedOne-649F/Converge3-High_Hetero.png )
We will include this experiment in Appendix C.
>**Q1**: Potentially slow convergence
As demonstrated in [2], activating more clients per round under non-IID settings can improve the convergence rate, but it does not yield a linear speedup. By setting $K_*=1$ in FedOne, **our approach intentionally trades off this marginal gain in convergence rate in favor of query efficiency**.
However, the primary goal of our work is not to maximize convergence rate per round, but to **identify an optimal query-efficient strategy within the Fed-BDPL framework that still guarantees convergence**. Both our theoretical analysis and empirical results confirm that activating a single client per round achieves this objective, offering the best trade-off when considering the high cost and rate limits associated with querying cloud-based LLMs.
We will add a discussion of this trade-off in the Appendix to provide a more balanced view of the design choice.
>**Q2&W3**: Adaptive strategy of $K_*$
Regarding the adaptive strategy, this work provides **two key insights into the trade-off** between the number of activated clients $K_*$, convergence speed, and query efficiency:
1. Increasing $K_*$ leads to higher query costs (scaling linearly with $K_*$) while yielding only sublinear improvements in convergence rate.
2. Reducing the number of activated clients $K_*$ improves query efficiency but may result in slower convergence. Notably, setting $K_*=1$ achieves optimal query efficiency.
Our research provides the above principles for tuning $K_*$, enabling practitioners to adjust $K_*$ according to their specific system constraints and performance objectives.
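The interplay of these two principles can be illustrated with a toy cost model (purely illustrative; the sublinear speedup exponent below is an assumed stand-in for the rates in our analysis): if the number of rounds to reach a target accuracy shrinks sublinearly in $K_*$, total queries $K_* \times \text{rounds}(K_*)$ still grow with $K_*$, so $K_*=1$ minimizes query cost.

```python
def total_queries(K, base_rounds=1000.0, gamma=0.5):
    """Toy model: rounds to converge shrink sublinearly (K**-gamma, gamma < 1),
    while per-round query cost grows linearly in the activated clients K."""
    rounds = base_rounds * K ** (-gamma)
    return K * rounds  # total LLM queries

costs = {K: total_queries(K) for K in (1, 4, 16, 64)}
best = min(costs, key=costs.get)  # K = 1 under this toy model
```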
>**Q3&W2**: Uniform query cost assumption
We acknowledge that query costs may vary in practice due to factors like input length or client-specific characteristics. However, the uniform cost assumption allows us to simplify the theoretical analysis without significantly compromising realism.
Since clients are randomly selected at each round, **variations in individual query costs are expected to average out over time**. Even if we modeled client query costs as a distribution, the inherent randomness in selection would mitigate the overall impact. Therefore, the uniform cost assumption offers a practical balance between analytical tractability and real-world applicability.
We will add the above discussion to the manuscript.
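The averaging argument can also be checked with a quick Monte Carlo sketch (the per-client costs below are arbitrary illustrative values, not measurements from our experiments):

```python
import random

random.seed(0)
client_costs = [random.uniform(0.5, 1.5) for _ in range(100)]  # heterogeneous per-query costs
rounds = 10_000
sampled = [random.choice(client_costs) for _ in range(rounds)]  # one random client per round
mean_cost = sum(sampled) / rounds
true_mean = sum(client_costs) / len(client_costs)
# By the law of large numbers, mean_cost approaches true_mean as rounds grow,
# so the uniform-cost assumption holds on average over training.
```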
---
References
*[1] Haddadpour, Farzin, and Mehrdad Mahdavi. "On the convergence of local descent methods in federated learning." arXiv:1910.14425 (2019)*
*[2] Li, Xiang, et al. "On the convergence of fedavg on non-iid data." ICLR 2020.*
*[3] Lin, Tao, et al. "Ensemble distillation for robust model fusion in federated learning." NeurIPS 33 (2020): 2351-2363.*
*[4] Yurochkin, Mikhail, et al. "Bayesian nonparametric federated learning of neural networks." ICML. PMLR, 2019.*
---
Rebuttal Comment 1.1:
Comment: I thank the authors for providing additional theoretical analysis and addressing my questions. I have no more concerns and think this is a good paper to be accepted. I have increased my score as well.
---
Reply to Comment 1.1.1:
Comment: **We are truly grateful for your thoughtful and in-depth discussions!**
**Your support means so much to us!** | null | null | null | null | null | null |
Overcoming Vocabulary Mismatch: Vocabulary-agnostic Teacher Guided Language Modeling | Accept (poster) | Summary: This paper proposes Vocabulary-agnostic Teacher Guided Language Modeling (VocAgnoLM) for guiding the training of smaller student models with large teacher models. The method aims to bridge the gap caused by vocabulary mismatch between different models. The proposed approach comprises two key components, Token-level Lexical Alignment and Teacher Guided Loss, both of which contribute to the performance gains.
Claims And Evidence: The authors argue that the perceived vocabulary mismatch is, in fact, a mismatch of tokens extracted by different models. Presenting this as a vocabulary mismatch appears to introduce a misleading notion.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are reasonable.
Theoretical Claims: The authors lack corresponding theoretical analysis, and the proposed solution tends to be more engineering-oriented.
Experimental Designs Or Analyses: Yes, I have checked the validity of experimental designs, and they are acceptable.
Supplementary Material: Yes, the authors provide the pretraining codes.
Relation To Broader Scientific Literature: There's nothing to add
Essential References Not Discussed: No
Other Strengths And Weaknesses: **Strengths**
The presentation of the paper is good, with each section being clear. The focused problem is well-articulated, with an in-depth analysis and preliminary tiny experiments to illustrate it.
The suggested method achieves effective performance improvements.
The scalability with different teachers has also been sufficiently validated.
**Weaknesses**
The contributions of this paper are insufficient. Firstly, the Token-level Lexical Alignment appears to be an engineering operation, and the theoretical analysis of the effectiveness of the proposed mapping is lacking. Secondly, the technical contribution of weighting different tokens is not substantial enough.
Other Comments Or Suggestions: No
Questions For Authors: Please refer to the Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for the reviewer’s helpful feedback. We are pleased that the reviewer recognized the motivation and analysis of our work. We provide our response as below.
### **Clarification of the “Vocabulary Mismatch” Notion.** (Claims And Evidence)
We appreciate the chance to clarify our terminology and the motivation behind it.
- **Tokenizer Differences Originate from "Vocabulary" Choices:**
The issue of "tokens extracted by different models" fundamentally stems from the fact that these models use different tokenizers. When building a tokenizer, the primary goal is to construct a vocabulary. Since different model series typically start with tokenizers based on distinct vocabularies, we believe that the fundamental reason for the observed mismatch lies in the vocabulary itself. Thus, we refer to this problem as a *vocabulary mismatch*.
- **"Vocabulary" Mismatch Leads to "Token" Sequence and Logit Mismatches:**
The consequences of vocabulary mismatch manifest as differences in token sequences and divergences in logit distributions. Since the token sequence is determined by the tokenizer’s vocabulary, we treat “token sequence mismatch” as a downstream result of vocabulary mismatch. Moreover, in transformer-based models, the vocabulary size directly determines the dimensionality of the final logit layer. Therefore, “logit distribution divergence” is more precisely attributed to differences in vocabulary, rather than just token-level differences.
- **Terminological Consistency with Prior Work:**
A large body of prior work [1,2,3] has used terms in a way that implicitly refers to mismatches in vocabulary. In this context, using the term *vocabulary mismatch* aligns with established usage in the field. The suggestion that it is misleading may in fact run counter to prevailing terminology practices in related literature.
[1] *Cui et al., Multi-level optimal transport for universal cross-tokenizer knowledge distillation on language models. In AAAI 2025.*
[2] *Boizard et al., Towards cross-tokenizer distillation: the universal logit distillation loss for LLMs. TMLR, 2025.*
[3] *Xu et al., Bridging the gap between different vocabularies for llm ensemble. In NAACL 2024.*
### **Highlighting Our Contribution on Vocabulary Mismatch in Pretraining.** (Weaknesses)
We would like to highlight that **addressing *vocabulary mismatch* in the pretraining stage** is one of the main contributions of our work.
While knowledge distillation during pretraining is a well-established technique, in practice, vocabulary mismatch often limits the use of diverse teacher models.
To our best knowledge, this aspect remains underexplored in the pretraining literature, and we believe **this is the first work** to directly address vocabulary mismatch in this context.
VocAgnoLM offers **a simple yet effective** solution to this problem, with practical benefits as follows:
- **Practicality of Token-Level Lexical Alignment**
- While our token-level lexical alignment may not constitute a novel theoretical contribution, we argue that this is precisely what makes it a **practically effective** solution to the vocabulary mismatch problem. Our proposed method performs lexical alignment based on character offsets, which theoretically ensures that every student token must be contained within one or more teacher tokens. In practice, as shown in Figure 6, we observe 100% Intersection-of-String (IoS), demonstrating that two token sequences can be efficiently and accurately aligned using only a simple approach.
- Furthermore, this practical nature allows for both *online mapping* during training and *offline mapping* during preprocessing. This flexibility is particularly advantageous in large-scale pretraining scenarios, where optimization at the training stage is crucial.
- We highlight the simplicity and efficiency of our mapping function, which supports various weighting options in the aggregation step. While we only explore a simple aggregation function in this paper, we leave the exploration of more advanced variants as future work.
- **Advantages of Loss-Based Teacher Guidance for Logit Distribution Divergence**
- Even after resolving the token sequence mismatch, the issue of logit distribution divergence remains. Traditional KL divergence cannot adequately address this, and existing distance-based methods such as ULD, can incur additional information loss during alignment.
- In contrast, our approach is based on the teacher’s loss, which allows us to circumvent the problem arising from the dimensional mismatch between vocabularies.
In summary, we **decompose the issues caused by vocabulary mismatch into two sub-problems (token sequence mismatch / logit distribution divergence)** and design separate solutions for each. We demonstrate that applying these techniques in the pretraining stage leads to effective improvement.
---
Rebuttal Comment 1.1:
Comment: Thank you for the author’s response. They partally address my concerns. I also reviewed the comments from other reviewers, and I believe that the contribution of this manuscript is not sufficient for publication in ICML. I maintain my original rating. | Summary: The paper addresses the challenge of vocabulary mismatches between teacher and student language models during knowledge distillation. I believe this is a well-motivated and important topic since it is difficult to do the logits-level distillation between student and teacher models with different tokenizers. To overcome this, the authors propose the method that consists of two main components:
- Token-level Lexical Alignment: A procedure to map student tokens to the corresponding teacher tokens using character-level offsets, thereby aligning token sequences despite different vocabularies.
- Teacher Guided Loss: A reweighting scheme that leverages the teacher’s token-level loss as guidance for training the student, overcoming divergences in logit distributions.
The approach is validated on a math-focused pretraining corpus (OpenWebMath) and evaluated on a suite of mathematical reasoning benchmarks, showing significant performance improvements—especially when using teacher models with very low vocabulary overlap.
Claims And Evidence: I believe the claims (the motivation and effectiveness of the design) are generally well-supported by the experiments. The potential questions are listed in the other section. The experimental results indicate that VocAgnoLM improves performance by up to 46% compared to naive continual pretraining and outperforms logit distribution alignment baselines (e.g., KLD and ULD). Detailed ablation studies are provided to show how choices in token alignment granularity, handling of unmapped tokens, and aggregation strategies affect performance.
Methods And Evaluation Criteria: The proposed methods make sense to me while they are somehow heuristic and lack theoretical support for their effectiveness.
a) Token-level Lexical Alignment: 1) Utilizes character-level offsets from both teacher and student tokenizers. 2) Employs binary search techniques to establish a one-to-many mapping for each student token.
b) Teacher Guided Loss: 1) Computes the loss for a student token and aggregates the losses of its corresponding teacher tokens. 2) Applies a top-k threshold strategy to reweight the importance of each student token based on the discrepancy between the student and teacher losses.
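My understanding of the offset-based alignment in (a) can be sketched as follows (an illustrative reconstruction, not the authors' code; it assumes each tokenizer exposes per-token `(start, end)` character offsets):

```python
import bisect

def align_tokens(student_offsets, teacher_offsets):
    """One-to-many map from each student token to the teacher tokens whose
    character spans overlap it, via binary search over teacher start offsets."""
    teacher_starts = [s for s, _ in teacher_offsets]
    mapping = []
    for s_start, s_end in student_offsets:
        # Locate the first teacher token that may overlap the student span.
        i = max(bisect.bisect_right(teacher_starts, s_start) - 1, 0)
        matched = []
        while i < len(teacher_offsets) and teacher_offsets[i][0] < s_end:
            if teacher_offsets[i][1] > s_start:  # spans overlap
                matched.append(i)
            i += 1
        mapping.append(matched)
    return mapping

# "unhappiness": student tokens ["un", "happiness"], teacher tokens ["unhap", "piness"]
print(align_tokens([(0, 2), (2, 11)], [(0, 5), (5, 11)]))  # [[0], [0, 1]]
```

Since both tokenizations cover the same underlying characters, every student token maps to at least one teacher token.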
Although the evaluation is mainly concentrated on the math/reasoning tasks, I believe they are representative. They compare performance on multiple math reasoning benchmarks (e.g., GSM8K, MATH, SVAMP, ASDiv, MAWPS, and others). Comparisons against standard distillation approaches (KLD and ULD) are also provided. Ablation studies that assess the impact of different alignment granularities and token handling strategies, which demonstrate the effectiveness of the design. The weakness of the evaluation is mentioned in the previous section.
Theoretical Claims: N/A
Experimental Designs Or Analyses: It is appreciated that the author compares the performance of VocAgnoLM to KLD-based and ULD-based distillation, and they also provide ablation study to demonstrate the effectiveness of the design.
However, the evaluation is not comprehensive enough to demonstrate the bounds of the effectiveness of the proposed methods. We do want to understand the performance gain/loss of the proposed methods, but the comparison seems not entirely fair. It would be great if the following two comparisons were provided: 1) learn from the logits of advanced models (e.g., qwen-math) vs. learn from their generated tokens; 2) two similar student models (with different tokenizers) learn from the same teacher (or teachers with similar performance): one with an identical tokenizer and one with a different tokenizer. Through these two experiments, we can understand the bound/limit of the proposed methods, which would be very helpful.
Also, the computational complexity can be high; it would be great if more comprehensive and explicit data points could be provided.
Supplementary Material: The authors provide their implementation of pretraining with the token mapping and logits alignments.
Relation To Broader Scientific Literature: I believe the proposed methods are important and relevant to general LLM pretraining, especially for the knowledge distillation domain. Previous works mainly consider distillation between models with the same tokenizer. The recent work 'Towards cross-tokenizer distillation: the universal logit distillation loss for LLMs' proposes methods for cross-tokenizer distillation. In this paper, the proposed methods are shown to outperform existing cross-tokenizer KD methods and improve performance through the KD process.
Essential References Not Discussed: There is further literature on knowledge distillation (not necessarily cross-tokenizer KD), such as the KD pretraining exploration (https://arxiv.org/pdf/2410.16215) and the Nvidia Minitron work (https://arxiv.org/pdf/2407.14679). It would be good to include more prior work on KD to further emphasize the importance of KD-based pretraining.
Other Strengths And Weaknesses: * Strength:
Comprehensive Analysis: Extensive ablation studies and comparisons with existing methods (KLD, ULD) strengthen the evidence for the proposed approach.
Practical Impact: By enabling the use of high-performing teacher models regardless of vocabulary, the method broadens the applicability of teacher-guided pretraining, particularly in domain-specific settings like mathematics.
* Weakness:
The methods are heuristic, and we are not sure how much the proposed methods will compromise/influence the KD performance, compared to the standard KD with the same tokenizer.
The topic is of great importance and I will raise my score if concerns can be properly addressed.
Other Comments Or Suggestions: Please kindly refer to the weakness discussed above.
Questions For Authors: Please kindly refer to the question in the previous sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s insightful comments and valuable suggestions. We are glad that the reviewer acknowledged the importance of the problem and appreciated our comprehensive analysis. Below, we provide our detailed responses to the points raised.
### **Distinction between "Teacher-generated Knowledge" and "Teacher-Guided Language Modeling"** (Exp Designs or Analyses #1)
We would first like to clarify that the scope of this work specifically targets the pretraining stage. Unlike (sequence-level) knowledge distillation [1] approaches trained on generated tokens in the fine-tuning stage, it is difficult to define the “generated tokens” of the teacher model during the pretraining stage. In the extreme case, a teacher model could reproduce the same input corpus or generate an entirely new 15B corpus using the teacher model’s internal knowledge.
Besides, training the student model on new knowledge generated by the teacher model differs from our objective of teacher-guided language modeling within a given corpus. While such an approach goes beyond the scope of this work, we appreciate the suggestion and believe it is a valuable direction worth exploring further.
[1] *Kim et al., Sequence-Level Knowledge Distillation, In EMNLP 2016.*
### **Comparison with the standard KD using same tokenizer teacher** (Exp Designs or Analyses #2, Weakness)
We appreciate the reviewer’s suggestion for further experiments. We present an additional experimental result on 2B tokens that offers relevant insight. Specifically, we compare our method against the standard KD approach using two teacher models of comparable performance: MetaMath-Llemma-7B (which shares the same tokenizer as the student model) and Mistral-ProXMath-7B (which uses a different tokenizer).
As shown in the table below, our method performs better even when using a teacher model with a different vocabulary, despite both two teacher models achieving the same performance on the AVG(w/o SAT) metric.
| Model | Tokenizer w/ Student | Method | AVG | AVG (w/o *) | AVG (w/o SAT) |
|---------------------------|----------------------|--------|------|-------------|----------------|
| ***Teacher Model Performance*** |||||
| MetaMath-Llemma-7B | Same | - | 52.2 | 62.1 | 57.2 |
| Mistral-ProXMath-7B | Different | - | 59.2 | 58.9 | 57.2 |
| ***Student (S) Model Performance*** |||||
| S + MetaMath-Llemma-7B | Same | KLD | 14.2 | 14.3 | 14.4 |
| S + Mistral-ProXMath-7B | Different | Ours | 16.6 | 16.6 | 16.3 |
We also clarify that our Token-level Lexical Alignment is a deterministic and straightforward solution based on character offset. While simple, it ensures 100% overlap (Figure 6), and plays a critical role in enabling reliable teacher guidance across different tokenizers. We believe that our alignment mechanism contributes to the effectiveness of our method even when compared to standard KD with the same tokenizer, as demonstrated by the results in the table.
### **Clarification on Computational Overhead.** (Exp Designs or Analyses #3)
To report the additional computational overhead compared to standard KD, we decompose it into two components: Token-level Lexical Alignment and Loss-based Guidance.
- First, the loss-based guidance shares most of its computational operations with standard KLD. When measured on 1M tokens, both KLD and loss-based guidance require approximately 15.36 TFLOPs, **showing no significant difference in computation cost.**
- On the other hand, Token-level Lexical Alignment is a CPU-bound operation, so we report latency instead of FLOPs. When measured on a 2048-token sequence using the Mistral-ProXMath teacher model, the alignment step takes approximately 0.047 seconds, averaged over 1,000 repeated runs. While this mapping process introduces a small amount of overhead, it can be amortized during preprocessing. **Especially in the pretraining stage, the corpus is typically packed to the maximum sequence length, so the mapping can be performed efficiently during preprocessing.**
We hope this addresses the reviewer’s concern regarding computational overhead. We’ll include this point in the final version.
### **Response for "Essential References Not Discussed"**
Thank you for the new suggestion. Minitron (Muralidharan et al., 2024) has already been cited in the introduction (Line 48-49). We also agree that Pretraining Exploration is a valuable reference that highlights the importance of knowledge distillation during pretraining. In the final version, we will revise the first paragraph of the introduction to emphasize on the importance of KD-based pretraining.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response and additional experiments. Most of concerns have been addressed and I recommend accepting the paper
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer’s thoughtful response and are grateful for the recommendation to accept the paper. We're glad that our efforts to address the concerns were well received. | Summary: This paper proposes VocAgnoLM, a method to overcome vocabulary mismatch in knowledge distillation for language models. It introduces Token-level Lexical Alignment for precise token mapping and Teacher Guided Loss to adjust training signals. Experiments show up to 46% improvement over baseline methods, enabling effective distillation from stronger teacher models despite vocabulary differences.
Claims And Evidence: In Equation (4), certain tokens are masked during pretraining with the proposed VocAgnoLM method, potentially reducing the effective token count. Given that current scaling laws [1,2] primarily examine the relationship between performance and training computation, this masking mechanism may inherently introduce a scaling disadvantage. To better understand this limitation, more empirical analysis is encouraged, including:
1. Masking Ratios: What are the token masking ratios across different experimental settings in the paper?
2. Scaling Trends: How does the scaling behavior of VocAgnoLM compare to baseline methods in terms of training computation?
3. Generalization to Broader Domains: How does the method perform on general-domain tasks, where more critical tokens may exist beyond the math-focused setting?
[1] Scaling Laws for Neural Language Models.
[2] Training Compute-Optimal Large Language Models.
Methods And Evaluation Criteria: N/A
Theoretical Claims: There is no theoretical claim proposed in the paper.
Experimental Designs Or Analyses: From Equation (4), the proposed method improves upon the baselines by masking out "unimportant tokens," thereby amplifying the supervision signal from the "important tokens." However, a much simpler way to achieve a similar effect is to increase the learning rate uniformly across all tokens, without masking. For instance, if the masking ratio is 20%, raising the learning rate by a factor of 1/(1-0.2) = 1.25x in the standard CPT baseline would result in a comparable signal strength for important tokens as in the knowledge distillation setting. To further validate the effectiveness of Teacher Guided Language Modeling, it would be beneficial to compare VocAgnoLM against this learning rate adjustment baseline.
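As a quick check of the arithmetic behind this proposed baseline (a sketch of the scaling rule above; the 60% value mirrors the masking ratio implied by the paper's top-40% threshold):

```python
def equivalent_lr_scale(mask_ratio):
    """Learning-rate multiplier matching per-kept-token signal strength
    when a mask_ratio fraction of tokens is dropped from the loss."""
    return 1.0 / (1.0 - mask_ratio)

print(equivalent_lr_scale(0.2))  # 1.25 (the 20% example above)
print(equivalent_lr_scale(0.6))  # 2.5  (60% masking, i.e., top-40% selection)
```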
Supplementary Material: N/A
Relation To Broader Scientific Literature: This paper proposes a method to tackle the tokenization mis-matching problem during knowledge distillation.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: In lines 435-439, the authors claim that MiniPLM[1] requires the teacher and student models to share the same vocabulary. However, [1] shows that MiniPLM enables cross-family KD where the vocabulary of the student and teacher models can be different. Is this a mistake in the literature review?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's thoughtful feedback and constructive suggestion. We address the points raised below.
### **Impact of top-k threshold (Selected Ratio) on scaling trends.** (Claims and Evidence #1, #2)
- As discussed in Appendix B and Figure 7a, we explore the effect of the top-k threshold using a 2B-token subset of the corpus, following the experimental setup of Lin et al. [2]. Based on this analysis, we adopt a top-k threshold of 40% (i.e., a 60% masking ratio) in our study (L253).
- Consistent with prior work [1,2], we observe that selecting more tokens from a fixed corpus does not necessarily lead to better performance. Importantly, as shown in Figure 7a, the optimal threshold may vary depending on the choice of teacher model. Taking into account both our empirical results in Appendix B and findings from previous study [1,2], we chose the 40% threshold.
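Concretely, the selection step can be sketched as follows (a minimal sketch; the guidance scores are assumed given, and the exact scoring rule is the one in Equation (4) of the paper rather than this illustration):

```python
import numpy as np

def topk_token_mask(scores, keep_ratio=0.4):
    """Binary mask keeping the top keep_ratio fraction of tokens by
    guidance score; masked tokens are excluded from the training loss."""
    scores = np.asarray(scores, dtype=float)
    k = max(1, int(round(keep_ratio * len(scores))))
    mask = np.zeros(len(scores))
    mask[np.argsort(scores)[-k:]] = 1.0
    return mask

mask = topk_token_mask([0.1, 0.9, 0.5, 0.3, 0.7])  # keeps the top-2 of 5 tokens
```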
### **Insights from established findings, and highlight our representative tasks.** (Claims and Evidence #3)
- As mentioned in our limitations section, we agree that VocAgnoLM is designed to be broadly applicable across different models and datasets. Due to our limited pretraining resources, we focused our efforts on the moderately scaled 15B-token OpenWebMath corpus. **As reviewer bj8e acknowledged, although our work focuses on math/reasoning tasks, we made every effort to evaluate the model on a diverse set of representative tasks to ensure broad applicability within the domain.**
- As a complementary perspective, we would like to share an intuition that motivated our design choice. **Several prior works [1,2,3] have provided promising evidence that selective token/dataset training may generalize beyond narrow domains, not only on math-domain corpora but also on general-domain corpora such as ThePile, SlimPajama, and StarCoderData.** Motivated by these findings, our study aims to validate whether the proposed method can effectively follow the teacher model’s expertise within a domain, using a moderately scaled math-domain web corpus.
- We hope that these prior findings, along with our own results (e.g., Figure 5, which shows performance improvement using an instruction-tuned teacher model), can help estimate VocAgnoLM’s potential for general-domain performance. We consider extending our experiments to broader general-domain corpora a valuable direction for future work.
[1] *Mindermann et al., Prioritized Training on Points that are learnable, Worth Learning, and Not Yet Learnt, In ICML 2022.*
[2] *Lin et al., RHO-1: Not All Tokens Are What You Need, In NeurIPS 2024.*
[3] *Xie et al., Data Selection for Language Models via Importance Resampling, In NeurIPS 2023.*
### **Comparison with scaled CPT baseline.** (Experimental Designs and Analyses)
Thank you for the insightful suggestion. Following your suggestion, we conducted an additional experiment on 2B tokens.
In VocAgnoLM, we apply a top 40% threshold based on teacher guidance. To match the signal strength of important tokens, we rescaled the learning rate by a factor of 1 / 0.4 = 2.5×.
As shown in the table below, our method, using various teacher models, outperforms CPT (lr /= 0.4). Although CPT delivers a similar amount of signal strength by increasing the learning rate, it also amplifies the effect of unimportant tokens. These results support the effectiveness of our teacher guidance, and allow us to estimate the impact of various teacher models in terms of CPT-equivalent training scale.
| Setting | AVG (w/o SAT) |
|------------------------|----------------|
| CPT | 13.6 |
| CPT (lr /= 0.4) | 15.3 |
| S + Mistral-ProXMath | 16.3 |
| S + DeepSeekMath | 17.3 |
| S + Qwen2.5-Math | 18.8 |
### **Difference with MiniPLM.** (Questions for Authors)
Thank you for pointing this out. We noticed that the order of the last two sentences in Section 7.2 may have been mistakenly switched. MiniPLM enables cross-vocabulary distillation through an offline KD strategy. However, our method supports **both online and offline KD**, offering more flexibility in the choice of pretraining strategies. Additionally, while MiniPLM performs distillation at the instance-level, our approach focuses on **token-level**, which marks a key difference. We will revise the sentence accordingly and describe this difference in the final version. | null | null | null | null | null | null | null | null |
Blink of an eye: a simple theory for feature localization in generative models | Accept (oral) | Summary: This paper introduces a general framework for critical windows in stochastic localization. After a lengthy but valuable description of some key notions such as stochastic localization sampling and the "forward-reverse experiment", the authors prove their key result, which shows that there exist (possibly empty) "critical windows" during stochastic localization sampling in which the forward process has destroyed the information needed to distinguish a submixture $S_{init}$ from a larger submixture $S_{targ}$, but not the information needed to distinguish $S_{targ}$ from the remainder of the data distribution. They provide a simple toy example in the case of diffusion models and a number of examples drawn from autoregressive learning. They then sketch a general theory of how stochastic localization interacts with hierarchical semantic structure and briefly describe the results of some experiments on LLMs.
## Update after rebuttal.
I appreciate the authors' engagement with my review and their promise to add figures and examples to the camera-ready as intuition-building tools. I maintain my positive assessment of this paper.
Claims And Evidence: The main claim in this paper is to have constructed a general theory to explain "critical windows" in generative models, which the authors colloquially define as a small subset of steps in which important features of a model sample emerge. I believe the paper largely achieves this objective. The stochastic localization framework is general enough to include diffusion models and autoregressive models, which are the two main paradigms in generative modeling nowadays. Their main result is general and does not rely on strong assumptions or particularly heavy sledgehammer results. However, as the authors acknowledge in Remark 3.2, their theory does not rule out the possibility of critical windows being empty sets, which I believe to be the main gap in their story. Nonetheless, I think the results are sufficiently interesting to stand on their own, and look forward to future work exploring why critical windows are often non-empty for common classes of generative models.
Methods And Evaluation Criteria: The theoretical methods are appropriate for demonstrating the authors' key claims. In particular, the "forward-reverse experiment" is an appropriate tool for formalizing the process of destroying and then recovering information in stochastic localization sampling.
Theoretical Claims: I have reviewed the proof outline for Theorem 3.1. I believe the strategy is correct, and I was unable to find any specific errors in the outline.
Experimental Designs Or Analyses: The experiments included in the main body seem to be sound and adequately illustrate that critical windows can occur in LLMs. However, I would have liked the authors to include a more thorough discussion of their experiments in the main body -- as I will note below, it generally seems like the authors have packed too much content into the 8-page limit at the price of e.g. a very abbreviated experiments section.
Supplementary Material: I did not review the supplementary material in great detail.
Relation To Broader Scientific Literature: This paper generalizes results from Li and Chen (2024), which studies the existence of critical windows in diffusion models. It draws heavily on tools from the stochastic localization literature, which is anchored by a series of papers by Eldan and connected to diffusion models in a set of notes by Montanari (2023).
Essential References Not Discussed: While I am not well-versed in the literature on critical windows and stochastic localization, it seems to me that this paper adequately situates itself in its literature.
Other Strengths And Weaknesses: While this paper is fairly well-written, it is dense with notation and short on figures. A few figures to illustrate the key notions would greatly improve the readers' intuition for the results. For example, the authors could include a figure illustrating the forward-reverse experiment for a simple case like a mixture of Gaussians and depicting the model distribution $p_t^S$ for various subsets $S$ during the critical window predicted by Theorem 3.1. The definition of $\epsilon$-mixture trees in Section 5 is also *very* abstract, and while I appreciate the benefits of this approach from the standpoint of generality, a few figures or additional examples would help readers parse the definitions better.
To me, the most interesting takeaway from this paper is that despite knowing nothing about the semantics of the data a priori, a stochastic localization sampler generates information in a way that respects the data's semantic hierarchy. e.g. a dog is an animal, so the support of the distribution over images of dogs is contained in the support of the distribution over images of animals -- and stochastic localization features critical windows in which the sampler has "decided" to generate an animal image but not yet decided that it will generate a dog image. It seems surprising to me that one can prove a general result of this form. However, I believe that Remark 3.2 reveals the primary gap in this theory as it stands -- it is not clear a priori that non-empty critical windows should exist. Exploring *why* this is the case would be an interesting future direction.
Other Comments Or Suggestions: In general, it seems like the authors attempted to pack too much content into the 8-page limit, and have consequently neglected to include illustrative figures and a related work section in the main body of the text. They have also heavily compressed their experiments section. If this paper is accepted, I'd ask the authors to consider including figures, an expanded experiments section, and at least an abbreviated related work section in the main body, perhaps at the price of moving some of the examples in Section 4 to the appendix.
Questions For Authors: No further questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their time and thoughtful comments. We were glad to hear that you thought that the theoretical results were interesting and that experiments were sound and illustrative of our main points.
## Writing changes
* *“While this paper is fairly well-written, it is dense with notation and short on figures”*
In the final revision, we will include a section titled “Intuition for critical windows” before the technical prelims that **describes our theory informally** and introduces the forward-reverse experiment, the definition of critical windows, the definition of $\epsilon$-mixture trees, and our main Theorem 3.1 through a **simple vignette**. All of these definitions and theorems will be accompanied by **figures that visually explain them and text which concretely places them within our vignette.** For example, the definition of the forward-reverse experiment will be accompanied by a figure which shows how a “sweet spot” of noise leads to the specialization to a target sub-mixture; the definition of $\epsilon$-mixture trees will be shown with a graph that shows the hierarchy of features in our vignette; we will expand and add more detail to Figure 2 of a critical window in this section.
The section will very loosely follow this structure: we will describe critical windows as the transition from sampling from a larger subset of features to a smaller subset of features. This motivates trying to understand when the generative model is sampling from a subset of features, and thus the forward-reverse experiment and our main Theorem 3.1. We will provide intuition into the location of the bounds for Theorem 3.1 and then explain how sequences of critical windows motivate understanding a hierarchy of feature specialization and thus the definition of $\epsilon$-mixture trees.
* *“For example, the authors could include a figure illustrating the forward-reverse experiment...”*
Yes, in the aforementioned new section we will illustrate the forward-reverse experiment with a very concrete example and figure.
* *"The definition of ϵ-mixture trees in Section 5 is also very abstract, and while I appreciate the benefits of this approach from the standpoint of generality, a few figures or additional examples would help readers parse the definitions better."*
We also plan to include an example and figure of an “$\epsilon$-mixture tree” in the section “Intuition for critical windows” accompanying our text and the vignette.
* *“include a more thorough discussion of their experiments in the main body… consider including figures, an expanded experiments section, and at least an abbreviated related work section in the main body”*
In addition to the new section, we will move many details from the examples and hierarchy sections to the appendix, add a short related works section in the main body, and thoroughly expand the experiments section. The expanded experiments section will include our structured output experiments, which show that our theory is predictive of critical windows for LLMs when outputs are hierarchically structured, and we will add to our LLM reasoning experiments details that were originally relegated to the appendix, e.g., statistics and visualizations of critical windows across datasets and models. The abbreviated related works section will cover the theory of critical windows for diffusion, the forward-reverse experiment, and stochastic localization.
## Future directions
* *“not clear a priori that non-empty critical windows should exist. Exploring why this is the case would be an interesting future direction.”*
In the Yellowstone and jailbreaking example, some actions from the LLM, i.e. browsing Yellowstone or acceding to a harmful user request, are much likelier under one mode of behavior than another and completely determine to which mode the generation belongs, resulting in a critical window as explained by Example 4.3. We agree with the reviewer that further exploring why critical windows exist in different settings is an interesting direction of future research.
Thank you again for your time in reviewing the paper and providing much helpful feedback. If we have addressed your concerns about the paper, we hope you consider raising our score. | Summary: This paper discusses the phenomenon of critical windows in generative models. It is an interesting topic, and the paper presents a general theory with minimal assumptions, enabling the explanation of abrupt shifts during the sampling phase across different modeling paradigms and data modalities. The writing is clear, and the definition of critical windows based on sub-mixtures, along with the discussion on hierarchical sampling, is engaging.
Claims And Evidence: 1. If the reverse process is deterministic, such as an ODE, or includes additional conditions, such as text-to-image, does this theoretical framework still apply?
Methods And Evaluation Criteria: N/A
Theoretical Claims: I have checked the proof of Theorem 3.1 and found no additional issues.
Experimental Designs Or Analyses: 2. Section 4 presents some case studies. Could you provide further experimental results to verify the accuracy of the computed critical windows from the theoretical analysis?
Supplementary Material: I reviewed and checked the necessary appendices related to the main text, and found no additional issues.
Relation To Broader Scientific Literature: This paper proposes a unified and concise theoretical framework that explains the critical windows phenomenon observed in autoregressive and diffusion models in previous studies.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their time and thoughtful comments. We were glad to hear that you found our writing clear and engaging and our theory interesting.
* *“If the reverse process is deterministic, such as an ODE, or includes additional conditions, such as text-to-image, does this theoretical framework still apply?”*
If the reverse process is deterministic, then there is no notion of a critical window under our framework. The initial position at the start of sampling completely characterizes the final image. For example, given a fixed piece of text and language model, truncating it anywhere in the model’s response and resampling at temperature $0$ would yield the same completion regardless of the truncation point, so the probability of recovering the original generation is constant in time and no abrupt transition can arise. We view extending our framework to deterministic samplers as a fruitful direction for future work.
* *“Section 4 presents some case studies. Could you provide further experimental results to verify the accuracy of the computed critical windows from the theoretical analysis?”*
We would like to highlight that **many of the case studies in Section 4** are accompanied by experiments either in the appendix or in the existing literature:
* Li and Chen 2024 confirmed that the theoretically predicted critical windows for mixtures of Gaussians match experiments.
* A critical window for the all-or-nothing phenomenon in sparse linear regression can be seen in Figure 2 of Reeves et al. 2019
* The jailbreak critical windows are demonstrated in previous literature, e.g. Haize Labs 2024, and are reproduced in our Appendix F.1.
* In the final revision, we will mention these experiments alongside the corresponding examples, and we will also present a diagram of a critical window for a discrete diffusion model that is a mixture of delta measures.
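To make the Gaussian-mixture case concrete, here is a minimal numerical sketch of the forward-reverse experiment (our own illustration, not taken from the paper or rebuttal; the 1-D two-component mixture with means ±4, unit variance, equal weights, and the chosen noise times are all arbitrary assumptions). We noise samples from one component with an OU forward process and compute the average posterior probability that the noised point still identifies its component; this probability decays from near 1 toward the prior 1/2 as the noise time grows, which is the transition the theory formalizes.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 4.0  # component means are +/- mu; unit variance, equal weights (our choice)

def posterior_same_component(t, n=20000):
    """Noise draws from the +mu component for OU time t, then return the average
    posterior probability that the noised point came from the +mu component."""
    a = np.exp(-t)                      # signal scale e^{-t}
    x0 = mu + rng.standard_normal(n)    # samples from N(+mu, 1)
    xt = a * x0 + np.sqrt(1 - a**2) * rng.standard_normal(n)
    # Under either component, x_t is marginally N(+/- a*mu, 1), so the
    # log-odds of the +mu component given x_t is 2*a*mu*x_t.
    return float(np.mean(1.0 / (1.0 + np.exp(-2.0 * a * mu * xt))))

for t in (0.1, 1.0, 3.0):
    print(f"t={t}: avg P(same component | x_t) = {posterior_same_component(t):.3f}")
```

With better-separated components (larger mu) the decay concentrates into a narrower range of t, i.e., a sharper critical window.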
Thank you again for your time in reviewing the paper and providing much helpful feedback. If we have addressed your concerns about the paper, we hope you consider raising our score.
Haize Labs. (2024). *A trivial jailbreak against LLaMA 3.* https://github.com/haizelabs/llama3-jailbreak
Li, M., & Chen, S. (2024). Critical windows: Non-asymptotic theory for feature emergence in diffusion models. arXiv preprint arXiv:2403.01633.
Reeves, G., Xu, J. & Zadik, I. (2019). The All-or-Nothing Phenomenon in Sparse Linear Regression. *Proceedings of the Thirty-Second Conference on Learning Theory in Proceedings of Machine Learning Research* 99:2652-2663. | Summary: The authors present a paper that explores critical windows in generative models. Their paper is heavily theoretical and they propose an understanding that can be applied to a wide range of models.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: I did not check the accuracy of the proofs
Experimental Designs Or Analyses: NA
Supplementary Material: please note, I did not check any of the verification in the appendices, but I didn't feel I needed to. I think the best way to verify a contribution like this is to expose it to the academic community.
Relation To Broader Scientific Literature: I believe this paper will have broad appeal to the machine learning community
Essential References Not Discussed: no
Other Strengths And Weaknesses: First, I will admit to being somewhat biased in favour of strong theoretical contributions, but this paper stands out an exceptionally well written, instructional and informative example. While the underlying theme of the paper is theoretical, the authors bring their contribution to a real issue in diffusion and LLMs. I also appreciate the included clear examples adding context to the theoretical formulation.
Other Comments Or Suggestions: undefined terms in the abstract (jailbreak), though this is defined quite early in the introduction
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: We thank the reviewer for their kind comments and strong recommendation. We will modify the abstract to say "hacks" instead of "jailbreaks."
---
Rebuttal Comment 1.1:
Comment: The system seems to require a rebuttal comment. Nothing new added here | Summary: The paper theoretically explains sudden behavioral shifts in generative models through critical windows, employing a forward-reverse experiment to study this phenomenon. It introduces Theorem 3.1, which bounds total variation distance to demonstrate that these windows signify transitions between sub-mixtures. The findings are substantiated with examples from diffusion and autoregressive processes.
## Update after rebuttal
The authors' response has addressed my concern, so I have raised the score from 3 to 4.
Claims And Evidence: The central claim is supported by Theorem 3.1. Experimental results further confirm the presence of critical windows in generations.
Methods And Evaluation Criteria: The theoretical method uses stochastic localization samplers and mixture models and arrives at a conclusion in TV distance bounds. This is a sensible approach for studying feature localization in generative models.
Theoretical Claims: I did not check the proofs in the Appendix.
Experimental Designs Or Analyses: The results on LLMs clearly demonstrate abrupt changes in output probabilities.
However, several concerns remain:
- Since LLM performance is sensitive to evaluation metrics, a deeper discussion on the robustness of critical windows across different metrics is needed.
- The experimental setup is not rigorously defined or directly validated against the theory, limiting its connection to Theorem 3.1. While TV distance may not be feasible for real distributions, simulations could provide a more direct validation of the theoretical results.
Supplementary Material: I didn’t review the supplementary material.
Relation To Broader Scientific Literature: The paper builds on prior work on critical windows in diffusion models (e.g., Sclocchi et al., 2024; Li & Chen, 2024).
Essential References Not Discussed: I did not notice any missing key references.
Other Strengths And Weaknesses: See *Experimental Designs Or Analyses*.
Other Comments Or Suggestions: See *Experimental Designs Or Analyses*.
Questions For Authors: See *Experimental Designs Or Analyses*.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their comments. We were glad to hear that you found that our experimental results for LLMs are convincing.
* *“Since LLM performance is sensitive to evaluation metrics, a deeper discussion on the robustness of critical windows across different metrics is needed.”*
In Appendix H.1, we include a discussion about the evaluation metrics we used to test our model, and in Appendix H.4, we explore the effect of different temperatures. Note that we use standard methods like direct text comparison for multiple choice questions (Lanham et al. 2023) and existing math graders from the literature (Lightman et al. 2023). Given that the primary focus of this paper is theoretical, and that our experiments are commensurate with these well-cited manuscripts on LLM evaluation and performance, our evaluation metrics and discussion cover a broad range of datasets, models, and other empirical settings that demonstrate the robustness of the critical windows phenomenon.
* *“The experimental setup is not rigorously defined or directly validated against the theory, limiting its connection to Theorem 3.1. While TV distance may not be feasible for real distributions, simulations could provide a more direct validation of the theoretical results.”*
Appendix G describes a **direct validation of our theoretical results for LLMs**, where we actually compute the TV distance to verify our bounds for a real-world model.
Thank you again for your comments. If we have addressed your concerns about the paper, we hope you consider raising our score.
Lanham et al. (2023). Measuring Faithfulness in Chain-of-Thought Reasoning. *arXiv preprint arXiv:2307.13702*. Retrieved from https://arxiv.org/abs/2307.13702
Lightman et al. (2023). Let's Verify Step by Step. *arXiv preprint arXiv:2305.20050*. Retrieved from https://arxiv.org/abs/2305.20050
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the clear response. I will raise the score to 4.
---
Reply to Comment 1.1.1:
Comment: Thank you so much! | Summary: The paper presents a theory of “critical windows” – intervals in the generation process in which specific features of the generated data emerge – in both diffusion and autoregressive systems. Leveraging the framework of stochastic localization, the authors rigorously characterize when such windows appear. The theory is applied to several scenarios, including diffusion of Gaussian Mixture Models, jailbreaks in LLMs, a minimal model of problem-solving, and in-context learning. It also considers hierarchical distributions, where a hierarchy of critical windows separate different subpopulations. Finally, experiments demonstrate critical windows in LLM solving reasoning tasks.
## Update after rebuttal
I maintain my positive assessment of this work.
Claims And Evidence: The paper is primarily theoretical and rigorously supports its claims.
Methods And Evaluation Criteria: N/A
Theoretical Claims: I checked the validity of the main result (Theorem 3.1).
Experimental Designs Or Analyses: Experiments are well-executed.
Supplementary Material: I mainly reviewed Appendix C, which provides technical steps used for obtaining the main result.
Relation To Broader Scientific Literature: The paper follows a recent rich literature on critical windows and phase transition in generative diffusion models. In particular, it extends the rigorous results of Li & Chen (2024) to general stochastic localization samplers, relaxing several technical assumptions and including autoregressive systems, and tightening the bounds in the case of Gaussian diffusion.
Essential References Not Discussed: None that I identified.
Other Strengths And Weaknesses: **Other strengths**
- The paper provides an interesting and rigorous unifying theoretical framework for critical windows in both diffusion and autoregressive models.
**Other weaknesses**
- Section 1.1. “Our contributions” is quite unclear. In particular, I think it lacks a clear and comprehensive list of the contributions of the paper, especially the theoretical ones. It briefly mentions “bounds” (On what? Obtained how? In which framework?). Moreover, is “Generality” really a contribution of the paper? It also mixes theoretical and empirical contributions. Can the authors give a more standard list of contributions, briefly explaining the setting, the obtained insights, and only then the experimental results? The paper is rather dense in content, so I think it would greatly benefit from a clearer outline of contributions. On a side note, also the abstract is not fully informative of the paper's content.
Other Comments Or Suggestions: - The plots in Figure 1, taken from related work, are not explained and are hard to read/understand, especially at that point of the introduction. Personally, I don’t see the necessity of including such a figure. I’d encourage the authors either to remove it or – in case they wish to keep it – to make it larger (use vectorized graphics) and explain the content.
- Is the formulation of autoregressive systems as a stochastic localization sampler a novel contribution of the work? If so, I would suggest highlighting it more. Otherwise, the paper should cite previous work showing it.
- L132 (right column): “The can be understood”?
- L144 (right column): That’s true only in the case of Gaussian diffusion.
- The running title is still the ICML template default and should be updated.
Questions For Authors: - Can you please elaborate more on the complexity of hierarchies learned by diffusion vs autoregressive models, as speculated at the end of Sec. 5? Don’t the two results refer to different data models/distributions?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their time and thoughtful comments, especially with respect to the contribution section and the exposition of our theory. We were glad to hear that you found that our rigorous unifying framework was interesting and that our experiments were well-executed.
# Contributions
* *“Can the authors give a more standard list of contributions, briefly explaining the setting, the obtained insights, and only then the experimental results?"*
We will revise the contributions in the final version, separating into clear “theoretical” and “empirical” sections for clarity. On the theoretical side, we will specify that, unlike existing frameworks, our theory **applies across all generative models and data distributions** represented by the stochastic localization framework, as accomplished by Theorem 3.1 and Definition 3.3. Moreover, the theoretical “bounds”, which we clarify as predictions for the location of critical windows, are more precise than that in Li and Chen 2024.
We also highlight applications of our general theory into specific but important contexts as contributions: for example, we can now **compute critical windows in many different contexts** (discrete diffusion, in-context learning, statistical inference), **unlike existing work (Section 4)**. We will also specify a novel result for hierarchically structured data, where, if the learned sampler and true model are based on the same localization sampler and the learned sampler is good, then they have the same hierarchical structure (Corollary 5.3).
We will add a section that better explains our framework intuitively as well.
* *“It briefly mentions “bounds””*
By this we mean a comparison between the computations of the locations for critical windows for Gaussian diffusion in Li and Chen 2024 versus this paper (Theorem 3.1). They were only able to control the total variation by epsilon times a factor polynomial with the dimension $d$. We were able to upper bound the total variation by epsilon times a constant, and our theorem **improves on their results by a factor that grows polynomially with $d$**. We will clarify this in our contributions.
* *“is “Generality” really a contribution of the paper?”*
By generality, we mean that our framework applies to all stochastic localization samplers and models of data, not just the Gaussian diffusions and the toy models of data considered before in the literature. We view this ability of our **unifying framework** to explain critical windows across so many different contexts as a major contribution of our work.
* *“On a side note, also the abstract is not fully informative of the paper's content.”*
We will synthesize the background in the abstract and better describe our contributions.
# Other Comments or Suggestions
* *“The plots in Figure 1 … are hard to read”*
In the final revision, Figure 1 will only include three examples that will be explained: the Georgiev et al. 2023 critical window, the prefill attack from Haize Labs 2024, and Phi-4 critical tokens in Abdin et al. 2024.
* *“That’s true only in the case of Gaussian diffusion.”*
While the initial applications of stochastic localization (Eldan 2013; 2020) were Gaussian diffusion, extensions of stochastic localization by Montanari 2023 apply to a broader family of generative models, including discrete diffusion models (Example B.2 and Section 4.3 of Montanari 2023).
* *“Is the formulation of autoregressive systems as a stochastic localization sampler a novel contribution of the work?"*
This was first presented in Montanari 2023. In Sections 2.1 and 2.2, we explicitly cite this work, and we will modify the text to cite it in Appendix B when we instantiate language models within this framework.
* *“Can you please elaborate more on the complexity of hierarchies learned by diffusion vs autoregressive models, as speculated at the end of Sec. 5?”*
We view the dimension of a diffusion model as the dimension of the underlying space, $d$ in $\mathbb{R}^d$, and the dimension of an autoregressive model as the length of its context, $T$ in $\mathcal{A}^T$. We simply pointed out that the hierarchy depth was $O(\ln d)$ for a continuous diffusion example and $\Omega(T)$ for an autoregressive example. While they refer to different modalities, we wanted to highlight this vast difference between how hierarchy depth can vary with the dimension. In the final version, we will explain this more clearly.
Thank you again for your time in reviewing the paper and providing much helpful feedback. If we have addressed your concerns about the paper, we hope you consider raising our score.
Eldan, R. Thin shell implies spectral gap up to polylog via a stochastic localization scheme. *Geometric and Functional Analysis*, 23(2):532–569, 2013.
Eldan, R. Taming correlations through entropy-efficient measure decompositions with applications to mean-field approximation. *Probability Theory and Related Fields*, 176(3-4):737–755, 2020.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their answers. I maintain my positive assessment of this work. | Summary: The paper investigates "critical windows" in generative models—brief intervals during the generation process in which features of the final output are determined. The authors introduce a general theoretical framework based on stochastic localization samplers, a class that includes both diffusion models and autoregressive models as special cases. Their data model assumes samples are drawn from a mixture distribution, with sub-mixtures corresponding to specific features. The core theoretical contribution involves analyzing forward-reverse experiments to identify critical windows: time intervals during which the inversion of a noised observation yields a distribution localized on a sub-mixture, corresponding to the emergence of a feature.
The authors instantiate their theory with examples such as Gaussian mixture models under diffusion and stylized settings modeling jailbreaks, math reasoning, and in-context learning in large language models. They then extend the theory to handle hierarchical mixture models, where modes are recursively nested. Finally, they perform experiments with large language models showing the presence of critical windows during their generation and that these windows are more likely to occur when the model outputs incorrect answers.
## Update after rebuttal
The paper is technically sound, offers some unifying perspectives, and presents interesting experiments on LLMs.
While I still find the predictive power of the framework in empirical settings somewhat limited, the authors have addressed most of my concerns. I have therefore raised my score to recommend acceptance.
Claims And Evidence: The authors claim that their theory applies to a wide class of generative models (both diffusion and autoregressive), that it improves upon prior theoretical results, and that it avoids strong distributional assumptions.
The first part of their claims is well supported. However, I think that the claim that their theory requires "no distributional assumptions" is overstated. In fact, it relies on having data from a mixture model, and it is unclear how to apply it to more complex data structures. Moreover, as the authors acknowledge in Remark 3.2, there are mixture distributions and samplers for which their bounds may be vacuous. Therefore, the applicability of the theory crucially depends on the considered data structure.
On the empirical side, the reported experiments with LLMs provide evidence that critical windows can be identified in their generative process.
Methods And Evaluation Criteria: Yes, the methods and evaluation criteria make sense.
Theoretical Claims: I checked the correctness of the proof of the main theorem, which is sound and logically well-organized.
Experimental Designs Or Analyses: The experimental designs are sound for the considered tasks. The observation of the existence of critical windows in many LLMs tasks is interesting on its own. However, I am not sure about the connection between the experiments and the proposed theory: is there some qualitative phenomenon we can predict from the theory (e.g., existence or not of critical windows, their width, etc.) that can be then verified in the experimental data?
Supplementary Material: I went through the appendix A, C, F, G.
Relation To Broader Scientific Literature: The paper improves previous theoretical results on critical windows for diffusion of mixtures of log-concave distributions [Li&Chen 2024].
The studied phenomena are connected to similar studies of diffusion models from a statistical physics perspective, which focus more on specific data models [Raya&Ambrogioni2023, Sclocchi et al. 2024, 2025; Biroli et al. 2024].
The paper connects these ideas of critical windows in diffusion models with recent observations in Jailbreaks and chain-of-thought in LLMs.
Essential References Not Discussed: The essential scientific literature is cited.
Other Strengths And Weaknesses: Strengths
- The theoretical framework is general and draws connections between stochastic localization and different generative models, such as diffusion and autoregressive models.
- The identification of critical windows in LLM tasks and how they correlate with accuracy is interesting.
Weaknesses
- The theory relies on a specific structure of the data distribution and its features, and the limit of validity of this modeling assumption should be better clarified.
- The connection between experiments and theory is not very compelling.
Other Comments Or Suggestions: - Example 4.2: $S_{after}$ should be $\{\mu_i\}$
Questions For Authors: Can the authors clarify how general the data distribution assumption and the theoretical results are?
It seems that the width of the critical windows varies significantly according to the considered model. Is there an intuition about when to expect sharp critical windows?
In the experiments, the presence or absence of critical windows seems to depend strongly on the starting data. Can you elaborate more on that?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their time and thoughtful comments, especially with respect to our theory’s assumptions and the relationship between our experiments and theory. We are glad that you found the generality of our theory and the LLM reasoning experiments interesting.
**Modeling assumptions**
* *“The claim that their theory requires "no distributional assumptions" is overstated... it relies on having data from a mixture model… Can the authors clarify how general the data distribution assumption and the theoretical results are?”*
In the final revision, we will rephrase “no distributional assumptions” to “very few distributional assumptions.” The mixture model assumption is **extremely mild**; any partition of the outputs from a generative model defines a mixture model, where the classes are given by the different partitions. For example, for a list of outputs of a language model, we can split them into labeled groups such as {correct answer, incorrect answer}, {safe answer, unsafe answer}, etc. The ability to attach these labels and partition into subpopulations is broadly applicable to the datasets, which we also make use of in our experiments.
See Remark 2.4 as well for a re-emphasis of the generality of the mixture model assumption.
* *“It seems that the width of the critical windows varies significantly according to the considered model... the applicability of the theory crucially depends on the considered data structure... limit of validity of this modeling assumption should be better clarified.”*
We view one of our main contributions as offering a **unifying framework** to distill the phenomenon of critical windows to very general facts about the data distribution, so that their **presence/absence or width** depends only on computations with the data model and forward process. This is a major strength compared to extant literature, which only discusses critical windows for **particular data distributions**. In the final revision of this paper, we will clarify that our bounds and the narrowness of our critical windows are affected by the specifics of the data distribution.
* *“there are mixture distributions and samplers for which their bounds may be vacuous.”*
In Sec. 4 and 5, we verify that our bounds are non-vacuous in many contexts.
* *"Is there an intuition about when to expect sharp critical windows?"*
In Ex. 4.3, we mention an example providing general intuition when critical windows can be sharp, i.e. when a few tokens are very unlikely under one mode compared to the other. In general, we expect sharp critical windows when it only takes a few steps from the forward process to erase the differences between $S_{\textrm{before}}$ and $S_{\textrm{after}}$. This could happen if the data has a multi-scale hierarchical structure (Definition 5.1), where a feature is decided in a narrow intermediate band of the tree.
**Experiments**
* *“Is there some qualitative phenomenon we can predict from the theory (e.g., existence or not of critical windows, their width, etc.) that can be then verified in the experimental data?”*
We highlight **several predictions of our theory** verified by experiments:
* We provide structured output experiments where our predictions for the location of critical windows match experiments (Fig. 5, App. G).
* We predict that prefill jailbreaks yield narrow critical windows, because the probability that a model agrees to a harmful request in the first few tokens but refuses in the end is low (Ex. 4.3). This is consistent with (Haize Labs 2024b) and Fig. 4a.
For LLM reasoning experiments, our only claim is that critical windows coincide with reasoning mistakes (Table 1). In the final revision, we will make clearer which aspects of our theory the experiments verify.
We note other works verifying theory with experiments: Li and Chen 2024 showed that positions of theoretical computations of critical windows matched up with experiments for Gaussian mixtures, and Biroli et al. 2024 showed a measure of the separation (size of principal component) between classes predicts real-life critical windows for diffusion.
* *“In the experiments, the presence or absence of critical windows seems to depend strongly on the starting data. Can you elaborate more on that?”*
We agree that the specifics of the starting point could affect the location or presence of the critical window (Fig. 7). One explanation is that certain parts of the solution are sometimes more important to the final answer than others. For example, the bolded critical window in Figure 3 occurs at the point where the model finds the correct formula, a step that is crucial to solving the problem. In other instances, no particular part of the text may be crucial to the answer. We will clarify this in the final revision.
Thank you again for your time in reviewing the paper and providing much helpful feedback. If we have addressed your concerns about the paper, we hope you consider raising our score. | null | null |
KBQA-o1: Agentic Knowledge Base Question Answering with Monte Carlo Tree Search | Accept (poster) | Summary: This paper proposes KBQA-o1, which utilizes Monte Carlo Tree Search and a ReAct-based agent process to generate stepwise logical forms with a knowledge base environment. The incremental fine-tuning strategy on automatically labeled examples further enhances the performance. According to the experimental results, KBQA-o1 can outperform previous few-shot KBQA methods with an open-source LLM like Llama-3.1-8B.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes, especially the proofs of the propositions, detailed parameter settings, case study, and error analysis.
Relation To Broader Scientific Literature: KBQA-o1 adapts MCTS algorithm from o1 to KB-specific question answering tasks, which shows advantages compared to previous end-to-end and step-by-step methods as it allows stepwise adjustments by KB environment awareness.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
1. This paper proposes KBQA-o1, which shows good potential for solving KBQA in low-resource settings with open-source models.
2. The experiments and analysis are comprehensive.
Weaknesses:
1. The efficiency analysis is reported as the number of queries per minute; I am wondering how many queries need to be executed for one target question on average.
Other Comments Or Suggestions: 1. There may be a typo before Equation 13: “Then, we discard the annotation by choosing if the answer set is not empty...” should read “...if the answer set is empty...”
2. In equation 10, in the sum operator, the upper bound and lower bound should be reversed
Questions For Authors: 1. In Table 7, I find that the reward threshold is higher for easier datasets like webqsp, could you provide more insights when you choose this parameter?
2. Have you tried applying RL instead of SFT for optimization?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your time and effort in reviewing our paper. We sincerely appreciate your feedback. Below, we respectfully provide our detailed responses to address your concerns.
**W1: The efficiency analysis is reflected by the number of queries per minutes, just wondering how many queries are required to be executed for one target question on average.**
- We thank the reviewer for raising the concern regarding efficiency analysis. The reviewer asked about “the average number of queries required to be executed for one target question” to better understand the efficiency of the proposed method. In Section 5.4 (Comparison Analysis) and Figures 4(c) and 5(c), we have already provided a detailed analysis of the trade-off between query frequency (queries per minute) and accuracy (F1 score).
- It is important to emphasize that our KBQA-o1 method adopts a dedicated efficiency-oriented MCTS parameter setting θ_eff during the prediction phase (see Section 4.3 and Figure 5(c)), which significantly reduces the number of queries per question. Compared with the exploration phase that uses higher MCTS weights, the prediction phase achieves a substantial improvement in overall efficiency while sacrificing only a marginal amount of accuracy. Moreover, Figure 4(c) provides a comparative analysis with other baseline methods under the same evaluation protocol, showing that KBQA-o1 achieves higher accuracy while maintaining a competitive level of query efficiency.
- Regarding the average number of queries per question, since KBQA-o1 adopts Monte Carlo Tree Search (MCTS), which is a tree-based heuristic search algorithm, the number of queries is not fixed, but dynamically determined by the search space, question complexity, and the model’s policy. Therefore, we argue that query frequency (queries per minute) is a more comprehensive and practical indicator of efficiency in real-world applications.
**C1: There may be a typo before Equation 13. Then, we discard the annotation by choosing if the answer set is not empty... should be if the answer set is empty...**
- Thank you for pointing out the typo. The sentence before Equation (13) should indeed read “if the answer set is empty” instead of “not empty.” This will be corrected in the revised version.
**C2: In equation 10, in the sum operator, the upper bound and lower bound should be reversed**
- Thanks for noticing the issue in Equation (10). The summation bounds should indeed be reversed, and we will make the necessary correction in the updated paper.
**Q1: In Table 7, I find that the reward threshold is higher for easier datasets like webqsp, could you provide more insights when you choose this parameter?**
- Thank you for the question. The reward threshold γ* is indeed a key parameter in filtering auto-labeled samples during incremental fine-tuning. While we adopt a unified reward model across all datasets, the distribution of reward scores varies due to differences in dataset difficulty, logical form complexity, and question types.
- For relatively easier datasets like WebQSP, the generated logical forms are typically shorter and more confident, leading to overall higher reward scores. To ensure quality, we set a higher γ* to filter out over-confident but potentially incorrect samples. Conversely, in more complex datasets such as GrailQA and GraphQ, the model is more conservative, and the reward scores tend to be lower. Thus, a lower γ* is chosen to retain sufficient high-quality samples for fine-tuning.
- To determine γ*, we follow a validation-based selection strategy:
1. We first apply the reward model to score auto-labeled logical forms on a validation subset;
2. We then plot the relationship between the reward threshold γ*, the proportion of selected samples, and their downstream F1 performance, as shown in Figure 5(b);
3. Finally, we choose the γ* that optimizes the trade-off between data quality (reward score) and model improvement (F1 score).
- To improve efficiency, in practice, we adopt a simple yet effective strategy: we set γ* such that approximately the top 90% of auto-labeled samples are retained, filtering out only the bottom 10% with the lowest reward scores.
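As a rough illustration of the retention strategy described above, the threshold can be taken as a quantile of the reward distribution (a minimal sketch with illustrative names — `select_threshold` and `filter_samples` are assumptions, not the paper's actual code):

```python
import numpy as np

def select_threshold(reward_scores, keep_ratio=0.9):
    """Pick a reward threshold gamma* so that roughly the top `keep_ratio`
    fraction of auto-labeled samples is retained (i.e., the bottom 10%
    with the lowest reward scores is filtered out)."""
    # gamma* is the (1 - keep_ratio) quantile of the reward distribution
    return float(np.quantile(reward_scores, 1.0 - keep_ratio))

def filter_samples(samples, reward_scores, gamma_star):
    """Keep only auto-labeled samples whose reward clears gamma*."""
    return [s for s, r in zip(samples, reward_scores) if r >= gamma_star]
```

On a validation subset, one would sweep `keep_ratio` against downstream F1, as in step 2 of the procedure above.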
**Q2: Have you tried applying RL instead of SFT for optimization?**
- Thank you for your question. Due to the letter limitation, please refer to **our response to the last question of Reviewer 369N**, which is the same question as this.
At last, we sincerely appreciate your valuable feedback, and we will carefully consider all your suggestions to further improve our paper. Thank you very much!
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarification from the authors. I will maintain my score. | Summary: The paper introduces KBQA-o1, a novel agentic Knowledge Base Question Answering (KBQA) method that leverages Monte Carlo Tree Search (MCTS) to address challenges in KBQA, such as weak KB awareness, the trade-off between effectiveness and efficiency, and high reliance on annotated data. The proposed method employs a ReAct-based agent process for stepwise logical form generation and uses MCTS to balance exploration performance and search space. Additionally, KBQA-o1 generates high-quality auto-annotated data through heuristic exploration, reducing the need for extensive human annotation.
Claims And Evidence: The claims are well-motivated and largely supported by theoretical and empirical evidence
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate and well-aligned with the paper’s goals.
Theoretical Claims: I reviewed the theoretical claims, including proofs in the main paper.
Experimental Designs Or Analyses: The experimental designs are largely sound.
Supplementary Material: I reviewed all the supplementary material.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Weaknesses:
**Limited Evaluation on Real-World Scenarios:** While the paper demonstrates strong performance on benchmark datasets like GrailQA, WebQSP, and GraphQ, it lacks evaluation in real-world, noisy, or incomplete knowledge base scenarios. Real-world KBs often contain incomplete or inconsistent data, and the robustness of KBQA-o1 in such settings remains unclear. Including experiments on more diverse and noisy datasets would strengthen the paper's claims about the model's practical applicability.
**Scalability Concerns:** The paper does not thoroughly address the scalability of the proposed method, especially when dealing with extremely large knowledge bases. Although MCTS is designed to balance exploration and exploitation, the computational overhead of performing multiple rollouts on large-scale KBs could be significant. A more detailed analysis of the computational complexity and runtime performance on larger KBs would be beneficial.
Other Comments Or Suggestions: Please see weakness.
Questions For Authors: Please see weakness.
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you very much for your time and effort in reviewing our paper. We sincerely appreciate your feedback. Below, we respectfully provide our detailed responses to address your concerns.
**Q1: Limited Evaluation on Real-World Scenarios: While the paper demonstrates strong performance on benchmark datasets like GrailQA, WebQSP, and GraphQ, it lacks evaluation in real-world, noisy, or incomplete knowledge base scenarios. Real-world KBs often contain incomplete or inconsistent data, and the robustness of KBQA-o1 in such settings remains unclear. Including experiments on more diverse and noisy datasets would strengthen the paper's claims about the model's practical applicability.**
- Thank you for raising this important point. We fully agree that evaluation under real-world, noisy, or incomplete KB conditions is critical for practical applicability.
- KBQA-o1 is inherently designed to address real-world challenges such as missing entities, non-standard relations, and schema inconsistency. Its environment-aware agent dynamically interacts with the KB during logical form construction, enabling stepwise adaptation to incomplete or unstable KB structures—an advantage over static or end-to-end approaches.
- To improve robustness, KBQA-o1 uses SimCSE-based semantic matching in MCTS expansion, allowing flexible matching of noisy or ambiguous relations. This reduces reliance on rigid schema annotations and improves generalization to noisy KBs.
- While our experiments are conducted on standard datasets, GraphQ in particular is widely recognized as a noisy and structurally diverse benchmark. KBQA-o1 achieves a 19.4 F1-point improvement over previous methods on GraphQ, showing strong performance under noisy conditions.
- We further simulate real-world scenarios via incremental self-supervised learning: starting with minimal labeled data, KBQA-o1 explores unlabeled questions using MCTS and filters high-quality logical forms via a reward model for fine-tuning. This aligns with the practical need for robustness under low-annotation settings.
- We also identify real-KB evaluation as a key future direction. Our Impact Statement and Appendix outline plans to test on domain-specific KBs (e.g., medicine, law), simulate KB incompleteness via subgraphs, and explore continual learning strategies like DPO.
**Q2: Scalability Concerns: The paper does not thoroughly address the scalability of the proposed method, especially when dealing with extremely large knowledge bases. Although MCTS is designed to balance exploration and exploitation, the computational overhead of performing multiple rollouts on large-scale KBs could be significant. A more detailed analysis of the computational complexity and runtime performance on larger KBs would be beneficial.**
- Thank you for raising this important concern. We fully recognize that scalability is critical for practical deployment of KBQA on large-scale knowledge bases.
- While KBQA-o1 adopts MCTS, it does not rely on exhaustive search. Instead, it integrates an environment-aware agent with policy-guided local exploration. At each step, we use SimCSE-based semantic retrieval (Eq. 6) to narrow the candidate actions to a small, relevant subset, significantly reducing the search space—even in large KBs.
- To balance quality and efficiency, we apply stage-specific MCTS settings: a higher exploration weight w = 50 during training to ensure logical form quality, and a lightweight setting w = 10 during inference to reduce rollout cost. As shown in Figure 5(c), this effectively ensures both performance and scalability.
- We also present empirical results on query throughput vs. accuracy (Figure 4(c)), showing that KBQA-o1 outperforms CoT and ToT variants in both accuracy and runtime. In the final version, we will further include a theoretical analysis of complexity, including rollout cost, SimCSE retrieval overhead, and search depth.
- Importantly, the experiments in our paper are conducted on Freebase, which itself is a massive real-world KB with tens of millions of entities and triples. The fact that KBQA-o1 operates efficiently on Freebase already demonstrates its practical scalability under large-KB conditions.
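The SimCSE-based narrowing of candidate actions mentioned above can be pictured as a cosine-similarity top-k filter (a hypothetical illustration; `top_k_candidates` and its embedding inputs are assumptions, with the embeddings in practice produced by a SimCSE-style encoder):

```python
import numpy as np

def top_k_candidates(query_emb, candidate_embs, candidates, k=5):
    """Narrow MCTS expansion to the k candidate actions whose embeddings
    are most cosine-similar to the current query state."""
    q = query_emb / np.linalg.norm(query_emb)
    C = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    sims = C @ q                       # cosine similarity per candidate
    order = np.argsort(-sims)[:k]      # indices of the k best matches
    return [candidates[i] for i in order]
```

Restricting expansion to these k candidates is what keeps the search tractable even when the underlying KB (e.g., Freebase) has millions of relations and entities.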
At last, we sincerely appreciate your valuable feedback, and we will carefully consider all your suggestions to further improve our paper. Thank you very much!
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarification. I apologize for missing the formal proofs in the appendix — after reading them carefully, I find the theoretical justification solid. Your explanation for choosing MCTS over standard RL also makes sense. I now believe the paper meets the accept standard and have updated my score.
I have a minor (non-blocking) question: could you elaborate a bit more on how KBQA-o1 might leverage recent advances such as GRPO in the future?
---
Reply to Comment 1.1.1:
Comment: Thank you for your support. Regarding your new question, our response is as follows:
GRPO is an open-source reinforcement learning framework proposed by DeepSeek for large reasoning models, designed to replicate the long-chain-of-thought reasoning capabilities of the GPT-o1 model. It has been shown to be highly effective for generating long reasoning trajectories. Recent works such as Search-R1 and R1-Searcher further extend this line of research by integrating reasoning-oriented reinforcement learning with external search engines.
For the KBQA task, we believe there is strong potential to adapt this approach by integrating reinforcement learning with a knowledge graph as the environment. In this context, KBQA-o1 serves as a solid foundational framework.
In addition, we can conducte a comparison between GRPO and MCTS. Both share the characteristic of end-to-end reward signal propagation. However, GRPO operates at the token level, making it more suitable for textual reasoning tasks such as chain-of-thought (CoT) generation. In contrast, MCTS functions at the step level, which may be more appropriate for structured query generation tasks that require explicit interaction with the environment. We plan to further investigate and analyze the differences between the two approaches in future work. | Summary: This paper proposes a novel agentic KBQA framework that integrates Monte Carlo Tree Search (MCTS) with large language models (LLMs) to address challenges in low-resource and complex reasoning scenarios.
Too many baselines are left undiscussed and uncompared, which makes this paper far from technically sound, since both the performance and efficiency are unsatisfactory.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: 1. MCTS requires multiple rollouts and tree expansions, leading to increased latency for complex queries.
2. The reward model evaluates logical forms based on syntax and answer alignment but ignores semantic plausibility. Also there is no exploration and exploitation trade-off since high exploration w improves accuracy but slows down the inference, while lower weights risk suboptimal performance.
Theoretical Claims: The methods and comparisons are theoretically analyzed.
Experimental Designs Or Analyses: 1. The efficiency is really bad. ReAct is already not suitable for QA, let alone the combination with MCTS. Six minutes for one query is not acceptable for either research or industrial scenarios.
2. Too many advanced baselines are missing from the comparison, e.g., RoG, GNN-RAG, StructGPT.
3. The performance is not satisfying, even though many baselines are not compared. For example, RoG is 70.8% in terms of F1 score on WebQSP.
Supplementary Material: Roughly on the proofs.
Relation To Broader Scientific Literature: Closely related.
Essential References Not Discussed: Too many baselines are missing from the discussion and comparison, e.g., RoG, GNN-RAG, StructGPT, etc.
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: Please check the comments and weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your time and effort in reviewing our paper. We sincerely appreciate your feedback. We understand that your main concerns center around two key aspects: **performance** and **efficiency**. Below, we respectfully provide our detailed responses to address these points.
**< Performance >**
**Q1: "Too many baselines are missing from the discussion and comparison, e.g., RoG, GNN-RAG, StructGPT, etc."**
- Thanks. Methods like RoG, GNN-RAG, and ChatKBQA rely on annotated data (e.g., WebQSP, CWQ) for tuning. However, as Gu et al. note, such data is **often unavailable in practice**, making current KBQA approaches **overly dependent on supervision**. This motivates **few-shot evaluation (<=100 examples), which is the setting adopted by KBQA-o1 and all baselines in our work**, as shown in Table 2.
- We also evaluated KBQA-o1 under full supervision, where it performs also strongly. However, as leaderboard gains have plateaued, we focus on the more practical challenge of low-resource KBQA and omit these results from the paper. For reference, the full-supervised comparison is:
| Type | Method | WebQSP F1 | WebQSP Hits@1 | WebQSP Acc | CWQ F1 | CWQ Hits@1 | CWQ Acc |
|-------------|--------------------------|-----------|---------------|-------------|--------|-------------|----------|
| End-to-end | RoG | 70.8 | 85.7 | - | 56.2 | 62.6 | - |
| | GNN-RAG | 73.5 | 82.8 | - | 60.4 | 62.8 | - |
| | ChatKBQA | 83.5 | 86.4 | 77.8 | 81.3 | 86.0 | 76.8 |
| Step-by-step| Pangu | 79.6 | - | - | - | - | - |
| | StructGPT | 72.6 | - | - | - | - | - |
| | ToG | - | - | 82.6 | - | - | 69.5 |
| | KG-Agent | 81.0 | 83.3 | - | 69.8 | 72.2 | - |
| Heuristic | **KBQA-o1 (Ours)** | **85.7** | **88.3** | **81.7** | **83.9**| **89.5** | **80.7** |
**Q2: "The performance is not satisfying, even though many baselines are not compared. For example, RoG is 70.8% in terms of F1 score on WebQSP."**
- Thanks. Compared to RoG’s 70.8 F1 under full supervision, KBQA-o1 achieves 67.0 F1 with only 100 labels, showing strong low-resource potential. Under full supervision, the same setting as RoG’s, KBQA-o1 further reaches 85.7 F1, outperforming RoG and achieving SOTA. However, our focus remains on the more practical low-resource KBQA, not fully supervised settings.
**< Efficiency >**
**Q3: "Six minutes for one query is not acceptable for either research or industrial scenarios."**
- Thanks. There is a factual misunderstanding. As shown in **Figure 4(c)**, the x-axis represents **Query per Minute**, and KBQA-o1 achieves approximately **6 queries per minute**, not **“Six minutes for one query”** as the review states.
- With an average of ~10 seconds per query, KBQA-o1 offers efficient, high-quality KB reasoning for either research or industrial scenarios.
**Q4: "There is no exploration and exploitation trade-off since high exploration w improves accuracy but slows down the inference, while lower weights risk suboptimal performance."**
- Thanks. Indeed, increasing w enhances accuracy while decreasing efficiency. However, this process is not linear. As shown in Figure 5(c), both effectiveness and efficiency stabilize after w reaches a certain threshold, indicating a trade-off can be achieved within a proper w.
- This trade-off arises from the reward mechanism and the UCT selection algorithm in MCTS, making MCTS inherently heuristic. A detailed proof is provided in Appendix B.2.
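For intuition, the role of the exploration weight w can be seen in a standard UCT selection sketch (this uses the textbook UCT formula with illustrative names; it is not necessarily the exact variant used in the paper):

```python
import math

def uct_score(total_value, visits, parent_visits, w):
    """UCT value for picking which child to expand: the first term
    exploits the average reward so far, the second term (scaled by the
    exploration weight w) favors rarely visited actions."""
    if visits == 0:
        return float("inf")            # always try unvisited actions first
    exploit = total_value / visits
    explore = math.sqrt(math.log(parent_visits) / visits)
    return exploit + w * explore

def select_child(children, w):
    """children: list of (total_value, visits); parent visits = their sum."""
    parent_visits = sum(v for _, v in children) or 1
    scores = [uct_score(tv, v, parent_visits, w) for tv, v in children]
    return max(range(len(children)), key=lambda i: scores[i])
```

With a small w, selection sticks to the child with the best average reward; with a large w, it keeps revisiting under-explored children, which raises quality but costs more rollouts.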
**< Other Comments >**
**Q5: "The reward model evaluates logical forms based on syntax and answer alignment but ignores semantic plausibility."**
- Thanks. Our reward is not solely based on syntax or answer alignment. As shown in Equation (9), it combines the policy model’s semantic score and the reward model’s syntax score via weighted fusion, enabling a more robust evaluation that accounts for both semantic plausibility and structural correctness.
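A minimal sketch of such a weighted fusion (illustrative only — `alpha` and the function name are assumptions, and the exact form of Equation (9) may differ):

```python
def fused_reward(semantic_score, syntax_score, alpha=0.5):
    """Blend the policy model's semantic score with the reward model's
    syntax score via a convex combination; alpha controls the balance."""
    return alpha * semantic_score + (1.0 - alpha) * syntax_score
```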
**Q6: "ReAct is already not suitable for QA, let alone the combination with MCTS. "**
- Thanks. In KBQA-o1, we only use ReAct as a standardized prompt format to formulate the agent process. These prompts are fixed and embedded via instruction tuning. Thus, whether ReAct is “suitable for QA” is irrelevant to our setting and does not impact our method’s effectiveness.
At last, we sincerely appreciate your valuable feedback, and we will carefully consider all your suggestions to further improve our paper. We would be deeply grateful if you could kindly reconsider raising the score to 3 or above. Thank you very much!
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Thanks for the rebuttal. I have increased my score to 3.
---
Reply to Comment 1.1.1:
Comment: Thank you for your support. Once again, we sincerely appreciate your responsible and insightful review. We will continue to refine this work and guide our future efforts based on your valuable suggestions. | Summary: The paper presents KBQA-o1, an agentic Knowledge Base Question Answering (KBQA) method that integrates Monte Carlo Tree Search (MCTS) for improved logical form generation. It addresses challenges in KB awareness, search efficiency, and reliance on annotated data by employing a ReAct-based agent process and incremental fine-tuning. Experiments on GrailQA, WebQSP, and GraphQ show that KBQA-o1 outperforms previous low-resource methods and approaches fully supervised performance, demonstrating strong generalization and adaptability across multiple LLMs.
Claims And Evidence: Yes. The paper provides substantial empirical evidence to support its main claims.
Methods And Evaluation Criteria: The methods and evaluation criteria in the paper are generally appropriate and well-aligned with the KBQA task. The authors evaluate KBQA-o1 on three widely used benchmark datasets—GrailQA, WebQSP, and GraphQ—which are standard for assessing KBQA models, particularly in low-resource settings. The use of F1 score and Exact Match (EM) as evaluation metrics is also consistent with prior work in this domain.
Theoretical Claims: The paper presents several theoretical claims related to the effectiveness of its agentic KBQA approach with Monte Carlo Tree Search (MCTS).
Proposition 4.1 – The agent’s awareness of the KB environment improves logical form generation compared to end-to-end methods.
Proposition 4.2 – The MCTS-based heuristic method balances search efficiency and effectiveness better than Chain-of-Thought (CoT) and Tree-of-Thought (ToT) methods.
Proposition 4.3 – There exists a reward threshold 𝛾∗ that ensures incremental fine-tuning improves model performance.
The correctness of these claims is primarily supported by empirical results, rather than formal mathematical proofs. The paper references experimental findings (Section 5.4 and Appendices) as qualitative or quantitative justification but does not provide rigorous theoretical derivations.
Experimental Designs Or Analyses: The experimental design and analysis in the paper are generally sound and well-structured, providing strong empirical support for the proposed KBQA-o1 method. The authors evaluate their approach on three widely used KBQA benchmarks (GrailQA, WebQSP, GraphQ) under a low-resource setting, which aligns well with the paper’s focus on improving performance with limited annotated data.
Supplementary Material: Yes, all parts.
Relation To Broader Scientific Literature: The paper’s contributions are well-situated within the broader scientific literature on Knowledge Base Question Answering (KBQA), heuristic search methods, and large language models (LLMs) for reasoning. It builds upon existing techniques while introducing novel elements to improve logical form generation and exploration efficiency.
Novelty & Contribution to Literature
Agentic KBQA with MCTS: The combination of ReAct agents and MCTS for KBQA reasoning appears to be a novel approach that improves search efficiency while maintaining flexibility.
Incremental Fine-Tuning for KBQA: The method’s use of self-annotated logical forms aligns with semi-supervised learning approaches, providing a scalable alternative to purely supervised KBQA models.
Improved Low-Resource Performance: Unlike previous KBQA methods that depend heavily on large annotated datasets, KBQA-o1 achieves strong performance with limited supervision, making it more practical for real-world applications.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
Well-Defined Problem Scope & Contribution: The paper clearly identifies key challenges in KBQA, such as poor KB awareness, large search spaces, and reliance on annotated data, and proposes a well-motivated solution with KBQA-o1. The integration of ReAct-based agent reasoning with Monte Carlo Tree Search (MCTS) is a creative and effective combination of existing ideas.
Strong Empirical Performance: The method outperforms state-of-the-art low-resource KBQA methods on standard benchmarks (GrailQA, WebQSP, GraphQ). It demonstrates competitive performance even against fully supervised methods, highlighting its effectiveness in low-data scenarios.
Weaknesses & Suggestions:
1. A small-scale experiment on a different KB structure would strengthen claims of broad applicability.
2. A discussion on why MCTS was chosen over standard RL for guiding logical form exploration would be beneficial.
Other Comments Or Suggestions: Please refer to the above section.
Questions For Authors: Please refer to the above section.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: Thank you very much for your time and effort in reviewing our paper. We sincerely appreciate your feedback. Below, we respectfully provide our detailed responses to address your concerns.
**Q1: Proposition 4.3 – There exists a reward threshold 𝛾∗ that ensures incremental fine-tuning improves model performance. The correctness of these claims is primarily supported by empirical results, rather than formal mathematical proofs. The paper references experimental findings (Section 5.4 and Appendices) as qualitative or quantitative justification but does not provide rigorous theoretical derivations.**
- Thank you for your comments. The core components of KBQA-o1 in our paper include Agent Initialization, Heuristic Environment Exploration, and Incremental Fine-Tuning. We provide formal justifications for the effectiveness of these three key modules through Propositions 4.1, 4.2, and 4.3, respectively.
- You mentioned that “these claims are primarily supported by empirical results, rather than formal mathematical proofs” and that the paper “does not provide rigorous theoretical derivations.” However, we would like to kindly clarify that in addition to the quantitative experimental results in Section 5, we have provided detailed theoretical derivations supporting these propositions in Appendices B.1, B.2, and B.3.
- We are unsure whether this might have been an oversight or if you believe the theoretical proofs require further improvement. Please kindly let us know so we can better address your concerns.
**Q2: A small-scale experiment on a different KB structure would strengthen claims of broad applicability.**
- Thank you for the suggestion. As current KBQA tasks are primarily based on large-scale RDF knowledge bases such as Freebase and Wikidata—each containing tens or hundreds of millions of nodes—the task remains relatively complex. We have validated the effectiveness of our method across three datasets with different distributions: GrailQA, WebQSP, and GraphQ. In particular, we evaluated our approach on the more comprehensive GrailQA dataset under various settings, including I.I.D., Compositional, and Zero-Shot, which aligns with the majority of existing KBQA benchmarks. This ensures both the solidity and applicability of our approach.
- As future work, we plan to extend our experiments to different types of knowledge base structures on a broader scale. For instance, we aim to apply our method to custom-built knowledge graphs such as those used in GraphRAG tasks, as well as to property graphs or hypergraph-based knowledge bases, to further demonstrate the broader applicability of our approach.
**Q3: A discussion on why MCTS was chosen over standard RL for guiding logical form exploration would be beneficial.**
- We initially experimented with the Direct Preference Optimization (DPO) algorithm as a standard RL-based approach for guiding logical form exploration. However, DPO requires high-quality negative samples to be effective. To this end, we attempted to construct negative samples by leveraging the erroneous branches generated during the MCTS search.
- Nevertheless, we observed a critical challenge: due to the dense structure of large-scale knowledge bases, the differences between correct and incorrect logical forms are often very subtle. For instance, the correct relation might be film.actor.film, while a near-miss incorrect one could be tv.tv_actor.starring_roles. Despite their structural difference, these relations are semantically very close. DPO struggles to distinguish such fine-grained differences in structured outputs, resulting in subpar performance compared to supervised fine-tuning (SFT) with only positive samples.
- Given these limitations, we opted to use MCTS for logical form exploration, which provides a more interpretable and controllable mechanism to search over the space of structured queries. Furthermore, we noted recent advances such as GRPO, proposed by DeepSeek-R1, which uses end-to-end reinforcement learning with reward signals to guide structured generation. Inspired by this, we plan to explore replacing the MCTS process with an end-to-end RL paradigm in future work to further enhance performance.
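The UCT-style selection rule at the heart of MCTS exploration can be illustrated with a minimal sketch. This is not the KBQA-o1 implementation — the node states, rewards, and action space below are placeholders — but it shows how the search trades off exploiting high-reward logical-form expansions against exploring rarely visited ones.

```python
import math

class Node:
    """A search-tree node holding a partial logical form (placeholder state)."""
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_reward = 0.0

def uct_score(child, parent_visits, c=1.41):
    """UCB1: average reward plus an exploration bonus for rarely visited nodes."""
    if child.visits == 0:
        return float("inf")  # unvisited children are explored first
    exploit = child.total_reward / child.visits
    explore = c * math.sqrt(math.log(parent_visits) / child.visits)
    return exploit + explore

def select(node):
    """Descend from `node` by repeatedly picking the child with the best UCT score."""
    while node.children:
        node = max(node.children, key=lambda ch: uct_score(ch, node.visits))
    return node

def backpropagate(node, reward):
    """Propagate a simulation reward from a leaf back up to the root."""
    while node is not None:
        node.visits += 1
        node.total_reward += reward
        node = node.parent

# Tiny demo: after a few simulated rollouts, selection prefers the
# higher-reward child once both children have been visited.
root = Node("root")
good, bad = Node("candidate A", root), Node("candidate B", root)
root.children = [good, bad]
for node, reward in [(good, 1.0), (good, 0.9), (bad, 0.1), (bad, 0.2)]:
    backpropagate(node, reward)
chosen = select(root)
```

In a KBQA setting, each child would correspond to appending one relation or operator to a partial logical form, with rewards supplied by a learned reward model rather than the hard-coded values used here.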
At last, we sincerely appreciate your valuable feedback, and we will carefully consider all your suggestions to further improve our paper. Thank you very much! | null | null | null | null | null | null |
---
Improved and Oracle-Efficient Online $\ell_1$-Multicalibration
Paper Decision: Accept (poster)
---
Summary: This paper tackles the challenge of online multicalibration. The key contribution of this paper is theoretical: the paper proposes a method that achieves an improved rate of $O(T^{-1/3})$ and an oracle-efficient rate of $O(T^{-1/4})$. The key insight is that one can reduce the $\ell_1$-multicalibration problem to an online linear-product optimization problem (OLPO).
Claims And Evidence: Yes, claims are usually backed up with references or motivation.
Methods And Evaluation Criteria: The paper does not provide any evaluation of the proposed approach.
Theoretical Claims: The paper is extremely heavy on theory and I did not check the correctness of proofs.
Experimental Designs Or Analyses: No experiments.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper improves upon existing literature by improving the oracle efficient rate from T^-1/8 to T^-1/4, and the online multicalibration rate from T^-1/4 to T^-1/3. Also, the paper mentions connections with omnipredictions.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: # Main Strengths:
1. The paper tackles the multicalibration problem, a desirable property ensuring that models output precise probabilities. By theoretically analyzing the proposed solution, the paper improves the convergence rate;
2. Several results are provided, showcasing the importance of online multicalibration;
# Main Weaknesses:
1. The paper is extremely heavy on theory. I wonder whether ICML is the proper venue to present such findings or another conference (e.g., theoretical computer science) could be a better fit;
2. The structure of the paper is not ideal: results are first stated (Sec 1) and only proved afterwards (Sec 2). Also, I do not understand how 1.1 and the subsequent subsections fit into the introduction. I'd rather have a Sec 2 starting from 1.1;
3. Although the paper presents a pseudo-code for the proposed algorithm, seeing results on toy data would improve the quality of the paper;
4. The motivation underlying the paper is hidden, and it seems to simply be "this existing paper achieves the following guaranteed error, can we do better?". The point of why this is relevant should be highlighted.
Other Comments Or Suggestions: Overall, I don't feel like the community at ICML could benefit from these theoretical results and probably another venue could be a better fit. However, I am open to change my mind, should the other reviewers be positive on the paper.
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We appreciate the reviewer’s concern regarding the theoretical nature of our work. While our contributions are indeed theoretical, we believe ICML is an appropriate venue for the following reasons. (1) Relevance to Core ML Problems: Multicalibration — and online multicalibration in particular — is a key problem at the intersection of machine learning, fairness, and sequential decision-making. Recent ICML papers (e.g., on online learning, calibration, multigroup fairness, omniprediction) have explored similar themes, indicating strong interest from the ICML community.
(2) Methodological Advancement: Our work introduces a new online learning problem (OLPO) — and gives a reduction similar to one that was obtained for online calibration and OLO.
(3) Precedent: Similar works were published at ICML; see, e.g., Noarov and Roth (2023), Globus-Harris et al. (2023). In fact, the original multicalibration paper, Hébert-Johnson et al. (2018), also appeared in ICML.
Works on online learning, optimization and oracle-efficient online learning also regularly appear at ICML.
**Regarding the motivation**: While our work does improve upon existing error rates, our motivation is broader: online multicalibration is a key tool for ensuring fairness and reliability in sequential prediction settings, which arise in many real-world applications. Our contributions aim to make this framework both statistically and computationally efficient. We will revise the introduction to better highlight this broader motivation and its relevance to the ML community.
---
Summary: The paper focuses on the online multicalibration task, for which it (1) presents an $O(T^{2/3})$-ECE error algorithm, thus matching the best known constructive efficient bounds for vanilla calibration; and (2) presents an oracle-efficient algorithm that obtains $O(T^{3/4})$ multicalibration ECE error given access to a certain offline oracle.
Both results above are achieved by developing a new online prediction framework that the authors call online linear product optimization (OLPO), which is distinct from online convex optimization. They also develop a linearized variant of OLPO, which helps obtain the first contribution. They obtain the second contribution by implementing (non-linearized) OLPO via the oracle-efficient FTPL technique of Dudik et al. (2020), which works under two specific conditions. They then show that for the natural "transductive" and "small-separator" classes of group families, these conditions always hold.
#######
Update after rebuttal:
I have read the authors' response. It addresses my concerns by promising to include additional discussion of the reference, and I therefore keep my original score.
Claims And Evidence: Yes, the claims made in the submission are supported by convincing evidence in the form of proofs.
Methods And Evaluation Criteria: N/A --- this paper presents a theoretical contribution only.
Theoretical Claims: Yes, I checked the bulk of proofs and statements in the main part and in the supplementary, and believe them to be correct --- possibly modulo some small unchecked technicalities.
Experimental Designs Or Analyses: N/A --- this paper presents a theoretical contribution only.
Supplementary Material: Yes, I carefully reviewed (almost) all of the supplementary material.
Relation To Broader Scientific Literature: There exists a large prior literature on online vanilla calibration, and a smaller but substantial literature on online multicalibration and related methods. Relative to this latter literature, this paper obtains new algorithms for online multicalibration with rates matching the (near-) best known rates for vanilla calibration, as well as the first oracle efficient algorithm.
For the latter, oracle efficiency, part, the closest-related prior work is (Garg et al, 2024), which obtained oracle efficient online omniprediction (a closely related but distinct task), but the sense in which Garg et al's was an oracle-efficient algorithm required access to an online (regression) oracle, whereas the present paper shows in its setting that an offline oracle can suffice, and do so via novel arguments in their developed framework, connecting it to the oracle efficiency afforded by an adaptive FTPL algorithm of Dudik et al.
For the former contribution, that is, the O(T^{2/3}) L1 multicalibration rate, I believe this rate is already subsumed by an existing online unbiased prediction framework (https://opt-ml.org/papers/2023/paper96.pdf); see below. The OLPO algorithm and framework used by this paper to obtain this result is similar but appears distinct from the former framework (the two frameworks might possibly be duals of each other in some sense).
Essential References Not Discussed: The following paper [NRRX'23] (https://opt-ml.org/papers/2023/paper96.pdf) contains what appears to be a highly related framework for online unbiased vector-valued prediction, where in their case unbiasedness is stated with respect to general events that may depend on the context and on the predictions themselves. In particular, their method already attains O(T^{2/3}) L1 multicalibration as an immediate corollary. Namely, discretizing into m uniform buckets and defining conditioning events for each pair (group, bucket), their framework gives, for each group after T rounds, L1 calibration on every group bounded as: T/m (accumulated discretization error over the T rounds) + O(\sum_{buckets i} \sqrt{number of rounds prediction fell into bucket i}). In the worst case when each bucket appears in T/m rounds this leads to the bound O(T/m + \sqrt{Tm}) = O(T^{2/3}) for the standard choice m = T^{1/3}.
Beyond this result, it appears that the OLPO is quite related to this unbiased framework, with both reducing to applying the combination of experts + OCO algorithms, so it appears important for context to discuss this connection in some detail.
Other Strengths And Weaknesses: Overall, I believe that the main strength of this paper lies in its carefully built framework that allows to exploit the oracle-efficient FTPL technique in this setting. While oracle efficient FTPL is clearly a possible candidate subroutine to guarantee oracle efficiency in this and many other online settings, it remained unclear in the online calibration literature until now how to connect this setting so that this subroutine can be exploited. This paper does that, and also makes a first step to identify: under what conditions can group families induce this efficient oracle algorithm? Similarly, also instructive and fruitful are the careful techniques exploiting the calibration halfspace oracle of Abernethy et al (2011) that are involved in the first part of the paper. Another strength is that the paper is well-written.
Other Comments Or Suggestions: A presentational suggestion: I believe the derivation in the last appendix, which confirms the two conditions in the case of transductive and small separator set groups, is actually quite important for the reader to ingest, so I think it is wise to move it to the main part of the paper --- indeed, it nicely clarifies the applicability of the novel oracle-efficient construction.
Questions For Authors: The main question that I have is a substantive comparison to the reference above --- beyond the O(T^{2/3}) L1 multicalibration implication, OLPO appears to share some essential algorithmic features with the proposed framework here, seeing as both frameworks rely on appropriately linearizing the task into a combination of an experts algorithm to aggregate over the "groups", and on an online convex optimization method to form actual predictions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: We thank the reviewer for pointing us towards this relevant reference.
After reviewing their results, we agree that their framework can be used to derive bounds for online $\ell_1$-multicalibration, and we outline a high-level approach for binary-valued hypothesis classes below.
Fix a $h \in \mathcal H$ and denote the collection of events
$\mathcal E := \{E_{h,i}: h \in \mathcal H \text{ and } i \in \{0,\ldots,m\} \}$
where $E_{h,i}(x,p) = \mathbb{I}[h(x_t) = 1, p_t = i/m]$. Further, denote $T_{h,i} := \sum_{t=1}^T \mathbb{I}[h(x_t) = 1, p_t = i/m]$ and note that this is equal to $\sum_{t=1}^T E_{h,i}(x_t,p_t)$. Also note that $\sum_{i = 0}^m T_{h,i} \leq T$. Then, applying Theorem~3.4 of the reference together with the halfspace oracle, one would obtain
$$
K_T(\pi,h) \lesssim \sum_{i = 0}^m \sqrt{T_{h,i} \cdot \log (2|\mathcal H|mT)} + \frac{T}{m} \leq \sqrt{mT \cdot \log (2|\mathcal H|mT)} + \frac{T}{m},
$$
and then optimizing over the discretization level $m$ gives the result.
It is worth noting that the algorithmic framework in the paper [NRRX'23] is quite different, despite both works using an expert routine. In particular, the work [NRRX'23] requires a small-loss regret bound to get the result, while we do not.
Additionally, we believe our algorithm is much simpler (e.g. not requiring the solution of any min-max optimization problem) and the reduction to OLPO is of independent interest as it facilitates the oracle-efficient results in a more natural and modular way.
We will certainly include a discussion of this connection in the related work section to provide better context and contrast our contributions with theirs.
Finally, thank you for the presentational feedback. We agree and we would be happy to move this derivation (at least the statements of the conditions, if not the full proof) to the main body of the paper.
---
Summary: The paper studies the problem of online multicalibration for the L1 norm. The paper proposes a method with theoretical guarantees. The key contribution is based on the reduction of online L1-multicalibration to an online learning problem.
### update after rebuttal
I am maintaining the current score following the rebuttal.
Claims And Evidence: These claims are well-supported through theoretical analyses and mathematical proofs.
Methods And Evaluation Criteria: N/A
Theoretical Claims: The theoretical contributions are sound. The idea on extending the algorithms to a more general setting make sense. The proposed OLPO is novel.
Experimental Designs Or Analyses: N/A
Supplementary Material: N/A
Relation To Broader Scientific Literature: The results improve the prior work on online multicalibration (Gupta 2024, Garg 2024). In particular, the error rate when H is finite is improved from $O(T^{-1/4})$ in Garg 2024 to $O(T^{-1/3})$ in this paper. Morever, the oracle-efficient bound is also improved from $O(T^{-1/8})$ in Garg 2024 to $O(T^{-1/4})$.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: 1. The paper is very well-written and the maths appears to be rigorous.
2. The paper proposed an approach that only takes a polynomial number of call to an optimization oracle, greatly improve the no-regret algorithm for OLPO.
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful and positive evaluation of our paper.
---
Summary: The paper studies an online prediction setting, where a learner wishes to minimize $\ell_1$ multicalibration error with respect to a class of real-valued predictors $\mathcal{H}$ that act as group selection functions. The authors propose an algorithm that obtains an error rate of $O(T^{-1/3})$ by reducing the problem to an online problem they term online linear product minimization, which they solve by linearizing the reward function via enumerating over all $h\in\mathcal{H}$, and running an experts algorithm over $\vert\mathcal{H}\vert$ online linear optimization algorithms ("meta-experts"), one for each $h\in\mathcal{H}$. They then show how to extend their result to non-finite classes $\mathcal{H}$ using the technique of covering numbers and finding a finite cover for $\mathcal{H}$. Finally, the authors consider obtaining online $\ell_1$ multicalibration with oracle-efficient algorithms (which do not require enumerating over all of $\mathcal{H}$, but instead assume access to an offline optimization oracle), for which they show an algorithm based on a reduction to the generalized FTPL framework (Dudik et al. 2020), obtaining an error rate of $O(T^{-1/4})$.
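The experts-over-meta-experts construction described in the summary rests on a standard no-regret experts algorithm. Below is a minimal, hypothetical sketch of multiplicative weights (Hedge) — the paper composes such a routine with per-group online linear optimizers, which is omitted here — showing how weight mass concentrates on the expert with the smallest cumulative loss.

```python
import math

def hedge(loss_rounds, eta=0.5):
    """Run multiplicative weights (Hedge) over a fixed set of experts.

    loss_rounds: list of per-round loss vectors (one entry per expert, in [0, 1]).
    Returns the final normalized weight vector over experts.
    """
    n = len(loss_rounds[0])
    weights = [1.0] * n
    for losses in loss_rounds:
        # Each expert's weight decays exponentially in its incurred loss.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    total = sum(weights)
    return [w / total for w in weights]

# Expert 0 is consistently good, expert 1 consistently bad, for 20 rounds.
final = hedge([[0.0, 1.0]] * 20)
```

With losses in $[0,1]$ and learning rate $\eta$, this scheme guarantees regret $O(\ln n / \eta + \eta T)$ against the best fixed expert, which is the property the meta-expert construction exploits.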
Claims And Evidence: The paper is purely theoretical. I have only glanced at the appendix with the proofs, but the claims appear to be substantiated.
Methods And Evaluation Criteria: N/A.
Theoretical Claims: I did not.
Experimental Designs Or Analyses: N/A.
Supplementary Material: I did not.
Relation To Broader Scientific Literature: The paper is studying online $\ell_1$ multicalibration and provides: (a) (computationally inefficient) algorithm that improves error rates over prior work (Gupta et al. 2022, Lee et al. 2022) (though Noarov et al. may implicitly still obtain the same bounds, see question to authors) and extensions to non-finite classes, and (b) oracle-efficient algorithm obtaining a faster error rate than known in prior work (Garg et al. 2024), and under milder assumptions (offline oracle instead of online regression oracle).
Essential References Not Discussed: Most of the relevant related work appears in in manuscript. An exception is "High-Dimensional Prediction for Sequential Decision Making" by Noarov et al. 2023 that I believe is highly relevant and is not discussed. I believe that using their approach of obtaining subsequence regret guarantees can be utilized when each subsequence is defined by $h\in\mathcal{H}$, and optimizing over the number of buckets in the prediction should obtain similar bounds of $O(T^{-1/3})$.
Other Strengths And Weaknesses: Strengths:
1. The paper is very well-written and presentation is clear and concise. I enjoyed reading it.
2. The problem is clearly motivated, and is rather central in the domains of fairness/uncertainty estimation.
3. The result on obtaining $O(T^{-1/4})$ $\ell_1$ multicalibration with an oracle-efficient algorithm and an offline oracle I believe is novel and very interesting, as well as the bounds for non-finite classes in the first part of the paper.
Weaknesses:
1. Novelty of the $O(T^{-1/3})$ bound using an inefficient algorithm in the finite case, please see questions section.
Other Comments Or Suggestions: -
Questions For Authors: Can the approach in Noarov et al. 2023 for subsequence regret be utilized to derive similar bounds of $O(T^{-1/3})$ for $\ell_1$ multicalibration? Can you elaborate on the differences?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We thank the reviewer for their positive evaluation of our paper, and for pointing us towards the paper [NRRX'23]. Please refer to the response to Reviewer uPiD.
---
Summary: This paper studies the online l1-multicalibration problem. Multicalibration is a natural extension of calibration with group identities. It is a natural group fairness definition and implies a learning concept called omniprediction.
The authors improve upon a previous work that provides an $O(T^{-1/4})$ rate for $\ell_2$-multicalibration. This paper directly solves $\ell_1$-multicalibration with an $O(T^{-1/3})$ rate and an oracle-efficient $O(T^{-1/4})$ rate using a halfspace oracle, for which they provide an implementation in the paper. This paper improves upon the previous work in many respects.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes
Theoretical Claims: I read the arguments at a high level, and they make sense to me.
Experimental Designs Or Analyses: N/A
Supplementary Material: I took a look at Appendix B and C to get a better understanding of their approach.
Relation To Broader Scientific Literature: This paper provides some better results on a problem studied by some previous work.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: This paper provides a reduction from online multicalibration to online linear-product optimization, which is similar to the connection between calibration and online linear optimization. The result is an improvement, and the reduction provides some new understanding of the online multicalibration problem.
Other Comments Or Suggestions: There is one typo that needs to be fixed at the end of Appendix C but does not hurt understanding.
Questions For Authors: I am wondering if the authors can confirm that their results also provide the same upper bound for online omniprediction for free, since $\ell_1$-multicalibration error is an upper bound for omniprediction error in the offline setting.
Also, is the lower bound for online expected calibration error also a lower bound for online multicalibration? I think it would be nice if the authors could discuss this a bit in the related work.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
---
Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough review of our paper (and for discovering a typo in Appendix~C). We also appreciate the reminder to acknowledge the known lower bound for online calibration, which indeed extends to online $\ell_1$- multicalibration.
To address the first comment, yes, our results do imply improved bounds for online omniprediction since our improved bounds for online $\ell_1$-multicalibration can be transformed into improved bounds for online omniprediction when the loss functions are convex and Lipschitz; see, e.g., Garg et al. (2024).
We opted against including results for online omniprediction because it was not the primary focus of our work, and we wanted to avoid distracting from the core contributions on online $\ell_1$-multicalibration and the $\mathtt{OLPO}$ framework.
Additionally, due to space limitations, providing a satisfactory treatment of omniprediction results would have been challenging.
That said, we acknowledge that this connection is standard and well-known in the community, and we will add a short discussion in the related work in the final version of the paper. | null | null | null | null |
---
"Who experiences large model decay and why?" A Hierarchical Framework for Diagnosing Heterogeneous Performance Drift
Paper Decision: Accept (poster)
---
Summary: This paper proposes a nonparametric Subgroup-scanning Hierarchical Inference Framework for performance drifT (SHIFT) that uses hypothesis testing for drift diagnosis. SHIFT first decides whether any subgroup experiences significant performance decay from drift, then checks which specific shift explains the decay. In this way, SHIFT enables explainable detection of subgroup shifts.
## update after rebuttal
The rebuttal from the authors solved my concerns and provided insightful experiment and discussion. I decide to keep my original positive rating.
Claims And Evidence: The claims are successfully supported by experiments on tabular data.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense under the problem setting and tabular data.
Theoretical Claims: The theoretical claims focus on the formulation of the hypothesis testing.
Experimental Designs Or Analyses: The experiments are extensive and support the proposed claims.
Supplementary Material: I didn't review the supplementary material.
Relation To Broader Scientific Literature: Previous work proposes to decompose the average performance drop within an identified subgroup into covariate shift and outcome shift. This work aims to further identify the related input variables. In addition, prior work on drift diagnosis is primarily based on estimation, while this work uses hypothesis testing.
Essential References Not Discussed: I didn't identify any such references.
Other Strengths And Weaknesses: Strengths:
This paper is well-written and easy to follow. The work proposes a framework to detect the subgroup with performance decay and identify the key variables for the performance decay, which is explainable and has great potential in practical applications. The application of SHIFT in practice is also well-discussed in this paper.
Weaknesses:
This paper mainly focuses on experiments with tabular data. Although the conclusion mentions that the method can be applied to image/text data, doing so might bring new challenges, such as computational cost and the categorization of subgroups.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. How is the efficiency or computational cost of the hypothesis testing in SHIFT?
2. Will there be new challenges when extending SHIFT to image/text data? If so, is it possible to address these challenges?
3. SHIFT requires domain experts to select the minimum subgroup size and minimum shift magnitude and how sensitive is it to the choices of the hyperparameters?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback and are glad to know that they found the work to be well-written and to have **great potential in practical applications**. Indeed, SHIFT addresses a **critical and very practical question**: when performance of an ML algorithm drops in a new application context, which subgroups are adversely affected and why? Such settings are prevalent in healthcare, where ML algorithms differ in performance widely across demographic groups, time, and geographies [Finlayson et al. 2021].
Discovering the subgroups experiencing performance drop is **highly important to catch hidden failures of the algorithm and to develop targeted fixes** to the algorithm for the affected subgroups without sacrificing its performance elsewhere. Current methods to study performance drops either do not focus on subgroup-level performance or do not quantify uncertainty in the discovered subgroups. SHIFT addresses this methodological gap via statistically principled and computationally scalable methods. Furthermore, we believe SHIFT provides a **solid theoretical foundation** on which future work can build on, as discussed below.
---
**Extending SHIFT to image/text data** Although SHIFT is primarily designed for tabular data, its aggregate-level tests are suitable for analyzing unstructured data; its detailed-level tests can also be used, if one has prespecified concepts. In the revised manuscript, we will include applications of SHIFT to both text and image datasets. As an example, we have applied SHIFT to the CivilComments dataset [Koh et al. 2021], which contains comments on online articles that are judged to be toxic or not. We consider a DistilBERT-base-uncased model fine-tuned to classify toxic comments. Given the **768-dimensional** embeddings from this BERT model, we can apply SHIFT to understand differences in accuracy when classifying comments that mention the female gender (target domain) versus the remaining comments (source domain). Accuracy of the model drops by 1.3\% in the target. Results from SHIFT's aggregate-level test find evidence for covariate shift, i.e. there exists a subgroup of size $\ge$ 5\% that experiences an accuracy drop greater than 5\% due to covariate shift.
| Test | p-value |
|-----------------|---------|
| Covariate shift | 0.00 |
| Outcome shift | 0.83 |
To run detailed-level tests in SHIFT, we require variables to be interpretable. Given unstructured data, one solution is to combine SHIFT with concept bottleneck models [Koh et al. 2020]. We will include such an example in the revised paper. We note that another solution, if one does not need statistical inference at the detailed level, is to simply analyze differences between the comments from the detected subgroup from SHIFT in the source and target domains. Using a combination of GPT-4o and manual review, we found that in the subgroup where the toxicity classifier experienced performance decay at the target domain, the comments tended to discuss politics, society, race, and identity more. This shift in topics may explain the performance drop. For instance, the combination of female references with discussions of race and political ideology might compound biases that the classifier has inadvertently learned.
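As a hedged illustration of the kind of aggregate p-value reported in the table above (this is **not** SHIFT's actual test, which adds subgroup size/magnitude constraints and nuisance models), a plain permutation test on the accuracy gap between source and target domains looks like:

```python
# Generic one-sided permutation test for an accuracy gap between two domains.
# Illustrative sketch only; SHIFT's tests are more refined than this.
import random

def perm_test_acc_gap(src_correct, tgt_correct, n_perm=2000, seed=0):
    rng = random.Random(seed)
    obs = sum(src_correct) / len(src_correct) - sum(tgt_correct) / len(tgt_correct)
    pooled = list(src_correct) + list(tgt_correct)
    n_src = len(src_correct)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # break the domain labels
        gap = sum(pooled[:n_src]) / n_src - sum(pooled[n_src:]) / (len(pooled) - n_src)
        if gap >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # p-value with the standard +1 correction

src = [1] * 90 + [0] * 10   # hypothetical: 90% accuracy in source
tgt = [1] * 70 + [0] * 30   # hypothetical: 70% accuracy in target
p = perm_test_acc_gap(src, tgt)
print(p < 0.05)
```

The per-example correctness indicators are hypothetical; in the CivilComments setting they would come from the fine-tuned classifier's predictions.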
---
**Computational cost** SHIFT is very **fast**. SHIFT runs in under 10 minutes on the real-world datasets with around 10,000 points. The bulk of the computation is fitting the nuisance models, so the runtime is O(V), where V is the number of cross-validation folds. Moreover, fitting these nuisance models is easily parallelizable.
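A generic cross-fitting sketch (not the authors' code) makes the runtime claim concrete: the dominant cost is one nuisance-model fit per fold, so total work scales with the number of folds V, and the per-fold fits are independent, hence trivially parallelizable. The mean-predictor "nuisance model" here is a stand-in for whatever ML estimator SHIFT fits.

```python
# Cross-fitting: each point gets an out-of-fold prediction from a model
# trained on the other folds. One independent fit per fold => O(V) fits.
import random

def cross_fit(xs, ys, n_folds=5):
    n = len(xs)
    idx = list(range(n))
    random.Random(0).shuffle(idx)
    folds = [idx[k::n_folds] for k in range(n_folds)]  # disjoint folds
    preds = [None] * n
    for held_out in folds:
        train = [i for i in idx if i not in set(held_out)]
        mu = sum(ys[i] for i in train) / len(train)    # toy nuisance fit
        for i in held_out:
            preds[i] = mu                              # out-of-fold prediction
    return preds

preds = cross_fit(list(range(8)), [0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0], n_folds=4)
print(all(p is not None for p in preds))  # True: every point is covered
```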
---
**Sensitivity to test parameters** We find that SHIFT is not very sensitive to the two test parameters: minimum subgroup size and shift magnitude. Moreover, these two parameters are very **intuitive**. While one could have a domain expert set them, they are intuitive parameters that can be easily selected by anyone and should simply reflect one's own tolerance for performance drift. We will include guidance on setting the parameters in the two case studies.
We hope that the responses address your concerns.
---
[1.] Koh et al. WILDS: A Benchmark of in-the-Wild Distribution Shifts. ICML 2021
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal from the authors. The application of SHIFT on CivilComments looks promising and the discussion has the potential to inspire future research. I will keep my positive rating and hope the authors could incorporate the new results in the draft. | Summary: This paper introduces SHIFT, a hierarchical hypothesis-testing framework designed to identify subgroups experiencing significant performance degradation in machine learning models due to distribution shifts. SHIFT first tests for the presence of large performance decay due to aggregate covariate and outcome shifts, and subsequently identifies specific subsets of input variables responsible for the observed decay. Unlike previous approaches, SHIFT does not rely on strong parametric assumptions or detailed causal knowledge, making it suitable for scenarios with limited data. For experiments, SHIFT is validated at the simulation level and also for shifts in real-world data.
Claims And Evidence: The claims are backed by theoretical proofs. Empirically, it was shown via experiments (both simulated and real) that SHIFT can identify relevant shifts and improve subgroup accuracy.
The paper could be further strengthened by demonstrating SHIFT's robustness across a wider array of distribution shifts such as high-dimensional, non-tabular data. Additionally, observations under extreme sparsity or very small sample scenarios would be informative.
Methods And Evaluation Criteria: The proposed method’s two-stage testing approach makes sense. Also, the experimental evaluation is sound, using both controlled simulation and real world case studies encompassing realistic shifts (insurance across states, hospital readmission across institutions). If possible, additional real world studies regarding a different domain would be very informative, but it could be hard to conduct such experiments.
Theoretical Claims: As I am not currently well-versed in the theoretical aspects of the related literature, I must defer the analyses and verifications of the theoretical claims of the paper to other reviewers.
Experimental Designs Or Analyses: Strengths:
- Clear ground-truth validation of subgroup detection and shift attribution via simulations.
- Also demonstrates practical utility via real-world studies, with SHIFT-driven fixes improving performance.
Weaknesses:
- Scalability - no discussion of runtime or feasibility in high-dimensional settings.
- SHIFT focuses on one affected subgroup per shift type, but iterative subgroup discovery is not explored.
Supplementary Material: I was unable to review the supplementary materials.
Relation To Broader Scientific Literature: The proposed method is closely related to distribution shifts, robust machine learning, and bias and fairness in machine learning. Although the paper primarily focuses on tabular data, these concepts are also important for large-scale models (if the proposed method can be effectively scaled to such magnitudes).
Essential References Not Discussed: It could benefit the paper if the WILDS dataset (Koh et al., 2021), real-world distribution shift benchmark, is included.
Other Strengths And Weaknesses: My thoughts on the strengths and weakness of the paper are discussed above.
Other Comments Or Suggestions: Currently, I have no other comments or suggestions.
Questions For Authors: Currently, I have no other questions. If it is possible, I wish to confer with other reviewers regarding the theoretical aspects of the manuscript.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback appreciating the practical utility of the methods and for providing helpful suggestions on additional benchmarks. Indeed, SHIFT addresses a **critical and very practical question**: when performance of an ML algorithm drops in a new application context, which subgroups are adversely affected and why? Such settings are prevalent in healthcare, where ML algorithms differ in performance widely across demographic groups, time, and geographies [Finlayson et al. 2021].
Discovering the subgroups experiencing performance drop is **highly important to catch hidden failures of the algorithm and to develop targeted fixes** to the algorithm for the affected subgroups without sacrificing its performance elsewhere. Current methods to study performance drops either do not focus on subgroup-level performance or do not quantify uncertainty in the discovered subgroups. SHIFT addresses this methodological gap via statistically principled and computationally scalable methods. Furthermore, we believe SHIFT provides a **solid theoretical foundation** on which future work can build, as discussed below.
---
**Computational complexity** SHIFT is very **fast**. SHIFT runs in under 10 minutes on the real-world datasets with around 10,000 points. The bulk of the computation is fitting the nuisance models, so the runtime is O(V), where V is the number of cross-validation folds. Moreover, fitting these nuisance models is easily parallelizable.
---
**Extensions to high-dimensional, non-tabular data e.g. WILDS** Although SHIFT is primarily designed for tabular data, its aggregate-level tests are suitable for analyzing unstructured data; its detailed-level tests can also be used, if one has prespecified concepts. In the revised manuscript, we will include applications of SHIFT to both text and image datasets. As an example, we have applied SHIFT to the CivilComments dataset [Koh et al. 2021], which contains comments on online articles that are judged to be toxic or not. We consider a DistilBERT-base-uncased model fine-tuned to classify toxic comments. Given the **768-dimensional** embeddings from this BERT model, we can apply SHIFT to understand differences in accuracy when classifying comments that mention the female gender (target domain) versus the remaining comments (source domain). Accuracy of the model drops by 1.3\% in the target. Results from SHIFT's aggregate-level test find evidence for covariate shift, i.e. there exists a subgroup of size $\ge$ 5\% that experiences an accuracy drop greater than 5\% due to covariate shift.
| Test | p-value |
|-----------------|---------|
| Covariate shift | 0.00 |
| Outcome shift | 0.83 |
To run detailed-level tests in SHIFT, we require variables to be interpretable. Given unstructured data, one solution is to combine SHIFT with concept bottleneck models [Koh et al. 2020]. We will include such an example in the revised paper. We note that another solution, if one does not need statistical inference at the detailed level, is to simply analyze differences between the comments from the detected subgroup from SHIFT in the source and target domains. Using a combination of GPT-4o and manual review, we found that in the subgroup where the toxicity classifier experienced performance decay at the target domain, the comments tended to discuss politics, society, race, and identity more. This shift in topics may explain the performance drop. For instance, the combination of female references with discussions of race and political ideology might compound biases that the classifier has inadvertently learned.
---
**Observations under very small sample scenarios** SHIFT performs well at small sample size. Please refer to Fig 6, 7 in Appendix, where we show that SHIFT maintains the specified type-I error rate and has good power as sample size decreases to 500.
---
**Iterative subgroup discovery is not explored** Our tests provide evidence that there is an affected subgroup with statistical significance. We can then use existing subgroup discovery methods to explore all subgroups [Eyuboglu et al. 2022, d'Eon et al. 2022]. Such methods, thus, are complementary to ours but do not provide statistical significance.
---
[1.] Koh et al. WILDS: A Benchmark of in-the-Wild Distribution Shifts. ICML 2021
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. My raised concerns have been resolved.
However, I wish to maintain my current positive score, as my evaluations are mostly based on the empirical side of the paper. If possible, I wish to defer the evaluations on the theoretical side to other reviewers. | Summary: The paper titled "Who experiences large model decay and why?" introduces a hierarchical framework called SHIFT (Subgroup-scanning Hierarchical Inference Framework for performance drifT) to diagnose heterogeneous performance drift in machine learning models. The framework aims to identify subgroups that experience significant performance decay due to covariate or outcome shifts and provides detailed explanations for these shifts. The goal is to enable targeted corrective actions that mitigate decay for the most affected subgroups.
Claims And Evidence: The authors claim that existing methods do not provide detailed insights into subgroup-specific performance decay. SHIFT is proposed as a solution to identify and explain large performance decay in subgroups. The paper provides evidence through simulations and real-world experiments, demonstrating that SHIFT can identify relevant shifts and guide model corrections effectively.
Methods And Evaluation Criteria: SHIFT is a two-stage hypothesis testing framework. The first stage identifies subgroups with large performance decay due to aggregate covariate or outcome shifts. The second stage provides detailed explanations by testing variable(subset)-specific shifts. The evaluation criteria include the ability to detect meaningful performance decay and provide valid statistical inference without strong assumptions.
Theoretical Claims: The paper claims that SHIFT provides valid statistical inference through hypothesis testing, even with limited data. It does not rely on strong assumptions like knowledge of the true causal graph or large datasets. The theoretical properties of the framework are supported by asymptotic normality and controlled Type I error rates.
Experimental Designs Or Analyses: The experiments include simulations and real-world case studies. Simulations vary the type and degree of shifts, ML algorithms, and data sizes to validate SHIFT's performance. Real-world case studies involve health insurance prediction across states and readmission prediction across hospitals. The experiments demonstrate SHIFT's ability to identify relevant shifts and guide targeted model updates.
Supplementary Material: NA
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths of the paper include the novel hierarchical framework for diagnosing performance drift, the ability to provide detailed explanations for subgroup-specific shifts, and the use of hypothesis testing for valid statistical inference.
Weaknesses may include the complexity of the framework and the potential challenges in implementing it in practice. The reliance on domain experts to set parameters like minimum subgroup size and shift magnitude may also be a limitation.
Other Comments Or Suggestions: The paper could benefit from a more detailed discussion on the computational complexity of the framework and potential strategies for efficient implementation. Additionally, providing more examples of real-world applications and their outcomes could enhance the practical relevance of the framework.
Questions For Authors: How does SHIFT handle cases where multiple subgroups experience overlapping shifts?
Can SHIFT be extended to handle unstructured data like images or text, and if so, how?
What are the computational requirements for implementing SHIFT in large-scale datasets?
How sensitive is SHIFT to the choice of parameters like minimum subgroup size and shift magnitude?
Are there any plans to make the SHIFT framework available as an open-source tool for broader use?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and are heartened to hear they appreciate the novelty of SHIFT and its ability to provide detailed explanations for subgroup-specific shifts. Indeed, SHIFT addresses a **critical and very practical question**: when performance of an ML algorithm drops in a new application context, which subgroups are adversely affected and why? Such settings are prevalent in healthcare, where ML algorithms differ in performance widely across demographic groups, time, and geographies [Finlayson et al. 2021].
Discovering the subgroups experiencing performance drop is **highly important to catch hidden failures of the algorithm and to develop targeted fixes** to the algorithm for the affected subgroups without sacrificing its performance elsewhere. Current methods to study performance drops either do not focus on subgroup-level performance or do not quantify uncertainty in the discovered subgroups. SHIFT addresses this methodological gap via **statistically principled and computationally scalable** methods. Furthermore, we believe SHIFT provides a **solid theoretical foundation** on which future work can build, as discussed below.
---
**Extensions to unstructured data**: Although SHIFT is primarily designed for tabular data, its aggregate-level tests are suitable for analyzing unstructured data; its detailed-level tests can also be used, if one has prespecified concepts. In the revised manuscript, we will include applications of SHIFT to both text and image datasets.
As an example, we have applied SHIFT to the CivilComments dataset [Koh et al. 2021], which contains comments on online articles that are judged to be toxic or not. We consider a DistilBERT-base-uncased model fine-tuned to classify toxic comments. Given the **768-dimensional** embeddings from this BERT model, we can apply SHIFT to understand differences in accuracy when classifying comments that mention the female gender (target domain) versus the remaining comments (source domain). Accuracy of the model drops by 1.3\% in the target. Results from SHIFT's aggregate-level test find evidence for covariate shift, i.e. there exists a subgroup of size $\ge$ 5\% that experiences an accuracy drop greater than 5\% due to covariate shift.
| Test | p-value |
|-----------------|---------|
| Covariate shift | 0.00 |
| Outcome shift | 0.83 |
To run detailed-level tests in SHIFT, we require variables to be interpretable. Given unstructured data, one solution is to combine SHIFT with concept bottleneck models [Koh et al. 2020]. We will include such an example in the revised paper. We note that another solution, if one does not need statistical inference at the detailed level, is to simply analyze differences between the comments from the detected subgroup from SHIFT in the source and target domains. Using a combination of GPT-4o and manual review, we found that in the subgroup where the toxicity classifier experienced performance decay at the target domain, the comments tended to discuss politics, society, race, and identity more. This shift in topics may explain the performance drop. For instance, the combination of female references with discussions of race and political ideology might compound biases that the classifier has inadvertently learned.
---
**Computational complexity** SHIFT is very **fast**. SHIFT runs in under 10 minutes on the real-world datasets with around 10,000 points. The bulk of the computation is fitting the nuisance models, so the runtime is O(V), where V is the number of cross-validation folds. Moreover, fitting these nuisance models is easily parallelizable.
---
**Complexity in practice and open-source tools** We plan to publish an **open-source Python package** for running SHIFT that will provide simple and intuitive APIs. That way, users of the package can run SHIFT with just one or two lines of code.
---
**Multiple subgroups experience overlapping shifts** SHIFT is designed to handle situations where the subgroup experiencing covariate shift does or does not overlap with the subgroup experiencing outcome shift. When there are multiple subgroups experiencing covariate shift, SHIFT groups them together into one large subgroup and performs an omnibus test. SHIFT handles multiple subgroups experiencing outcome shifts similarly.
---
**Sensitivity to test parameters** We find that SHIFT is not very sensitive to the two test parameters: minimum subgroup size and shift magnitude. Moreover, these two parameters are very **intuitive**. While one could have a domain expert set them, they are intuitive parameters that can be easily selected by anyone and should simply reflect one's own tolerance for performance drift. We will include guidance on setting the parameters in the two case studies.
We hope that our responses address the reviewer's concerns and encourage them to reconsider their score.
---
[1.] Koh et al. WILDS: A Benchmark of in-the-Wild Distribution Shifts. ICML 2021 | Summary: This paper proposes a method (SHIFT) for diagnosing performance drift in machine learning models that are transferred from a “source” to a “target” domain. Specifically, it aims to identify where (i.e., in which subgroups) a model’s performance decays the most and how such decay arises, distinguishing between subgroup-specific covariate shifts versus outcome shifts. By framing these questions as hierarchical hypothesis tests and using sample splitting plus flexible ML estimators, SHIFT produces valid inferences (i.e., with Type I error control) while preserving decent statistical power. The paper provides both theoretical guarantees and empirical evaluations on synthetic and real-world datasets (public-health insurance coverage and hospital readmissions).
Claims And Evidence: - SHIFT detects subgroups with large performance decay due to distribution shifts, specifically distinguishing whether the shift is driven by covariates or by the outcome distributions.
- SHIFT can then explain decay with sparse variable subsets, checking if these smaller shifts can plausibly account for the large performance drop in the discovered subgroups.
- Tests have valid Type I error control and good power asymptotically, meaning SHIFT will not flag a nonexistent shift too often, and will detect real shifts with high probability given enough data.
- SHIFT helps practitioners mitigate shifts more effectively than blanket retraining, by highlighting targeted fixes that resolve problems within critical subgroups without negatively impacting performance elsewhere.
Methods And Evaluation Criteria: The proposed methods make sense for the problem at hand given the decomposition of distribution shift into its constituent components at the covariate and outcome level. Meanwhile, the evaluation criteria seem applicable for the driving application, but it is less clear how this fits into the benchmarks used in other related works. For example, when comparing TE-VIM to SHIFT, the real-world datasets are not those used in the original paper. The synthetic experiment does a good job of making clear why and when SHIFT will outperform methods tailored to each type of shift. It is much less clear why the real-world case studies are relevant in the context of the prior work.
Theoretical Claims: I briefly skimmed the theoretical results in Appendix C, primarily focusing on Theorem C.2.
Experimental Designs Or Analyses: Yes, I went through Section 5 in detail. My main concern with the experimental design is the choice of case study datasets. It is unclear why these datasets were chosen and what underlies them that should lead us to expect distribution shifts to test for. Furthermore, the readmission case study is unclear as to who the subgroups are.
Supplementary Material: I reviewed Appendix C to get a sense of the theoretical results associated with the derived estimator
Relation To Broader Scientific Literature: The key contribution of this paper in relation to the existing literature on distribution shift detection and heterogeneous subgroup performance is the formalization of the sources of heterogeneity and a universal test to distinguish both issues. Much of the literature has been disparate in aiming to develop statistical tests for either issue. To my knowledge, this is the first test to present this hierarchical test.
Essential References Not Discussed: There are no additional related works that I believe need to be cited.
Other Strengths And Weaknesses: Strengths:
- Clear formalization of the testing framework and its relation to existing literature
- Test unifies two areas of research producing a more useful test for practitioners
Weaknesses:
- Applied case studies section lacks clarity and unconvincing of the methods utility in settings outside of the synthetic setups
- Synthetic setups could be more tailored to demonstrate why SHIFT outperforms prior methods
- Theory is derived to show bounds on the Type I error but this isn’t measured in the empirical results
Other Comments Or Suggestions: None.
Questions For Authors: Below is a list of questions for which I would engage in a discussion that would potentially lead to increasing my score to a weak accept or accept:
1. Can you describe why this method outperforms other methods on the two respective tasks presented? If the claim is not that this method outperforms those, but that its utility lies in handling both tasks at once, then how should I think about that efficiency gain? It's unclear to me how much more efficient it would be than just using two separate tests – if I knew which ones to use, of course.
2. I would like a much more detailed description and understanding of the case studies. What background underlies the choice of subgroups? Do we expect there to be shifts in their distributions? What are the subgroups for readmission?
3. If you fix the issues in the encounters feature do you actually see the accuracy gaps close?
4. How does this method work as we scale the size of subgroups that experience a drop in accuracy? Given that the epsilon parameter defines the subgroup size, there is an arbitrary number of subgroups of that size one can construct. I assume there is a breaking point at which the method fails to detect a shift if that epsilon is too small.
5. Finally, I’m curious about how the dimensionality of the problem would factor into your theoretical and empirical results. Currently, I don’t see any dimensionality issues in the theory which makes sense given the nature of the test. Empirically though, the experiments are all in quite low dimensional settings. I’d be interested in understanding at what X variable subsets the test struggles.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for carefully reading the work and for appreciating its practical relevance. Indeed, heterogeneity in ML performance is a major safety concern in high-risk applications and there is no unified test to identify the sources of heterogeneity.
---
**Why SHIFT outperforms other methods** There are two reasons why SHIFT is better. First, SHIFT is much more targeted: whereas methods like MMD and KCI try to detect *any* shift which dilutes their power to find the more relevant shifts, SHIFT is only interested in finding *subgroups* with a performance drop and the subgroups have to satisfy minimum size and magnitude requirements. Second, SHIFT uses ML techniques to perform the test, whereas the comparators are kernel-based and do not scale well to high dimensions.
Another advantage of a unified framework is that the test results across the aggregate and detailed levels will not conflict with each other. The current approach in the literature is to use a mix of different tests for the two levels, which may lead to conflicting results that are challenging to reconcile since the tests often have different null hypotheses.
---
**Background of case studies** We chose the case studies to mirror the real-world application of the framework. They consist of settings where covariate or outcome shifts impact performance and domain experts do not know which shifts are detrimental. Such settings are highly prevalent in healthcare where ML performance varies widely across hospitals and time. We do not use the same data as the TE-VIM baseline since it is for causal inference and only assesses outcome shifts. We will update the paper with the following background.
The first case study is based on a systematic analysis in Liu et al. 2023 that analyzed performance drops of an algorithm for predicting insurance coverage across different US states in the ACS dataset. Among many state pairs, Liu et al. primarily found a large decay when transfering the algorithm from Nebraska to Louisiana. We decided to dive deeper into this analysis by identifying which subgroups were affected and why. SHIFT detected that people who are unemployed or whose parents are not in the labor force experience a large decay (Fig 3c). Since health insurance coverage is tied to employment in the US, and insurance rates and incomes differ between the states, such a decay is expected.
The readmission case study analyzes an algorithm to predict readmission that is trained on a well-resourced academic hospital and applied to a safety-net hospital. Since safety-net hospitals serve patients regardless of their ability to pay, their populations are quite different. SHIFT detected that patients with many emergency encounters experience a large decay (Fig 3d), which is expected because safety-net hospital patients seek care from emergency departments for very different reasons than at academic hospitals. Thus, SHIFT helps detect subgroups in realistic benchmarks.
---
**Fix issues in encounters feature** We processed the readmission data again to correct the encounters feature. After correction, covariate shifts no longer lead to a significant subgroup-level accuracy drop (p-value goes from 0.00 to 0.69). Thus, SHIFT has helped bridge the accuracy gap.
---
**Effect of scaling subgroup size** As the prevalence of the subgroup experiencing performance decay decreases, the power of SHIFT decreases. If the prevalence of the subgroup drops below the specified minimum threshold in SHIFT, the null hypothesis would be true and this tiny subgroup would no longer be of interest. Thus the "breaking point" of SHIFT is the specified minimum threshold, but this is *by design*. Other tests also decrease in power as the subgroup experiencing the shift decreases in prevalence. But existing tests set the minimum threshold to zero, stating that all shifts are of practical interest, and yet have limited power to detect them. For empirical results of SHIFT for small subgroups, please refer to Fig 6, 7 in Appendix.
---
**Effect of dimensionality** The theoretical results primarily rest on one's ability to estimate the nuisance functions at a sufficiently fast rate. When dimensionality increases, estimation rates for nuisance functions tend to slow down, though it may still be sufficiently fast if these functions are sparse [see e.g. 1,2]. To test SHIFT empirically, we applied our method to a text classification problem with 768-dimensional embeddings and was able to detect shifts. For the revised manuscript, we will include more validation of SHIFT in higher-dimensional datasets.
---
**Type I error** Please refer to Fig 6, 7 in Appendix, where we confirm that Type I error of SHIFT is controlled.
We hope that our responses clarify the concerns raised.
---
[1] Wager et al. Adaptive concentration of regression trees... arXiv 2015
[2] Belloni et al. $\ell_1$-penalized quantile regression... Ann Statist 2011 | null | null | null | null | null | null |
ALMTokenizer: A Low-bitrate and Semantic-rich Audio Codec Tokenizer for Audio Language Modeling | Accept (poster) | Summary: The paper introduces ALMTokenizer, a low-bitrate and semantically rich audio codec.
It incorporates a novel fixed-interval query interleaving mechanism which extracts contextual features from the acoustic features and quantizes (using RVQ) only the contextual features extracted by these queries, thus achieving low bitrates, empowered by transformer based encoder and decoder. The decoder receives the queries and the interval, and inserts mask tokens, which are then converted into acoustic features. Additionally the VQ codebook vectors are initialized from k-means clusters of wav2vec2 and BEATS and kept fixed during training to provide strong semantic priors to the quantization module.
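A toy sketch of my reading of the interleaving mechanism (an assumption for illustration, not the authors' code): query placeholders are inserted into the frame sequence at a fixed interval, and downstream only the features at the query positions would be quantized with RVQ, which is how the bitrate drops well below one token per frame. In the actual model the queries are learnable embeddings processed by a transformer encoder, not string markers.

```python
# Interleave one "query" slot every `interval` acoustic frames.
# Only the query positions would later be quantized (hypothetical sketch).
def interleave_queries(frames, interval, query="<Q>"):
    out = []
    for i, f in enumerate(frames):
        if i % interval == 0:
            out.append(query)  # query token summarizes the upcoming window
        out.append(f)
    return out

seq = interleave_queries([f"f{i}" for i in range(6)], interval=3)
print(seq)  # ['<Q>', 'f0', 'f1', 'f2', '<Q>', 'f3', 'f4', 'f5']
```

On the decoder side, the review's description suggests the inverse operation: keep the query positions, replace every frame position with a mask token, and let the decoder transformer reconstruct the acoustic features from that.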
Training involves two stages. First, the Encodec-style patchify/unpatchify modules are trained along with some dedicated stage-one encoder/decoder transformers without a quantizer, using an MAE loss. The goal of the first-stage training is to enable the frontend patchify module to learn semantically rich features. The second stage initializes patchify/unpatchify with the stage-one parameters, and patchify is kept frozen. The second stage is trained with a combination of several objective functions, including the MAE loss, an AR loss, and the standard codec reconstruction and GAN losses.
Training and evaluation are conducted on a mix of speech, music, and general-purpose audio datasets. Evaluation results indicate strong performance on semantic tasks while remaining competitive in terms of reconstruction quality.
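The fixed-interval query interleaving described above can be sketched as a toy numpy example (illustrative only: the learnable query, the shapes, and all names are stand-ins, not the authors' implementation; the query's attention over its interval is mocked as a plain mean):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, interval = 12, 8, 4            # 12 acoustic frames, one query every 4

frames = rng.normal(size=(T, d))     # output of the patchify front-end
query = rng.normal(size=(d,))        # one learnable query embedding (trainable in practice)

# Interleave: after every `interval` frames, append a copy of the query token.
seq, is_query = [], []
for i, frame in enumerate(frames):
    seq.append(frame); is_query.append(False)
    if (i + 1) % interval == 0:
        seq.append(query.copy()); is_query.append(True)
seq = np.stack(seq)

# Mock of the transformer encoder: each query summarizes the frames in its
# interval (here: a plain mean instead of attention).
out = seq.copy()
for pos in np.flatnonzero(is_query):
    out[pos] = seq[pos - interval:pos].mean(axis=0)

# Only the query positions are quantized (RVQ) and decoded: T/interval tokens.
tokens = out[np.asarray(is_query)]
```

Note that changing `interval` directly changes the token rate, matching the variable-bitrate mechanism discussed in the review.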
## Update after rebuttal
After reading the rebuttal and the discussions with all the reviewers, especially the additional experiments in response to reviewer btwe, I am confident that this work is valuable to the neural audio codec community. I keep my Accept recommendation.
Claims And Evidence: The paper makes the following claims with regards to the codec:
- **low-bitrate**: experimental results show that it works well, competitive reconstruction performance.
- **semantic-rich**: experimental results show that it achieves good results on speech and emotion recognition, and sound classification tasks.
- **latent space optimized for AR modeling**: while the experimental result shows that the token prediction accuracy improves by using the AR loss, the actual effect of it does not seem to be too significant in terms of speech generation tasks (Fig. 3). In fact, from the ablation study in Table 6, it seems that not using the LM loss is beneficial for several metrics.
Methods And Evaluation Criteria: Proposed methods and evaluation criteria do make sense for the application.
However, there are many training/evaluation datasets utilized for different tasks and it is sometimes difficult to follow. Would be better for all tables/figures to mention which evaluation set it used.
Also, Table 1 and Table 6, both utilize the same metrics but are performed on different evaluation datasets, so it is difficult to contextualize the ablation study with the main result.
Most of the results are reported without confidence intervals, which makes it difficult to judge whether the difference in metrics is statistically significant or not.
Theoretical Claims: There are no theoretical claims in the paper.
Experimental Designs Or Analyses: Experimental design is mostly fine.
I have one concern, both Encodec and DAC are used at 1.5kbps, which are the lowest settings of the respective models. While this is fair in terms of the number of RVQ layers, it would have been nice to see a comparison with their full potential as well.
Supplementary Material: I reviewed all the appendix and supplementary material (including the demo webpage with the audio samples).
Relation To Broader Scientific Literature: There has been significant recent efforts in creating low bitrate codecs which are easy to use for downstream language models.
While the overall structure of the proposed method takes inspiration from these related works (using Encodec style patchify/unpathify, transformers before and after RVQ), the key novelty lies in utilizing interleaved query vectors to encode contextual information and only quantizing these query vectors.
This method also enables an alternative approach to variable bitrate by changing the frequency of the query interleaving, unlike previous methods which typically use variable number of RVQ levels (SoundStream, Encodec, DAC).
Essential References Not Discussed: Most essential related works have been discussed.
Other Strengths And Weaknesses: **Strengths**
- Query token interleaving is a very elegant idea and it has been demonstrated that it works well.
**Weaknesses**
1. The training strategy seems too wasteful. Most of the components are eventually discarded: the MAE-transformer encoder/decoder in stage 1, the MAE decoder in stage 2, the AR transformer in stage 2, etc. While I understand their purpose, simply discarding them is very wasteful in terms of compute and energy efficiency. It would have made sense to still utilize them for some purpose; for example, the stage-1 MAE encoder could have been used to initialize the encoder in stage 2, and the stage-1 MAE decoder could have been used to initialize the stage-2 MAE decoder. Similarly, the AR decoder (depth GPT) could have been used to initialize the depth transformer in the language modeling task, etc.
2. The variable bitrate strategy is useful for the model as a pure codec, but its usefulness for downstream LM tasks is not explored. The LM is trained with a fixed bitrate and I think, it cannot generalize to a different bitrate, while in the case of variable RVQ based tokenizers, the downstream model can also take advantage of the variable bitrate for tradeoffs in quality and efficiency.
Other Comments Or Suggestions: 1. Table 7, codebook size and FPS columns seem to be interchanged.
2. Line 216: "we found that using a large mask rate will significantly influence the reconstruction performance", for the better or worse? Is there an experimental verification of this statement?
3. I think the subjective evaluation result should be included in the main content and not relegated to the Appendix. Maybe integrating in Table 1, and instead of reporting two automatic quality metrics (UTMOS and DNSMOS), it could be one automatic and the MUSHRA score.
4. It is better to include an explanation of the wastefulness and energy efficiency of the training pipeline in a Broader Impact Statement.
5. The paper relies a lot on the Appendix, especially for the experimental sections, unless the reader carefully goes through the Appendix, several details might be missed.
Questions For Authors: Please see above sections.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our contributions.
**Q1:** latent space optimized for AR modeling...it seems that not using the LM loss is beneficial for several metrics.
**A:** We appreciate this comment. We acknowledge that introducing the autoregressive (AR) loss may slightly impact reconstruction metrics. As discussed in the Limitations section, we emphasize this trade-off to encourage future research on achieving a better balance between reconstruction quality and modeling efficiency.
**Q2:** Methods And Evaluation Criteria: However, there are many training/evaluation datasets utilized for different tasks ... mention which evaluation set it used ...
**A:** We appreciate this comment. We will update our paper to explicitly specify the evaluation set used for all tables and figures in the final version. Furthermore, both Table 1 and Table 6 report results on the VCTK dataset. In Appendix Table 12, we present a reconstruction performance comparison between our proposed method and previous works on the LibriTTS test set. Additionally, we have reported results with 95\% confidence intervals for subjective evaluations. For objective reconstruction metrics, since they are deterministic, confidence intervals were not previously included. However, for generation experiments, we will incorporate confidence intervals by sampling multiple times in the final version.
**Q3:** I have one concern, both Encodec and DAC are used at 1.5kbps...potential as well.
**A:** We appreciate this comment. In line with MimiCodec, we will include results for Encodec and DAC in two configurations: (1) using 3 RVQ layers, as reported in our paper, and (2) using 8 RVQ layers to demonstrate their full potential.
**Q4:** The training strategy seems to be too wasteful... initialize the depth transformer...
**A:** We appreciate this constructive comment, which provides valuable insights for improving our work. In particular, leveraging the AR decoder to initialize the depth transformer in language modeling is an interesting idea. We find this direction highly promising and plan to explore it in future work. Additionally, we will incorporate this discussion into the final version of our paper to inspire further research in this area.
**Q5:** The variable bitrate strategy is useful for the model as a pure codec...
**A:** We appreciate and agree with the reviewer that the applicability of the variable bitrate strategy for downstream language modeling tasks remains unexplored. Our proposed method primarily facilitates the selection of codec models with different frame rates, providing greater flexibility in bitrate allocation. We will add this discussion into the Limitation part.
**Q6:** Table 7, codebook size and FPS columns seem to be interchanged.
**A:** Thank you for your help to find this mistake. We will update it in the final version.
**Q7:** Line 216: "we found ...of this statement?
**A:** In our experiments, we observed that a high masking rate negatively impacts reconstruction performance. We evaluated three masking-rate ranges: 10–20\%, 20–30\%, and 30–40\%. As the table below shows, higher masking rates (30–40\%) improve semantic representation but degrade reconstruction quality. Based on these findings, we adopt an intermediate masking range of 20–30\% to balance semantic preservation and reconstruction fidelity.
| mask rate range | UTMOS | DNSMOS | VISQOL | PESQ | STOI | ASR | ER |
|-----------------|--------|---------|--------|------|-------|-------|------|
| 10-20% | 3.77 | 3.62 | 3.80 | 2.0 | 0.81 | 18.7 | 27.7 |
| 20-30% | 3.76 | 3.64 | 3.78 | 2.0 | 0.81 | 18.3 | 29.0 |
| 30-40% | 3.36 | 3.06 | 3.31 | 1.58 | 0.77 | 18.1 | 29.6 |
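For concreteness, the range-based mask-rate sampling the authors describe could look like the following toy sketch (hypothetical: the rebuttal does not specify how the rate is drawn within a range, so the uniform draw and all names here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100                                  # latent audio frames in one example

# A mask rate is drawn from the chosen range (here 20-30%) and that
# fraction of frames is flagged for replacement by a mask token.
lo, hi = 0.20, 0.30                      # the range adopted in the rebuttal
rate = rng.uniform(lo, hi)               # uniform draw is an assumption
n_mask = int(round(rate * T))
masked_idx = rng.choice(T, size=n_mask, replace=False)

mask = np.zeros(T, dtype=bool)
mask[masked_idx] = True                  # True -> frame is masked for the MAE loss
```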
**Q8:** I think the subjective evaluation ... it could be one automatic and the MUSHRA score.
**A:** We appreciate and agree with the reviewer that subjective evaluation performance should be presented as the primary result in the paper. Accordingly, we will update our manuscript to include the MUSHRA score results in the main text.
**Q9:** It is better to include an explanation of the wastefulness...
**A:** As we discussed in Q4, we will incorporate this constructive discussion into our final version, stating the existing wastefulness and listing the potential solutions.
**Q10:** The paper relies a lot on the Appendix...several details might be missed.
**A:** We appreciate this comment. Since the final version of ICML allows an additional page in the main text, we will move more experiments from the Appendix into the main text, such as (1) Table 10 (the subjective evaluation results) and (2) Table 11, the LM-based sound and music understanding and generation results.
---
Rebuttal Comment 1.1:
Comment: I really appreciate the comments by the authors. It will be good to include the above mentioned changes in final version. I have no further questions and will keep my positive rating of the paper.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for the feedback. We appreciate the positive view from the reviewer and are glad that all of concerns have been addressed. All of mentioned changes will be updated in the final version.
Best wishes | Summary: This paper introduces ALMTokenizer, a codec for speech, music and sound, which incorporates semantic and acoustic information into a single hierarchy of residual tokens with remarkable performance at a very low bitrate. The proposed improvement over previous codec include both architectural changes and training tricks (MAE, AR loss) that can be combined or used separately. The really remarkable scope of experiments and the many ideas of various importance introduced in the paper make it such that neural codec researchers will likely read it several times. Thus, I recommend an accept.
Claims And Evidence: Pros:
- Claims are convincingly supported, at the exception of the one below. Overall, the experimental design is remarkably ambitious and represents the most extensive codec evaluation I have read so far.
Cons:
- One of the claims is that the "semantic priors" avoids distillation as done by previous work. The semantic prior involves training a k-means on self-supervised embeddings and using the centroids as the fixed codebook of the first VQ. This is a form of distillation, yet less costly as during training one does not need to pass audio through a teacher embedding. However, this is never compared to the distillation used by SpeechTokenizer or Mimi and the baseline in the corresponding ablation is instead a codec without distillation, which expectedly performs poorly on semantic tasks. A proper comparison should rather be done with a model using distillation.
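For concreteness, the "semantic prior" being discussed, a frozen first-level codebook built from k-means centroids of SSL features, can be sketched as a toy two-level RVQ in numpy (all codebooks, dimensions, and names here are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 4, 5                                 # feature dim, codebook size

# Stand-in codebooks: VQ-1 is frozen (e.g., k-means centroids of SSL
# features), VQ-2 is an ordinary learned residual codebook.
ssl_centroids = rng.normal(size=(K, d))     # fixed "semantic prior" codebook
learned_cb = rng.normal(size=(K, d))        # trainable residual codebook

def nearest(x, codebook):
    """Return (index, vector) of the nearest codebook entry to x."""
    idx = int(np.argmin(((codebook - x) ** 2).sum(axis=1)))
    return idx, codebook[idx]

x = rng.normal(size=(d,))
i1, q1 = nearest(x, ssl_centroids)          # semantic token: frozen level
residual = x - q1
i2, q2 = nearest(residual, learned_cb)      # acoustic detail from the residual
x_hat = q1 + q2                             # two-level RVQ reconstruction

# Because level 1 is frozen, the semantic id i1 depends only on the SSL
# centroids, never on the codec's reconstruction objective.
```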
Methods And Evaluation Criteria: See "Experimental Designs and Analyses"
Theoretical Claims: No theoretical claims.
Experimental Designs Or Analyses: Pros:
- Extensive experiments across speech, music and sounds, both in discriminative and generative settings.
- Interesting ablations and overall compelling experimental design.
Cons:
- Presentation of results: Human evaluations of audio quality are put in Appendix, while they show a worse performance for ALMTokenizer than for baselines. These results contradict claims such as "ALMTokenizer achieves better reconstruction performance at lower bitrate". Objective proxies for audio quality assessment (UTMOS, DNSMOS, MOSNet, etc.) are notoriously limited and provide a much weaker signal than actual human judgments. The MUSHRA scores should thus appear as main results of the paper, even if they depict a less positive result for the proposed model (they actually are quite good since ALMTokenizer outperforms all previous models at matching bitrates).
Supplementary Material: I read everything, as this was necessary for understanding the methods and the results.
Relation To Broader Scientific Literature: ALMTokenizer introduces new ideas wrt previous work along two main axes: first, a new Transformer architecture that improves over fully convolutional codecs, and is an alternative to the Tranformers used by Mimi. Second, a set of additional losses (MAE, AR loss) to force semantic information into the learned tokens without the need for semantic distillation. Both contributions are somehow independent, and properly evaluated as such in the ablations study. Overall, the proposed methods are not groundbreaking but are conceptually sound and simple, and properly supported, such that I expect the community of neural audio codecs to build on it in the future.
Essential References Not Discussed: None, the references are quite complete, and pretty much every open source baseline has been included in the experimental pipeline.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: 1) L230: what does "continuous" mean in that setting? That it's continuously trained? This requires more details; in particular, I guess a stop gradient is applied to avoid degenerate solutions? Also, does "predicting the third VQ from the first and second" mean that this small model is only autoregressive along the VQ axis, or is it also autoregressive along time?
2) L357: 12.5kHz -> 12.5Hz. Also, Mimi uses Transformers in the encoder and the decoder, is this included in this ablation? The baseline described in "The Effectiveness of Query-based Audio Compression" seems to be purely convolutional. Making sure those transformers are included would demonstrate the usefulness of the query-based proposal wrt a simple transformer.
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our contributions. We do appreciate the constructive comments the reviewer provided to us to further improve our paper. We are delighted to have the following discussion with the reviewer.
**Q1:** One of the claims is that the "semantic priors" avoids distillation as done by previous work. The semantic prior involves training a k-means on self-supervised embeddings and using the centroids as the fixed codebook of the first VQ. This is a form of distillation, yet less costly as during training one does not need to pass audio through a teacher embedding...
**A:** We appreciate and agree with the reviewer that semantic priors can be regarded as a form of distillation. Their advantages include: (1) reduced training cost, as they do not require audio to be passed through a teacher embedding during training; and (2) the flexibility to integrate multiple teacher models, such as Wav2Vec2 for speech semantic priors and BEATs for general sound semantic priors. In contrast, previous methods like SpeechTokenizer and MimiCodec rely on a single teacher model and primarily focus on speech semantic priors, although they can be extended to both speech and general sound.
We will revise our previous claim that 'semantic priors avoid distillation' to clarify that semantic priors are indeed a form of distillation. Additionally, we will highlight how their advantages and application scenarios differ from those of previous methods.
**Q2:** Presentation of results: Human evaluations of audio quality are put in Appendix...
**A:** We appreciate this comment and agree that human evaluation performance should be presented as a primary result in the paper. Accordingly, we will update our manuscript to include the MUSHRA score results in the main text.
**Q3:** L230: what does "continuous" mean in that setting? that it's continuously trained? This requires more details in particular I guess a stop gradient is applied to avoid degenerate solutions?
**A:** We appreciate this comment. The term 'continuous autoregressive (AR) transformer' is used to distinguish our approach from traditional discrete AR models, which operate on discrete token sequences and are optimized using cross-entropy loss. In our study, to facilitate gradient backpropagation, we apply the AR transformer directly to continuous features (i.e., the quantized features) and optimize using a mean squared error (MSE) loss. We will include these details in the final version.
**Q4:** Also, does "predicting the third VQ from the first and second" mean that this small model is only autoregressive along the VQ axis or is it also autoregressive along time?.
**A:** Yes, the AR model is only autoregressive along the VQ axis.
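As a rough illustration of what "autoregressive along the VQ axis only" means, the following numpy sketch regresses each RVQ level from the levels below it at the same time step (the linear map `W` is a hypothetical stand-in for the small depth transformer; shapes and names are illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, n_vq = 10, 8, 3                  # time steps, feature dim, RVQ levels

# Quantized (continuous) features per RVQ level: shape (n_vq, T, d).
z = rng.normal(size=(n_vq, T, d))

# Hypothetical linear "depth" predictor standing in for the small AR
# transformer: level k is regressed from the sum of levels < k at the
# SAME time step -- autoregression runs along the VQ axis, not along time.
W = rng.normal(size=(d, d)) * 0.1

def predict_level(z, k, W):
    context = z[:k].sum(axis=0)        # (T, d); no mixing across time steps
    return context @ W

# Continuous AR objective: MSE between prediction and the next VQ level
# (rather than cross-entropy over discrete token ids).
mse = np.mean([np.mean((predict_level(z, k, W) - z[k]) ** 2)
               for k in range(1, n_vq)])
```

Perturbing level 0 at a single time step changes the prediction only at that time step, which is exactly the "no autoregression along time" property stated in the answer.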
**Q5:** L357: 12.5kHz -> 12.5Hz.
**A:** Thank you for your help to find this mistake. We will update it in the final version.
**Q6:** Also, Mimi uses Transformers in the encoder and the decoder, is this included in this ablation? The baseline described in "The Effectiveness of Query-based Audio Compression" seems to be purely convolutional. Making sure those transformers are included would demonstrate the usefulness of the query-based proposal wrt a simple transformer.
**A:** We appreciate this comment. In our ablation study, The Effectiveness of Query-based Audio Compression, we compare our approach against a reproduced version of MimiCodec, which incorporates convolutional and transformer layers. The details of our reproduced MimiCodec implementation are provided in Appendix B.4. Table 7 presents the performance of reproduced MimiCodec at three different frame rates: 50 Hz, 25 Hz, and 12.5 Hz. | Summary: The paper presents a method to convert an audio signal to a sequence of discrete tokens, with an aim to maximize compression (low bit rate) while retaining maximum semantic information. To achieve this goal, it introduces the use of learnable query tokens, masked auto-encoders, semantic priors (to initialize VQ layer), and AR prediction loss, in the audio tokenization pipeline. The applications include audio generation, text-to-speech and multimodal LLMs.
Claims And Evidence: - Several essential concepts are not properly explained. For example, the concept of query tokens is introduced on page 3, line 157 (right column), without any definition. Then, line 205 (left column) says that [CLS] is a learnable query token. Overall, the concept of a query token as used in this work is not clear to me.
- I have concerns regarding the novelty of the work.
- It seems the bit-rate is reduced by hyperparameter tuning (12.5Hz, 25 Hz, 50Hz). Please correct me if I missed something essential.
- Second, the semantic richness of tokens is achieved by query tokens, but the concept of the same is not clear to me.
- The use of transformers, MAE, VQ layer initialization and AR prediction loss seems novel but they contribute more to the audio processing side and would be a very good contribution to audio conferences and journals.
Methods And Evaluation Criteria: The paper uses a wide range of experiments and tasks to evaluate the proposed method. They aim at evaluating both compression efficiency (bit rate) and the semantic richness of the tokens. They appear alright to me.
Theoretical Claims: As I mentioned above, several things, such as the concept of query tokens, are not clearly explained.
Experimental Designs Or Analyses: The experiments have been performed on a variety of tasks and look good.
Supplementary Material: I read some sections to understand the main text.
Relation To Broader Scientific Literature: It relates to the broader literature very well. Many state of the art tokenizers and encoders have been discussed.
Essential References Not Discussed: Tokenization is also used for audio retrieval. One may refer to works such as "Spoken-Term Discovery using Discrete Speech Units, Interspeech 2024" and "wav2tok: Deep Sequence Tokenizer for Audio Retrieval, ICLR 2023".
Other Strengths And Weaknesses: - Clarity of writing: the proposed method is not clear to me, in particular, the concept of query tokens.
- Contributions: I am not convinced about contributions to ML. They need to be highlighted by the authors.
Other Comments Or Suggestions: None
Questions For Authors: - What are query tokens? One can learn them from the data, but how are they used during inference. E.g., for text-to-speech, how do we know them; do we derive them from the given text and use for speech synthesis?
- The paper does an impressive number of experiments to show the utility of the proposed tokenizer. But the theoretical contributions to ML need to be highlighted.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's time and patience with our paper. We are delighted to address the reviewer's concerns one by one.
**Q1:** Several essential ... query token is not clear to me.
**A:** We appreciate your feedback regarding the clarity of the "query token" concept. Below, we provide a detailed explanation.
We first revisit the workflow of previous tokenization methods, such as Encodec and SoundStream. As shown in the left part of Figure 2, the input audio is first processed by the encoder module, transforming it into a series of audio frames, denoted as $\boldsymbol{e} \in \mathcal{R}^{T \times d}$, where $T$ represents the number of frames. The value of $T$ is determined by the down-sampling strides in the encoder module. Notably, previous works treat all audio frames equally. However, in such a setting, reaching a low frame rate (e.g., 12.5 Hz) requires increasing the down-sampling stride to 1920, which significantly degrades reconstruction due to: (1) ignoring the fact that different audio frames encode varying levels of information; and (2) failing to leverage contextual dependencies across frames.
Thus, we propose a query-based compression strategy. Instead of employing large down-sampling strides, we first use a Patchify module to segment the audio into frames. Our approach then introduces learnable query tokens, which dynamically extract information across multiple frames. **Notably, query tokens are learnable embedding vectors that are updated throughout the training process.** As described in Section 3.2, these learnable query tokens are combined with the audio frames and processed by a transformer encoder, where they adaptively aggregate important information. After that, the original audio frames are discarded, and only these query tokens are passed to the RVQ module and the decoder.
In summary, query tokens serve as learnable embedding vectors designed to capture holistic contextual information from the audio frames. Since the number of query tokens is smaller than the number of audio frames, this effectively reduces the frame rate while preserving essential information. The concept of query tokens is analogous to BERT [1], where a [CLS] token is placed at the beginning of a sentence to capture its semantic representation (in contrast, in our study, we introduce multiple query tokens via query-token interleaving). To enhance clarity for readers, we denote this query token as [CLS] in our paper (line 205).
**Q2:** It seems the bit-rate is...
**A:** As discussed in Q1, the low bitrate is not achieved through hyperparameter tuning. Instead, we introduce query tokens that summarize contextual information across frames, where the number of query tokens directly determines the bitrate.
**Q3:** Second, the ... same is not clear to me.
**A:** We appreciate this comment. As discussed in Q1, we employ query tokens to capture holistic audio context information from audio frames. Similar to BERT, semantic information is effectively aggregated into the query tokens with the aid of a transformer encoder.
**Q4:** The use of transformers, MAE, .. seems novel ... audio conferences and journals.
**A:** We thank the reviewer for recognizing our contributions. Although our approach involves audio processing techniques, its core contribution lies in machine learning methodologies—specifically, transformer-based compression and representation learning for audio modeling. Our method is not limited to audio; the query-based compression strategy can be extended to other sequential modalities. Given the increasing importance of efficient tokenization strategies in large-scale multimodal models, we believe our work is highly relevant to the ICML community.
**Q5:** Essential References Not Discussed...
**A:** We appreciate and agree with the reviewer that tokenization can also be applied to audio retrieval, further highlighting the potential of our research for multimodal tasks. We are pleased to discuss the application of tokenization methods (such as DUSTED and wav2tok) to audio retrieval in our revised version.
**Q6:** Contributions: I am not convinced about contributions to ML.
**A:** Please refer to Q4.
**Q7:** What are query tokens? ...for speech synthesis?
**A:** The reviewer may refer to Q1 for further details. For the text-to-speech task, the corresponding query tokens can be predicted from textual conditions, because we use RVQ to quantize the query tokens into discrete IDs, enabling them to be modeled by an LM-based audio generation framework.
**Q8:** The paper does an impressive number of experiments...
**A:** We thank the reviewer for recognizing our experiments and contributions. For the theoretical contributions to ML, please refer to Q4.
[1] Devlin J, et al. Bert: Pre-training of deep bidirectional transformers for language understanding. NAACL. 2019.
---
Rebuttal Comment 1.1:
Comment: Thanks for answering my questions. I appreciate the work including the extensive experiments. Please include the description of query tokens in the main paper for the benefit of a broader audience.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for the feedback. We appreciate the positive view from the reviewer and are glad that all of concerns have been addressed. All of mentioned changes will be included in our final version.
Best wishes | Summary: The authors propose ALMTokenizer, an audio tokenizer designed to enhance compression efficiency and reconstruction quality at a low bitrate. Its key innovations include a query-based framework, semantic priors in vector quantization (VQ) codebooks by leveraging self-supervised learning (SSL) model feature clusters, MAE and LM losses, and a two-stage training approach. Experimental results show that ALMTokenizer meets or exceeds the performance of recent neural audio codecs in both compression and semantic information measures. Additionally, downstream generative modeling with ALMTokenizer’s audio tokens achieves stronger performance compared to competing tokenizers.
## update after rebuttal
After reviewing the authors’ rebuttal and their reply to the rebuttal comments, I find that the updated experimental results sufficiently address most of my concerns. I encourage the authors to incorporate the points discussed during this exchange into the revised manuscript. Given these improvements, I have updated my score from 3 to 4 and now recommend acceptance of the paper.
Claims And Evidence: The proposed methods and evaluation criteria are relevant.
Methods And Evaluation Criteria: Proposed methods and/or evaluation criteria are reasonable and aligned with the problem/application.
Theoretical Claims: I reviewed proposed methods and found them sound.
Experimental Designs Or Analyses: I reviewed the experimental design and analyses and found them sound.
Supplementary Material: The details of audio language model framework as well as the subject evaluation of audio reconstruction.
Relation To Broader Scientific Literature: This work contributes to improving neural audio codecs at low bit rates and demonstrates enhanced performance on downstream generative tasks. These advancements are relevant to broader audio generative modeling, including applications such as speech-language modeling, text-to-speech, and voice conversion.
Essential References Not Discussed: Essential references are well discussed.
Other Strengths And Weaknesses: Strengths:
* The proposed query-based compression framework is promising for easily adjusting compression rates, making it potentially useful across varied low-bitrate scenarios.
* The paper is well-structured, with detailed explanations of the architecture and training process. The analysis of experimental outcomes is both comprehensive and straightforward, aiding reader understanding.
* Improved performance in downstream generative modeling including TTS indicates that the proposed approach can be integrated effectively into broader applications.
Weaknesses:
* Although the paper’s ablation study suggests that each proposed technique contributes, the simultaneous use of multiple methods (semantic prior VQ, MAE, LM loss, and two-stage training) complicates fair comparisons. For example, while semantic VQ priors provide only marginal improvements in semantic tasks, they slightly degrade audio reconstruction quality. Additionally, MAE, LM loss, and two-stage training could likely be integrated independently into other neural audio codecs. A more direct comparison using only the core techniques in the main experiments would strengthen the paper’s central claim and clarify the true necessity of the auxiliary techniques.
* The paper does not sufficiently address how changes in specific hyperparameters or components (e.g., MAE or AR loss) might affect the final model’s behavior. If the authors assert that each technique is equally important, analyzing these variations would help determine whether the approach is robust or susceptible to performance degradation under different configurations.
Other Comments Or Suggestions: I don't have any other comments or suggestions.
Questions For Authors: I don't have any other questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our contributions. We do appreciate the constructive comments the reviewer provided to us to further improve our paper. We are delighted to have the following discussion with the reviewer.
**Q1** : Although the paper’s ablation study suggests that each proposed technique contributes, the simultaneous use of multiple methods (semantic prior VQ, MAE, LM loss, and two-stage training) complicates fair comparisons.
**A** : We thank the reviewer for acknowledging the contributions of our proposed techniques. To systematically validate each component's effectiveness, we have conducted extensive ablation studies (Table 6 of the manuscript), including:
**(1) Query-Based Framework** : We constructed a MimiCodec-style baseline [1] with identical convolutional patchify/unpatchify modules and transformer-based encoder/decoder. The key distinction lies in our proposed query-based framework, which demonstrably improves audio compression and semantic performance (Table 6, Row 3).
**(2) MAE and LM (AR) Losses** : Ablation experiments confirm that both losses enhance semantic performance (Table 6, Row 4, and Figure 3).
**(3) Two-Stage Training** : Compared to one-stage training, our two-stage strategy consistently improves reconstruction and semantic metrics (Table 6, Row 6).
These experiments clearly clarify the impacts of our techniques.
**Q2**: For example, while semantic VQ priors provide only marginal improvements in semantic tasks, they slightly degrade audio reconstruction quality.
**A:** We agree with the reviewer’s observation regarding the trade-off between semantic performance and reconstruction quality. One potential reason is that we fix the VQ codebooks during training, which differs from traditional VQ training (as noted in Section 3.2). This aligns with our discussion in the limitations section: while semantic VQ priors improve semantic performance, jointly optimizing semantic information and minimizing reconstruction loss remains an open challenge. We highlight this trade-off to encourage future work on balanced solutions.
**Q3:** Additionally, MAE, LM loss, and two-stage training could likely be integrated independently into other neural audio codecs.
**A:** We appreciate and agree with the reviewer that our proposed techniques, such as the MAE loss, LM loss, and two-stage training strategy, can be integrated into other neural audio codec models, such as Encodec and MimiCodec. As discussed in Q1, we have conducted ablation studies to validate the effectiveness of each part. We hope these contributions will inspire broader adoption in the audio codec community.
**Q4:** The paper does not sufficiently ..., analyzing these variations would help determine whether the approach is robust or ... under different configurations.
**A:** We appreciate this suggestion and provide additional analyses:
**(1) Mask Rate in MAE Loss.**
Inspired by MAE [2], we tested three mask-rate ranges: (10–20\%), (20–30\%), and (30–40\%), as the table below shows. Results indicate that higher rates (30–40\%) benefit semantics but harm reconstruction, leading us to adopt an intermediate range (20–30\%).
| mask rate range | UTMOS | DNSMOS | VISQOL | PESQ | STOI | ASR | ER |
|:---------------:|:------:|:-------:|:------:|:----:|:-----:|:-----:|:----:|
| 10-20% | 3.77 | 3.62 | 3.78 | 2.0 | 0.81 | 18.7 | 27.7 |
| 20-30% | 3.76 | 3.64 | 3.78 | 2.0 | 0.81 | 18.3 | 29.0 |
| 30-40% | 3.36 | 3.06 | 3.31 | 1.58 | 0.77 | 18.1 | 29.6 |
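As an illustrative sketch of this masking hyperparameter (the function below is an assumption for illustration, not the paper's implementation), sampling a per-utterance frame mask at a rate drawn from one of these ranges could look like:

```python
import random

def sample_mask(num_frames, rate_range=(0.2, 0.3), rng=None):
    """Boolean mask over audio frames; the mask rate is drawn uniformly
    from rate_range, mirroring the adopted (20-30%) setting."""
    rng = rng or random.Random()
    rate = rng.uniform(*rate_range)
    k = round(num_frames * rate)  # number of frames to mask
    masked = set(rng.sample(range(num_frames), k))
    return [i in masked for i in range(num_frames)]
```

For 100 frames, between 20 and 30 positions end up masked under the adopted range.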
**(2) The hyperparameters of the MAE and AR loss weights.** To better understand the influence of the MAE and AR loss weights ($\lambda_1$ and $\lambda_2$), we evaluated three settings: (1, 1), (0.5, 0.5), and (0.5, 0.1). The experimental results are shown in the table below. Since (0.5, 0.1) obtains the best performance, we empirically choose $\lambda_1=0.5$ and $\lambda_2=0.1$ as our default setting.
| $\lambda_1$ | $\lambda_2$ | UTMOS | DNSMOS | VISQOL | PESQ | STOI | ASR | ER |
|:------:|:------:|:-------:|:----------:|:---------:|:-------:|:-------:|:------:|:----:|
| 1 | 1 | 3.69 | 3.55 | 3.70 | 1.8 | 0.78 | 18.4 | 29.7 |
| 0.5 | 0.5 | 3.71 | 3.58 | 3.77 | 1.9 | 0.77 | 19.0 | 28.8 |
| 0.5 | 0.1 | 3.76 | 3.64 | 3.78 | 2.0 | 0.81 | 18.3 | 29.0 |
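A minimal sketch of this weighting scheme (the individual loss values are placeholders; only the $\lambda$ weighting follows the rebuttal):

```python
def total_loss(recon, mae, ar, lam1=0.5, lam2=0.1):
    """Weighted sum of reconstruction, MAE, and AR (LM) losses,
    using the default weighting (0.5, 0.1) chosen above."""
    return recon + lam1 * mae + lam2 * ar

# e.g., scalar loss values from one batch (hypothetical numbers)
loss = total_loss(recon=1.0, mae=2.0, ar=3.0)  # 1.0 + 0.5*2.0 + 0.1*3.0 = 2.3
```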
[1] Défossez A, et al. Moshi: a speech-text foundation model for real-time dialogue. 2024.
[2] He K, et al. Masked autoencoders are scalable vision learners. CVPR 2022.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' careful responses and their efforts to address my concerns, particularly their clarification regarding one of my questions (Q4). However, it would be beneficial if the authors could explicitly highlight the advantages of their proposed main architecture compared to the baseline methods without MAE, LM loss, and two-stage training, since these components could also potentially be applied to the baselines. Nevertheless, I acknowledge and value the contributions of this study, including the introduction of these techniques, and will maintain my initial positive evaluation.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for the feedback. We appreciate the positive perspective and are glad that most of the concerns have been addressed. We agree that it would be beneficial to highlight the advantages of our proposed main architecture compared to baseline methods.
To demonstrate the advantages of our proposed architecture, we conducted an ablation study comparing the performance of the following two methods:
(1) **The proposed query-based compression strategy**, which uses convolutional patchify/unpatchify modules and a transformer-based encoder/decoder as the backbone, without MAE loss, LM loss, and two-stage training.
(2) **The previous SOTA method, MimiCodec [1]**, using the same convolutional patchify/unpatchify modules and transformer-based encoder/decoder for a fair comparison.
For the MimiCodec baseline, we applied down-sampling rates of [2, 4, 5, 6, 8], resulting in a frame rate of 12.5 Hz to match that of our proposed method. Additionally, the codebook size (2048) and number of VQ layers (3) were kept the same across both models.
The experimental results are shown in the table below:
| model | UTMOS | DNSMOS | VISQOL | PESQ | STOI | ASR | ER |
|:--------------------------------:|:--------:|:----------:|:---------:|:--------:|:--------:|:--------:|:---------:|
| MimiCodec-style baseline | 2.49 | 3.13 | 3.37 | 1.58 | 0.77 | 34.5 | 22.6 |
| Proposed Query-based compression | **3.54** | **3.41** | **3.44** | **1.69** | **0.78** | **27.2** | **24.5** |
These results clearly demonstrate the effectiveness of the proposed query-based compression strategy.
Finally, we appreciate the reviewer’s suggestions to further improve our work, and we will incorporate this discussion in the final version of the paper.
[1] Défossez A, et al. Moshi: a speech-text foundation model for real-time dialogue. 2024. |
Safe Delta: Consistently Preserving Safety when Fine-Tuning LLMs on Diverse Datasets | Accept (poster) | Summary: The authors propose a new harmful fine-tuning (and benign fine-tuning) defence method that estimates a correction vector that is applied after training the model. They show that their method doesn’t harm utility while maintaining a low attack success rate.
Claims And Evidence: Within the scope of previous literature using HEX-Phi and AOA Identity Shifting to evaluate previous defences, I find the claims about the effectiveness of this method as a defence to be sound. The method is clearly able to balance utility.
Methods And Evaluation Criteria: The method is quite valuable not only because of its efficacy but also because it doesn’t need access to the attack distribution the attacker uses, which is a limitation of some methods.
The chosen evaluation approach is quite limited from the attack perspective. Much larger attack datasets such as [1] are commonly used in previous literature. Can the authors please add an evaluation on [1] using more attack samples, such as the 10k used in [2]? Please also report the utility loss after this experiment, as it would be good to test the limits of this method.
[1] Ji, J., Liu, M., Dai, J., Pan, X., Zhang, C., Bian, C., ... & Yang, Y. (2023). Beavertails: Towards improved safety alignment of llm via a human-preference dataset. *Advances in Neural Information Processing Systems*, *36*, 24678-24704.
[2] Rosati, D., Wehner, J., Williams, K., Bartoszcze, L., Gonzales, R., Maple, C., Majumdar, S., Sajjad, H., & Rudzicz, F. (2024). Representation noising: A defence mechanism against harmful finetuning.
There are more recent baselines that seem important to add due to their popularity. I’d recommend at least also evaluating Lisa [3] which is more standard for harmful fine-tuning defence than what was chosen. Please try to add another post-fine-tuning method like [4]. [5] is also relevant but it should be considered concurrent work with this manuscript.
[3] Huang, T., Hu, S., Ilhan, F., Tekin, S. F., & Liu, L. (2024). Lazy safety alignment for large language models against harmful fine-tuning. *arXiv preprint arXiv:2405.18641*, *2*.
[4] Antidote: Post-fine-tuning safety alignment for large language models against harmful fine-tuning.
[5] Yi, X., Zheng, S., Wang, L., de Melo, G., Wang, X., & He, L. (2024). NLSR: Neuron-Level Safety Realignment of Large Language Models Against Harmful Fine-Tuning
Theoretical Claims: I reviewed the proof of Theorem 4.1 and it seems sound.
Experimental Designs Or Analyses: Generally the experimental design and analysis is fine.
An adaptive attack on this method is missing. For example, [1] provides an attack that isn’t considered by the authors but that I would encourage them to include, where the samples are purposely designed to overcome these types of safeguards because the samples might not create meaningful deltas. In this case, if you had knowledge of the safe delta estimation dataset, perhaps you’d construct an attack that is purposely as close to this dataset as possible.
[1] Halawi, D., Wei, A., Wallace, E., Wang, T. T., Haghtalab, N., & Steinhardt, J. (2024). Covert malicious finetuning: Challenges in safeguarding llm adaptation
Supplementary Material: I reviewed the appendices.
Relation To Broader Scientific Literature: This work is part of a broader initiative reviewed in [1] to prevent training-time attacks on large language models. Specifically, this work provides a post-training correction method that does not harm utility.
[1] Huang, T., Hu, S., Ilhan, F., Tekin, S. F., & Liu, L. (2024). Harmful fine-tuning attacks and defenses for large language models: A survey.
Essential References Not Discussed: This paper neglects quite a lot of prior work; I would suggest a review of [1]. In particular, there seem to be many similar methods, for instance [2] and [3]. While many works are concurrent, I do believe the authors need to revise the related works section to properly discuss the differences between current methods for preserving safety when fine-tuning on benign or harmful data, as many of these papers were posted to arXiv over the summer of 2024. As per the guidelines of ICML, it is unreasonable to expect references to works that appeared one month before the submission deadline.
[1] Huang, T., Hu, S., Ilhan, F., Tekin, S. F., & Liu, L. (2024). Harmful fine-tuning attacks and defenses for large language models: A survey.
[2] Huang, T., Bhattacharya, G., Joshi, P., Kimball, J., & Liu, L. (2024). Antidote: Post-fine-tuning safety alignment for large language models against harmful fine-tuning.
[3] Yi, X., Zheng, S., Wang, L., de Melo, G., Wang, X., & He, L. (2024). NLSR: Neuron-Level Safety Realignment of Large Language Models Against Harmful Fine-Tuning
Other Strengths And Weaknesses: Limitations of this method are not explicitly discussed. I would encourage the authors to add a limitations section.
I said it before but one of the main strengths of this work is providing a working method that doesn't require access to the attackers dataset distribution.
Other Comments Or Suggestions: Eq. (4): is “sd” a typo, i.e., is it meant to be “sft”? If it is supposed to be “sd”, then what does “sd” mean?
Section 4.2 and elsewhere: I think some clarity is needed here on terminology. Layers in a neural network often include other bits like the activation functions and attention. Can you clarify in Section 4.2 that your method looks at each individual linear transformation parameterized by a weight matrix? that would make things clearer to the reader and less open to misinterpretation.
Section 4.4: I’d recommend illustrating the complexity and the actual dimensions used in practice for the Hessian inversion computation. A lazy or confused reader might think this is an intractable computation if they missed that it is only in the size of a linear transformation.
“Hence we design a layer-specific threshold of the form” → It’s confusing to me what the random variable in this expectation is; is it the inputs? It might be clearer just to take a mean rather than an expectation.
“we use the PureBad” → It’s not called PureBad; it’s called HEX-Phi. I’d recommend that this be corrected in the text.
“contains implicitly harmful examples” → I’d recommend explaining what this means, i.e., the absolutely obedient agent (AOA) attack.
Questions For Authors: Are there any additional adaptive attack the authors can think of and discuss in the paper?
I’m curious if the authors think there could exist a benign fine-tuning dataset that could be prevented from being learned by this correction, perhaps if it were very much out of distribution. I’d encourage the authors to think about this.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful reviews and comments. We will address your concerns and questions as follows:
> C1: Much larger attack datasets should be considered. Can the authors please add an evaluation using more attack samples such as 10k?
Thank you for your thoughtful advice. We conducted experiments on the PureBad dataset at 1k and 10k sizes, sampled from [1]. The results below show that Safe Delta still maintains safety and basic utility. We will discuss the relevant works in our paper.
|Datasize →|1k|10k|
|-|-|-|
|Model↓,Metric→|MT-B↑/ASR↓|MT-B↑/ASR↓|
|Finetuned|5.1/95.1|5.2/94.6|
|SafeDelta|6.0/4.8|6.1/4.6|
> C2: Some recent methods should be included, such as Lisa[2].
Thanks for your thoughtful advice. We will add discussion of these suggested works in our paper.
Due to limited time and resources during the rebuttal period, we only evaluated Lisa on the PureBad dataset. The table below shows the basic utility and safety performance of Lisa.
|Method↓, Metric→|MMLU↑|MT-B↑|ASR↓|HS↓|
|-|-|-|-|-|
|Finetuned|44.35|5.43|95.76|4.82|
|Lisa|44.72|5.91|8.48|1.32|
|SafeDelta|44.61|6.18|3.33|1.13|
> C3 & Q1: An adaptive attack is missing. Are there any additional adaptive attack the authors can think of and discuss in the paper?
Thanks for your thoughtful advice.
We agree with your analysis: if an adaptive attacker knows the estimation dataset of SafeDelta, they could construct a corresponding attack dataset. In practice, however, the dataset held by the model provider is hard to access, making such an attack difficult and costly to mount.
As for the attack you mentioned, its code has not been released, so it is hard for us to reimplement it within the limited rebuttal period.
We agree that it is an interesting direction for future research, and we will add discussion about this in our paper.
> C4: Concern about references. Neglect some relevent works.
Thank you for your thoughtful advice. We will add the suggested works to the literature review.
> W1: Add a limitation section.
Thank you for your thoughtful review. We will add limitations regarding:
- SafeDelta may be vulnerable to future attacks with well-designed data.
- A more advanced weight selection method, instead of greedy method, could improve performance.
> Q2. I’m curious if the authors think there could exists a benign fine-tuning dataset that could be prevented from learning by this correction.
Thank you for your insightful question. If attackers know the preparation dataset, they may construct an out-of-distribution dataset that degrades the performance of SafeDelta. The key point here is whether they can access the preparation dataset.
We agree that this is an interesting direction for future research, and we will add a discussion of it to the paper.
> Q3. Eq (4) - is sd a typo, is it meant to be sft?
"sd" is not a typo; it is short for Safe Delta, following the definition in Eq. 1.
> Suggestions about improving writing.
Thank you for your helpful suggestions. We will improve the relevant sections in the next version.
## REF
[1] Beavertails: Towards improved safety alignment of LLM via a human-preference dataset. 2024
[2] Lisa: Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning Attack. 2024 | Summary: The paper introduces Safe Delta, a two-stage method that estimates the effects of specific datasets on safety and utility, compensating the safety degradation while maintaining utility.
Claims And Evidence: The claims made by the authors are supported by the experimental results presented, though there are some concerning aspects regarding the difference between the reported results and previous literature, as well as the statistical significance of said results (see Experimental Designs or Analyses below).
Methods And Evaluation Criteria: The method proposed in this work appears to be novel. The motivation is backed by a theoretical analysis of the safety degradation we want to balance, and an approximation of the utility improvement to preserve under $\mathbf{W}_{\text{sd}}$. The ablation provided in the experiments shows the benefits of the safety compensation vector computed derived through the safety degradation analysis.
Generally, the datasets considered are diverse and widely studied in previous works, while the baselines are appropriate methods for comparison in this setting.
Theoretical Claims: I did not carefully check the proof of Theorem 4.1., but on quick inspection it appears to be correct.
Experimental Designs Or Analyses: The experimental setup described is sound, and generally appears to be appropriate to test the claims the authors make about Safe Delta.
However, there are a few noticeable issues with the analysis:
- The reported performance of the baselines in this paper is quite far off from the reported performance in the original papers. For example, for PureBad the authors report MT-B of 6.05, ASR of 84.24%, and an HS of 4.21 for Safe LoRA whereas the original paper reports on the same dataset and model MT-B of 6.34, ASR of 3.03, and an HS of 1.055 (Hsu et al., 2024). The reported values for SafeInstr in this dataset and model also differ quite drastically from the ones reported in (Hsu et al., 2024). Is there a significant difference in the experimental setup between the two papers? If so, it would be extremely relevant to try Safe Delta in the setup considered by (Hsu et al., 2024) to see if the trends still hold.
- Given there are often small differences in terms of utility and/or safety metrics, it is hard to say if some of the results are statistically significant. While I understand the difficulties of running all results multiple times, the authors should run this analysis at least on a subset of the experiments to more effectively observe trends in the results.
- The time cost comparison with other methods, while extremely relevant for practical purposes, is not clearly explained in the paper. The authors mention that Safe Delta requires an “extra time cost of 62s per request,” but a more exact explanation of what is understood by a request in this experimental setting is not clarified in the text or available in the appendix. Further, it is unclear to me that all the numbers in Table 6 are directly comparable — do they reference the same fine-tuning dataset or one with the same number of examples (with the exception of BEA-10 and BEA-750)? How does this analysis change for, e.g., a 13B parameter model?
References:
- Hsu, Chia-Yi, et al. "Safe LoRA: The silver lining of reducing safety risks when finetuning large language models." Advances in Neural Information Processing Systems, 2025.
Supplementary Material: I reviewed some of the experimental details in the Appendix.
Relation To Broader Scientific Literature: Safe Delta is novel in terms of methodology, but due to some questions on the efficacy and efficiency of the method it is uncertain whether this is a marked improvement compared to previous methods.
Essential References Not Discussed: The related work section is comprehensive.
Other Strengths And Weaknesses: - The paper is very clear and easy to read, which is a strength of the work.
- While this method has some obvious limitations (e.g., scalability), the paper is missing a detailed section on this.
Other Comments Or Suggestions: - Use (a) and (b) or left and right in the description of Section 5.4. to clarify the two plots.
- Minor typo in Appendix E.1. where it should read “Harmful.”
Questions For Authors: - Why are the baseline performances so different in Pure Bad compared to previously reported ones?
- Is the trend observed in Section 5.3. for harmful datasets similar for benign ones?
- Are the time costs “per request” reported as the delta over the full fine-tuning time over 3 epochs, and for comparable datasets? I understand the difference between BEA-10 and BEA-750, but what dataset is being considered for the 62s reported by Safe Delta?
- On the same hardware as the time cost experiments, how long does it take to run the preparation stage for different model sizes (e.g., 7B vs 13B)? I understand that it can be cached, but it is still an important factor for model providers that regularly update their base models. Is it even feasible to do this for 70B+ models?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful reviews. We are glad that you found our work **novel, theoretically grounded, and clearly presented**. Below, we address your concerns:
> C1 & Q1: Baseline performance in PureBad differs from previously reported ones.
### SafeLoRA
We appreciate your careful observation. We used SafeLoRA's official code, and the performance gap is expected due to differences in hyperparameter settings (specifically, the similarity threshold).
As noted in Sec 5.3, the SafeLoRA paper does not specify this threshold for full fine-tuning, so we tuned it ourselves. The table here compares SafeLoRA's original reported results with our implementation using different thresholds:
||PureBad|||Dirty Summary|||
|-|-|-|-|-|-|-|
|SafeLoRA|MT-B↑|ASR(%)↓|HS↓|F1↑|ASR(%)↓|HS↓|
|Report|6.34|3.03|1.05|0.497|8.79|1.30|
|Threshold=0.6|6.21|2.73|1.06|0.268|3.33|1.09|
|Threshold=0.4|5.98|93.94|4.73|0.479|7.58|1.28|
A threshold of 0.6 matches the reported PureBad results but harms the utility on Dirty Summary, while a threshold of 0.4 matches the reported Summary performance but sacrifices PureBad safety. This indicates that **matching SafeLoRA’s original performance requires tuning hyperparameters per dataset.** However, as discussed in Sec. 1, such tuning leads to high computational costs and limits practical usability — this is our main motivation.
Thus, for each method, we used a fixed hyperparameter across all datasets for fair comparison.
Since there is a threshold-dependent trade-off, we tuned the threshold on Dirty Summary and chose 0.52 to balance utility and safety (see Appendix D.3), aligning with fine-tuning service users’ goals.
### SafeInstr
Since SafeLoRA has not released its code and dataset for SafeInstr, we followed BEA's implementation of SafeInstr [1] and achieved comparable results. On PureBad, ours: `ASR 37.82, HS 2.74`, BEA's: `ASR 34.91, HS 2.49`.
[1] BackdoorAlign: Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment. NeurIPS 2024
> C2: Small differences make statistical significance unclear.
To address this, we run repeated experiments on Dirty Summary, where SafeDelta shows small metric superiority.
We train 3 models per method and test each 10 times with different random seeds, evaluating utility (F1) and safety (ASR).
Results (mean/std deviation) are:
|Method|F1(x10^-3)↑|ASR(%)↓|
|-|-|-|
|SafeInstr|**486**/6|44.65/3.8|
|BEA|475/6|13.52/4.6|
|SafeLoRA|468/4|7.44/2.2|
|Resta|473/5|10.05/1.8|
|our|**482**/5|**5.92**/1.3|
T-tests (95% confidence) confirm our method outperforms baselines in safety (ASR) and utility (F1), except SafeInstr, which slightly exceeds in F1 but lags in safety.
While acknowledging this comparison, we'd like to emphasize:
**Instead of excelling on a single dataset, our method prioritizes consistent safety across diverse settings without compromising utility(see Fig 1).**
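The significance check described above can be reproduced with a Welch two-sample t-statistic; this is a sketch under the assumption of per-seed metric samples (the numbers below are hypothetical, not the paper's):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t-statistic (sample variances; unequal sizes ok).
    |t| above the 95% critical value (~2.1 at these sample sizes) indicates
    a statistically significant difference in means."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

# hypothetical per-seed ASR (%) samples for two methods
ours = [5.9, 6.1, 5.7, 6.0, 5.9]
baseline = [7.4, 7.2, 7.6, 7.5, 7.3]
t = welch_t(ours, baseline)  # strongly negative: lower ASR for "ours"
```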
> C3 & Q3: What is a "request"? Are numbers in Table 6 comparable? Does analysis change for 13B model?
A "request" is a complete fine-tuning job, aligning with practical fine-tuning services where a user uploads data and receives a final model. "Extra time" refers to the time overhead required for defensive fine-tuning compared to standard fine-tuning.
These contents will be added to Section 5.8.
The time costs for data-based methods (BEA) and weight-modification methods (SafeLoRA, SafeDelta) in Table 6 are not directly comparable. Data-based methods require extra training data, so the time cost depends on extra data size and model size. However, weight-modification methods depend only on model size, so dataset details are not provided. To clarify, Table 6 is intended only "for reference" (as stated in Section 5.8); we will further revise the table to avoid misinterpretation.
For a 13B model, SafeDelta takes ~110s extra time and SafeLoRA takes ~212s, due to the larger number of parameters.
> C4: Lack discussion of limitation (e.g., scalability)
Based on your review, we assume you are referring to scaling to larger models. As shown in Table 4, SafeDelta performs effectively on a 13B model (~110s extra time). If you meant another scalability aspect, please let us know.
We will add discussions of limitations:
- SafeDelta may be vulnerable to future attacks with well-designed data.
- A more advanced weight selection method, instead of greedy method, could improve performance.
> Q2: Is trend in Sec 5.3 similar for benign datasets?
No, the trend differs. We conducted experiments on Math datasets (sizes 5k, 7.5k, and 15k), which show that BEA consistently maintains safety, with an ASR of 2%.
> Q4: What is the preparation time for different model sizes?
Preparation times for different model sizes are summarized below. 7B/13B experiments use the same hardware as the time cost experiments. For 70B models, 4 A100-80G GPUs are used due to memory demands.
The results indicate the preparation times are acceptable for model providers:
|Model Size|7B|13B|70B|
|-|-|-|-|
|Pre. Time(s)|211|378|2620|
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed rebuttal. They have addressed most of my major concerns, so I have updated my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful review and for reading our rebuttal. We appreciate your feedback and will incorporate the suggested updates into the final version. | Summary: This paper introduces a novel defensive method to enhance LLM safety after fine-tuning. Specifically, it proposed to Safe Delta, which consists of a preparation step performed before fine-tuning and two steps (Finding Delta Parameters, Adding Safety Compensation) executed for each fine-tuning request. The goal of Safe Delta is to maximize the total utility improvement while keeping the safety degradation below a threshold. The author reports the attack success rate (ASR) and harmfulness score (HS) to evaluate the safety and uses the respective metrics for utility. Experimental results showed that Safe Delta could effectively balance the safety and utility after fine-tuning.
Claims And Evidence: The claims are supported by the experimental results and the theorem.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate.
Theoretical Claims: The theorem and proof are clearly presented. However, the reviewer did not really understand the correlation between the theorem and the Optimal Brain Surgeon. There is really no need to link this approach to neuroscience terms.
Experimental Designs Or Analyses: The experimental designs and analyses are good. However, the reviewer does not understand why the utility improvement could use $||\mathbf{W}_\textnormal{sd} - \mathbf{W} _\textnormal{orig}||_2^2$ as an objective, what is the role of $\mathbf{W} _\textnormal{sft}$ playing here?
Supplementary Material: There is no code provided for review.
Relation To Broader Scientific Literature: This paper proposes the first utility-safety balance optimization at weights level, which is a good contribution for improving LLM safety.
Essential References Not Discussed: Most papers are well cited and discussed.
Other Strengths And Weaknesses: **Strengths**
- The paper is well written, and the figures/tables are well presented.
- The experimental results look promising, and this method might be useful in preventing models from losing certain capacities during fine-tuning by applying certain compensation.
Other Comments Or Suggestions: none
Questions For Authors: - It is surprising that $\mathbf{M}_R$ could achieve such a low Harmful Score. How do the authors explain such phenomena?
- Please also address the questions mentioned above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your insightful reviews and comments. We will address your concerns and questions as follows:
> C1: Reviewer did not really understand the correlation between the theorem and the Optimal Brain Surgeon, which is neuroscience terms.
Thank you for raising this question.
The question seems to reflect a potential misinterpretation of Optimal Brain Surgeon.
In this paper, "Optimal Brain Surgeon" refers to a classic family of model pruning methods [1, 2], which are cited in our paper; the name is historical and does not link our approach to neuroscience.
These methods inspired our theorem for identifying the weights important for safety.
We will explicitly state the method type in the next version.
[1] Yann LeCun, John S. Denker, Sara A. Solla. Optimal brain damage. NeurIPS 1989
[2] Babak Hassibi, David G. Stork. Second order derivatives for network pruning: Optimal brain surgeon. NeurIPS 1992
> Q1: Why the utility improvement could use $\Vert \mathbf{W_{\text{sd}}} - \mathbf{W_{\text{orig}}}\Vert_2^2$ as an objective, what is the role of $\mathbf{W_{\text{sft}}}$ plays here?
Thank you for your thoughtful question.
We use $\Vert \mathbf{W_{\text{sd}}} - \mathbf{W_{\text{orig}}}\Vert_2^2$ as an objective because it reflects how many fine-tuned delta weights are kept, and thereby the utility gain. $\mathbf{W_{\text{sft}}}$ is implicitly integrated into $\mathbf{W_{\text{sd}}}$, expressed as $\mathbf{W_{\text{sd}}} = \mathbf{W_{\text{orig}}} + \mathbf{M} \odot (\mathbf{W_{\text{sft}}} - \mathbf{W_{\text{orig}}})$. As in Step 1, SafeDelta constructs $\mathbf{W_{\text{sd}}}$ from $\mathbf{W_{\text{sft}}}$ and $\mathbf{W_{\text{orig}}}$ via a selective mask $\mathbf{M}$.
Here, $\mathbf{M}$ determines which weights to adopt from the fine-tuned model and which to retain from the original model.
Selecting more delta weights increases $\Vert \mathbf{W_{\text{sd}}} - \mathbf{W_{\text{orig}}}\Vert_2^2$ and preserves more utility gains ($\mathbf{W_{\text{sd}}}$ is closer to $\mathbf{W_{\text{sft}}}$), while selecting fewer delta weights decreases $\Vert \mathbf{W_{\text{sd}}} - \mathbf{W_{\text{orig}}}\Vert_2^2$ and discard more utility gain ($\mathbf{W_{\text{sd}}}$ is closer to $\mathbf{W_{\text{orig}}}$).
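As an illustrative sketch of this mask construction (using flattened weight lists for simplicity; the actual method operates on weight matrices):

```python
def apply_delta_mask(w_orig, w_sft, mask):
    """W_sd = W_orig + M * (W_sft - W_orig), elementwise.
    mask[i] = 1 keeps the fine-tuned delta weight, 0 reverts to the original."""
    return [wo + m * (ws - wo) for wo, ws, m in zip(w_orig, w_sft, mask)]

def sq_dist(a, b):
    """||a - b||_2^2, the utility proxy discussed above."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

w_orig, w_sft = [1.0, 2.0, 3.0], [2.0, 4.0, 5.0]
w_sd = apply_delta_mask(w_orig, w_sft, [1, 0, 1])
# keeping two of the three deltas preserves 5 of the 9 units of
# ||W_sft - W_orig||^2; an empty mask would give 0 (back to W_orig)
```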
> Q2: Why random selection achieve such a low Harmful Score?
The effect of delta weight attribution would explain this phenomenon.
As we discussed in Section 4.1, the delta weights after fine-tuning contribute to two performance changes: (1) utility improvement and (2) safety degradation.
Randomly discarding parts of delta weights progressively reduces their contribution to safety degradation. For example, in the extreme case of full removal, safety reverts to the original model’s level (1.06 Harmful Score).
In the random selection experiments, we discarded about 50% of the delta weights, thus mitigating the safety degradation: the Harmful Score drops to 1.92, corresponding to a 27% ASR.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes.
1. The proposed method is novel and effective.
2. Figure 3 is clear to understand.
Theoretical Claims: Yes. I have checked the proofs in the Appendix. The authors provide a comprehensive proof for the conclusion.
Experimental Designs Or Analyses: Yes.
1. I am confused about why the Llama-3-8b-instruct results are not also included in Table 2, since that experiment could show realistic performance in a real-world deployment setting.
2. The authors might test on some over-refusal datasets, because many defense methods suffer from over-refusal issues, refusing some normal questions.
Supplementary Material: Yes. I checked all the Appendix.
Relation To Broader Scientific Literature: 1. A safety-aware post-training defense method that adjusts the delta parameters (i.e., the parameter change before and after fine-tuning).
2. Safe Delta jointly estimates safety degradation and dynamically optimizes delta parameters, addressing the challenge of different fine-tuning scenarios.
3. Safe Delta is an efficient method compared to baselines.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: 1. How does the methodology perform when applied to Multimodal Large Language Models (MLLMs)?
2. Have the authors evaluated the method's efficacy against jailbreak attacks? Given that real-world deployments often implement both post-safety-aware fine-tuning and jailbreak defense fine-tuning concurrently, it would be valuable to include an experimental study testing the proposed defense against state-of-the-art jailbreak methods.
3. For the Llama-3-8b experiment, could comparative baseline performance metrics be provided to contextualize the results?
4. Does Safe Delta exhibit performance degradation in sequential interaction scenarios where a user initially poses a malicious query followed by benign questions? This is particularly relevant given that defensive mechanisms frequently suffer from over-refusal issues.
5. Regarding Figure 4, what is the performance trajectory of Safe Delta when Harmful Dataset Size is substantially increased? It would be beneficial to explore extreme cases in the Appendix to understand the method's scalability and robustness under high-volume harmful data conditions.
6. In the attacker settings described in the evaluation methodology, how does the performance of content filtering approaches (such as Llama-Guard-3) applied to the training dataset compare with Safe Delta's effectiveness?
Other Comments Or Suggestions: No
Questions For Authors: No
Ethics Expertise Needed: ['Privacy and Security']
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful reviews. We appreciate your recognition of our work as **novel, effective and efficient**. Below, we address your concerns:
> C1 & Q3: Baseline performances on Llama3-8b-instruct
Thank you for your thoughtful advice.
Since all baselines release their code based on Llama2-7b-chat, to ensure reproducibility, we chose to base our main experiments on Llama2 as well.
To address your concern, we extend the Llama3-8b experiments in Table 4 to two representative datasets, PureBad and DirtySummary. The table below shows that SafeDelta effectively preserves safety while not harming utility.
||PureBad||||DirtySummary|||
|-|-|-|-|-|-|-|-|
|Method|MMLU↑|MT-B↑|ASR↓|HS↓|F1↑|ASR↓|HS↓|
|SafeInstr|64.5|6.53|45.15|2.72|0.471|19.09|1.65|
|BEA|64.3|6.79|13.03|1.47|**0.483**|10.00|1.34|
|SafeLoRA|65.1|**6.88**|88.48|4.32|0.463|12.73|1.42|
|Resta|63.6|6.29|91.82|4.54|0.461|9.39|1.33|
|SafeDelta|**65.3**|6.83|**6.36**|**1.24**|0.477|**7.58**|**1.29**|
> C2: Over refusal issue.
Following the standard practice in this field [1-4], we initially did not include an over-refusal test.
Recognizing the importance of your concern, we test SafeDelta under the most/least harmful settings: finetuned on PureBad/Math dataset. We employ ORBench [5] for evaluation. Results show that SafeDelta does not suffer from over-refusal issues and performs comparably to the original model:
|Model|OR rate↓|
|-|-|
|Orig|18.8|
|PureBad+SafeDelta|18.3|
|Math+SafeDelta|17.8|
"Orig" refers to original model; "OR rate" measures the percentage of refused benign questions (lower is better).
> Q1. How does SafeDelta perform for Multimodal LLMs?
This work focuses on text modality safety, so MLLMs were not considered. We plan to explore this in future work. SafeDelta can be adapted for MLLMs by using multimodal safety data to compute the Hessian matrix, with the weight adjustment process remaining unchanged.
> Q2. What is the efficacy against jailbreak attacks?
Since our work focuses on the safety of fine-tuning rather than inference, we initially omitted jailbreak tests, following standard practice in this field [1-4].
To address your concern, we test SafeDelta against typical jailbreak attacks: GCG, AutoDAN, and PAIR. Each generates 200 examples. To simulate black-box access in a fine-tuning service, we perform transfer attacks using Vicuna-13B for GCG and AutoDAN.
We test the original model (Orig) and PureBad-finetuned model with SafeDelta.
The results show that SafeDelta preserves the original model's defense against jailbreaks:
|Attack|Orig(%)|PureBad+SafeDelta(%)|
|-|-|-|
|GCG|1.5|1.5|
|AutoDAN|1.5|2.5|
|PAIR|2|2|
Here, numbers are ASR (lower means stronger defense).
> Q4. Does SafeDelta degrade in sequential interactions (harmful queries followed by benign ones)?
We followed standard setups in this field [1-4] without considering this scenario.
To address your concern, we simulate 200 sequential interactions: each involves a PureBad harmful query, LLM's answer, and a follow-up Summary query.
The results confirm that SafeDelta maintains utility in this scenario.
||Direct|Sequential|
|-|-|-|
|Finetuned|0.491|0.484|
|SafeDelta|0.489|0.480|
Above are F1 scores (higher is better). "Direct" uses direct queries; "Finetuned" is the model standard-finetuned on DirtySummary.
Since the model is fine-tuned on direct queries, there is little degradation in sequential scenario.
> Q5. What is the performance when Harmful Dataset Size is substantially increased?
Thanks for your advice. We test SafeDelta on the PureBad dataset with 1k and 10k sizes. Results show that SafeDelta still maintains safety (ASR) while preserving basic utility (MT-B).
|Datasize→|1k|10k|
|-|-|-|
|Model↓,Metric→|MT-B↑/ASR↓|MT-B↑/ASR↓|
|Finetuned|5.1/95.1|5.2/94.6|
|SafeDelta|6.0/4.8|6.1/4.6|
> Q6. How does content filtering perform?
We initially did not consider content filtering methods, as they are ineffective on datasets with benign content (AOA, MATH), where fine-tuning still harms safety.
To address your concern, we filter the dataset using `Llama3-Guard-8b` and finetune on the filtered data. As expected, this approach performs poorly:
|Dataset|Filter Rate|Filter ASR|SafeDelta ASR|
|-|-|-|-|
|PureBad|83|82.1|3.33|
|DirtySummary|7.5|51.7|5.15|
|AOA|0 (No Defense)|
|Math|0 (No Defense)|
"Filter Rate" is the percentage of data filtered out; "Filter ASR" is the ASR of the model fine-tuned on the filtered dataset.
### References
[1] Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models. NeurIPS 2024
[2] Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment. NeurIPS 2024
[3] Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic. ACL 2024
[4] Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! ICLR 2024
[5] An Over-Refusal Benchmark for Large Language Models. 2024
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' responses to my concerns. I look forward to seeing the revised version with these updates incorporated and will adjust my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for reviewing our paper and reading our rebuttal. Since ICML does not allow modifications to the paper during the rebuttal stage, we will incorporate these updates into the final version:
- Over-refusal and sequential experiments, content filtering performance, and a discussion on applying the method to multimodal LLMs will be included in the main paper.
- Experiments with LLaMA3-8B, jailbreak attacks, and large-scale dataset evaluations will be added to the Appendix, with corresponding discussions included in the main paper.
We are truly grateful for your time and your reply. | null | null | null | null | null | null |
WAVE: Weighted Autoregressive Varying Gate for Time Series Forecasting | Accept (poster) | Summary: In this paper the authors integrate the attention mechanism used for time series forecasting with the moving-average concept used in classic statistical ARMA models. In particular, they devise indirect MA weights on top of patch-tokenized time series, with an emphasis on linear attention-level complexity. They show via an empirical study that the resulting WAVE mechanism significantly improves forecast accuracy.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes.
Supplementary Material: No.
Relation To Broader Scientific Literature: N/A.
Essential References Not Discussed: Literature on pretrained decoder-only time-series-specific models includes discussions of end-to-end evaluation, usually as part of the empirical study; see, e.g.,
[1] Liu, Yong, et al. "Timer: generative pre-trained transformers are large time series models." Proceedings of the 41st International Conference on Machine Learning. 2024.
[2] Das, Abhimanyu, et al. "A decoder-only foundation model for time-series forecasting." Forty-first International Conference on Machine Learning. 2024.
Other Strengths And Weaknesses: Strength:
- The WAVE mechanism, especially the patch level MA formulation, is novel among TSF methods.
- The design and the empirical study included in the paper are following common rigorous practices thus are relatively convincing.
Weakness:
- The intuition and the theory behind the proposal are relatively weak, especially compared to the original context when ARMA applies.
- The empirical study does not fully show the ARMA structure is working as intended.
Other Comments Or Suggestions: Math in the paper could be better organized. Consider provide more intuition, a central place for notation definitions, and more concise math writings.
Questions For Authors: I am overall positive towards this submission, and plan to increase my recommendation once the following concerns / questions are addressed.
1. While ARMA has a strong statistical interpretation, ARMA at the patch level in a stacked Transformer becomes theoretically hard to understand. While there are examples (e.g., Fig 6) attempting to visualize the ARMA effect, what's the intuition behind WAVE at the patch level, and how can one claim it's working as intended instead of, e.g., merely introducing inductive biases in the model that work for the selected benchmark datasets?
2. Tokenization: while the authors claim an appropriate tokenization is required, the chosen tokenizer in the paper is a MLP(?) structure on top of patched time series which are common in PatchTST line of work. Any consideration of other tokenization strategies?
3. encoder-only vs decoder-only: the benefit of decoder-only mainly comes from its training efficiency and autoregressive decoding. The former is more crucial for pretrained models, so I wonder if we can see see any empirical study results on autoregressive decoding beyond a single step of the proposed method. Otherwise what's the justification of choosing decoder-only?
4. It's counterintuitive to see quadratic attention work worse than linear attention most of the time, given that the effective context length after patching is not long in the empirical study. What's the explanation, or are the quadratic models far from optimally trained? Can we include more commonly used lookback / forecast horizon pairs, as in, e.g., the PatchTST paper?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > Pretrained decoder-only models & end to end evaluation
We'll clarify our introduction regarding design purpose and use this as evidence in Section 2.2 showing AR attention-based TSF models perform comparably to other structures.
> Intuition behind WAVE on a patched level
> Context of ARMA
> Inductive biases issue
Thanks for this insightful question. Due to MLPs and non-linearities, patch-level AR weights resist direct interpretation on original observations. Our parameterization provides **separate AR/MA attention weights** whose visualizations (Figures 6, 9-13) reveal **token-level cyclic patterns** captured by AR weights and **short-term effects** by MA weights.
Each attention layer's value part can be viewed as a **reconstructed aggregation** of observations through a factor model. The Sequence Value section above each figure shows these layer-internal observations, essentially optimal input observations for ARMA structure learned at each layer.
This optimization occurs in all multi-layer AR attention models. For non-first layers, each input sequence step differs from the original observation as each layer aggregates information from other steps. While this makes interpreting attention weights on original inputs difficult, interpreting them on the model's constructed optimal input sequence remains meaningful.
In Figure 9, the three layers' AR/MA weights show distinct patterns. First-layer AR weights capture detailed **long-term/cyclic relationships** for different input lengths, while the second and third layers capture stable cyclic patterns. MA weights show varying focus: the first layer on distinctive **short-term dependencies**, the second on shared decreasing short-term effects, and the third on shorter corrective effects. This separation helps explain the model's mechanics better than pure AR models.
Regarding inductive bias, we believe all recent networks rely on inductive biases, while what matters is their generalization ability. Our ARMA attention has demonstrated good generalization across 12 widely-used TSF datasets at different scales, proving its effectiveness as a structural prior.
> Other tokenization strategies
We use this tokenization to maintain next-step autoregressive relationships while covering the entire forecasting horizon. This represents PatchTST-style tokenization with autoregressive loss, enabling direct benchmark comparison. Our focus is efficiently introducing **MA terms to AR attention**, and we experimented with inter-channel mixing (results at [link (Table 2)](https://anonymous.4open.science/r/WAVE-Rebuttal-B8D2)), showing multivariate relationships do improve performance.
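A minimal sketch of this non-overlapping, channel-independent patch tokenization (the patch length, embedding size, and the random linear projection standing in for the learned embedding are all illustrative assumptions, not the paper's settings):

```python
import numpy as np

def tokenize(series: np.ndarray, patch_len: int, d_model: int) -> np.ndarray:
    """Split a univariate series into non-overlapping patches and embed each
    patch with a random linear projection (a stand-in for the learned MLP)."""
    rng = np.random.default_rng(0)
    n_tokens = len(series) // patch_len
    patches = series[: n_tokens * patch_len].reshape(n_tokens, patch_len)
    w_embed = rng.normal(size=(patch_len, d_model))
    return patches @ w_embed  # shape: (n_tokens, d_model)

# 96 input steps with patch length 16 give 6 tokens; an autoregressive loss
# then predicts token t+1 from tokens up to t, with no patch overlap.
tokens = tokenize(np.arange(96, dtype=float), patch_len=16, d_model=8)
print(tokens.shape)  # (6, 8)
```

The non-overlapping split is what keeps the token sequence purely autoregressive: token t+1 contains no observations already present in token t.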
> Encoder-only vs Decoder-only
Our decoder-only pure AR model extends PatchTST with **autoregressive loss**. We omitted multi-step autoregressive prediction comparisons since baselines used different training objectives. Experiments confirm AR/ARMA models outperform prior approaches in one-step forecasting. Though our model shows advantages with shorter contexts due to autoregressive training, this is a secondary benefit as longer contexts are preferred when available. Since decoder-only models operate within training windows, our evaluation uses **full contexts**, maintaining consistency with previous benchmarks.
Additionally, decoder-only attention naturally performs multi-task training across varying context lengths, learning more adaptable representations that enhance generalization for TSF.
> Quadratic attention works worse / far from optimally trained? Why is linear better?
We agree. Research like DLinear, PatchTST, and iTransformer highlights serious overfitting in TSF Transformers. Softmax attention's non-linear capacity easily fits noise.
Linear attention has a special **dual structure**:
Autoregressive form of attention:
$$
\mathbf{o}\_t = \sum\_{i=1}^t \mathbf{w}\_{t,i} \mathbf{v}\_i \ , \mathbf{w}\_{t,i} = \mathbf{q}\_t \mathbf{k}\_i^\top \in \mathbb{R}
$$
Vector Autoregressive form of linear attention ($\mathbf{q}\_t \mathbf{k}\_i^\top = \mathbf{k}\_i \mathbf{q}\_t^\top$):
$$
\mathbf{o}\_t = \sum\_{i=1}^t \mathbf{k}\_i \mathbf{A}\_{t,i} \ , \mathbf{A}\_{t,i} = \mathbf{q}\_t^\top \mathbf{v}\_i \in \mathbb{R}^{d \times d}
$$
This dual linearity provides both **AR form and dynamic vector autoregression form**, providing **regularization**, and improving **linear relationship learning** in both channel- and token-wise dimensions.
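This duality can be checked numerically (a toy sketch with unnormalized linear attention; dimensions and matrices are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 4
Q, K, V = (rng.normal(size=(N, d)) for _ in range(3))

t = N - 1  # inspect the last position

# AR form: o_t = sum_i (q_t . k_i) v_i -- scalar weights over value vectors.
o_ar = sum((Q[t] @ K[i]) * V[i] for i in range(t + 1))

# Dual VAR form: o_t = sum_i k_i A_{t,i} with A_{t,i} = q_t^T v_i (a d x d matrix).
o_var = sum(K[i] @ np.outer(Q[t], V[i]) for i in range(t + 1))

assert np.allclose(o_ar, o_var)  # the two linear forms give the same output
```

The equivalence holds because the scalar weight $q_t k_i^\top$ can be moved inside the outer product, so the same computation reads either as weighted value aggregation or as keys transformed by dynamic $d \times d$ matrices.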
> Commonly used lookback / forecast horizon pairs
For long-term forecasting, we used $L_I \in \{512, 1024, 2048, 4096\}$ with all $L_P$ values. We tested baseline setting $L_I=512$ and others. For stable token counts, we used fixed $L_I=1024$ for short-term prediction and specific combinations for long-term prediction. Table 3 demonstrates model's stability across different $L_I$ lengths. Our model works effectively with **very long lookbacks** ($L_I=4096$), showing excellent adaptability to extended contexts, which is a capability rare among existing models. | Summary: The author incorporates a moving average term into the autoregressive attention model for linear attention mechanisms, achieving state-of-the-art performance.
Claims And Evidence: 1. Effectiveness of the decoder-only autoregressive Transformer
- In time series forecasting (TSF), the previously overlooked decoder-only autoregressive Transformer can achieve results comparable to top baseline methods with appropriate tokenization and training strategies.
2. Incorporating the full ARMA structure into autoregressive attention
- Inspired by the ARMA model and recent advances in linear attention, this paper integrates the full ARMA structure into autoregressive attention mechanisms, improving long-range and local temporal modeling.
3. Proposing WAVE attention
- WAVE attention combines autoregressive (AR) and moving-average (MA) components, enhancing adaptability to different attention mechanisms while improving their ability to model long-range dependencies and local patterns.
- An indirect MA weight generation method introduces the MA component without increasing time complexity or parameter size in efficient attention models.
4. Effectiveness of indirect parameter generation for MA weights
- The study shows that indirect parameter generation can implicitly produce MA weights suited for capturing local temporal effects.
5. Experimental validation of WAVE attention
- WAVE attention consistently enhances autoregressive attention mechanisms and achieves state-of-the-art (SOTA) performance in time series forecasting tasks.
Methods And Evaluation Criteria: N/A
Theoretical Claims: contains no proofs.
Experimental Designs Or Analyses: N/A
Supplementary Material: all of it.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: s1. The WAVE attention mechanism captures short-term effects through the MA term, allowing the AR term to focus more on long-term and periodic patterns, thus balancing long-term and short-term dependencies.
s2. The WAVE attention mechanism proposed in the paper introduces the MA term while maintaining the same time complexity (O(N)) and parameter scale as the underlying efficient attention model.
s3. The experiments in the paper are comprehensive, covering both long-term and short-term time series forecasting, as well as ablation experiments.
w1. The method in this paper requires increasing the input length when the prediction length is extended. Even though, as the authors suggest, this method can be viewed as PatchTST with an added AR loss, PatchTST itself does not require a longer input length when the prediction length increases. Overall, while this design seems to alleviate the issue of error accumulation, it is not entirely natural. Existing decoder-only models like AutoTimes do not need this design yet still remain state-of-the-art, demonstrating the true capability of decoder-only models: the ability to predict future horizons of any length.
w2. Sensitivity of parameters not analyzed.
Other Comments Or Suggestions: 1. figure 3 is too small to understand
Questions For Authors: q1. Why is each row of matrix B in the visualization basically uniform?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > The method in this paper requires increasing the input length when the prediction length is extended. Even though, as the authors suggest, this method can be viewed as a patch for patchtst with added AR loss, patchtst itself does not require a longer input length when increasing the prediction length. Overall, while this design seems to alleviate the issue of error accumulation, it is not entirely natural.
Thank you for your suggestion. Our paper's main contribution is designing the ARMA attention mechanism and demonstrating that attention with ARMA structure significantly outperforms pure AR attention. Our tokenization method is merely a low-cost prerequisite problem introduction, aimed at incorporating AR training into the existing baseline experimental environment with minimal modifications. This allows us to isolate the improvement of adding the MA term to AR attention without introducing additional complex designs. We did this **only to control the introduction of irrelevant factors**.
We use non-overlapping PatchTST-style tokenization + AR loss to maintain a **pure autoregressive relationship** between tokens. This ensures the MA term calculation better aligns with its original design. Otherwise, the MA term calculation would be affected by overlapping portions between the current and previous tokens. While this wouldn't significantly impact performance, using such results for comparison wouldn't sufficiently demonstrate that ARMA structure's performance gain comes from properly introducing the MA term. Similarly, if we mixed channel information in tokenization to model multivariate relationships, it would improve model performance to some extent (see [link (Table 2)](https://anonymous.4open.science/r/WAVE-Rebuttal-B8D2)). However, mixing channel information would place output tokens containing single series information and input tokens containing channel relationships in different spaces, breaking the AR training objective. These additional factors would weaken our experimental results showing ARMA attention's improvement over AR forecasting, requiring additional ablation studies. Therefore, to straightforwardly demonstrate our results, we used this constrained pure AR setting to prove our conclusions while controlling these extra structures.
As shown in Table 3, this constrained setting provides additional insights: compared to current baselines with L_I≈512, it adapts to longer context lengths. This suggests some advantages for this **low-cost tokenization approach** when transferring AR attention to current experimental settings. However, these experiments were still designed to highlight ARMA attention's improvements over pure AR attention, which remains our primary objective.
> Sensitivity of parameters not analyzed.
Thank you for your suggestion. We modified the seed and ran our model five times, with results shown in [link (Table 4)](https://anonymous.4open.science/r/WAVE-Rebuttal-B8D2). We will add these additional experiments to the revision.
> Figure 3 is too small to understand.
Thank you for your suggestion. We will move the smaller font in Figure 3 to the caption to help readers understand it better.
> Why is each row of matrix B in the visualization basically uniform?
This is an intentional design choice, as we explained in detail in the footnote at the bottom right of page 5 and demonstrated in Figure 7. When matrix B lacks smoothness, the long-term components in the lower triangle dominate, preventing effective short-term modeling of MA terms.
Increasing element variance in matrix $\mathbf{B}$ causes greater fluctuations in longer-term elements of $\mathbf{\Theta}$. This can be understood through the simplified mean form in Eq. 5: $\theta_{ij} = b(1 + b)^{i-j-1}$. The lower-left portion of $\mathbf{\Theta}$ with larger values of $i-j-1$ corresponds to longer-term components. Due to the larger exponents in this area, variance is more **significantly affected** by variance in $b$. This is described in the paper as:
---
In the key activation, $ \alpha $ controls the variance of each row in the $ \mathbf{B} $ matrix, indirectly influencing the amount of long-term information (lower left) in the MA weights $ \mathbf{\Theta} $. Increasing $ \alpha $ would make the MA weights focus more on modeling long-term information. However, since we want the AR weights to handle the long-term component, we set $ \alpha $ to a relatively small value. This explains why the rows of the $ \mathbf{B} $ matrix **appear smooth** in the visualization. Refer to Fig. 7 for more details on $\alpha$, and see Fig. 8 for the effects of reversed positive $\phi_q$.
---
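The simplified mean form $\theta_{ij} = b(1+b)^{i-j-1}$ quoted above can be illustrated numerically (toy values of $b$; `theta` is a hypothetical helper, not code from the paper):

```python
import numpy as np

def theta(b: float, n: int) -> np.ndarray:
    """Lower-triangular MA weights under the simplified mean form
    theta_ij = b * (1 + b)**(i - j - 1), for i > j."""
    m = np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            m[i, j] = b * (1 + b) ** (i - j - 1)
    return m

# A larger b inflates the long-term (lower-left) entries far more than the
# near-diagonal ones, since they carry larger exponents i - j - 1.
small, large = theta(0.1, 6), theta(0.5, 6)
print(small[5, 0], large[5, 0])  # the longest-term entry grows much faster with b
```

This is why a small $\alpha$ (hence low-variance, smooth-looking rows in $\mathbf{B}$) keeps the MA weights focused on short-term effects, leaving long-term structure to the AR weights.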
Rebuttal Comment 1.1:
Comment: The author's response basically resolved my concerns, and I will keep my score. | Summary: The paper proposes WAVE, a novel attention mechanism integrating autoregressive (AR) and moving average (MA) components for time series forecasting (TSF). The key contributions include:
1. Demonstrating that a decoder-only autoregressive Transformer, with proper tokenization and preprocessing, achieves performance comparable to state-of-the-art (SOTA) baselines.
2. Introducing an ARMA structure into autoregressive attention via an indirect MA weight generation method, which maintains linear time complexity and parameter efficiency.
3. Validating WAVE's effectiveness across 12 TSF benchmarks, showing that it improves AR-based Transformers and achieves SOTA results. Experiments demonstrate WAVE's superiority over existing methods in both short- and long-term forecasting.
Claims And Evidence: The paper's primary claims are largely supported by experiments. For instance, Figure 1 and Table 2 illustrate that AR Transformers perform competitively against baselines, while all WAVE variants outperform their AR counterparts. Table 7 further confirms that WAVE maintains computational efficiency in terms of FLOPs and parameter counts.
Methods And Evaluation Criteria: The ARMA structure in WAVE is well-designed: the MA term models short-term impacts via error accumulation, and the indirect MA weight generation avoids explicit matrix inversion through linear attention mechanisms. These design choices are methodologically sound and align with the goal of balancing efficiency and performance.
Theoretical Claims: While the paper formalizes the ARMA structure and WAVE attention, it lacks rigorous theoretical proofs (e.g., asymptotic complexity analysis to substantiate WAVE’s linear time complexity claims). The validity of the method is primarily empirically justified.
Experimental Designs Or Analyses: The experimental section supports the main claims but has limitations. For example, the ablation studies could be expanded to validate the impact of AR Transformer components (e.g., tokenization strategies, weight-sharing mechanisms) on performance.
Supplementary Material: The appendix provides additional details on related work, datasets, hyperparameter settings, and experimental results with visualizations, complementing the main text effectively.
Relation To Broader Scientific Literature: This work situates itself within the broader scientific literature through three principal connections. First, it extends the foundational architecture of decoder-only Transformers (Vaswani et al., 2017) and their computationally efficient variants, establishing their applicability to time series forecasting via deliberate tokenization strategies. Second, the study introduces a novel integration of classical autoregressive moving average (ARMA) principles (Box et al., 1974) into modern attention mechanisms, effectively bridging decades-old statistical forecasting techniques with contemporary deep learning paradigms. Third, the study resolves limitations of exponential decay in gated attention (e.g., MEGA) by explicitly separating MA terms, preserving long-range capabilities while capturing local patterns.
Essential References Not Discussed: The paper proposes a decoder-only Transformer with ARMA-enhanced attention (WAVE) for time series forecasting, claiming state-of-the-art performance. However, it does not cite TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis (Wu et al., ICLR 2023), which also addresses multi-periodicity modeling and achieved SOTA results in forecasting tasks. A discussion of TimesNet's 2D temporal modeling approach and its differences from WAVE's ARMA mechanism would strengthen the technical context, and inclusion in experiments is recommended for rigorous benchmarking.
Other Strengths And Weaknesses: Strengths:
The novel integration of the ARMA structure with efficient attention mechanisms provides a fresh perspective for time series forecasting (TSF), demonstrating the potential of AR Transformers in this domain.
The extensive experimental validation across multiple benchmarks strengthens the credibility of the results.
Weaknesses:
The absence of a dedicated related work section in the main text, which may hinder contextualizing the method within existing literature.
Limited ablation studies to isolate the contributions of key components (e.g., tokenization, AR/MA term interactions).
The core novelty of introducing ARMA into TSF could benefit from further justification.
Potential inconsistencies in Figure 1 (e.g., arrow directions in the schematic diagram).
Other Comments Or Suggestions: Future work could explore the generalizability of WAVE to multivariate time series, as the current experiments focus primarily on univariate scenarios.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > It lacks rigorous theoretical proofs (e.g., asymptotic complexity analysis to substantiate WAVE’s linear time complexity claims)
Thank you for your suggestion. We provide the time complexity analysis below:
**Proposition** For a sequence of length $N$ and embedding dimension $d$, WAVE attention based on **efficient linear attention** maintains a time complexity of $O(Nd^2)$, which is linear in the sequence length.
**Proof** We analyze the complexity of each component of WAVE attention separately:
**AR Component Complexity** For linear attention variants, computing $\mathbf{o}^{\text{AR}}_t$ for all positions $t$ requires:
- Computing query, key, value matrices: $O(Nd^2)$
- For each position $t$, computing a running sum $\mathbf{S}\_t = \mathbf{S}\_{t-1} + (\mathbf{k}^{\text{AR}}\_t)^\top \mathbf{v}\_t$: $O(Nd^2)$
- Computing $\mathbf{o}^{\text{AR}}\_t = \mathbf{q}\_t \mathbf{S}\_t$ for all $t$: $O(Nd^2)$
Total AR component complexity: $O(Nd^2)$
**MA Component Complexity** The indirect MA weight generation and computation involves:
- Computing residuals $\mathbf{r}\_j = \mathbf{v}\_{j+1} - \mathbf{o}^{\text{AR}}\_j$: $O(Nd)$
- Computing $\mathbf{q}^{\text{MA}}$ and $\mathbf{k}^{\text{MA}}$ matrices: $O(Nd^2)$
- Applying activation functions $\phi^{\text{MA}}\_q$ and $\phi^{\text{MA}}\_k$: $O(Nd)$
- For each position $t$, computing a running sum $\mathbf{T}\_t = \mathbf{T}\_{t-1} + \phi^{\text{MA}}\_k(\mathbf{k}^{\text{MA}}\_t)^\top \mathbf{r}\_t$: $O(Nd^2)$
- Computing $\mathbf{o}^{\text{MA}}\_t = \phi^{\text{MA}}\_q(\mathbf{q}^{\text{MA}}\_{t-1}) \mathbf{T}\_{t-1}$ for all $t$: $O(Nd^2)$
Total MA component complexity: $O(Nd^2)$
**Final Output Complexity** Computing $\mathbf{o}_t = (\mathbf{o}^{\text{AR}}_t + \mathbf{o}^{\text{MA}}_t)\mathbf{W}_o$ for all $t$: $O(Nd^2)$
Therefore, the total time complexity of WAVE attention based on linear attention is:
$O(Nd^2 + Nd^2 + Nd^2) = O(Nd^2)$
This confirms that WAVE attention maintains linear time complexity with respect to the sequence length $N$ when applied to **efficient linear attention** mechanisms.
**Corollary** For WAVE attention based on **standard softmax attention**, the time complexity remains $O(N^2d + Nd^2)$, matching the underlying softmax attention mechanism.
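The running sums in the proof can be sketched as follows (a simplified illustration only: the feature maps and activations $\phi$ are omitted and all projections are random stand-ins, so this is not the full WAVE implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 6, 4
Q, K_ar, V, Q_ma, K_ma = (rng.normal(size=(N, d)) for _ in range(5))

# AR component via the running sum S_t = S_{t-1} + k_t^T v_t -> O(N d^2) overall.
S = np.zeros((d, d))
o_ar = np.zeros((N, d))
for t in range(N):
    S += np.outer(K_ar[t], V[t])  # accumulate (k_t^AR)^T v_t
    o_ar[t] = Q[t] @ S            # o_t^AR = q_t S_t

# MA component: residuals r_t = v_{t+1} - o_t^AR accumulated into T_t,
# with o_t^MA = q_{t-1}^MA T_{t-1} as in the proof (activations omitted).
T = np.zeros((d, d))
o_ma = np.zeros((N, d))
for t in range(N - 1):
    T += np.outer(K_ma[t], V[t + 1] - o_ar[t])
    o_ma[t + 1] = Q_ma[t] @ T

o = o_ar + o_ma  # combined output before the final W_o projection
```

Each step updates a fixed $d \times d$ state and does $O(d^2)$ work, so the whole pass over $N$ tokens is $O(Nd^2)$, linear in the sequence length as claimed.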
> 1. The ablation studies could be expanded to validate the impact of AR Transformer components (e.g., tokenization strategies, weight-sharing mechanisms)
> 2. Limited ablation studies to isolate the contributions of key components (e.g., tokenization, AR/MA term interactions)
> 3. Explore the generalizability of WAVE to multivariate time series
Thank you for your suggestions. We provide experimental results in [link (Table 2)](https://anonymous.4open.science/r/WAVE-Rebuttal-B8D2) comparing channel-independent tokenization with channel-mixing tokenization, as well as results without autoregressive training loss. These findings support our claims. Additionally, without weight-sharing, parameter size would exceed pure AR attention, creating an unfair comparison with the same number of layers. We'll discuss this further in the revision appendix. Regarding AR/MA terms, every experiment includes comparisons between pure AR and ARMA, effectively serving as an ablation study. For interpretability, please refer to the following discussion:
We illustrate using the hierarchical AR/MA weights visualization in Figure 9. The three layers exhibit different patterns. First layer AR weights capture detailed **long-term and cyclic relationships** at different input lengths, while second and third layers capture **common stable cyclic patterns**. First layer MA weights focus on distinct short-term dependencies across input lengths, second layer on shared decreasing short-term effects of fixed block length, and third layer on shorter-term correction effects. These AR/MA weights make the model's operation more interpretable compared to pure AR models.
> 1. Does not cite TimesNet
> 2. Discussion of TimesNet's 2D temporal modeling approach
> 3. The absence of a dedicated related work section in the main text
> 4. Potential inconsistencies in Figure 1
Thank you for your suggestion. TimesNet uses 2D convolution on adjacent time series to simultaneously aggregate channel-wise and temporal-wise information, which differs conceptually from AR attention-based methods. We have provided performance comparisons with TimesNet in [link (Table 3)](https://anonymous.4open.science/r/WAVE-Rebuttal-B8D2).
We will add a related work section in our revision to discuss recent TSF models beyond attention-based methods.
We have verified that Figure 1 accurately reflects the ARMA structure calculation method. In our revision, we will move smaller fonts from Figure 1 to the caption for improved clarity. | Summary: The Weighted Autoregressive Varying Gate (WAVE) attention is a new mechanism that augments Transformer attention with both an autoregressive component and a moving-average component. By combining ideas from statistical models with efficient Transformer architectures, WAVE expands the modeling capacity for time series forecasting while keeping the model lightweight and fast.
Claims And Evidence: The paper claims that WAVE-attention-equipped Transformers outperform state-of-the-art TSF models while maintaining O(N) time complexity, and that its indirect MA weight generation allows efficient modeling of short-term effects without significantly increasing model size.
Methods And Evaluation Criteria: Standard TSF benchmarks and baseline comparisons with recent Transformer architectures such as PatchTST, iTransformer, and DLinear.
Theoretical Claims: The paper builds on ARMA modeling principles and argues that WAVE correctly extends attention mechanisms with an implicit MA term. The mathematical formulation for the indirect MA weight generation is clearly derived, showing that the method preserves the computational efficiency of linear attention while approximating a full ARMA structure.
Experimental Designs Or Analyses: The one-step forecasting setup ensures fair comparisons between models. The long-term TSF experiments are insightful, demonstrating that WAVE Transformers scale effectively with increased lookback lengths, while other models suffer from performance degradation.
Supplementary Material: Yes. The supplementary material includes additional visualizations of attention weight distributions, hyperparameter settings, and detailed experimental results.
Relation To Broader Scientific Literature: The study connects to statistical ARMA modeling, demonstrating how principles from classical time series analysis can enhance modern neural architectures.
Essential References Not Discussed: The paper adequately cites recent Transformer-based TSF models, but lacks statistical forecasting approaches (e.g., deep state-space models and mamba, etc).
Other Strengths And Weaknesses: Pros
1. Good results. The proposed model consistently outperforms state-of-the-art TSF baselines across 12 datasets
2. The idea looks interesting and works well. WAVE attention successfully incorporates ARMA modeling principles into Transformer attention, improving both long-term dependency handling (AR) and short-term fluctuation modeling (MA) without increasing computational complexity.
3. The paper is well-written and well-organized.
Cons
1. The model is primarily tested on single-channel TSF tasks, and its performance on multivariate time series remains unexplored.
2. Its potential application to other sequential modeling tasks (e.g., NLP or reinforcement learning) is not discussed
3. Please elaborate more on the relation between mamba and WAVE.
Other Comments Or Suggestions: Further interpretability analysis (e.g., visualization of learned MA weights over time) would strengthen the paper’s conclusions.
The notations are not clear.
Questions For Authors: + How does WAVE perform on multivariate time series forecasting, particularly when dealing with highly correlated input series?
+ Could WAVE be generalized to non-time-series applications, such as language modeling or reinforcement learning?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > The paper adequately cites recent Transformer-based TSF models, but lacks statistical forecasting approaches (e.g., deep state-space models and mamba, etc).
Thank you for your suggestion. We will add a related works section in our revision to discuss statistical forecasting and recent SSM-based TSF models.
> Performance on multivariate time series remains unexplored.
> How does WAVE perform on multivariate time series forecasting.
Thank you for your suggestion. Following your advice, we added multivariate channel-mixing to our tokenization: after obtaining a tensor of shape $(C, N, L_P)$ using AR tokenization (where $C$ is the number of series, $N$ is the patch token count, and $L_P$ is patch size = forecasting horizon), we applied a **linear projection to the $C$ dimension** to mix channel information at the same position. We report comparison results of pure AR vs. WAVE attention with multivariate AR tokenization in [link (Table 2)](https://anonymous.4open.science/r/WAVE-Rebuttal-B8D2). Results show that introducing multivariate effects enhances overall performance, and the improvements from ARMA structure remain effective. We will include these results in our revision.
> Its potential application to other sequential modeling tasks is not discussed.
> Could WAVE be generalized to non-time-series applications?
Thank you for your suggestion. We've addressed this in the limitations section. Transformers for NLP are typically designed to learn complex, unstructured temporal relationships, unlike time series' common patterns (lag autocorrelation, cyclic patterns, seasonal effects, trends, local decay patterns). Introducing ARMA structure into attention provides **stable, interpretable inductive bias** for handling these effects, better modeling the time series generation process. We briefly explained TSF and NLP differences in section 2.5, which inspired our ARMA integration into AR attention. Further research is needed to verify if this TSF-specific inductive bias is effective for sequence tasks without fixed positional autocorrelation effects. We appreciate your understanding.
> Please elaborate more on the relation between mamba and WAVE.
Thank you for your suggestion. Mamba uses the following parameterization method:
$$
\mathbf{o}\_t = C \overline{B} \mathbf{x}\_t + C \overline{A}\_{t} \overline{B} \mathbf{x}\_{t-1} + C \overline{A}\_{t} \overline{A}\_{t-1} \overline{B} \mathbf{x}\_{t-2} + \cdots
$$
Its step-level formulation is:
$$
\mathbf{o}\_t = \sum\_{i=1}^t C \Big( \prod\_{j=i+1}^{t} \overline{A}\_{j} \Big) \overline{B} \mathbf{x}\_{i}
$$
where $\mathbf{x}\_i$ is a $d \times 1$ input column vector. The term $C \big(\prod\_{j=i+1}^{t} \overline{A}\_{j}\big) \overline{B}$ can be viewed as a vector autoregressive weight matrix for step $i$. If we use a diagonalizable parameterization for $\overline{A}\_{j}$, similar to the method in the linear recurrent unit (https://arxiv.org/abs/2303.06349), we get the parameterization $C P \big(\prod_{j=i+1}^{t}\Lambda_j\big) P^{-1} \overline{B}$, where $P$ is an invertible matrix. Here, $CP$ can be seen as $W_o$ in attention, $P^{-1} \overline{B}$ as $W_v$, and each diagonal element in $\prod_{j=i+1}^{t}\Lambda_j$ as an attention score $w_{t,i}$ for each channel. The difference is that here $w_{t,i} = \prod_{j=i+1}^{t}\lambda_j$, rather than $w_{t,i} = f(x_t W_q W_k^\top x_i^\top)$ in attention. In the paper's channel-wise AR format: $o_t = w_{t,i} v_i, \ v_i = P^{-1} \overline{B} x_i$
In summary: Diagonalizable Mamba or S4 (linear recurrent unit) can be viewed as **autoregression on each diagonal channel**, allowing direct incorporation of ARMA structure. However, Mamba or S4 (vanilla) can only be viewed as vector autoregression (SSM system), making it difficult to directly apply our method, which would require exploration in the VARMA domain.
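To make the diagonal-channel view concrete, here is a scalar-channel numerical sketch (our own illustration, not Mamba's implementation; the scalars `lam`, `b`, `c` stand in for a single diagonal channel of $\Lambda_j$, $\overline{B}$, and $C$). It checks that the diagonal linear recurrence equals the attention-style weighted sum with $w_{t,i} = \prod_{j=i+1}^{t}\lambda_j$.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10
x = rng.normal(size=N)
lam = rng.uniform(0.5, 0.99, size=N)  # per-step diagonal element lambda_j
b, c = 0.7, 1.3                       # stand-ins for B-bar and C on one channel

# Recurrent (SSM-style) computation: h_t = lam_t * h_{t-1} + b * x_t, o_t = c * h_t
h, o_rec = 0.0, np.zeros(N)
for t in range(N):
    h = lam[t] * h + b * x[t]
    o_rec[t] = c * h

# Attention-style computation: o_t = sum_i w_{t,i} * c * b * x_i,
# with w_{t,i} = prod_{j=i+1}^{t} lam_j
o_attn = np.zeros(N)
for t in range(N):
    for i in range(t + 1):
        w = np.prod(lam[i + 1 : t + 1])
        o_attn[t] += w * c * b * x[i]

assert np.allclose(o_rec, o_attn)
```

Under this view, each diagonal channel performs autoregression with decay-product weights, which is the structure that would admit a direct ARMA extension.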
> Further interpretability analysis & notations are not clear.
Thank you for your suggestion. We illustrate using the hierarchical AR/MA weights visualization in Figure 9. The AR/MA weights across the 3 layers exhibit different patterns. The first layer's AR weights capture detailed long-term and cyclic relationships corresponding to different input lengths, while the second and third layers' AR weights capture common stable cyclic patterns. The first layer's MA weights focus on **distinctive short-term dependencies** across various input lengths, the second layer focuses on **shared decreasing short-term effects** with fixed block lengths, and the third layer emphasizes **shorter-term correction effects**. These AR/MA weights help make the model's operating mechanism more understandable compared to pure AR models.
We will add more visualization analyses in the revision and move smaller, less clear text from figures to captions for better readability. | Summary: This work introduces a decoder-only Transformer based model for time-series forecasting and introduces the WAVE attention mechanism. The WAVE attention mechanism leverages autoregressive and weighted moving averaging techniques. The authors show that coupling WAVE-based attention and a decoder-only structure outperforms other state-of-the-art models for short-term forecasting and produces comparable results for long-term forecasting. The authors also provide extensive ablation studies into WAVE-based attention mechanisms and their hyperparameters.
Claims And Evidence: The claims made are supported by empirical evidence of the model's performance as well as plots showing the attention matrices. The plots of the attention matrices validate the authors' claims that the AR and MA attend to long-term and short-term phenomena, respectively. However, the authors do not provide evidence of how this affects the AR and MA predictions. I would like to see plots of oAR and oMA as well to validate the claims that oAR and oMA do in fact contain long-term cyclical and short-term dynamics, respectively.
Methods And Evaluation Criteria: This work evaluates multiple attention methods on standard benchmarks. However, it is unclear to me why the authors did not provide long-term prediction results for all the datasets used in the short-term experiments.
- Please report long-term experimental results for the same datasets as in the short-term experiments or provide sufficient reasons for not doing so.
- The authors do not show any time-series forecasts. It is very odd that there are no visualizations of the forecast results.
Theoretical Claims: - The authors attempt to avoid error accumulation by employing non-overlapping patching. However, when linking this patching method to their goal they briefly state "This ensures that each out-of-sample prediction token covers the entire forecasting length LP, thereby avoiding error accumulation." Basically the authors simply state that this works without providing insight or arguments that it should. Specifically, it's not clear how non-overlapping patches in the input mitigate error accumulation in the output, which I interpret as the point of this statement at this stage of the paper.
Experimental Designs Or Analyses: - This work omits important experimental details making it unclear how exactly the experiments were performed. Please provide a section in the supplementary material providing such details. Below I list a few.
- How many experiments were run?
- What is the train/val/test split?
- Which hyperparameters (e.g. Li and Lp) are dataset dependent or dataset and prediction length dependent?
Supplementary Material: The supplementary material provides full empirical results of all the experiments as well as more illustration of the attention matrices. However, the supplementary material is clearly missing plots that sufficiently demonstrate the forecasted time series.
Relation To Broader Scientific Literature: Time-series forecasting is a ubiquitous problem in scientific and industrial tasks. The authors introduce attention mechanisms based on well-established theory within these fields and expand upon them. Their work is broadly applicable to both science and industry.
Essential References Not Discussed: The authors interweave discussion of previous works with their own methodological discussions. I find this approach helpful. However, I do think this paper lacks a sufficiently broad overview of previous works that touch on all aspects of this work.
- Currently, there appears to be one paragraph at the beginning that addresses previous works in general. This needs to be expanded to multiple paragraphs and include more topics discussed in this work. Below are two examples.
- Some mention of linear attention mechanism and the important works as well as their impact.
- Some mention of other works that include weighted moving averages within the attention process. Such references (e.g., ETSformer) are severely lacking. The authors can find a good overview of such works in the introduction and related works of "Powerformer: A Transformer with Weighted Causal Attention for Time-series Forecasting" and "TOTEM: TOkenized Time Series EMbeddings for General Time Series Analysis"
Other Strengths And Weaknesses: **Strengths**
- The introduction of well-established theory into the attention mechanism through the AR and MA methods.
- Extensive experiments showing the benefits of the MA methods.
- Extensive experiments showing how ARMA improves upon multiple types of attention
**Weaknesses**
- Missing experiments and experimental details (described above).
- Poor use of figures and tables
- The claim of how the authors avoid error accumulation is very unclear even though this seems important
- In general, it seems like this paper needs to be read over by a more experienced writer; there are also random periods after some of the symbols, which to me indicate that this manuscript was not thoroughly read over by other authors.
Other Comments Or Suggestions: I consider each bullet point (outside of the listed strengths) as a concern that needs addressing. Below are some further writing and organization concerns. In general I think this is a compelling work ultimately worthy of publication after addressing the concerns and clarifications throughout this review.
- I cannot read the text on most figures
- Figure 3 seems very important but is never mentioned in the text. If it's not important enough to mention in the text it should not exist in the main body. The description of the attention mechanisms in the methods section would greatly benefit from pointing to the corresponding panel in Fig. 3. This figure should also be made much larger, it is too busy for how small it is.
- Table 1 needs to take up more space or be reconfigured, I can't separate the equations in Elin-attn
- Not all tables have bolded and underlined results, why? This would be very helpful
- Tables 4, 5, and 6 are mentioned in the paper in reverse order, their order needs to be reversed.
Questions For Authors: 1) Can you please describe how you avoid error accumulation?
2) Does this method predict the entire prediction sequence all at once, like PatchTST, or does it predict one time-step and then the sequence in an autoregressive fashion?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: > Describe how you avoid error accumulation.
Thank you for this question. To clarify, this approach is not a contribution but a **prerequisite setting** demonstrating that **pure AR attention** can match previous models. Traditional one-step AR models accumulate significant errors during iterative prediction when $L_P>1$. By making each token contain $L_P$ steps, we **eliminate iterative prediction**, naturally avoiding these errors. This allows AR attention to integrate with existing baseline environments that also directly predict all $L_P$ steps. Essentially, this is similar to PatchTST tokenization with AR loss added between non-overlapping tokens, as illustrated in Figure 1(a).
> Please report long-term experimental results for the same datasets
Thank you for this valuable question. We prioritized short-term forecasting as our main experiment, with long-term as supplementary. This choice stems from **baseline limitations** rather than our model structure - previous baselines struggle to efficiently use longer lookback information needed to showcase ARMA attention's effectiveness in long-term forecasting.
Our AR tokenization requires $L_P$ as patch length. For $L_P=720$, we need $L_I=4096$ to maintain sufficient tokens to demonstrate ARMA's benefits (Table 8). However, previous research typically used shorter input lengths.
For fair comparison, we reran all baselines with $L_I \in \{512, 1024, 2048, 4096\}$ and selected the best results. Some baselines (Encformer, PatchTST) become **computationally infeasible** with high $L_I$, $L_P$, and input series count $C$. For large datasets like electricity ($C=321$) and traffic ($C=862$), using $L_I=4096$ makes training impossible even with batch size = 1. Thus, our long-term setting only included datasets up to Solar ($C=137$). Our code repository's baseline.sh verifies this limitation. We'll add a footnote to explain these constraints.
> The authors do not show any time-series forecasts
We'll add figures [link (PDFs)](https://anonymous.4open.science/r/WAVE-Rebuttal-B8D2) in the appendix. Time series visualizations typically show just one test datapoint from one series, providing limited information compared to comprehensive evaluation metrics. Papers in this field rarely include such figures in the main text, occasionally placing them in the appendix without using them to support claims.
> Experimental details, train/val/test split, hyperparameters
This information is detailed in Appendix B.2. Our setup matches DLinear, PatchTST, and iTransformer, using **identical train/val/test splits**. For short-term experiments, we fixed $L_I=1024$; for long-term, we used fixed combinations: $(1024,96)$, $(2048,192)$, $(2048,336)$, $(4096,720)$ for $(L_I,L_P)$. For baselines, we ran all combinations of $L_I \in \{512, 1024, 2048, 4096\}$ for each $L_P$ and selected the best results, ensuring **fair comparison** against optimally-performing baselines, similar to Table 3. Our complete code implementation supports these details.
> Lacks a sufficiently broad overview of previous works
Thank you for this suggestion. Our overview currently focuses on autoregressive attention mechanisms and attention for TSF. We'll expand the background content in the first section and add a related works section in the appendix discussing WMA and EMA applications in attention-based TSF and their relationship with ARMA.
> 1. I cannot read the text on most figures
> 2. Figure 3 never mentioned in the text
> 3. Table 1 needs to take up more space
> 4. bolded and underlined results
> 5. Tables are in reverse order
Thank you for these suggestions. We'll make these corrections:
1. We'll move small text from figures to captions.
2. We referenced Figure 3 in section 2.8 (line 262), though admittedly late. We'll reposition Figures 3 and 4 to address this ordering issue.
3. We'll add line breaks for Std and ELin names to provide more space for formulas.
4. We'll add light color formatting to all tables to highlight performance differences.
5. We'll reorder tables in ascending order.
> Random periods after some of the symbols ... manuscript was not thoroughly read over by other authors
We apologize for this confusion. What appeared as periods were actually `\cdot` **placeholders** in subscripts, particularly in sections 2.7 and 2.8. While normally displayed as $\phi(\cdot)$, in subscripts it appears as $\beta_{\cdot}$, resembling a period. In our revision, we'll use $\beta_{(\cdot)}$ in subscripts to clearly distinguish from periods.
> Plots of oAR and oMA.
While decomposing AR/MA contributions to forecasting would be informative, our **stacked ARMA attention layers** make this challenging, as components mix with counterparts from previous layers. For understanding AR/MA behavior in each layer, our **visualizations of weights** are more appropriate. Please refer to Figure 9-12, which display AR and MA weights for each layer, providing detailed insights into how ARMA functions within layers. | null | null | null | null |
How Do Large Language Monkeys Get Their Power (Laws)? | Accept (oral) | Summary: This work tries to explain a curious phenomenon in LLM test-time scaling via repeated sampling and verification, as well as in Best-of-N jailbreaking:
while the per-problem failure probability should decay exponentially with the number of attempts,
it is often observed in practice that the average success rate on a task (which contains multiple problems) exhibits a power law instead.
The authors prove theoretically that this should be the case if (and only if) the distribution of per-problem single-attempt success probability satisfies certain long-tailed property;
such a condition is validated empirically for multiple tasks and LLMs.
Based on such analyses, this work also proposes a distributional estimator for the coefficients in the power laws of repeated sampling, and validates its efficacy numerically.
**Update after rebuttal:** I have read the authors' rebuttal (as well as other reviews), and will maintain my positive evaluation.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence, both theoretically and numerically.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense to me.
Theoretical Claims: I read through the theoretical analyses in the main text, which make sense to me.
I only skimmed through the technical proofs in the appendix.
Experimental Designs Or Analyses: I have checked all experimental designs and analyses, which are mostly standard statistical analyses for supporting the developed theories.
I don't see any serious issue in the results.
Supplementary Material: I skimmed through the whole appendix.
Relation To Broader Scientific Literature: This work offers some mathematical insights for LLM inference scaling laws that have been extensively studied recently.
The key to solving the puzzle under consideration can be easily explained in one sentence (in Line 231 Left):
"(a known result that) power laws can originate from an appropriately weighted sum of exponential functions".
Although the solution becomes obvious once it has been presented,
it is the essential contribution of this work to bring this to light.
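To illustrate the mechanism for myself, a small numerical sketch (my own, not from the paper): drawing per-problem pass@1 values from a distribution with a polynomial left tail makes the average of the per-problem exponential failure curves $(1-p_i)^k$ decay as a power law in $k$, with exponent set by the tail.

```python
import numpy as np

rng = np.random.default_rng(0)
b = 0.5
# Beta(b, 5) has density ~ p^(b-1) near 0, i.e. a heavy left tail of hard problems
p = rng.beta(b, 5.0, size=200_000)

ks = np.array([10, 100, 1000, 10000])
# Aggregate failure rate: average of per-problem exponentially decaying failures
fail = np.array([np.mean((1 - p) ** k) for k in ks])

# The log-log slope of the aggregate curve should be close to -b
slope = np.polyfit(np.log(ks), np.log(fail), 1)[0]
assert abs(slope + b) < 0.1
```

Each individual term decays exponentially, yet the mixture decays polynomially, which is exactly the one-sentence insight quoted above.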
Essential References Not Discussed: Not that I'm aware of.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: A typo in Line 413 Right: "contributes is a new hypothesis" --> "contributes a new hypothesis"
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive and thorough review of our work. We appreciate your thoughtful assessment of our theoretical and numerical analyses, as well as your recognition of our contribution in applying the mathematical insight about power laws emerging from weighted sums of exponential functions to this specific domain. We will certainly fix the identified typo in Line 413, changing "contributes is a new hypothesis" to "contributes a new hypothesis."
We are committed to making this paper as strong as possible and would value your guidance on what specific improvements would strengthen the manuscript further. If you have any additional suggestions that would elevate your assessment to a 'Strong Accept,' we would be grateful for that feedback and would make every effort to address those points in our revision.
Thank you again for your constructive engagement with our work. | Summary: The paper demonstrates that power law behaviour in “pass at k” metrics originates from a power law tail in the distribution of the “pass at 1” probability across the test set. Furthermore, it argues that directly modeling the “pass at 1” distribution leads to more accurate predictions for the values of “pass at k.”
## Update after rebuttal
I maintain my positive assessment of the paper.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I did not check the proofs and derivations in the appendix, but overall the analytical claims in the paper make sense and agree with similar calculations I have made in the past in different contexts.
Experimental Designs Or Analyses: I have checked the experimental design and analysis to the level with which it is described in the main text. Overall, I found it satisfactory; see a question and a suggestion below.
Supplementary Material: I did not review the supplementary material.
Relation To Broader Scientific Literature: This paper provides a simple but valuable insight into the shape of scaling laws with respect to success probability after multiple attempts. Prior work observed these sometimes behave as a power law, but this appears to be the first work to reconcile this finding with the fact that for an individual data point, the pass at k probability must decay exponentially.
Essential References Not Discussed: I am not aware of any gross omission of related work, but I am also not satisfied with the related work section in the submitted paper - see “Comments Or Suggestions” below.
Other Strengths And Weaknesses: Covered by my answers to the other questions.
Other Comments Or Suggestions: 1. While overall the paper is well written, Section 6 (Related Work) is sub-par: it is a single paragraph spanning over more than a column that reads as a laundry list of papers about scaling laws. I can find better lists of this sort online. What I expect to find in a related work section is insight about how these works relate to the paper. Here specifically, I am missing a discussion about prior work attempting to explain the origin of scaling laws currently listed in lines 390 to 395 - do any of these models predict a power law tail for the distribution of difficulties of individual test data? Could they provide a parametric form for that distribution? Polo et al. (2024) likely also deserve more detailed discussion due to the pass@k experiments they describe in Section 4.5.
2. Figure 4 is missing some indication of what is the measurement error of pass@1. I am not sure what is the best way to visualize it - at the very least, you should indicate the (inverse of the) sample size used to obtain the pass@1 estimates.
Questions For Authors: Is the discretize+ML described in lines 370-379 optimal in any sense? More specifically, is it the true maximum likelihood estimator of the pass@1 distribution parameters given the observations?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate your constructive feedback. We address your points below.
### Improvements to Related Work
> Could they provide a parametric form for that distribution? Polo et al. (2024) likely also deserve more detailed discussion due to the pass@k experiments they describe in Section 4.5.
We will incorporate a more thorough discussion of Polo et al. (2024)'s Sloth and the related papers Ruan et al. (2024)'s Observational Scaling Laws, and Owen (2024)'s predictability analysis. Their latent variable regression models offer complementary perspectives to our approach. Interestingly, Polo's Section 4.5 pass@k experiments show relatively poor fits (their Figure 7), possibly because (as best as we can tell) they use the biased estimator of $\operatorname{pass_i@k}$ that Chen et al. (2021) caution against. That said, while Polo et al. (2024) don’t define a functional form for scaling, our mathematical analysis could be combined with their estimated per-problem single-attempt success probabilities $\operatorname{pass_i@1}$. It’s possible that their cross-benchmark fitting method gives better estimates of these per-problem single-attempt success probabilities, which would improve our method.
We will expand on this connection and highlight how per-problem analyses could potentially combine with such cross-benchmark approaches to further improve predictability.
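For concreteness, the unbiased estimator of Chen et al. (2021) that we reference is $\operatorname{pass@}k = 1 - \binom{n-c}{k}/\binom{n}{k}$ for $n$ samples with $c$ successes; a minimal sketch:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator of Chen et al. (2021):
    1 - C(n-c, k) / C(n, k), given n sampled attempts with c correct."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct attempt
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 10 attempts, 3 correct: pass@1 is the plain success rate 0.3
assert abs(pass_at_k(10, 3, 1) - 0.3) < 1e-12
assert pass_at_k(10, 10, 5) == 1.0
```

Using the naive plug-in $1-(1-\hat{p})^k$ instead of this estimator is the bias Chen et al. caution against.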
> What I expect to find in a related work section is insight about how these works relate to the paper. Here specifically, I am missing a discussion about prior work attempting to explain the origin of scaling laws currently listed in lines 390 to 395 - do any of these models predict a power law tail for the distribution of difficulties of individual test data?
We will revise the related work section to discuss contributions from key works beyond just listing scaling law analyses. To the best of our knowledge, no prior work has specifically attempted to explain the power law emergence with repeat sampling in the manner we propose, perhaps due to the recency of works like Brown et al. (2024) and Hughes et al. (2024).
### Quantification of Measurement Error
This is an excellent suggestion. For Large Language Monkeys, each problem had 10,000 attempts sampled, making the per-problem single-attempt success rate $\operatorname{pass_i@1}$ a Bernoulli estimator with well-understood standard error. The Best-of-N jailbreak case is more nuanced due to varying sample sizes across problems (as detailed in Appendix A).
We will add this measurement precision information to the main text and try to develop an appropriate visualization to represent the uncertainty in Figure 4.
### Optimality of Discretized ML Estimator
> Is the discretize+ML described in lines 370-379 optimal in any sense? More specifically, is it the true maximum likelihood estimator of the pass@1 distribution parameters given the observations?
Regarding whether our discretize+ML approach is optimal: we cannot make such a strong claim. While our empirical results demonstrate its effectiveness for the specific task of power law exponent estimation, a formal proof of optimality would require additional theoretical analysis.
The approach may be particularly well-suited for estimating parameters that best describe the distribution's left tail, which is crucial for our application. However, as you correctly suggest, this is a potentially complex statistical estimation question that warrants dedicated investigation. We will clarify these limitations in our revision and position this as an opportunity for future research.
### Invitation for Ways to Improve
We believe addressing these points will substantially improve the paper's clarity and impact.
We are committed to making this paper as strong as possible and would value your guidance on what specific improvements would strengthen the manuscript further. If you have any additional suggestions that would elevate your assessment to a 'Strong Accept,' we would be grateful for that feedback and would make every effort to address those points in our revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I maintain my current positive assessment. | Summary: This paper explores the scaling behavior of LLMs when inference-time compute is increased through repeated sampling. While failure rates for individual problems should decrease exponentially with multiple attempts, the authors observe that the aggregate success rate across many problems follows a power law. They resolve this paradox by demonstrating that this phenomenon arises from the distribution of single-attempt success probabilities, which is heavy-tailed that a small fraction of extremely difficult tasks disproportionately influences the overall trend. Through empirical analysis on math problem-solving and multimodal jailbreaking tasks, they confirm that while individual tasks improve exponentially, the global trend follows a power law due to the nature of the problem distribution. This work introduces a new distributional estimator that predicts the power law exponent with far less compute than traditional methods, improving efficiency by 2-4 orders of magnitude. Furthermore, they explain why some models deviate from power law scaling, attributing it to the lack of a heavy-tailed success rate distribution. Ultimately, this research enhances the understanding of inference-time scaling and provides a more accurate framework for forecasting LLM performance, offering practical implications for model evaluation and optimization.
## update after rebuttal
I appreciate the clarification and insight from the authors. My concerns have been addressed. I have updated the scores accordingly.
Claims And Evidence: ### Theoretical Justification for Heavy-Tailed Distributions:
While the authors demonstrate empirically that single-attempt success rates follow a heavy-tailed distribution, they do not provide a deeper theoretical justification for why this occurs in practice. They speculate that benchmark design and selection bias may contribute, but these points are not rigorously analyzed.
Methods And Evaluation Criteria: Yes. The paper employs a rigorous mathematical framework to establish that while individual problems exhibit exponential failure rate decay, aggregate success rates follow a power law due to the heavy-tailed distribution of single-attempt success probabilities. Empirical validation is conducted on two key tasks: mathematical problem-solving using the MATH benchmark and multimodal jailbreaking using HarmBench, both of which effectively illustrate how repeated sampling impacts model performance.
The paper also introduces a distributional estimator that predicts power law scaling exponents more efficiently than traditional regression-based methods, reducing computational costs by 2-4 orders of magnitude. The evaluation criteria, particularly the use of negative log success rate (−log(pass@k)), are well-motivated and provide clear insights into model scaling behavior. However, while the chosen benchmarks are appropriate, the study does not explore whether similar scaling laws hold across a broader range of NLP tasks such as summarization or question answering. Additionally, the underlying causes of heavy-tailed success probability distributions remain speculative.
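For context, pass@k in studies of this kind is typically computed with the standard unbiased combinatorial estimator (introduced in the Codex paper by Chen et al., 2021) rather than by averaging raw k-subsets; a minimal sketch:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimate from n sampled attempts, of which c succeeded.

    Equals 1 minus the probability that a random size-k subset of the n
    attempts contains no success.
    """
    if n - c < k:
        return 1.0  # every size-k subset must contain at least one success
    return 1.0 - comb(n - c, k) / comb(n, k)
```

The aggregate metric discussed in the review is then the negative log of the mean of this quantity over problems, -log(mean_i pass_i@k).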
Theoretical Claims: The key theoretical contributions involve proving that per-problem failure rates decay exponentially while the aggregate success rate across problems follows a power law due to the distributional properties of single-attempt success rates.
- 1. Exponential Decay of Per-Problem Failure Rates
- Claim: If each attempt at solving a problem is independent with a fixed success probability, then the failure probability over $k$ attempts follows an exponential decay.
- This confirms that the failure rate decreases exponentially as $k$ increases. The proof is valid and follows standard probability theory, particularly the Bernoulli trial model where repeated independent attempts lead to geometric or exponential-like decay.
- 2. Power Law Scaling of Aggregate Success Rates
- Claim: Despite individual problems following exponential failure rate decay, the overall success rate across problems follows a power law if the distribution of single-attempt success probabilities is heavy-tailed.
- The authors show that if the distribution $p_D(pass_i@1)$ has a power-law-like left tail near zero, the resulting negative log success rate follows a power law in $k$. They provide sufficiency and necessity theorems, proving that this scaling occurs if and only if the distribution of $pass_i@1$ has a power-law left tail. The derivation follows known statistical results about sums of exponentials forming power laws under appropriate conditions. The use of Gamma functions and integral approximations aligns with established results in scaling law analysis.
- 3. Connection Between Distributional Shape and Power Law Exponents
- Claim: The power law exponent $b$ of the aggregate scaling behavior is directly determined by the shape of the distribution of single-attempt success probabilities.
- The authors analyze different statistical distributions (e.g., Beta, Kumaraswamy, Continuous Bernoulli) and derive their impact on the resulting power law exponent. They prove that a heavy-tailed distribution of single-attempt success rates naturally leads to power law scaling. The derivations match well-known properties of compound binomial distributions, where a sum of many exponentially decaying functions with varying rates can form a power law.
Experimental Designs Or Analyses: The experimental design is well-structured to investigate the scaling behavior of LLMs under inference-time compute scaling. The authors conduct two core experiments: mathematical problem-solving using Pythia models on the MATH benchmark and Best-of-N jailbreaking on HarmBench, analyzing how success rates improve with multiple attempts. The negative log success rate (-log(pass@k)) is used as a primary metric, effectively distinguishing between exponential and power law scaling trends. Additionally, the authors fit success rate distributions using Beta and Kumaraswamy distributions, demonstrating that the heavy-tailed nature of single-attempt success probabilities explains power law behavior. Their proposed distributional estimator for power law exponents significantly reduces compute requirements by 2-4 orders of magnitude, showing clear advantages over traditional methods.
However, the study is somewhat limited in scope, as the sample sizes for both benchmarks (128 math problems, 159 jailbreaking prompts) may not fully capture model behavior across diverse tasks.
Supplementary Material: The supplementary material provides extensive support for the paper’s theoretical and empirical claims. The mathematical proofs and derivations (Appendices E.1 - E.9) rigorously establish why per-problem failure rates decrease exponentially while aggregate success rates follow a power law, given a heavy-tailed distribution of single-attempt success probabilities. These derivations are logically sound and well-explained, though the assumption that such distributions naturally arise in real-world tasks is not fully justified beyond empirical observations.
The benchmark dataset details (Appendices B, C, and D) outline the MATH benchmark (128 problems) and HarmBench dataset (159 prompts) used for evaluating math problem-solving and multimodal jailbreaking, confirming the datasets’ suitability for studying inference-time scaling. Additionally, the comparison of power law estimation methods in the supplementary material validates the authors’ proposed distributional estimator, demonstrating its efficiency and reduced compute requirements. However, further testing on more diverse NLP tasks and robustness checks would strengthen the generalizability of these findings.
Relation To Broader Scientific Literature: This work builds upon and extends key areas in the scientific literature on scaling laws, inference-time compute strategies, and power law behaviors in LLMs. It connects to work on scaling laws showing that model performance improves predictably with increased compute, data, and parameter count, later refined by results emphasizing data efficiency over sheer model size. However, while previous work primarily focused on pretraining compute scaling, this paper shifts the focus to inference-time compute scaling, showing how repeated sampling affects model success rates. The discovery that per-problem failure rates decrease exponentially while aggregate success follows a power law introduces a new perspective, linking task difficulty distributions to inference efficiency.
Additionally, the study relates to Best-of-N sampling strategies which demonstrated that generating multiple outputs and selecting the best significantly improves performance. This paper extends those insights by providing a theoretical framework explaining why repeated inference exhibits power law scaling, depending on the distribution of single-attempt success probabilities.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: ### Strengths
- The paper presents a novel theoretical framework explaining why per-problem success rates follow exponential decay, while aggregate success rates exhibit power law behavior due to heavy-tailed task difficulty distributions.
- The introduction of a distributional estimator for power law exponents is innovative and significantly improves compute efficiency, making it a practical contribution for LLM evaluation.
- The work has security implications, particularly in adversarial robustness and jailbreaking prevention, by explaining how increased attack attempts affect model vulnerabilities.
- The paper is well-organized, with a clear presentation of mathematical derivations and strong empirical validation.
### Weaknesses
- The experiments focus on MATH (128 problems) and HarmBench (159 adversarial prompts), which may not fully generalize to other NLP tasks (e.g., summarization, question answering, commonsense reasoning).
- The proposed estimator for power law exponents is validated on synthetic data and limited benchmarks, but its performance on real-world applications (e.g., machine translation, conversational AI) remains uncertain.
Other Comments Or Suggestions: N/A
Questions For Authors: The study notes that some models (e.g., LLaMA 3 8B IT) do not follow power law scaling. Do you have hypotheses on why these models deviate from the expected trend? Could this be due to model architecture, training objectives, or tokenization differences? Understanding these deviations would help clarify when power law inference-time scaling can and cannot be expected.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We appreciate your recognition of our work's strengths, particularly that our paper "presents a novel theoretical framework explaining why per-problem success rates follow exponential decay, while aggregate success rates exhibit power law behavior" and that our distributional estimator "significantly improves compute efficiency, making it a practical contribution for LLM evaluation."
### Models and Tasks Are Diverse
> However, the study is somewhat limited in scope, as the sample sizes for both benchmarks (128 math problems, 159 jailbreaking prompts) may not fully capture model behavior across diverse tasks.
We believe our study offers substantial diversity.
1. Our analysis spans leading frontier models from four major AI companies (OpenAI, Google, Anthropic, Meta), open-parameter models ranging from 17M to 12B parameters, and fundamentally different tasks (mathematical problem solving and multimodal jailbreaking). This diversity strengthens our confidence in the generalizability of our findings.
2. We agree that verification across an even wider range of models and tasks would further strengthen generalizability. We relied on existing datasets from Brown et al. (2024) and Hughes et al. (2024) because generating 10,000+ attempts per model per problem involves substantial computational costs. For perspective, an experiment with 10 models across 5 benchmarks with 100 problems each would require 50 million sampled outputs.
3. From a statistical perspective, sample sizes matter in order to make precise statistical statements, e.g., determining confidence intervals. If you feel like 128 and 159 problems with >=10k samples per problem are inadequate for specific claims, could you please tell us which claims you find inadequately justified so we can better assess?
### Real-World Applications
> The proposed estimator for power law exponents is validated on synthetic data and limited benchmarks, but its performance on real-world applications (e.g., machine translation, conversational AI) remains uncertain.
This is a valid concern. While our current validation on both synthetic and real-world data demonstrates the estimator's effectiveness, we acknowledge the need for broader validation across diverse applications. Our method's foundations in statistical theory provide confidence in its generalizability, but we agree that testing across additional domains would be valuable. We view this as an important direction for future work and are exploring partnerships to apply our estimator to machine translation, conversational AI, and other practical applications.
### Deviations from Power Law Scaling
> The study notes that some models (e.g., LLaMA 3 8B IT) do not follow power law scaling. Do you have hypotheses on why
Our theoretical framework provides a clear explanation: power law scaling emerges only when the distribution of single-attempt success probabilities has a heavy left tail, which Llama 3 8B IT lacks when tested on jailbreaking (as shown in Figure 4).
What this means practically is that Llama 3 8B IT has lower robustness against adversarial attacks than the other models Hughes et al. 2024 tested. This could stem from several factors, including its smaller size, potentially less extensive safety training, or the absence of defense mechanisms likely present in API-based models like GPT, Claude, and Gemini. Unfortunately, the proprietary nature of these other models limits our ability to investigate these hypotheses further.
### Deeper Theoretical Analysis of Why Heavy Left Tails Appear
> While the authors demonstrate empirically that single-attempt success rates follow a heavy-tailed distribution, they do not provide a deeper theoretical justification for why this occurs in practice. They speculate that benchmark design and selection bias may contribute, but these points are not rigorously analyzed.
The best answer we can think of is that power law scaling emerges in a "Goldilocks zone" of problem difficulty. For heavy left-tails to appear, we need problems that are challenging but not impossible—difficult enough to require many attempts yet still solvable. This explains why we wouldn't observe power law scaling when applying state-of-the-art models like GPT-4.5 or Claude 3.7 Sonnet to relatively simple benchmarks like GLUE (too easy), nor when applying these same models to extremely difficult tasks like Millennium Prize problems (effectively impossible). The power law phenomenon manifests precisely in this intermediate difficulty range.
It is not clear to us what a more compelling or more rigorous investigation would look like. If you have suggestions, we would greatly appreciate them!
Thank you again for your insightful feedback, which will help strengthen both this work and our future research directions. | Summary: This paper investigates the negative log of the average success rate scales as a power law with the number of attempts when LLMs make multiple independent attempts at a task (mathematical problems or jailbreaking). The authors identify a paradox that for any individual problem, success rates should improve exponentially (not as a power law) with more attempts. The paper resolves this paradox by demonstrating that power law scaling emerges from the distribution of per-problem single-attempt success probabilities. Specifically, the authors prove that a power law left tail in this distribution is necessary and sufficient for the emergence of aggregate power law scaling. The paper provides a theoretical framework that explains previously observed deviations from power law scaling and introduces a more sample-efficient method for estimating power law exponents.
Claims And Evidence: - The claim that individual problems scale exponentially is supported by mathematical derivation in Section 2 and empirical evidence in Figure 3, showing negative log success rates falling exponentially for each problem.
- The necessary and sufficient conditions for power-law scaling are rigorously established through formal mathematical proofs (Theorems 3.1 and 3.2) and validated empirically.
- The explanation of why Llama 3 8B IT deviates from power law scaling (because its success distribution lacks the required heavy left tail) is empirically validated.
Methods And Evaluation Criteria: - The authors leverage existing datasets from prior work (Brown et al. 2024, Hughes et al. 2024), ensuring comparability with published results.
- The distributional models (Beta, Kumaraswamy, etc.) used to characterize success probability distributions are appropriate given the bounded nature of probabilities.
- The evaluation of the distributional estimator includes both agreement with least-squares on real data (Figure 6) and superior performance on synthetic data with known ground truth (Figure 7), providing a comprehensive analysis.
[1] Large Language Monkeys: Scaling Inference Compute with Repeated Sampling, Brown et al., 2024
[2] Best-of-N Jailbreaking, Hughes et al., 2024
Theoretical Claims: I verified the key theoretical claims
- Theorem 3.1 (sufficiency): The proof correctly shows that power-law behavior near zero in the distribution yields aggregate power-law scaling.
- Theorem 3.2 (necessity): The proof correctly establishes that aggregate power law scaling requires a power-law left tail in the distribution.
Experimental Designs Or Analyses: - The authors appropriately visualize both per-problem exponential scaling and aggregate power law scaling.
- The distribution fitting and parameter estimation methods are appropriate.
- The backtesting approach for comparing estimators is rigorous, showing the distributional estimator achieves lower relative error.
- The authors appropriately account for sampling limitations and edge cases (problems with extremely low success probabilities).
Supplementary Material: No Supplementary Material provided.
Relation To Broader Scientific Literature: - This work extends recent work by Brown et al. (2024) on "Large Language Monkeys" and Hughes et al. (2024) on "Best-of-N Jailbreaking" by providing a theoretical explanation for their empirical findings.
- It connects to the broader literature on scaling laws in neural networks (Kaplan et al., 2020; Hoffmann et al., 2022) by revealing distributional foundations for observed scaling patterns.
[1] Training Compute-Optimal Large Language Models, Hoffmann et al., 2022
[2] Large Language Monkeys: Scaling Inference Compute with Repeated Sampling, Brown et al., 2024
[3] Best-of-N Jailbreaking, Hughes et al., 2024
[4] Scaling Laws for Neural Language Models, Kaplan et al., 2020
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: **Weaknesses:**
- While the paper explains how power law scaling emerges, it offers limited insight into why single-attempt success rates have heavy-tailed distributions in the first place. The brief discussion of benchmark design and selection bias could be expanded.
- The empirical analyses are limited to specific model families and benchmarks. Verification across a broader range of models and tasks would strengthen generalizability.
Other Comments Or Suggestions: - Typo in line 255: "Kuamraswamy" -> "Kumaraswamy"
Questions For Authors: 1. Your explanation focuses on the statistical properties of problem distributions that lead to power laws. Could you elaborate on potential causal factors that might create these heavy-tailed distributions in natural language tasks?
2. In Section 7, you speculate about connections to pretraining compute scaling laws. Have you found any empirical evidence that supports the "dark matter" hypothesis for neural scaling laws?
Code Of Conduct: Affirmed.
Overall Recommendation: 5 | Rebuttal 1:
Rebuttal: Thank you for your thorough and thoughtful review of our work. We will correct the typo you identified in line 255, changing "Kuamraswamy" to "Kumaraswamy." We address other points below:
### Origins of Heavy Left Tailed Distributions
> The brief discussion of benchmark design and selection bias could be expanded.
> Your explanation focuses on the statistical properties of problem distributions that lead to power laws. Could you elaborate on potential causal factors that might create these heavy-tailed distributions in natural language tasks?
If our paper is accepted, we will use the additional page in our camera-ready version to expand on benchmark design and selection bias as factors leading to heavy-tailed distributions. Your question about causal factors creating these distributions touches on an important insight: power law scaling emerges in a "Goldilocks zone" of problem difficulty. For heavy left-tails to appear, we need problems that are challenging but not impossible—difficult enough to require many attempts yet still solvable. This explains why we wouldn't observe power law scaling when applying state-of-the-art models like GPT-4.5 or Claude 3.7 Sonnet to relatively simple benchmarks like GLUE (too easy), nor when applying these same models to extremely difficult tasks like Millennium Prize problems (effectively impossible). The power law phenomenon manifests precisely in this intermediate difficulty range.
If you can think of a more rigorous way to investigate causal factors, we would welcome your suggestions!
> The empirical analyses are limited to specific model families and benchmarks. Verification across a broader range of models and tasks would strengthen generalizability.
We agree that verification across a wider range of models and tasks would strengthen generalizability. We relied on existing datasets from Brown et al. (2024) and Hughes et al. (2024) because generating 10,000+ attempts per model per problem involves substantial computational costs, e.g., drawing 10k attempts from 10 models across 5 benchmarks with 100 problems each would require 50 million samples. While computational constraints limited the scope of our current study, we view this as an important direction for future work and are exploring more efficient experimental designs to validate our theoretical framework more broadly.
### Dark Matter of Neural Scaling Laws
Regarding your question about the "dark matter" of neural scaling laws, this is indeed the focus of our ongoing follow-up work! The experimental approach involves training numerous small models on scaling ladders and running scaling predictions in reverse to identify deviations from expected power law functional fits. This allows us to fit more complex functional forms and better understand deviations. We're particularly excited about this direction because experimenting with small models enables cheaper and faster iteration; at present, however, this is of limited use because extremely small models are poorly predictive of massive models. If we can figure out the appropriate scaling corrections, this would accelerate experimentation with larger models.
Thank you again for your insightful comments and strong support for our work. | null | null | null | null | null | null |
NETS: A Non-equilibrium Transport Sampler | Accept (poster) | Summary: The authors propose a method for sampling from unnormalized probability distributions. The method builds on diffusion-based sampling, where a learnable drift is added to the stochastic differential equation. The authors propose a PINN objective which allows for off-policy optimization and does not require differentiating through the simulations. Moreover, the authors additionally propose an objective that is based on action matching. The methods are tested on a variety of sampling problems.
Claims And Evidence: Claims are supported by theory. The reviewer has no concerns.
Methods And Evaluation Criteria: The paper uses a variety of evaluation criteria such as ESS, Wasserstein distance, or MMD, which are appropriate for evaluating sampling methods. Moreover, the paper uses a variety of baselines, most of which are quite recent, which is also good.
Theoretical Claims: The reviewer skimmed the Proofs in Appendix A which appear to be correct.
Experimental Designs Or Analyses: The comparison between the different methods seems highly unfair. The authors use knowledge of the target density that significantly reduces the complexity of the problems. For instance, the method linearly interpolates the means of the 40 mode GMM with the prior potential, which effectively removes the difficulty of exploration. Moreover, similar handcrafted potentials are used for other targets as well.
Are the authors using the same interpolation scheme for the baselines like FAB, or CMCD?
Supplementary Material: The reviewer reviewed parts A, F, H.
Relation To Broader Scientific Literature: It is not quite clear to the reviewer what the key contribution/novelty of this paper is. Sampling with PINNs was already done in [1,2]. The connection with Jarzynski’s equality was recently shown in [4] as noted by the authors. Combining diffusion-based sampling with SMC was also recently proposed in [4]. Moreover, off-policy learning with diffusion samplers is possible using the log-variance loss, see [5].
[1] Shi, Zhekun, et al. "Diffusion-PINN Sampler." arXiv preprint arXiv:2410.15336 (2024).
[2] Sun, Jingtong, et al. "Dynamical measure transport and neural PDE solvers for sampling." arXiv preprint arXiv:2407.07873 (2024).
[3] Vargas, Francisco, et al. "Transport meets variational inference: Controlled monte carlo diffusions." arXiv preprint arXiv:2307.01050 (2023).
[4] Chen, Junhua, et al. "Sequential controlled langevin diffusions." ICLR 25
[5] Richter, Lorenz, and Julius Berner. "Improved sampling via learned diffusions." arXiv preprint arXiv:2307.01198 (2023).
Essential References Not Discussed: The authors did not cite related work that uses PINNs to sample from unnormalized densities, see [1,2]. Moreover, one of the seminal papers [3] on diffusion-based sampling was not cited.
Another paper that might be of interest for the authors is [4] which also builds on dynamic measure transport with resampling schemes
[1] Shi, Zhekun, et al. "Diffusion-PINN Sampler." arXiv preprint arXiv:2410.15336 (2024).
[2] Sun, Jingtong, et al. "Dynamical measure transport and neural PDE solvers for sampling." arXiv preprint arXiv:2407.07873 (2024).
[3] Richter, Lorenz, and Julius Berner. "Improved sampling via learned diffusions." arXiv preprint arXiv:2307.01198 (2023).
[4] Chen, Junhua, et al. "Sequential controlled langevin diffusions." ICLR 25
Other Strengths And Weaknesses: Strengths:
- The paper proposes an off-policy objective which has potential to increase sample efficiency
- The method avoids having to backpropagate through the simulation
Weaknesses:
- See Relation To Broader Scientific Literature and Experimental Designs Or Analyses
Other Comments Or Suggestions: Many of the equations are stated without context, which makes the paper difficult to read.
Questions For Authors: - What is the difference between the proposed PINN objective and other related PINN based sampling methods, see e.g. [1,2]. The latter in particular also considers a prescribed density evolution.
- What insights does top row, left of Figure 3. give?
- The authors use CMCD-LV and CMCD-KL in Table 1. I assume this refers to the log-variance loss and the KL loss (the former is not cited). What is the benefit of the proposed method compared to CMCD-LV?
[1] Shi, Zhekun, et al. "Diffusion-PINN Sampler." arXiv preprint arXiv:2410.15336 (2024).
[2] Sun, Jingtong, et al. "Dynamical measure transport and neural PDE solvers for sampling." arXiv preprint arXiv:2407.07873 (2024).
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading of our paper and positive feedback. We are glad that you found the work well-written, theoretically sound, and that the numerical experiments demonstrate convincing evidence for our method's effectiveness. Below we address your comments and suggestions:
**Experimental Designs Or Analyses:** The potential $U_t$ design is a feature we can exploit, similar to FAB or CMCD. In our drive link (https://tinyurl.com/netsicml) we compare with CMCD using identical interpolation, revealing CMCD struggles with this path: Using the same number of sampling steps as us, CMCD achieves only 15% ESS. To perform these experiments we worked with the authors to best benchmark their codebase.
**Relation To Broader Scientific Literature** and **Essential References Not Discussed:** Thanks for pointing us to the papers by Shi, Zhekun, et al.; Sun, Jingtong, et al.; and Chen, Junhua, et al. We are happy to add a citation to these works, but we wish to note that they fall within the ICML policy on concurrent work (https://icml.cc/Conferences/2025/ReviewerInstructions).
We would also like to stress that, while variants of the PINN loss have appeared in the literature, our work makes the following new contributions:
- A novel connection between the PINN loss and the Jarzynski weighting factors and annealed Langevin dynamics. This result fits naturally with the physical interpretation of the Jarzynski equality: the PINN loss controls the variance of the Jarzynski weights (the dissipation) in the process connecting $\rho_0$ to $\rho_1$.
- Show that we can better approach the $\epsilon_t \to \infty$ limit in practice. Note that for our setup, perfect sampling is achieved in this limit, whether the learned transport is perfect or not. While this limit cannot be reached in practice without transport (as it would require taking astronomically large values of $\epsilon_t$ in general), we show that, with some learned transport added, even moderate values of $\epsilon_t$ can improve the sampling dramatically. This feature can be exploited after training as an explicit knob for tuning performance vs cost. This has not been recognized in other ML sampling literature nor in Vargas et al.
- Show that the PINN loss directly controls the KL-divergence between the sampled and target distribution.
- Directly characterize the minimizer of the action matching loss, which is also not known from previous work on it.
Our method connects with CMCD, but we derive our results through Fokker-Planck manipulations rather than the Girsanov theorem.
We'll add references to Richter and Berner's foundational work.
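The Jarzynski weighting discussed above can be illustrated in one dimension with plain annealed Langevin dynamics and no learned drift (NETS adds a learned transport precisely to reduce the variance of these weights). The potential interpolation, step sizes, and target below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps, h = 5000, 200, 0.05          # particles, anneal steps, Langevin step size

# U_t(x) = (1 - t) x^2/2 + t (x - 2)^2/2: anneal N(0, 1) into N(2, 1).
# Then grad U_t(x) = x - 2t and U_{t} - U_{t-dt} = 2 dt (1 - x).
x = rng.standard_normal(n)             # exact samples from rho_0 = N(0, 1)
logw = np.zeros(n)
for s in range(steps):
    dt = 1.0 / steps
    t = (s + 1) * dt
    logw += -2.0 * dt * (1.0 - x)      # Jarzynski increment: -(U_t - U_{t-dt})(x)
    # one unadjusted Langevin step at the new annealing level t
    x += -h * (x - 2.0 * t) + np.sqrt(2.0 * h) * rng.standard_normal(n)

w = np.exp(logw - logw.max()); w /= w.sum()
ess = 1.0 / (n * np.sum(w ** 2))       # normalized effective sample size in (0, 1]
unweighted, weighted = x.mean(), np.sum(w * x)   # target mean is 2
```

Without the weights, the particles lag behind the moving target (biased mean); the Jarzynski reweighting corrects this bias at the cost of a reduced effective sample size, which is the trade-off a learned transport is meant to improve.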
**Other comments and suggestions:** Could you please specify which equations are stated without context? We will happily try to clarify in editing.
**Questions for authors:**
-*Difference between the proposed PINN objective and other related PINN based sampling methods:* See our reply above about the key new insights that our work provides.
-*Insight of top row, left of Figure 3:* The top row (left) shows example configurations of samples as we approach the critical point, and the bottom row shows samples past the critical point, where they are fully magnetized. Together they illustrate the contrast between the two regimes.
-*Benefit of the proposed method compared to CMCD-LV:* We have included in the "additional experiments" section below results on CMCD-LV, which we worked with the CMCD authors to set up. Our results show that, unlike our approach, CMCD-LV exhibits instabilities if the number of sampling steps is too small in training: while the CMCD-LV loss is "off-policy" like our PINN loss, it still requires a trajectory on which to perform the optimization, and the generation of such an $n_{step}$ trajectory requires O($n_{step}$ $\times$ network size) memory. If the trajectory generation is performed over too few steps the CMCD-LV loss becomes unstable. After discussing with the CMCD authors, this is an issue generally when $n_{step} < 256$.
**Additional experiments:**
In the anonymous drive link https://tinyurl.com/netsicml:
- We provide additional comparison between NETS and CMCD on the mean-interpolating time-dependent potential, as asked by another reviewer. We see that NETS performs well regardless of number of discretization steps, but the log-variance loss struggles for small steps and only performs well when $n_{step} = 256$ or more. We worked with the CMCD authors to implement their code and otherwise use their hyperparameters.
- We have also set up a test on the Lennard Jones potential, and have included preliminary results. We are continuing to refine these experiments and will incorporate them in the final version. | Summary: This paper introduces an algorithm for sampling from unnormalised probability distributions, through non-equilibrium sampling approaches. When computing expectations with respect to the final-time marginal distribution, classical approaches to this would leverage AIS or equivalently Jarzynski / Crooks. This approach, instead introduces a correction term, which corrects the discrepancy between the marginal distribution of the non-stationary 'Langevin' dynamics and the true distribution. This discrepancy can be expressed as a solution to a PDE, which is solved using a PINN-type loss.
The authors provide advice on tuning, suggestions for reducing cost of computations, and demonstrate the method on a Gaussian mixture model, Neal’s Funnel and Mixture of Student-T distributions, and statistical lattice field theory models.
Claims And Evidence: The theoretical claims are supported by proofs in the Supplementary material. The numerical experiments demonstrate the accuracy of the samples with respect to MMD and W2 distance, compared against a number of relevant comparable ML approaches to sampling from unnormalised distributions, providing convincing evidence that this method achieves its claimed objective.
Methods And Evaluation Criteria: The proposed approach is sensible, and scales well (compared to competing approaches) in high dimensions. The evaluation criteria are both sensible and comprehensive.
Theoretical Claims: There are several theoretical claims made in the paper:
1. The Jarzynski inequality.
2. The fact that the corrected SDE (15) has exact marginals (prop 2.3)
3. A weighted version, recovering [ Vaikuntanathan & Jarzynski (2008) ] prop 2.4
4. Derivation of the PINN objective (2.5)
5. KL controlled by PINN loss (2.6).
6. Discretised version of Vaikuntanathan & Jarzynski (2008) (B1)
7. Connection with Feynman Kac (C1) for a specific form of b (gradient form).
8. D onwards - generalisations
All the proofs are sensible.
Experimental Designs Or Analyses: There are no experimental designs in this paper.
Supplementary Material: Yes - everything up to appendix E.
Relation To Broader Scientific Literature: This contribution sits in a wider body of literature which seeks to correct dynamics by introducing KL-optimal control, building on ideas such as the Follmer drift (e.g. [Huang et al, Convergence Analysis of Schrodinger-Follmer Sampler without Convexity, 2021], [Tzen and Raginski, 2019], [Vargas et al, Bayesian Learning via Neural Schrodinger-Follmer Flows, 2021]), and more recently [Vargas et al, Transport meets Variational Inference: Controlled Monte Carlo Diffusions, 2024]. The task of learning the vector field to correct the drift, as is done in this paper, has been approached in various manners, e.g. [Reich, A dynamical systems framework for intermittent data assimilation, 2011], [Heng, Jeremy, Arnaud Doucet, and Yvo Pokern. Gibbs flow for approximate transport with applications to Bayesian computation. 2021] and, earlier, [Vaikuntanathan & Jarzynski (2008)].
The approach has strong connections with CMCD [Vargas et al, 2024], which leverages a similar PINN loss (for an associated HJB equation) to this paper. This is studied in detail in the supplementary information.
Essential References Not Discussed: None.
Other Strengths And Weaknesses: Strengths: The paper is well written, and the main arguments of the paper are straightforward to follow.
Weaknesses: While this is a good paper, the numerical experiments remain a bit lacking. It would have been nice to see a wider range of potential interactions.
Other Comments Or Suggestions: Prop 2.3: Then $\rho_t(x)$ is the PDF of $X_t$, not $X_T$?
Questions For Authors: No further questions
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback. We are glad that you found the contribution novel and the theory sound and thorough. Below we address your comments and suggestions and supply more information on experimental results:
**Additional experiments**:
In the anonymous drive link https://tinyurl.com/netsicml:
- We provide additional comparison between NETS and CMCD on the mean-interpolating time-dependent potential, as asked by another reviewer. We see that NETS performs well regardless of number of discretization steps, but the log-variance loss struggles for small steps and only performs well when $n_{step} = 256$ or more. We worked with the CMCD authors to implement their code and otherwise use their hyperparameters.
- We have also set up a test on the Lennard Jones potential, and have included preliminary results. We are continuing to refine these experiments and will incorporate them in the final version.
---
Rebuttal Comment 1.1:
Comment: Many thanks for your comments, I am happy with the proposed updates. I will keep my score as-is. | Summary: The authors propose NETS, a Non-Equilibrium Transport Sampler that interpolates between two unnormalized densities $\rho_0$ and $\rho_1$ based on a user-defined choice of interpolant. A key contribution of the proposed approach is to introduce learning in the dynamics of continuous time annealed importance sampling by leveraging a learned drift which aims to minimize the variance of the importance weights, which can be combined with Sequential Monte Carlo by re-sampling during the simulation procedure based on the evolved importance weights. The paper introduces a physics informed neural networks (PINN) based loss to learn the drift function, and show that it additionally provides bounds on the KL divergence. Experiments are conducted on multiple different unnormalized densities, ranging from low and high dimensional Gaussian mixture models, Neal's funnel and mixture of Student-t distributions, as well as $\phi^4$ lattice theory and highlights the superiority of NETS over some of the established baselines.
Claims And Evidence: The fundamental claims made by the work is proposing a framework for sampling according to an unnormalized density which uses learned methods and can be trained without back-propagating through the dynamics. The loss considered for this training is well motivated and the claims are supported by mostly clear theoretical evidence, with some concerns that I have raised in sections below. Broadly, I think the authors have provided good evidence (mostly theoretical) regarding the claims that they make and I outline more specific questions and concerns regarding the evidence provided in the later sections.
Methods And Evaluation Criteria: Yes, the methods and evaluation criteria considered in this work make sense. I do, however, have a few questions regarding the evaluation criteria and the benchmarks considered.
- In the proposed approach, is it possible to approximate the density $\hat{\rho}_1(x)$ corresponding to any sample $x$? In particular, I am wondering if it would be possible to additionally evaluate the proposed method to approximate the partition function $Z_1$?
- Is there a reason why the authors did not evaluate their models on the Lennard-Jones potential or more complex problems to better benchmark against related work?
- Why were some of the baselines (eg. iDEM, CMCD-LV) dropped for Table 2?
- Is there a reason why the authors do not consider SMC or HMC as one of the baselines in Tables 1,2?
- The authors should also consider evaluating against CMCD with the same interpolation of potentials as it is imperative to perform this fair comparison when evaluating the benefits of NETS.
Theoretical Claims: I went over some of the theoretical claims made in the paper as well as some of the proofs outlined in Appendix A. While I enjoyed reading the theory outlined in the paper, there were some steps that were not obvious to me
- It was not clear as to how the authors jump from equation (6) to equation (7). In the latter, when talking about gradients and divergences, is it w.r.t the augmented state or the original state? It would be beneficial if the authors could actually sketch out the proof underlying this step.
- Assuming we have arrived at equation (7), how do the authors convert the partial differential equation to the coupled system of differential equation considered in Proposition 2.1? While the subsequent details and proof regarding expectation w.r.t the measure $\rho_t$ is clear, where and how does the couple dynamics come from the partial differential equation itself was not clear. I would request the authors to also put more focus and background into this step.
- Why is learning $\partial_t F_t$ a good idea? Is it because equation (49) implies that the objective is minimized if and only if the learned $F_t$ perfectly approximates the true free energy? But how does equation (48) lead to equation (49)? Are there any regularity conditions needed that $\rho_t$ vanishes faster than $\hat{b}_t$?
- In Section 2.5, can the authors talk about uniqueness of the solution that can be obtained? Why can there be more than one minimizer $(b, F)$?
- In Appendix A, within equation (35) when the authors use integration by parts, how do they ensure that the term at the boundary vanishes?
Experimental Designs Or Analyses: The experimental design and analyses conducted in this work is sound, except for one potential concern. The authors highlight that PIS did not converge on the Funnel task, while [1] shows that it does converge for this task. Could the authors clarify on what was the issue they came across?
[1] Sendera, Marcin, et al. "Improved off-policy training of diffusion samplers." Advances in Neural Information Processing Systems 37 (2024): 81016-81045.
Supplementary Material: I went over Appendix A in detail and then briefly skimmed over Appendices C-E before finally going over the experimental details in Appendices H and I.
Relation To Broader Scientific Literature: In my opinion, this is an interesting work and has key contributions to the scientific literature. In particular, it combines ideas from diffusion models and annealed importance sampling / sequential monte carlo and pushes the frontier of learned samplers.
Essential References Not Discussed: The authors cover most of the relevant literature off the top of my head.
Other Strengths And Weaknesses: **Strengths**
- The work is well motivated and tackles a challenging and very relevant problem. The combination of annealed importance sampling and diffusion style transport methods is novel and a good contribution to the field.
- The experiments highlight that the method seems to work well when compared to other learned samplers on the suite of tasks considered.
- It also provides some nice properties in the sense that one can compute expectations w.r.t the target measure through the use of importance weights.
**Weaknesses**
- I think the writing needs a bit more work. The way I understood it is that the authors consider defining an interpolation $\rho_t$ and then assume a drift $b$ which is trained with a loss directly derived from the continuity equation, i.e. $b$ is learned to satisfy the continuity equation for the marginals $\rho_t$. This learned drift is then artificially added to the Fokker Planck equation and then relevant terms are grouped together to define the drift, diffusion and importance sampling weight updates part of the coupled dynamics. In my opinion, this story could have been more clearly explained so that it makes it easier for a broader community to understand the work.
- The authors should consider a few more complex experimental settings, in particular the Lennard-Jones potential would be a good candidate.
- The authors mention that they can estimate the drift off-policy as well but they do not provide any experimentation with it. I think this would be a good ablation to add, and can be shown on perhaps a relatively easier task for some off-policy path.
Other Comments Or Suggestions: - Did the authors forget to include a concluding section or is this by design? Currently, the end of the paper feels quite abrupt.
Questions For Authors: I just have one additional question, apart from the ones raised already.
- Within the related work section on augmenting sampling with learning, the authors talk about minimizing the KL divergence between model and target as well as stochastic optimal control. Aren't these two approaches the same, where the latter minimizes KL divergence between the considered path and a reference path measure through application of Girsanov's theorem?
Apart from this, I think the paper is well motivated and well positioned, so I will be happy to raise my score if the authors could clarify the questions that I have raised.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback, and we are happy to hear you found the work theoretically sound and novel. Below, we try to address all your questions and provide some info on new additional experiments. We itemize our response according to the headings that appear in your review:
**Methods And Evaluation Criteria:**
- *estimating Z_1:* Yes, it's straightforward to construct an unbiased estimate if we know $Z_0$: $Z_1 = Z_0 \cdot \mathbb{E}[e^{A_1}]$. This follows from equation (21) at $t=1$, which can be estimated empirically over trajectories from the coupled SDE/ODE (19,20).
- *complexity of experiments:* We have included preliminary results on a Lennard-Jones system in an "Additional Experiments" section discussed below. Note that the $\phi^4$ models studied here are arguably more complex than the LJ-13 system: these $\phi^4$ models are either 256 or 400 dimensional problems, much bigger even than the LJ55 system, and are known as a proxy for one of the hardest sampling problems in research -- studying the strong force in lattice field theory. In particular, the $\phi^4$ models, like these lattice systems, suffer from what is known as critical slowing down [1], in which MCMC algorithmic efficiency dramatically decays as one approaches the critical parameters. It has also been a target for neural samplers for some time [2].
- *Dropped Baselines for iDEM, CMCD:* We only included baselines where we could either quote authors' results or work with them to verify implementation. We did the latter with CMCD on Funnel and GMM, but couldn't for MoS distribution.
- *SMC/HMC*: SMC can be incorporated within our approach (this is what we refer to as "resampling"), but on its own isn't efficient enough for our sampling tasks. We tried HMC-type sampling (Inertial NETs, Appendix D.1) but saw little difference.
- *CMCD interpolation path:* We thank the reviewer for pointing this out. We have included in the "additional experiments" section below results on CMCD on the path interpolating the means, which we worked with the CMCD authors to set up. With this path as well, CMCD-LV exhibits instabilities if the number of sampling steps is too small in training: while the CMCD-LV loss is "off-policy", it still requires a trajectory on which to perform the optimization, and the generation of such an $n_{step}$ trajectory requires O($n_{step}$ $\times$ network size) memory. If the trajectory generation is performed over too few steps the CMCD-LV loss becomes unstable. After discussing with the CMCD authors, this is an issue generally when $n_{step} < 256$. This will become apparent in the "additional experiments" section below.
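As an aside, the identity $Z_1 = Z_0 \cdot \mathbb{E}[e^{A_1}]$ above can be sketched in a few lines. This is a toy under stated assumptions, not the authors' code: with a Gaussian base and target and a single step, the accumulated log-weight collapses to $A_1 = U_0(x) - U_1(x)$ (plain importance sampling), and the Gaussians are chosen so that $Z_1$ is known in closed form for comparison.

```python
import math, random

# Toy sketch (not the authors' code): Z_1 = Z_0 * E[exp(A_1)] in the
# degenerate single-step case, where A_1 = U_0(x) - U_1(x) and the scheme
# reduces to plain importance sampling. The Gaussian base and target are
# assumptions chosen so that Z_1 is known exactly.
random.seed(0)

s = 0.5                                  # target std: rho_1 ∝ exp(-x^2/(2 s^2))
U0 = lambda x: 0.5 * x * x               # base: rho_0 ∝ exp(-U0), Z_0 = sqrt(2*pi)
U1 = lambda x: 0.5 * x * x / (s * s)     # target potential

xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]   # exact rho_0 samples
weights = [math.exp(U0(x) - U1(x)) for x in xs]         # e^{A_1} per sample

Z0 = math.sqrt(2 * math.pi)
Z1_hat = Z0 * sum(weights) / len(weights)   # unbiased estimate of Z_1
Z1_true = math.sqrt(2 * math.pi) * s        # closed form for the Gaussian target
print(Z1_hat, Z1_true)                      # agree to within Monte Carlo error
```

In the full method the log-weights $A_1$ would instead be accumulated along the coupled SDE/ODE trajectories; the final estimator is the same mean of $e^{A_1}$.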
**Theoretical Claims:**
- *eq (6) to (7), and derivation of (10,11):* We'll add these derivations in the appendix, as they aren't commonplace in the ML literature.
- *Why learn $\partial_t F$?:* The loss function is a PINN loss, which means it is minimized at 0 (i.e., when the physics equation is solved). $\partial_t F_t$ is a free parameter in this equation and must be fit if one wants to learn off policy. We do indeed use that $\hat b_t \rho_t$ vanishes at infinity, which is a consequence of the conservation of probability, which prohibits the existence of a probability current at infinity.
- *uniqueness of PINN minimizer:* The minimizer is unique *if and only if* the velocity field is learned in gradient form $\nabla \hat \phi_t = \hat b_t$.
- *integration by parts:* This is again a consequence of conservation of probability requiring no probability flux at infinity.
**Experimental Designs:**
- *PIS not included:* The PIS authors confirmed to us that they used $\sigma^2 = 3$ (not $\sigma^2=9$ as stated in their paper). This makes the problem much easier, so we omitted this experiment.
**Additional experiments**:
In the anonymous drive link https://tinyurl.com/netsicml:
- We provide a NETS vs CMCD comparison on mean-interpolating potentials which shows NETS performs well regardless of discretization steps, while log-variance struggles for small steps, improving only when $n_{step} ≥ 256$.
- We've included preliminary Lennard Jones results and will incorporate refinements in the final version.
We'll make your suggested expository edits when allowed to edit the paper. Thank you for your insights - if we've clarified everything, we'd appreciate an increase in your rating.
[1] Alpha Collaboration, "Critical Slowing Down and Error Analysis in Lattice QCD simulations," Nuclear Physics B, 2010.
[2] Albergo, Kanwar, Shanahan, "Flow-based Generative Models for MCMC in lattice field theory," Physical Review D, 2019.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for providing a detailed response. I am happy with the answers, and am looking forward to the updated manuscript with more details! I have also updated my rating accordingly. | Summary: This paper investigates sampling from a target distribution within the annealed importance sampling (AIS) framework. Inspired by Jarzynski equality, a continuous-time version of AIS can be formulated using an SDE for samples and an ODE for weights. Building on this, the paper proposes NETS by introducing an additional drift function into AIS, and propose learning this drift function using either a PINN loss or an action matching loss. Experimental results demonstrate the effectiveness of the proposed algorithm.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes.
Experimental Designs Or Analyses: Yes.
Supplementary Material: Yes, I reviewed some of the proofs in the appendix.
Relation To Broader Scientific Literature: This paper introduces a novel sampling algorithm inspired by concepts from nonequilibrium physics, aimed at enabling efficient sampling from unnormalized distributions.
Essential References Not Discussed: Yes. One related reference is missing: Junhua Chen et al. Sequential Controlled Langevin Diffusions. ICLR, 2025.
Other Strengths And Weaknesses: Strengths:
- The paper is generally well-structured and theoretically sound.
- A novel objective based on PINN or action matching is proposed to learn an additional drift function in the context of AIS.
Weaknesses:
- The main proposition of the paper (Proposition 2.4) has already been presented in Vargas et al. (2024), although the current derivations are arguably simpler to some extent.
- Please see the questions below for the authors.
---
Reference:
Francisco Vargas et al. Transport meets Variational Inference: Controlled Monte Carlo Diffusions. ICLR, 2024.
Jingtong Sun et al. Dynamical Measure Transport and Neural PDE Solvers for Sampling. Arxiv, 2024.
Other Comments Or Suggestions: - 2.2. Non-equilibrium sampling with importance weights: I didn't initially see how the right-hand side of Eq. (6) could be related to weights until the introduction of Jarzynski equality. The readability could be improved.
- Line 669: the term $\hat{b}_t$ is missing in Eq.(40). Should it be $\nabla U_t \cdot \hat{b}_t$? It seems that this omission also occurs in other equations in the appendix. Please check it.
Questions For Authors: - I have the following questions when comparing NETS with CMCD:
- Regarding the sentence 'requiring either backpropagation through the SDE or computation with a numerically unstable reference measure on a fixed grid' (Line 73), I guess that the first drawback can be addressed by off-policy divergences, e.g., log-variance. Could you clarify what is meant by 'computation with a numerically unstable reference measure on a fixed grid'? Should the grids be tuned to avoid instability?
- Regarding the sentence 'Moreover, our optimize-then-discretize framework allows for post-training adaptation of both step size and time-dependent diffusion, providing tunable parameters to enhance performance.' (Line 79), I am curious if this approach can be applied to other diffusion-based samplers, or what might prevent other samplers from doing the same?
- Aside from the fact that the PINN loss can be trained in an off-policy manner, are there any other advantages over KL divergence optimization? Additionally, have you considered experimenting with CMCD-LV, which can also be trained in an off-policy manner?
- Missing reference: Junhua Chen et al. Sequential Controlled Langevin Diffusions. ICLR, 2025, where CMCD combined with SMC (AIS with resampling) is proposed. I am curious about the comparison in terms of both concept and experiment.
- Figure 4: Taking $\epsilon_t \rightarrow \infty$ shows improved performance, although with more discretization steps. This should be the motivation for introducing an additional drift function - it may be imperfect for most of the time but still helpful, thus leading to fewer discretization steps?
- Line 614 - 628: Is the definition of $g_t$ special? Usually, we don't need $e^a$ to define the marginal distribution. Is the motivation here to link it to the weight function? If so, does Eq.(36) describe the evolution of a *weighted* distribution?
---
Update after rebuttal: I have raised my score to 4.
---
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback, and we are happy to hear you found the work theoretically sound and novel. Below, we try to address all your questions and provide some info on new additional experiments. We itemize our response according to the headings that appear in your review:
**References**
- Thanks for pointing us to this paper. The SMC that they propose is also something that we propose in this work (when we may choose to do 'resampling'). We are happy to add a citation to this work, but we wish to note that this falls within the ICML policy on concurrent work: https://icml.cc/Conferences/2025/ReviewerInstructions.
**Weaknesses**
- Proposition 2.4 in general is not new, though this derivation is. Please note that it is also not new in Vargas et al. The essential machinery of this proposition has been known for 20 years in the statistical physics community (Jarzynski 1999, Vaikuntanathan & Jarzynski 2008), and neither we nor Vargas et al. are claiming to have discovered this equality. However, we both provide different derivations of it that make it more interpretable for sampling. Vargas et al. prove this result through the use of Girsanov, and here we provide a proof of it through simple manipulations of the Fokker-Planck equation and other PDEs.
**Other Comments Or Suggestions:**
- We are happy to improve the readability of equation 6 when we can edit the text if the paper is accepted.
- thanks for catching the typo in Eq (40).
**Questions:**
- *backprop thru sde and numerical instability:* The first drawback cannot be entirely addressed by the CMCD log-variance loss. It is still a loss function that must use samples along a *trajectory* as input, and the generation of such a trajectory of length $n_{step}$ requires $O$($n_{step}$ $\times$ network size) memory. By numerical instability, we mean that the CMCD LV loss becomes unstable if the trajectory generation is performed over too few steps. After discussing with the CMCD authors, this is an issue generally when $n_{step} < 256$. This will become apparent in the "additional experiments" section below. Let us also stress that, in contrast, our PINN loss can be evaluated pointwise in space and time.
- *Optimize then discretize:* Thanks for the observation. We imagine most other diffusion models could do this as well, but the main obstacle would be whether or not their method allows for an adaptable diffusion coefficient *post-training* (like our method). If not, increasing the diffusion would require decreasing the time-step for trajectory generation during training.
- *PINN vs KL*: The benefits of the off-policy nature of the PINN loss manifest themselves in two ways: 1) the PINN loss is valid with respect to any sampling distribution, and 2) if you want to use simulated samples from your model, you do not need to backpropagate through their generation. The KL loss explicitly needs samples from the model distribution, and requires a backprop through the solve. As mentioned earlier, the log-var loss still needs to generate trajectories but has a smaller memory footprint. We did test the log-variance more, too. See: "additional experiments".
- *Paper by Chen et al:* Thanks again for pointing us to this. We are happy to reference it, though again per ICML policy it is out of the scope of this submission to benchmark it.
- *$\epsilon \rightarrow \infty$ limit:* Yes, that is exactly the motivation of the additional drift function. For our setup, perfect sampling is achieved in this limit, whether the learned transport is perfect or not. While this limit cannot be reached in practice without transport (as it would require taking astronomically large value of $\epsilon_t$ in general), we show that, with some learned transport added, even moderate values of $\epsilon_t$ can improve the sampling dramatically. This feature can be exploited after training as an explicit knob for tuning performance vs cost. This has not been recognized in other ML sampling literature.
**Additional experiments**:
In the anonymous drive link https://tinyurl.com/netsicml :
- We provide additional comparison between NETS and CMCD on the mean-interpolating time-dependent potential, as asked by another reviewer. We see that NETS performs well regardless of number of discretization steps, but the log-variance struggles for small steps. Note that LV begins to perform better when $n_{step} = 256$. We worked with the CMCD authors to implement their code and otherwise use their hyperparameters. This is the numerical instability we referred to earlier.
- We have also set up a test on the Lennard Jones potential, and have included preliminary results. We are continuing to refine these experiments and will incorporate them in the final version.
Please let us know if you have any other questions, and thanks again for your insights. If we have clarified everything for you, we would greatly appreciate an increase in your rating.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' time and effort in addressing my concerns and conducting additional experiments.
I am satisfied with the responses. However, the authors did not seem to answer the question posted previously:
- Line 614 - 628: Is the definition of $g_t$ special? Usually, we don't need $e^a$ to define the marginal distribution. The motivation here is to link it to the weight function? If so, does Eq.(36) describe the evolution of a *weighted* distribution?
During the rebuttal period, I have a few follow-up questions regarding the methodology:
- You correctly pointed out that optimizing the LV loss requires simulating entire trajectories. I have not yet played with NETS and PINN loss. Based on Algorithm 1 in the paper, we should also simulate entire trajectories when optimizing the PINN loss? Additional experimental results do indicate that the PINN loss remains stable when $n < 256$.
- Given that a different number of discretization steps can be used during inference, does $\epsilon_{t}$ remain fixed? $\epsilon_{t} \Delta t$ can be coupled together and $\Delta t$ changed, allowing for a changeable diffusion coefficient. Also, have the authors tested the scenario of, for example, training NETS with fewer discretization steps while evaluating it with a fixed but larger number of steps?
- Regarding KL control, since we usually optimize a path-wise objective, have the authors considered the following relation: $\log \frac{\mathrm{d} \overrightarrow{\mathbb{P}}}{\mathrm{d} \overleftarrow{\mathbb{P}}} = \int_{0}^{T} \left( - \nabla \cdot \widehat{b}_t + \nabla U_t \cdot \widehat{b}_t + \partial_t U_t - \partial_t \widehat{F}_t \right) \mathrm{d} t$?
---
Reply to Comment 1.1.1:
Comment: Thank you for these additional comments and sorry for missing your question about $g_t$.
**Regarding the function $g_t(x) = \int_{\mathbb R} e^a f_t(x,a) da$:** it is *not* the marginalization over $x$ of the extended probability density $f_t(x,a)$ (which would indeed read $\int_{\mathbb R} f_t(x,a) da$ *without* the factor $e^a$, as you point out) but rather the unnormalized density given explicitly by:
$$
g_t(x) = Z_0^{-1} e^{-U_t(x)}
$$
This is what is established in Eq. (37) and it implies that
$$
\int_{\mathbb R^d} g_t(x) dx = Z_tZ_0^{-1}
$$
and, given any test function $\phi$,
$$
\int_{\mathbb R^d} \phi(x) g_t(x) dx = Z_0^{-1} \int_{\mathbb R^d} \phi(x) e^{-U_t(x)} dx
$$
Since $g_t(x) = \int_{\mathbb R} e^a f_t(x,a) da$, these equations can also be written as
$$
Z_tZ_0^{-1} = \int_{\mathbb R^{d+1}} e^a f_t(x,a) dx da \equiv \mathbb{E}[e^{A_t}]
$$
and
$$
Z_0^{-1}\int_{\mathbb R^d} \phi(x) e^{-U_t(x)} dx = Z_0^{-1} \int_{\mathbb R^{d+1}} e^a \phi(x) f_t(x,a) dx da \equiv Z_0^{-1}\mathbb{E}[e^{A_t}\phi(X_t)]
$$
Dividing the second equation by the first establishes that
$$
\frac{\mathbb{E}[e^{A_t}\phi(X_t)]}{\mathbb{E}[e^{A_t}]} = Z_t^{-1}\int_{\mathbb R^d} \phi(x) e^{-U_t(x)} dx
$$
which is the result of Proposition 2.1.
We will be happy to clarify the role and meaning of $g_t(x)$ in the revised version.
Regarding your **other questions**:
- In Algorithm 1 in the paper, we specify a trajectory, but it is merely to get any samples over $[0,T] \times \mathbb{R}^d$, not to backpropagate through trajectories. The vector field $b_t$ simply needs to be fit to the PINN using the sampled data, so the stability of the loss is not influenced by the trajectory. We can store these samples in a replay buffer and draw a mini-batch to learn over. As you say, with LV-CMCD, the loss involves the trajectories themselves. In contrast, with NETS the trajectories only play a role of getting decent samples to evaluate the PINN loss on.
- In general, we can play around with $\epsilon_t$ but to keep discretization bias low we need to scale $dt$ down with greater $\epsilon_t$. For a fixed $dt$, it is probably best to use the largest $\epsilon_t$ that one can given the discretization (the benefit of which is captured in Figure 3). Note also that a discrete time version of the sampling and weight computation is given in the appendix that addresses discretization error.
- The Crooks equation you point out could indeed be used to derive our KL bound: to remain consistent with the general approach we take in the paper, we provide an alternative proof, based on using the FPE. We could make this connection with the path KL (a bit like what we do when we make a connection with CMCD) if you think that it is useful.
Please let us know if this addresses all of your questions, and thanks again for your insights. | null | null | null | null | null | null |
Rapid Overfitting of Multi-Pass SGD in Stochastic Convex Optimization | Accept (spotlight poster) | Summary: The author(s) analyzed generalization lower bounds for SCO in the multi-pass scenario. The lower bounds show that multi-pass SGD can quickly overfit and yield $\Theta(1)$ population loss.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: The proofs look reasonable to me. I did not check them in detail.
Experimental Designs Or Analyses: This is a purely theoretical work and there are no numerical experiments.
Supplementary Material: No
Relation To Broader Scientific Literature: No
Essential References Not Discussed: No
Other Strengths And Weaknesses: Pros:
- The writing of the paper is crisp; it is a joy to read.
- The lower bounds for SCO in Theorems 3.1 and 3.2 are new and interesting to me. The analysis is also neat.
Cons:
- The theoretical result is somewhat inconsistent with practitioners' observations, as multi-pass SGD usually doesn't really hurt generalization in practice. It would be better to add a limitations section to address this issue.
Other Comments Or Suggestions: right column of line 255, $\partial h_1(0) 0$, is this a typo?
Questions For Authors: Merely looking at Theorem 3.1, setting $\eta = T^{ -3/4 }$ yields population risk converges to 0. Why can't we do this?
Ethical Review Concerns: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for the thorough review and input. Below are our responses to the main comments in your review:
> “The theoretical result is somewhat inconsistent with practitioners' observations, as multi-pass SGD usually doesn't really hurt generalization in practice. It would be better to add a limitations section to address this issue.”
Thanks for this comment. We will add a discussion addressing this limitation. Needless to say, lower bounds depict the worst case and not “typical” cases, which are arguably those encountered in practice. In fact, even in theory, if we add further assumptions such as strong convexity, then multiple passes will no longer hurt performance. We will add this comparison to practical observations, as well as related theoretical setups, in the final version.
> ”Merely looking at Theorem 3.1, setting $\eta = T^{ -3/4 }$ yields population risk converges to 0. Why can't we do this?”
You are correct: the lower bound is matched by Bassily’s stability upper bound (up to an additive term of $\eta T/n$). So setting $\eta = T^{-3/4}$ will achieve $T^{-1/4} = 1/ ({nK})^{1/4}$, where $n$ is the sample size and $K$ is the number of epochs. Notably, in order to achieve the optimal $1/\sqrt{n}$ rate one would need as many as $K = n$ epochs (i.e., $T=n^2$ total steps). We will discuss these observations in the final version.
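As a quick sanity check on this step-size arithmetic (a sketch, assuming the bound takes the form $\eta\sqrt{T} + 1/(\eta T)$ as stated in the theorem):

```latex
\[
\eta = T^{-3/4}
\;\Longrightarrow\;
\eta\sqrt{T} = T^{-3/4 + 1/2} = T^{-1/4},
\qquad
\frac{1}{\eta T} = T^{3/4 - 1} = T^{-1/4},
\]
\[
\text{so both terms balance at } T^{-1/4} = (nK)^{-1/4}
\text{ for } T = nK \text{ total steps;}
\quad (nK)^{-1/4} = n^{-1/2} \iff K = n,\; T = n^2.
\]
```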
Claims And Evidence: The claims in the paper are supported by proofs. However, I would not necessarily describe these proofs as "clear". This can probably be partially attributed to the fact that the underlying problem and arguments are quite complicated and delicate, but I found it difficult to follow the details of the proofs in the paper.
The proof sketches give some idea of what is going on, but a key part of the argument rests on Livni's 2024 connection between "sample dependent oracles" and a standard SCO oracle. This connection is hardly explained at all, and so a thorough understanding of this paper appears to depend on the reader already being intimately familiar with Livni 2024 (I am not). Perhaps this is unavoidable, but I think the paper could be improved and made more readable by recapping this paper's Lemma D.1 from Livni 2024 and explaining what is going on there.
In the same vein, the proof of Lemma 5.1 is very terse and difficult to follow in full detail. It appears that an effort was made to cram it into the page limit. I would argue for making it easier to follow, even if it requires punting some/all of it to the appendix.
Methods And Evaluation Criteria: Yes, this is a theory paper and the implicit evaluation metric is, appropriately, the quality of the results of the theorems.
Theoretical Claims: Even though I would describe myself as an expert on lower bounds for convex optimization, I found it very difficult to follow these proofs, and I am nowhere close to 100% confident that I would have caught an error if there was one. That said, I did read through all of the proofs and did my best to follow them, and I found nothing that looked incorrect.
I will also add that these results hinge crucially on results from Livni 2024, which I was not previously familiar with and which I did not review in detail.
Experimental Designs Or Analyses: N/A
Supplementary Material: I read through all of the appendices in an attempt to verify the proofs.
Relation To Broader Scientific Literature: This is a very interesting paper that ties in well to the existing literature. Previously, it was mostly assumed that (1) one-pass SGD is optimal in terms of the excess population risk, (2) "few"-pass SGD is probably about the same or maybe even a little better in terms of excess population risk, and (3) that "many"-pass SGD might eventually overfit. This paper is a useful and thought provoking corrective to point (2), showing that even 2-3 passes over the training data are enough to overfit, or at least have substantially slower convergence (if the stepsize is chosen optimally, you get T^{-1/4} convergence for multi-pass instead of T^{-1/2} for one-pass).
Essential References Not Discussed: Nothing comes to mind.
Other Strengths And Weaknesses: See previous comments. Generally, I think the paper is interesting and the prose is well-written and understandable, but the proofs are quite terse and hard to follow--I think the paper would be greatly improved by making the technical ideas more accessible and readable.
Other Comments Or Suggestions: pg 5: "furher", "minimzer"
Section 3: the first sentence says "for K \leq n epochs", but Theorem 3.1 says "3 \leq K \leq n^2", so there is a discrepancy there.
Section 4: first sentence says "\Omega(\eta \sqrt{T})" but Theorem 4.1 says "\eta \sqrt{n}". This is consistent because it is referring to one pass SGD, but I would suggest using the same letter in both places.
Questions For Authors: How much of this hinges on using a fixed stepsize? For instance, the h_2 portion of the argument in the Theorem 3.1 sketch seems like it could be complicated even by using variable stepsizes, even e.g., \eta + 10^{-100} N(0,1), because then the argmaxs would be a.s. unique after 2d steps so you could lose control over the iterates. The argument also seems to hinge similarly on initialization at zero (or else argmaxs might always be unique). Is that correct or am I missing something? Do you see a path towards a similar lower bound for variable stepsizes or e.g. random initialization?
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the thorough review and feedback. We respond to main claims below:
> “a key part of the argument rests on Livni's 2024 connection between 'sample dependent oracles' and a standard SCO oracle. This connection is hardly explained at all”
We thank you for the comment - we will improve this discussion for the final version. Notice that we do discuss it briefly in the main text (see left column line 290) and that we did add a section about the reduction done in Lemma D.1. But, we will add further intuition in the main text to make the result more accessible.
> “In the same vein, the proof of Lemma 5.1 is very terse and difficult to follow in full detail. It appears that an effort was made to cram it into the page limit. I would argue for making it easier to follow, even if it requires punting some/all of it to the appendix.”
We will follow the reviewer’s advice and make it easier to follow, deferring parts of it to the appendix if needed.
> ”How much of this hinges on using a fixed stepsize? For instance, the h_2 portion of the argument in the Theorem 3.1 sketch seems like it could be complicated even by using variable stepsizes [...] The argument also seems to hinge similarly on initialization at zero [...] Do you see a path towards a similar lower bound for variable stepsizes or e.g. random initialization?”
These are great questions, indeed certain variants of SGD pose a challenge to our proof technique, and it is important to add a discussion to the paper on that. For some variants of SGD such as dynamic learning rate, our technique can be utilized, perhaps with a slight increase in the dimension (when using $h_1$ and not $h_2$, see right column line 251).
As for random initialization - you are correct that our construction relies on deterministic initialization. Showing lower bounds for random initialization is an interesting open problem, and in fact not just in the context of multi-pass SGD but also for full batch GD, for example. | Summary: This work considers the Stochastic Convex Optimization (SCO) setting and investigates the excess population risk and sample complexity lower bounds for Stochastic Gradient Descent (SGD). While the majority of previous work tackled GD or single-pass SGD, this paper mainly focuses on the multi-pass version of SGD. More specifically, the authors derive excess population risk lower bounds of $\Omega(\eta\sqrt{T} + 1/\eta T)$ for several versions of multi-pass SGD, both with and without replacement, when the number of passes exceeds a small constant. These lower bounds are tight and match the corresponding upper bounds. Additionally, the authors present a novel empirical risk lower bound of $\Omega(\sqrt{\eta}n)$ for single-pass SGD, further enriching the understanding of SGD's generalization properties.
Claims And Evidence: Not applicable.
Methods And Evaluation Criteria: Not applicable.
Theoretical Claims: The major theoretical conclusions are reasonable, and the key steps in the proofs appear to be correct.
Experimental Designs Or Analyses: Not applicable.
Supplementary Material: I have briefly checked the validity of the proof in the appendix.
Relation To Broader Scientific Literature: This work is entirely theoretical and does not present any negative broader scientific or societal impacts.
Essential References Not Discussed: The authors appropriately cite the most relevant prior work and provide a clear and detailed discussion of how their contributions relate to and advance the existing literature.
Other Strengths And Weaknesses: **Strengths**
Generally speaking, the paper is well-written, with a clear exposition that makes it easy to follow. The proof sketch is intuitive and highlights the key ingredients and techniques used in constructing the lower bounds. From a contribution perspective, this paper gives a nearly complete picture of the excess population risk lower bounds for multi-pass SGD in the non-smooth SCO setting. These results offer novel insights into the generalization properties of multi-pass SGD and the relationship between overfitting, step size, and number of passes.
**Weaknesses**
However, the major weakness of this paper lies in its limited novelty. It is evident that the excess risk lower bounds in Theorem 3.1 and Theorem 3.2 share an identical form with those in [Amir et al., 2021] and [Bassily et al., 2020]. Furthermore, as also acknowledged by the authors, the techniques used to construct these lower bounds are direct extensions of the technique used in [Amir et al., 2021]. While the results in this paper are technically new and differ from prior work such as [Amir et al., 2021], which tackles only GD, and [Bassily et al., 2020], which provides only uniform stability lower bounds, the conclusions in this paper are not entirely surprising or unexpected. Given this lack of significant improvement, I tend to reject this work unless the authors can provide more compelling arguments to address these concerns or offer deeper insights into the open questions discussed in the paper.
Other Comments Or Suggestions: Not applicable.
Questions For Authors: No further questions.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for the detailed review and discussion. We respond to the main claims below:
> “It is evident that the excess risk lower bounds in Theorem 3.1 and Theorem 3.2 share an identical form with those in [Amir et al., 2021] and [Bassily et al., 2020]. Furthermore, as also acknowledged by the authors, the techniques used to construct these lower bounds are direct extensions of the technique used in [Amir et al., 2021]. [...] the conclusions in this paper are not entirely surprising or unexpected.”
We see that a more thorough and in-depth comparison of our work to these two references is warranted, and we will make this improvement in the paper. However, the lower bounds in Theorems 3.1 and 3.2 do **not** share an identical form with those of Amir et al. or Bassily et al., and the techniques used to construct the lower bounds are **not** direct extensions of Amir et al.
*Regarding Bassily et al:*
As you point out, Bassily et al. show a lower bound on stability, not on the generalization gap. One could argue that bad generalization in the face of instability is not surprising. But at the same time single-pass SGD counters this statement: it is provably a non-stable algorithm that achieves optimal generalization performance.
We also note that Bassily et al’s lower bound for stability applies to all epochs of SGD, whereas SGD learns at the first epoch and overfits from the second epoch (as we show). It is unclear how one could derive an intuition for such a behavior from the results of Bassily et al.
Overall our proof indeed relies on generating instability (which is a necessary condition for overfitting), but is more involved and incorporates further techniques and constructions such as the usage of Feldman’s function, as well as the notion of sample-dependent oracle which we modify and strengthen to fit our setup.
*Regarding Amir et al:*
One major challenge we faced, given Amir et al.’s work, is that we aimed for constructions in dimension linear in the sample size. This differs greatly from Amir et al., which exhibited an exponential dependence of the dimension on the sample size, and it posed several challenges: for example, we require working with Feldman’s function, which is a more subtle construction than previously existing ones in high (exponential) dimension.
Some of these challenges have been tackled before in the context of GD (as opposed to multi-pass SGD), and we incorporate some of these ideas. But our results deal with SGD which, unlike GD that was discussed in previous papers [Amir et. al., 2020, Livni, 2024], does generalize in its first epoch. In particular, it provably circumvents previous hardness constructions in its first epoch, and begins its second epoch from an optimal state in terms of generalization performance.
Given this, we actually think it is a surprising result that SGD’s generalization rapidly deteriorates and has the same asymptotic bounds as its stability so quickly after the first pass (but not during the first pass). | Summary: The paper makes three main contributions in Stochastic Convex Optimization with convex and Lipschitz (but not necessarily smooth) loss functions: First, they establish tight bounds on the population excess risk of multi-pass SGD that apply to both single-shuffle and multi-shuffle variants. Second, they prove similar tight bounds for with-replacement SGD that hold after a logarithmic number of steps. Finally, they provide new lower bounds on the empirical risk of single-pass SGD in nearly linear dimension, improving upon previous results that required quadratic dimension.
Claims And Evidence: All claims are supported by mathematical proofs:
1. For the multi-pass SGD bounds:
- They provide a lower bound construction in Section 5 (Theorems 3.1 and 3.2)
- They prove matching upper bounds in Appendix A (Theorems 3.3 and 3.4)
- The proofs involve constructing specific loss functions and showing both that these functions are valid (convex, Lipschitz) and that SGD behaves as claimed when applied to them
2. For with-replacement SGD:
- They use similar techniques but add analysis involving the coupon collector's problem to handle the random sampling
- The proofs appear in Section 5 and Appendix A
3. For the improved empirical risk bounds:
- They provide a construction in nearly linear dimension in Section 4 (Theorem 4.1)
- The proof leverages techniques from previous works, but achieves better dimension dependency
Most proofs are constructive, and seemingly easy to follow given the proof outlines. I did not look into the appendix thoroughly.
Methods And Evaluation Criteria: None.
Theoretical Claims: I did not fully verify the claims by reviewing the appendix; however, because most of the proofs are constructive, none seemed incorrect on their face. That said, I am not an expert in this area, and thus I may have missed cases where some of the bounds discussed or the results derived do not actually make sense.
Experimental Designs Or Analyses: None.
Supplementary Material: No.
Relation To Broader Scientific Literature: I have a background in convex optimization / online convex optimization and am thus aware of online-to-batch arguments and the like. However, I have not read or interacted with many of the core works that this paper is built off of and thus might not fully appreciate the impact of this work in the broader scientific literature.
Essential References Not Discussed: None that I am aware of, though given the structure of many of the proofs, it seems a bit surprising that online convex optimization and adversarial arguments were not discussed in more detail.
Other Strengths And Weaknesses: Strengths:
1. Theoretical rigor - the paper provides complete proofs and careful mathematical constructions for all its claims
2. Novel insights - reveals a previously unknown phase transition between first and subsequent passes of SGD
3. Comprehensive analysis - covers multiple variants (with-replacement, single-shuffle, multi-shuffle)
4. Improves on prior work - achieves better dimension dependency than previous results
5. Clear practical implications - helps explain why multiple passes might lead to overfitting in the non-smooth setting
Weaknesses:
1. Theoretical focus - lacks empirical validation or real-world experiments to demonstrate the practical impact
2. Construction-based proofs - relies on specific constructed examples rather than showing results for general cases
3. Dimensionality assumptions - some results require high-dimensional settings that may not reflect practical scenarios
4. Limited practical guidance - doesn't provide clear recommendations for practitioners on how to avoid the identified overfitting issues
Other Comments Or Suggestions: - There are a couple of instances of spelling and grammar issues throughout the paper, which can probably be addressed by passing the tex file through a spell checker.
- Might want to consider some restructuring, not having a conclusion to tie things together seems a bit odd.
- Is there any way to provide empirical verification of any of the theoretical claims to illustrate how relevant this is to practical optimization?
Questions For Authors: Practical Implications:
- How do your theoretical results translate to practical guidance for practitioners?
- Are there specific strategies you'd recommend to mitigate the overfitting in multi-pass SGD?
- Have you observed the behavior your theory describes empirically in any real-world optimization problems?
Extension Possibilities:
- Could your analysis extend to other variants of SGD (e.g., with momentum, adaptive methods)?
- What about non-convex settings?
- Are there connections to implicit regularization in deep learning?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for the thorough review and discussion. Following are our responses to your main comments:
> “lacks empirical validation or real-world experiments to demonstrate the practical impact”
Our main contributions are theoretical and, respectfully, we disagree that this is a weakness. Theoretical results, and specifically in Stochastic Convex Optimization, are well within the traditional scope of ICML and have significant impact on the broader study of machine learning.
While theoretical results do not always offer immediate practical guidance, their goal is to improve understanding of SGD and they provide insight into the behavior of generalization in a simple setup (SCO). Such an understanding is a prerequisite for understanding more involved scenarios, such as those encountered in practical settings.
> “Construction-based proofs - relies on specific constructed examples rather than showing results for general cases”
Indeed, our main results are lower bounds --- and lower bounds inherently require construction-based proofs. While lower bounds are demonstrated via specific constructions, they help provide a complete picture of the guarantees for the examined algorithms: whether we can find better algorithms, or whether the algorithms we have at hand are already optimal.
> “Dimensionality assumptions - some results require high-dimensional settings that may not reflect practical scenarios”
The vast majority of our results hold already in dimension linear in sample size (and this is tight). Notably, the setting where the dimension is at least the sample size is currently the most interesting regime to study, in light of practical findings where overparameterized networks are shown to achieve superior performance. (In fact, in Theorem 4.1 for example, the main novelty is in reducing the dimension from quadratic in the sample size $n$, as in prior work, to linear in $n$.)
The only exception to this is the case of Theorem 3.1 for $K=2$ epochs, which applies only in dimension exponential in $n$. However, for $K>2$ our proof construction is in dimension linear in $n$.
> “Limited practical guidance - doesn't provide clear recommendations for practitioners on how to avoid the identified overfitting issues”
Indeed, we do not propose practical methods to avoid overfitting. Rather, we believe our contributions do give guidance to further theoretical studies, by indicating that in the general case (under the standard assumptions) multi-pass SGD quickly overfits, and additional assumptions will be needed in order to justify its empirical success.
In particular, prior work indeed showed that overfitting can be circumvented either by avoiding overtraining (the classical online-to-batch argument) or by stabilizing the algorithm through decreasing the learning rate (cf. the stability arguments of Bassily et al.). This touches on an important aspect of our work, which shows that, in general, these two solutions are indeed exhaustive: overtraining without stability leads to overfitting.
> “given the structure of many of the proofs they discuss it seems a bit surprising that discussions surrounding online convex optimization and adversaries were not discussed in more detail.”
This is a very good point. In a way, our work demonstrates that the classic online-to-batch proof technique is a tight approach for establishing generalization of SGD (Theorem 3.1 shows that once the gradients are biased, like in epochs beyond the first, then generalization deteriorates; and Theorem 4.1 shows that stability/uniform convergence cannot be established in general). From a technical perspective however, we indeed do not invoke the online/regret analysis which is an established technique to achieve algorithmic upper bounds.
We will add this discussion in the final version, thank you for this comment!
> “Might want to consider some restructuring, not having a conclusion to tie things together seems a bit odd”
The thing we had in mind is to have the discussion section appear at the end of the introduction (Section 1.2), and before the rest of the paper that delves into the technical proofs and formal statements. But, we will reconsider restructuring for the final version.
> “Could your analysis extend to other variants of SGD (e.g., with momentum, adaptive methods)?”
That is a very interesting question. Our results using the same techniques can be extended to SGD with varying stepsizes, perhaps with a slight increase in the dimension. As for other variants of SGD, it is currently unclear to us, and indeed an interesting question. We will add a discussion on further pursuing this avenue to the final version.
> “What about non-convex settings?”
Since most of the paper discusses lower bounds, the convex case subsumes the non-convex case, as the lower bounds are harder to achieve under the restriction of convexity (that is, any lower bound for the convex setting is also a lower bound for the non-convex setting). | null | null | null | null | null | null |
Is Best-of-N the Best of Them? Coverage, Scaling, and Optimality in Inference-Time Alignment | Accept (poster) | Summary: This paper investigates inference-time alignment in language models and demonstrates that naively scaling the Best-of-N heuristic leads to reward hacking, causing performance degradation beyond a certain computational threshold. The authors introduce a new algorithm, InferenceTimePessimism, which leverages χ²-regularization and rejection sampling to mitigate reward hacking effectively. The authors provide theoretical guarantees showing InferenceTimePessimism achieves optimal regret.
Claims And Evidence: The paper's claims are supported by theoretical proofs and experimental results. The paper clearly shows the fundamental limitations of Best-of-N alignment through explicit regret analysis. In addition, it also shows the robustness and optimal regret guarantees of InferenceTimePessimism.
Methods And Evaluation Criteria: The methods and evaluation criteria selected are appropriate and justified. The use of standard benchmark datasets such as GSM8K, MMLU, and MATH is suitable to validate alignment effectiveness and reward model quality.
Theoretical Claims: The correctness of the theoretical claims was found to be solid. The proofs of lower bounds establishing the necessity of coverage and the limitations of the Best-of-N approach are rigorous and correct.
Experimental Designs Or Analyses: The experimental designs were found to be sound. Experiments effectively demonstrate the robustness of InferenceTimePessimism in mitigating reward hacking and improving performance across multiple reward models and tasks.
Supplementary Material: I reviewed the supplementary material for theoretical proofs.
Relation To Broader Scientific Literature: Not sure about the relation to broader scientific literature.
Essential References Not Discussed: Not sure about the essential references not discussed.
Other Strengths And Weaknesses: - Novel theoretical insights significantly advancing understanding of inference-time alignment.
- Clearly articulated and robust theoretical framework that provides meaningful guidance for practical algorithm design.
- Empirical validation effectively demonstrates theoretical predictions, enhancing practical relevance.
Other Comments Or Suggestions: N/A
Questions For Authors: Have you tested or analyzed the performance and robustness of InferenceTimePessimism with significantly noisier or weaker reward models? If so, how sensitive is the algorithm to degradation in reward model accuracy? Similarly, have you experimented with or considered extending InferenceTimePessimism to more open-ended, subjective, or complex tasks, and what challenges might arise in these settings?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Summary: This paper examines inference-time alignment, where additional computation at generation time is used to improve language model outputs. Specifically, the authors focus on the widely used best-of-$n$ approach, which generates multiple responses and selects the one with the highest reward according to a (possibly imperfect) reward model.
The key contribution is a theoretical framework that explains when and why best-of-$n$ works, and more importantly, when it fails. The authors show that while best-of-$n$ can be effective with proper tuning, it inevitably suffers from "reward hacking" when scaled too aggressively. This happens because imperfections in the reward model become amplified when $n$ is large.
To address this, they introduce InferenceTimePessimism, an algorithm that deliberately uses inference-time computation to quantify and account for uncertainty in the reward model. Their theoretical analysis proves this approach is optimal in terms of regret and maintains performance even with increased computation.
The authors validate their theoretical findings with experiments on benchmarks using various language and reward models, demonstrating that their algorithm can mitigate reward overoptimization.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: No
Experimental Designs Or Analyses: No
Supplementary Material: No
Relation To Broader Scientific Literature: TBA
Essential References Not Discussed: No
Other Strengths And Weaknesses: ### Strengths
- **Novel Theoretical Framework:** The paper is the first to build the theoretical framework of best-of-$n$ with ideas from offline RL, with the core notion of coverage.
- **Theoretical Guarantees:** The authors first give comprehensive results (both upper and lower bounds) of best-of-$n$. The results can be stronger when a uniform converage is guaranteed. Then they prove their algorithm, InferenceTimePessimism, is regret-optimal within their framework, which is a significant theoretical contribution.
- **Practical Algorithm:** The authors give the practical implementation for InferenceTimePessimism and perform extensive experimental validation on multiple tasks, reward models, and language models.
### Weaknesses
- **Lacking some cases in theory:** For Theorem 3.1, the authors didn't discuss the case where $\varepsilon_{RM} (x) = 0$. If we directly apply the results, then the regret is unbounded, which seems counter-intuitive. For Theorem 4.1, the authors didn't give a dependency on $N$, which makes it hard to compare with Theorem 3.1.
Other Comments Or Suggestions: No
Questions For Authors: Regarding the weakness, what is the bound for best-of-$n$ when $\varepsilon_{RM} (x) = 0$? What is the dependency on $N$ for InferenceTimePessimism?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Summary: The paper first theoretically shows the overoptimization problem is inevitable with the well-known best-of-N algorithm, especially when N increases. Then they propose Inference-Time Pessimism Algorithm, and show that the proposed algorithm resolves the overoptimization problem and also achieves optimal regret rate in terms of the reward approximation error. The paper also provides experiments to validate their theoretical findings.
Claims And Evidence: **Claims with evidences**
- On the Best-of-N (BoN) algorithm, the paper shows that (i) it causes overoptimization in the worst cases when N increases (Theorem 3.2), and (ii) the regret achieved by BoN is also suboptimal in terms of $\epsilon_{RM}$, the approximation error of an estimated reward. (Theorem 3.1, 3.2)
- To resolve these issues, the paper proposes Inference-Time Pessimism Algorithm (Algorithm 1), which is a sampling procedure from $\chi^2$-regularized reward maximizing distribution, by rejection sampling with N samples from the reference model. The proposed algorithm does not cause overoptimization by its nature, and also achieves optimal regret bound in terms of $\epsilon_{RM}$. (Theorem 4.1, 4.2)
- Experimental results:
- Figure 1 validates that the proposed algorithm with tuned hyperparameter $\beta$, the strength of $\chi^2$ regularization, successfully avoids the overoptimization issue caused in BoN when N increases.
- Figure 2 investigates the behavior of the proposed algorithm with varying $\beta$ compared to the BoN algorithm, with fixed $N=2^{13}$. However, this is an extreme setting and also unfair for BoN.
Overall, the claims are theoretically well-supported, and the experiments validate the benefits of the proposed algorithm as predicted by the theory. However, I still have some concerns, especially about the experimental evidence:
- The paper claims that the proposed algorithm is superior to BoN because (i) it avoids overoptimization and (ii) it achieves the improved regret rate. However, experimental results only verify the first benefit (i), and it is unclear how the second benefit (ii) appears in the real-world experiments.
- The performance of the proposed algorithm seems to depend on the choice of the hyperparameter of regularization strength $\beta$, but the behavior with changing $\beta$ is only shown with the extreme setting of $N=2^{13}$.
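To make the overoptimization claim above concrete, here is a toy sketch of Best-of-N selection under a misspecified proxy reward (all response names, probabilities, and reward values below are hypothetical illustrations, not taken from the paper): as $N$ grows, the probability that BoN selects the response the proxy overrates approaches 1, even though its true reward is the worst.

```python
import random

def best_of_n(sample_fn, proxy_reward, n, rng):
    """Best-of-N: draw n candidates from the reference policy and
    return the one maximizing the (possibly misspecified) proxy reward."""
    candidates = [sample_fn(rng) for _ in range(n)]
    return max(candidates, key=proxy_reward)

# Hypothetical reference policy: "good" w.p. 0.99, "hacky" w.p. 0.01,
# where the reward model overrates the rare "hacky" response.
responses = ["good"] * 99 + ["hacky"]
true_reward = {"good": 1.0, "hacky": 0.0}   # ground truth
proxy = {"good": 1.0, "hacky": 2.0}         # misspecified reward model

rng = random.Random(0)
pick = best_of_n(lambda r: r.choice(responses), proxy.__getitem__, 5000, rng)

# Overoptimization as n grows: the chance that at least one "hacky"
# candidate is drawn (and hence selected) is 1 - 0.99**n -> 1.
p_select_hacky = lambda n: 1 - 0.99 ** n
```

This mirrors the worst-case behavior described in Theorem 3.2: the proxy reward of the BoN pick increases with $n$ while its true reward collapses.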
Methods And Evaluation Criteria: The proposed method makes sense and is well-motivated. The experimental settings follow the standard ones.
Theoretical Claims: I read some proofs in Appendix to understand the theoretical results, but did not check all of them.
Experimental Designs Or Analyses: The experimental designs and analyses seem sound.
Supplementary Material: I read some proofs in Appendix to understand the theoretical results, but did not check all of them.
Relation To Broader Scientific Literature: The proposed algorithm successfully resolves the overoptimization problem caused with the BoN algorithm, which is a widely used method for inference-time alignment. In addition to analyses of the proposed algorithm, the theoretical results on BoN themselves are also novel and insightful.
Essential References Not Discussed: Yang et al. [1] also investigate the theoretical properties of the BoN algorithm in relation to the KL-regularized reward-maximizing distribution, but their work is not cited in this paper.
[1] Yang et al., "Asymptotics of Language Model Alignment" (ISIT'24)
Other Strengths And Weaknesses: See Claims and Evidences.
Other Comments Or Suggestions: N/A
Questions For Authors: See Claims and Evidences.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Summary: The paper analyzes the Best-of-N (BoN) algorithm for selecting among language model generations and introduces InferenceTimePessimism, a new algorithm that mitigates reward hacking. The authors formalize inference-time alignment as improving a pre-trained policy’s responses using an imperfect reward model. They show that BoN can achieve optimal performance under strict coverage but suffers from reward hacking when N is large. InferenceTimePessimism is proven to be optimal and scaling-monotonic, with empirical validation across tasks like GSM8K, MMLU, and MATH.
Claims And Evidence: The claims are well-supported by theoretical proofs and empirical results. The authors demonstrate BoN's limitations and prove InferenceTimePessimism's optimality and robustness through experiments on multiple tasks and models.
Methods And Evaluation Criteria: The methods are appropriate, focusing on inference-time alignment and evaluated using standard benchmarks (e.g., GSM8K, MMLU). The criteria, including accuracy and estimated reward, effectively measure performance.
Theoretical Claims: The theoretical claims are supported by rigorous proofs. The authors prove InferenceTimePessimism achieves optimal regret and is scaling-monotonic, with detailed proofs in the supplementary material.
Experimental Designs Or Analyses: The experiments are well-designed, covering multiple tasks and models. The results show InferenceTimePessimism avoids reward hacking and outperforms BoN in many cases.
Supplementary Material: The supplementary material provides additional proofs and experiments, supporting the main paper effectively.
Relation To Broader Scientific Literature: The paper connects well to prior work on reward overoptimization and offline RL, extending ideas like pessimism to inference-time alignment.
Essential References Not Discussed: The paper could reference more recent work on preference-based learning and scaling laws for language models to provide additional context.
Other Strengths And Weaknesses: Strengths:
1. Addresses a timely problem with theoretical insights and practical algorithms.
2. InferenceTimePessimism is a novel contribution with rigorous proofs and empirical validation.
3. The proposed methods work reasonably well.
Weaknesses:
1. Limited guidance on tuning the regularization parameter $\beta$.
2. Experiments focus on mathematical tasks; more diverse tasks (e.g., dialogue) would strengthen generalizability.
Q1. How should practitioners choose $\beta$ in InferenceTimePessimism?
Q2. Have you considered evaluating on more diverse tasks like open-ended dialogue?
Q3. What are the computational costs of InferenceTimePessimism compared to BoN?
Other Comments Or Suggestions: Please see above
Questions For Authors: Please see above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | null | null | null | null | null | null | ||||
Generalization Bounds via Meta-Learned Model Representations: PAC-Bayes and Sample Compression Hypernetworks | Accept (poster) | Summary: --- increased score from 2 to 3 after comment from authors ---
The authors develop a sample compression version of PAC-Bayes generalization bounds, which replaces the dependence on the full training set in standard PAC-Bayes bounds with a compressed subset of data points carrying a generalization guarantee. They use a hypernetwork to meta-learn the parameters of another network, to which they apply the PAC-Bayes bound. The authors explore three different hypernetwork architectures. Specifically, they design the sample compression hypernetwork to encode both samples and messages and decode the parameters of a downstream predictor. For the hybrid networks, the message encoder is replaced by a PAC-Bayes encoder. They evaluate their method on synthetic tasks and image tasks like MNIST variants and CIFAR100. The proposed bounds and training procedure lead to tighter bounds compared to a couple of baselines.
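As a rough intuition for the encoder/decoder pipeline described in this summary, the following is a minimal sketch (not the paper's architecture; all shapes, names, and the mean-pooling choice are illustrative assumptions) of a hypernetwork that compresses a support set into a latent dataset representation and decodes it into the weights of a linear downstream predictor:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_latent = 4, 8                        # input and latent (message) dimensions
W_enc = rng.normal(size=(d_in, d_latent))    # encoder ("compressor") weights
W_dec = rng.normal(size=(d_latent, d_in))    # decoder ("reconstructor") weights

def hypernetwork(support_x):
    """Map a support set to the parameters of a linear predictor y = x @ w.
    Mean pooling makes the dataset embedding permutation-invariant."""
    z = np.tanh(support_x @ W_enc).mean(axis=0)  # latent dataset representation
    return z @ W_dec                              # downstream predictor weights

support = rng.normal(size=(32, d_in))  # one task's support set
w_task = hypernetwork(support)         # predictor produced without task-specific SGD
query = rng.normal(size=(16, d_in))
preds = query @ w_task                 # predictions on the query set
```

The point of the bottleneck is that the generalization bound can then be stated in terms of the compressed latent representation rather than the full predictor, which is the design the paper exploits.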
Claims And Evidence: Looks reasonable to me. There are no signs of extravagant claims.
Methods And Evaluation Criteria: The benchmark datasets (MNIST, CIFAR100) are quite standard, though a bit small-scale. Nowadays people are applying bounds to models with 100M–7B parameters. See https://arxiv.org/abs/2407.18158, https://arxiv.org/abs/2312.17173.
Theoretical Claims: I’m not an expert on sample compression bounds, but the combination of PAC-Bayes and sample compression bounds looks reasonable to me. The KL term and the terms introduced by sample compression are within expectations, though I didn’t verify every detail of the derivation.
Experimental Designs Or Analyses: --- after authors' responses: These are promising avenues for future work indeed. Thank you for answering and I don't expect results on this since it would be beyond the scope of this paper. ---
I’m a bit skeptical about why meta-learning the parameters of a downstream predictor is a good idea. Would this meta-learning be too restrictive for very large models? How does this meta-learning approach compare to other types of model compression like subspace methods https://arxiv.org/abs/1804.08838, quantization https://arxiv.org/abs/2407.18158, etc.? Why is the hypernetwork approach potentially better than the other approaches listed above? It would be great to motivate this or demonstrate the superiority of this approach empirically.
Supplementary Material: No.
Relation To Broader Scientific Literature: This paper is an extension of a line of works in PAC Bayes bounds. More specifically, the authors combined the KL term in PAC Bayes and the term introduced by sample compression to bound generalization error.
Essential References Not Discussed: Missing https://arxiv.org/abs/2407.18158, https://arxiv.org/abs/2312.17173. PAC Bayes bounds on LLMs. The proposed subspace compression via a linear mapping is similar to the hypernetwork idea here.
Other Strengths And Weaknesses: Strength:
- the combination of PAC Bayes and sample compression is novel. The hypernetwork design here is novel.
- bounds derivation looks reasonable.
--- after authors' responses: Thank you for explaining the differences between these two tables. It makes more sense now. ---
Weakness:
- insufficient motivation for hypernetworks
- insufficient empirical demonstration for hypernetworks, though this is a minor point depending on how much compute the authors have.
- empirical results in Table 1 and 2 are a bit weak. Some of the bound values are higher than the strawman baseline.
Other Comments Or Suggestions: The paper can benefit from more analysis figures. For example, I would like to know the contribution of each of the terms in the bound to the bound value. Might be interesting to plot it out and visualize which part takes up most of the bound value when summed together.
Questions For Authors: as shown above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for his insightful feedback.
**1.** “People are applying bounds on models like 100M~7B parameters.”
Indeed, interesting works have successfully computed tight generalization bounds for large models. It does not undermine the need for tighter generalization bounds for smaller models, as they are still used for many applications. Moreover, the submitted manuscript focuses on investigating a new way of obtaining bounds in meta-learning, which we conceive as a stepping stone for undertaking larger experiments.
**2.** “Missing literature review on PAC Bayes for LLMs”
Following the reviewer's comments, we will include a literature review of PAC-Bayes for LLMs in Section 2.1 and mention these two works: “Unlocking Tokens as Data Points for Generalization Bounds on Larger Language Models” and “Non-Vacuous Generalization Bounds for Large Langage Models”.
**3.** “The proposed subspace compression is similar to the hypernetwork idea [ in SubLoRA (https://arxiv.org/abs/2312.17173) ].”
We would like to emphasize the following differences between SubLoRA and our approach: the representation in SubLoRA (and in LoRA) is learned via SGD and is composed of tunable parameters. In our case, though the representation is a function of tunable parameters, it mostly depends on the input dataset. The task is not to encode the weights of a model working well on a single task but to ensure the versatility of the representation for any related task. The way the representation is treated is also quite different: in our case, it is fed to a hypernetwork whose output is the weights of the downstream predictor. (See also **6.** for a conceptual difference.)
**4.** “Empirical results in Tables 1 and 2 are a bit weak. Some of the bound values are higher than the strawman baseline.”
We report empirical evaluation in both unfavorable (Table 1) and favorable (Table 2) environments. Table 1 illustrates that a fixed model can perform well across the tasks in terms of both error and bound. This highlights that the commonly used pixel swap environment does not account for all meta-learning scenarios. See “Rebuttal XmYz, 2.2” for a numerical investigation.
The strawman bounds are essentially test bounds: they are valid for a single predictor. Since no predictor is learned on the query set, the empirical error is an unbiased test error estimate (close to 0.5 in Table 2). Hence, the gap between the bounds and the test error is small in these cases. In contrast, the train bounds underlying the other methods are valid uniformly for all learnable predictors. The price to obtain train bounds is a larger complexity term that increases the gap. For the CIFAR100 experiment, all training bounds are greater than the one of the baseline, but we achieved the lowest one with our SCH- model.
**5.** “I’m a bit skeptical about why meta-learning the parameters of a downstream predictor is a good idea. Too restrictive for very large models?”
Since our bound relies on the latent representation instead of the complexity of the downstream predictor and decoder, we expect it to be suited for fine-tuning of the last few layers of large models. Also, one could consider a “prior” over the downstream predictor, which would correspond to the random initialization in the LoRA nomenclature, and predict the weights of LoRA-like matrices to modify this “prior”. Finally, note that our bounds hold for any bounded loss function, which is necessary in some settings for bounding models generating sequences of tokens. These are all promising future works that we will mention in our conclusion.
**6.** “How does this meta-learning approach compare to other types of model compression like [...]?”
Although both our approach and the aforementioned methods are compression-based, ours is not a model-compression approach. Our primary goal is to compress the dataset into a subset of datapoints and a message, which is a less explored strategy. By creating a bottleneck, we compress the information contained in the dataset and use it to learn a model instead of compressing the model itself. Therefore, to our knowledge, we compared ourselves to the works closest to ours in the field of meta-learning: PAC-Baysian approaches.
**7.** “The paper can benefit from more analysis figures [...]."
During the rebuttal period, we crafted additional figures following the Reviewers' suggestions:
- We present the contribution of each of the terms in the bound to the bound value for a few algorithms on the 200 pixels swap (see https://imgur.com/a/9d4fOvB) and on the CIFAR100 binary task (see https://imgur.com/a/GSyoggu). See “Rebuttal to Reviewer 5RkH, 2.4” for a discussion on these figures.
- We depict the test error and generalization bound for PB SCH as a function of both the compression set size and the message size. (see https://imgur.com/a/r45Wq56 and “Rebuttal to Reviewer 5RkH, 2.3” discussion).
---
Rebuttal Comment 1.1:
Comment: Dear authors, thank you for answering my questions and providing additional figures! It would be helpful to include discussions with related work for better context. I'm still somewhat skeptical about the empirical performance similar to Reviewer XmYZ. But in light of the novelty in the proposed method and addressed concerns, I'm raising my score from 2 to 3.
---
Reply to Comment 1.1.1:
Comment: We are grateful to the reviewer for assessing the novelty of our contribution! We will add the new figures, discussions, and references. Thanks to the Reviewers' comments, these will undoubtedly improve the manuscript. | Summary: This paper proposes a novel meta-learning framework that uses PAC-Bayes and Sample compression theory to learn the hypernetwork parameters. The hypernetwork consists of two components: an encoder (or compressor) that maps the training set into the latent representation space, and a decoder (or reconstructor) the maps the latent representation into the parameter of downstream predictor. A new PAC-Bayes sample compression bound as well as it disintegrated variant are provided to guarantee the low generalization error of the posterior over the latent representation space. Experiments are also conducted to validate the effectiveness of proposed method.
Claims And Evidence: Weaknesses:
**(1) My first major concern is that the theoretical results are not novel enough.** Two reasons are as follows:
(i) Theorems 2.1-2.3 are existing results. Although the authors may place Theorems 2.1-2.3 in the main body for the completeness of the whole work, some of them in my opinion can be deferred to the supplementary material (e.g., Theorem 2.2 can be somewhat regarded as a special case of Theorem 2.3 and is used only a few times in the main body, hence can be deferred to the appendix).
(ii) The proof technique is not new. The proof of the disintegrated bound in Theorem 2.5 is actually a direct extension of Theorem E.1 of Viallard et al. 2024. The proof technique (i.e., the use of the change-of-measure lemma, Markov's inequality, and the bounding of the moment generating function of the comparator function $\Delta$) of Theorems B.1 and B.2 (i.e., of Theorem 2.4) is basic, and the contribution is to integrate the PAC-Bayes bound into the sample compression framework, which is not new to me and seems not technical enough.
----
**(2) The second major concern is that the experimental results of the proposed method in Tables 1-2 are not convincing.** Several reasons are as follows:
(i) First, in Table 1, the proposed method does not outperform ALL PAC-Bayes meta-learning methods in terms of the test error (although it does achieve the tightest bounds). The explanation for this phenomenon lies in line 380, right column: "The benchmark methods perform well because the posterior for each test task, is similar to the prior." As far as I know, the posterior for test tasks in the benchmark methods (like (Pentina 2014 ICML)) is output by running the PAC-Bayes algorithm that takes the prior (sampled from the informative hyper-posterior) and the training data as input. Then, how do the authors demonstrate that the posterior and the prior are similar? If they are similar, then the KL divergence between them will be small and the PAC-Bayes bound (of the benchmark method) will be small; then why are the test bounds of the proposed method in Table 1 smaller than the test bounds of the benchmark methods?
(ii) In Table 1 and on CIFAR100 in Table 2, the performance of PBH and PB SCH is worse than that of the proposed SCH- and SCH+. Even worse, PBH and PB SCH do not outperform many benchmark methods in terms of the tightness of the test bound and the test error. It thus seems that the proposed PAC-Bayes encoder-decoder hypernetwork in Section 3.1 is not good enough. In particular, given that PB SCH is the hybrid version of PBH and SCH but performs much worse than SCH, I am not convinced that the proposed PAC-Bayes encoder-decoder hypernetworks (and the variant PB SCH) work well.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I checked the proof as far as I could and believe it is correct. However, some crucial explanations should be added to make the whole proof clearer and more readable. For example, in line 1069, why does "${\mathbb{E}}\left[\frac{Q_J(\mathbf{i})}{P_J(\mathbf{i})}\right]^\alpha=P_J(\mathbf{j})\left[\frac{1}{P_J(\mathbf{j})}\right]^\alpha$" hold? Explanations of this kind should be added.
Experimental Designs Or Analyses: The proposed meta-learning method uses a support-query training strategy that splits the training dataset into two non-overlapping parts. But to my knowledge, the benchmark methods in Tables 1 & 2 (like [PL2014], [AM2018], [GL2022]) use a traditional ERM training strategy instead of such a support-query strategy. Then, in the experimental setting where, for example, each training task includes 2000 images, do the benchmark methods compute the training loss over all 2000 images while the proposed method computes the training loss over only part of these 2000 images?
Supplementary Material: I reviewed all the proof.
Relation To Broader Scientific Literature: This paper provides a new PAC-Bayes meta-learning framework to analyze the generalization performance of predictor on downstream tasks, which is different from the existing literature that considered the hierarchy of prior and posterior distribution.s
Essential References Not Discussed: Yes.
Other Strengths And Weaknesses: Strengths:
(1) The new PAC-Bayes sample compression bound in Theorem 2.4 is well-motivated by existing works in Theorems 2.1-2.3.
(2) Experimental results in Table 2 validate the effectiveness of the proposed hypernetwork framework.
Other Comments Or Suggestions: Typos:
(1) line 91: by many (investigators)
(2) line 105: asses->assess
(3) line 122: abbreviation SCM lacks explanation
(4) line 186, right column: $\mathscr{H}(S') ->\mathscr{H}_{\theta}(S') $
Questions For Authors: (1) In the experimental setting where, for example, each training task includes 2000 images, do the benchmark methods compute the training loss over all 2000 images while the proposed method (with the support-query strategy) computes the training loss over only part of these 2000 images?
(2) What is the main technical difficulty of combining PAC-Bayes bound and sample compression scheme to obtain new bound in Theorem 2.4?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for his feedback, which will help us highlight the precise nature of our contribution. We undertake to add these clarifications to the manuscript.
**1. First concern (the theoretical results are not novel enough)**
**1.1.** We agree that our theoretical results are moderately novel, and we respectfully suggest that our work should not be judged on this basis. Our main contributions are not new PAC-Bayesian and sample compression generalization bounds or advanced proof techniques. Instead, we propose an original way to leverage existing results in a meta-learning framework. We are the first to apply PAC-Bayesian and sample compression to hypernetworks. With this in mind, we undertake to present various kinds of generalization bounds in a unified setting, advancing the idea that each bound can inspire a particular hypernetwork architecture. To our knowledge, this unconventional perspective is fresh, and we argue that it deserves to be shared with the machine learning community.
**1.2.** Hence: “What is the main technical difficulty of [...] to obtain new bound in Theorem 2.4?” There is no technical difficulty here for those familiar with the usual PAC-Bayes and sample compression proof schemes; our result is indeed based on existing ones. The interest of our results does not lie in their complexity. In the proofs of Theorem B.1 and Theorem B.2, we unify and generalize the proof techniques of previous results. In doing so, we present Theorem B.1, which recovers a version of the theorem of Laviolette & Marchand (2005) for real-valued losses and a version of the theorem of Thiemann et al. (2017) for different sizes of compression sets. Theorem B.2 is a tighter version of previous works and a correct version of Theorem 39 of Germain et al. (2015). Indeed, as pointed out in lines 809-839, their proof was only correct for one size of compression set.
Finally, Theorem 2.4 is a direct consequence of Theorem B.1. The idea of using a Dirac distribution to obtain “sample-compression” style bounds is not novel, as it was explored in Marchand & Laviolette (2005). Nevertheless, mixing probabilistic messages and deterministic compression sets is entirely new and follows our peculiar way of looking at generalization bounds to inspire original hypernetworks. This enables us to use real-valued messages instead of binary messages, which is pivotal to optimizing the parameters of the PB SCH architecture bounds with respect to the message in a continuous way.
**2. Second concern (the experimental results are not convincing)**
**2.1.** We rigorously reported the empirical evaluation in both unfavorable (Table 1) and favorable (Table 2) environments. On the binary MNIST task, our methods compare very favorably to others. On the CIFAR100 task, our method achieves the best bound by a reasonable margin.
**2.2.** The reviewer asks “how to demonstrate the posterior and the prior is similar?” To better illustrate our claim, we computed the prior-posterior KL term of both Pentina & Lampert (2014) and Zakerinia et al. for a few tasks (see https://imgur.com/a/tcIJlNZ). This shows that the KL values are lower for a small amount of pixel swap and larger for the binary MNIST task.
**2.3.** As to “why the test bounds in Table 1 are smaller than the test bound of benchmark methods”, it partly relies on the fact that our bounds are based on the $kl(q,p)$ comparator function (see Eq. 1) instead of being linear as in Pentina (2014). Also, a small compression set and message are selected; thus, only a small complexity term enters the computation of the bound.
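For context on why a $kl(q,p)$ comparator yields tighter values than a linear one, such kl-based PAC-Bayes bounds are typically evaluated by numerically inverting the binary KL divergence; a minimal sketch follows (the risk value and complexity budget below are illustrative numbers, not from the paper):

```python
import math

def binary_kl(q, p):
    """kl(q||p) between Bernoulli parameters, clamped away from {0, 1}."""
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def kl_inverse(q, budget, tol=1e-10):
    """Largest p >= q with kl(q||p) <= budget, found by bisection;
    this is how a kl-comparator bound is turned into a risk bound."""
    lo, hi = q, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binary_kl(q, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

# With a small empirical risk q, the kl-inverse bound beats the
# linear (Pinsker-style) relaxation q + sqrt(budget / 2).
q, budget = 0.05, 0.1
kl_bound = kl_inverse(q, budget)
linear_bound = q + math.sqrt(budget / 2)
```

This illustrates the point above: for the same complexity budget, inverting the kl comparator gives a strictly smaller bound than a linear comparator when the empirical risk is small.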
**2.4.** Concerning the performances of PB SCH, we study in a new figure (see https://imgur.com/a/r45Wq56) the test error and generalization bound for PB SCH as a function of both the compression set size and the message size. See “Rebuttal to Reviewer 5RkH, 2.3” for a discussion of these results.
**3. “Theorems 2.1-2.3 are existing results [... and] can be deferred to the supplementary material.”**
We thank the reviewer for the suggestion. We will move these parts to the appendix and use the freed space to clarify the abovementioned points.
**4. “In the experimental setting, each training task including 2000 images [...] of these 2000 images?”**
This is right: half the images (the support set) are used to generate the predictor, while the other half (the query set) is used to compute the meta train loss. However, a new random support/query split is performed at each epoch for every train dataset.
**5. “Some crucial explanations should be added to make the whole proof clearer and more readable. For example, in line 1069, why [...].”**
On Line 1069, the expectation is on discrete distributions $P_J$, and $Q_J$, the latter being a Dirac on $\mathbf{j}$ such that $Q(\mathbf{j})=1$, explaining the equality. We will add this clarification and comment on the many other steps of the proof.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed responses. Some of my concerns have been addressed. My responses are as follows:
(1) I still value the theoretical contribution of a paper submitted to the ICML conference. As the authors acknowledged, only Theorems 2.4 and 2.5 are novel results in this paper, and the proof technique is not new and not technical. This to some extent lowers the theoretical novelty of this paper.
Nevertheless, the authors also suggest that the proposed hypernetworks based on PAC-Bayes and sample compression are the key contribution of this paper. I agree with this point, and the proposed method is novel to me.
(2) For the second concern, the performance of PB SCH (i.e., the combination of PAC-Bayes and sample compression) is not satisfactory. I think that, as the combination, PB SCH should outperform both PBH and SCH, but in Tables 1 and 2 SCH obtains better performance. The new figure seems to show that a better trade-off combination of PBH and SCH can lead to a better result, but in practice it is hard to tune the trade-off parameter. For this concern, I still doubt whether the PBH method is effective compared with the other benchmark methods and with SCH. If PBH does not work well (i.e., does not achieve at least comparable results with the others in Tables 1 and 2) on most datasets, the effectiveness of the proposed hypernetwork is not sound enough.
Overall, considering the effectiveness of the proposed algorithms as well as the theoretical contribution, I will maintain the initial score at the current stage.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the well-argued response.
We agree that the trade-off of PB SCH is challenging to tune. This is why we initially performed the model selection relying on a validation set in Tables 1 and 2, which ended up selecting the same models for PB SCH and PBH (without any compression set). The new figure (https://imgur.com/a/r45Wq56) indeed shows the trade-off between the message size and the compression set size. We will include this figure in the revised version of our manuscript, as it helps in grasping the behavior of the investigated hypernetworks, and we will also honestly mention the difficulty of performing model selection based on the bounds as one of the current limitations of our approach.
While we disagree that PBH does not work well in the MNIST/CIFAR binary environments, we intentionally did not proclaim that one version of PBH, SCH or PBSCH is the most effective. We proposed, investigated and compared three possibilities. Among them, PBSCH is a hybrid of PBH and SCH, and comes with its own challenge due to the increased complexity. These three examples may serve as a starting point for the community to explore generalization bounds in this original setting, e.g., bounds and architectures suitable to LLM-scale models by reconstructing the parameters of LoRA layers. | Summary: The paper introduces new generalization bounds combining both PAC-Bayes and sample compression framework, and apply it in a meta-learning scheme. They introduce three different designs inspired by different theorems by using hypernetworks.
Claims And Evidence: Technically, the generalization bounds proved in this paper are not meta-learning generalization bounds. Meta-learning bounds are usually applied in Baxter's setting during the meta-learning *training* phase, based on the training tasks, to guarantee the performance on future test tasks. The bounds provided here, however, assume that the training phase is over and the hypernetworks have been learned from previous tasks, and provide a bound on the generalization gap for the test tasks. I think this should be made clear in the paper, in the discussion of related works. Other than this, the paper is clear.
Methods And Evaluation Criteria: The provided theorem and the methods derived from them are interesting, and the addition of the sample compression for a meta-learning setting is novel. In general the experiments are done based on standard benchmarks as in prior works, and the results are good. However, I have the following questions/concerns. See Below.
Theoretical Claims: I skimmed the proofs and they seem correct, however I didn't check them in full detail.
Experimental Designs Or Analyses: - As mentioned above, the prior works provide a meta-learning bound for the training phase, whereas this work provides a transfer bound. How did you do the comparison in the tables? Are they all test-time bounds? If yes, can you clarify how you computed the bounds for the baselines?
- Amit & Meir, 2018 had another experiment based on MNIST in which they shuffle the labels instead of the pixels. That experiment could be interesting, since I assume sample compression would need at least 10 samples. It would be interesting to see whether it is possible to get good results with fewer than 10 samples by using the message.
- The new bound proved in the paper is about PB SCH; however, in the experiments, it has $c=0$ and is reduced to the PB setting. Is this because additional samples don't improve the bound? Can you come up with a setting in which $c>0$ helps?
Supplementary Material: Some parts of the proofs. They seem correct.
Relation To Broader Scientific Literature: Prior works focus on the generalization gap in the meta-learning setting, however the focus of this paper is improving the generalization gap for a given task, after the meta-learning training. Hypernetworks were also used in the prior works for meta-learning, but the addition of sample compression is novel and an interesting idea for gain improvement.
Essential References Not Discussed: The paper uses hypernetworks, however does not have any citations related to it or any prior works who use hypernetworks. The following papers are some relevant references which probably should be discussed. The first paper is the paper that introduces the hypernetworks, and the other three use hypernetworks for personalized federated learning, in a meta-learning scheme, which I think are relevant. The use of hypernetwork is similar to the current paper, and a discussion and comparison is needed.
- Ha, D., Dai, A. M., and Le, Q. V. HyperNetworks. ICLR, 2017.
- Shamsian, A., Navon, A., Fetaya, E., and Chechik, G. Personalized Federated Learning using Hypernetworks. ICML, 2021.
- Scott, J., Zakerinia, H., and Lampert, C. H. PeFLL: Personalized Federated Learning by Learning to Learn. ICLR, 2024.
- Amosy, O., Eyal, G., and Chechik, G. Late to the party? On-demand unlabeled personalized federated learning. WACV, 2024.
Other Strengths And Weaknesses: _
Other Comments Or Suggestions: - I think the pixel-swap experiment with fewer samples per task for training tasks and/or test tasks is more interesting. I assume the Opaque encoder does not work in this setting.
Questions For Authors: - Why are some bounds in Table 2 bigger than 1? In particular, there is a value of 1372 in the table. Is this correct?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading of the paper.
**1. Claims And Evidence**
The reviewer correctly says that “the generalization bounds proved in this paper are not meta-learning generalization bounds.” Instead, our framework shows a new way of using generalization bounds in a meta-learning framework. We will clarify this further to avoid misconceptions, especially in the Related Works part of the introduction.
**2. Experimental Designs Or Analyses:**
**2.1.** Concerning the comparison with the benchmarks: for our algorithms, we naturally used the support set for the computation of the bound, and the query set for the computation of the test error. Though the benchmarks do not require support/query splits, it is necessary to compute the bound and the test error on independent sets to ensure the reliability of the results. Since the query/support split is done in a random 50/50 fashion for our approaches, we similarly split the datasets for the benchmarks to compute the bound and the test error.
**2.2** Concerning the labels shuffle experiment, we agree it would be an interesting use case for our algorithms and will add this experiment to the revised version of our manuscript.
**2.3** Concerning PB SCH and $c > 0$: we produce new results (see: https://imgur.com/a/r45Wq56) depicting the test error and generalization bound for PB SCH as a function of both the compression set size and the message size. We see that the message is better-suited for the minimization of the test error, but a trade-off between the compression set size and the message size is required to obtain the best bounds (In Table 1 and 2, we used the validation error to select the compression set size and message size). Interestingly, when we enforce a small message size, the benefit of using $c>0$ becomes apparent.
**2.4** We also present the contribution of each of the terms in the bound to the bound value for a few algorithms on the 200 pixels swap (see here: https://imgur.com/a/9d4fOvB) and on the CIFAR100 binary task (see here: https://imgur.com/a/GSyoggu). In the figures, the cumulative contributions are displayed, while in the tables, the marginal contributions are displayed. The bounds are decomposed as follows:
- The observed meta train “error”
- The “confidence penalty”, which corresponds to the term $-\ln(\delta)$ in Theorem 2.1 and the similar terms in other bounds
- The “complexity term”, which corresponds to the KL factor in the PAC-Bayes bounds. The latter is further decomposed into the compression set probability and the message probability in our sample compression-based bounds.
When considering the decomposition on the 200 pixels swap experiment, we see that our approach, despite having a larger error term, relies on a small message probability and a null compression set probability to yield competitive bounds. In contrast, for Pentina & Lampert, the complexity term profoundly impacts the bound, making it non-competitive. As for the decomposition on the CIFAR100 experiments, it is interesting to see that the bound from Zakerinia et al. and the one from PB SCH have a similar decomposition, whereas SCH$_+$, despite being penalized by the message probability, relies on a better treatment of its error and confidence term to obtain the best bound of the four considered algorithms. This is empirical evidence of the tightness of our bounds compared to those of the runner-ups, all factors (error, confidence $\delta$, …) being kept equal, thanks to the non-linear comparator function $\Delta$ (see Theorem 2.1).
**3. Essential References Not Discussed**
We agree that discussing and conceptually comparing prior work on hypernetworks will enrich the paper. We thank the reviewer for the references from the federated learning literature. They are great examples of the benefits of meta-learned representations and hypernetworks. On a similar note, we kindly point toward “Rebuttal KTTB, points 2. and 6.”, where the differences between our approaches and model compression approaches are discussed.
**4. Questions For Authors**
Concerning the large bound values, the mathematical expressions of many bounds can give values greater than one. This is especially true when the bound is linear, i.e., of the form “true error $\leq$ empirical error + complexity term” (which is the case for many benchmarks), since the complexity term is usually not upper-bounded. The calculated bound values for Amit & Meir (2018) in Table 2 are truly around 1000, as the complexity term there is too large. We point out that all of our bounds are bounded in the interval $[0,1]$, so that no vacuous (trivial, $>1$) bound can be output.
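As a generic numerical illustration of this point (our own sketch, not the paper's exact bound expressions): a linear-form bound "empirical error + complexity" can exceed 1 when the complexity term is large, whereas a bound obtained by inverting the binary kl divergence is capped at 1 by construction.

```python
import math

def binary_kl(q, p):
    """kl(q||p) between Bernoulli parameters, with clipping to avoid log(0)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def kl_inverse_upper(q_hat, c):
    """Largest p in [q_hat, 1) with kl(q_hat||p) <= c, found by bisection.
    A kl-style PAC-Bayes bound is evaluated this way, so it is <= 1 by
    construction, unlike linear bounds of the form q_hat + sqrt(c/2)."""
    lo, hi = q_hat, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if binary_kl(q_hat, mid) <= c:
            lo = mid
        else:
            hi = mid
    return lo

q_hat, c = 0.2, 2.0          # a large complexity term c (made-up numbers)
linear_bound = q_hat + math.sqrt(c / 2)
kl_bound = kl_inverse_upper(q_hat, c)
print(linear_bound > 1.0)    # True: the linear form can be vacuous
print(kl_bound <= 1.0)       # True: the inverted-kl form never is
```

The same empirical error and complexity value thus yield a vacuous linear bound but a non-trivial kl bound, matching the rebuttal's remark.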
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I'm still confused about the experiments, and the bounds reported.
The bounds from prior works in the paper are meta-learning bounds, and are computed during the meta-training phase. I.e. If there are n training tasks with datasets S_1, ..., S_n the bounds are obtained from S_1, ..., S_n for the expected performance of a future task with dataset S (So a two level generalization bound). However, the bounds in this paper, are bounds based on S, given the results of the meta-learning. These quantities are not comparable. Can you clarify what are the reported bounds for the baselines?
---------------------------------------------------------------
Update: Thanks for providing the source code. I checked the code for (Guan & Lu, 2022), and it seems the reported numbers are for meta-test tasks and not the meta-learning bounds. Therefore, based on the code the comparison in the paper is valid (I didn't find the similar part in the second source, but I assume you did the same). However, I strongly suggest to make the differences clear in the paper, and also make your results reproducible by providing the implementation. I increase my score.
---
Reply to Comment 1.1.1:
Comment: We apologize for the confusion concerning the reported bounds for the baselines. You are right that “If there are $n$ training tasks with datasets $S_1, ..., S_n$ the bounds are obtained from $S_1, ..., S_n$ for the expected performance of a future task with dataset $S$”, while our bounds depend on observations made on a meta-test task. More precisely:
- For (Pentina & Lampert, 2014), (Amit & Meir, 2018), and (Guan & Lu, 2022) benchmarks, we used the implementations of the learning algorithms and bound computations by (Guan & Lu, 2022) (see https://proceedings.mlr.press/v162/guan22b.html, Related Material) ;
- For (Rezazadeh, 2022) and (Zakerinia et al., 2024), we used the implementations of the learning algorithms and bound computations by (Zakerinia et al., 2024) (see https://github.com/hzakerinia/Flexible-PAC-Bayes-Meta-Learning/). For example, the bounds reported for "(Guan & Lu, 2022) - kl" and "(Guan & Lu, 2022) - Catoni" correspond respectively to Theorem 3 and 4 in their article, while the bound reported for "(Zakerinia et al., 2024)" corresponds to Theorem 3.1 in their article.
You are also correct that our bounds and those of the baselines above are not directly comparable, as our methods provide a generalization bound for each specific downstream predictor outputted by our proposed hypernetwork (once a new dataset $S$ is observed). Still, the goal is to certify the result of a meta-learning process, and we reported all bound values to provide comparison points (we are not aware of other works that provide “generalization bounds via meta-learned model representations” as we did). As written above, we will make sure to emphasize this to avoid misconceptions. We will also detail the baseline bounds computation in the appendix. | Summary: The paper provides novel PAC-Bayesian bounds for meta-learning within the sample compression framework. The approach is based on the hypernetwork architecture. The paper also provides an experiment to show that the proposed bounds can be tighter than prior works. The key technical innovation is extending the sample compression bound (Bazinet et al., 2024) into a setting with real-valued messages.
### update after rebuttal
I am maintaining the current score following the rebuttal.
Claims And Evidence: The claims made in the submission are well supported.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: The theorems in the main text look correct to me.
Experimental Designs Or Analyses: Yes, the experiment setup looks correct to me.
Supplementary Material: N/A
Relation To Broader Scientific Literature: The key contribution (Theorem 2.4) extends the prior bound for sample compression (Bazinet et al., 2024) to the setting where messages are real-valued.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: 1. The hypernetwork framework is rather complicated. It would be helpful if the authors considered simpler alternative frameworks and included a discussion explaining why each component is important to their approach.
2. The experimental results show mixed performance - good on CIFAR but not ideal on the MNIST pixel swap task. Including more experiments would be more convincing in demonstrating that the proposed method works effectively across different scenarios.
Other Comments Or Suggestions: 1. A typo on line 53 'a'
2. It would be nice to provide real-world examples of "message" in the preliminary section.
3. Line 223, what is $\mu$, in the equation (5) ?
4. For the computed bounds, it would be helpful to include a short description that explains how much better the bounds are compared to prior work, since the bounds are rather complicated and it is not easy to interpret the results.
Questions For Authors: 1. The bound computation on page 5 is for a uniform distribution on J and the message. The choice is quite simple but seems arbitrary. What kind of distribution would lead to a better bound compared to a uniform one?
2. Regarding Table 1, the best results are bolded. For the bound column, is the bolded value the one with the smallest gap with the true test error?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading of the paper.
**1. Other Strengths And Weaknesses**
**1.1.** Concerning the complexity of the proposed framework, the encoder-decoder architectures are central to our contributions; each component has its unique role in the whole. We are open to performing new experiments if the reviewer has a suggestion on simplifications. That said, we suggest extending the current Appendix G, accompanying Figures 7 to 9 with a comprehensive description and motivation of every choice we made concerning the encoder-decoder architecture.
**1.2.** Concerning the mixed empirical performances, we report empirical evaluation in unfavorable (Table 1) and favorable (Table 2) environments to investigate a new family of methods honestly. Nevertheless, we agree that adding more experiments could reinforce the encouraging performances on the meta-learning binary variants of MNIST and CIFAR100.
That is why we crafted new heatmaps (see https://imgur.com/a/r45Wq56) depicting the test error and generalization bound for PB SCH as a function of both the compression set size and the message size (Recall that Tables 1 and 2 report the performances of the models obtaining the best validation error). These detailed results help in grasping the inner workings of our proposed approach. They depict that using a large message is better-suited for minimizing the test error, but a trade-off between the compression set size and the message size is required to obtain the best bounds. Interestingly, when the message size is restricted to be small, we clearly see the benefit of using $c>0$.
**2. Other Comments Or Suggestions**
Many thanks for pointing out the typos and ambiguities. We will give an example of “message” in Section 2.2 and explain our bounds compared to the ones from the literature in the appendix.
**3. Questions For Authors**
**3.1.** Concerning the choice of a prior distribution over the compression set and the (discrete) message, one can think of using a prior that gives more importance to smaller compression sets and shorter messages. In our current approach, these values are hyperparameters that are fixed, but as discussed in 1.2, the elaboration of an algorithm that selects these quantities as parameters would benefit from such priors. These are interesting options that we will mention as future research directions.
**3.2.** In our tables, the bolded values correspond to the smallest bound values, not those with the smallest gaps. | null | null | null | null | null | null |
An Online Adaptive Sampling Algorithm for Stochastic Difference-of-convex Optimization with Time-varying Distributions | Accept (oral) | Summary: In this paper, the authors propose an online adaptive sampling algorithm for solving nonsmooth DC problems under time-varying distributions.
Their major technique is the development of a convergence rate for the sample average approximation of subdifferential mapping.
Based on the technique, they show their algorithm converges subsequentially to DC critical points almost surely under proper assumptions.
## update after rebuttal
The author's reply makes sense, so I keep my score.
Claims And Evidence: The authors propose stochastic algorithms to solve the nonsmooth DC problem, and they provide the asymptotic convergence of their algorithm under mild assumptions.
Their theoretical findings are both solid and interesting.
My only concern for this paper is what machine learning applications belong to the class of stochastic DC problems under time-varying distributions.
I have noticed the author listed the online sparse robust regression as an application of their problem.
Could you provide more machine-learning examples of the nonsmooth DC problem?
In addition, is there any real-world time-varying data distribution that satisfies Assumption 4.6, could you provide some examples?
Methods And Evaluation Criteria: The proposed methods and evaluation criteria for solving the problem make sense to me, and the author provides a rigorous theoretical analysis of their methods.
Theoretical Claims: See above.
Experimental Designs Or Analyses: The experimental designs are valid and sound.
Supplementary Material: I have read the proof of theorems in Sections 3 and 4. Overall, the proofs in the supplementary materials are sound and easy to follow.
Relation To Broader Scientific Literature: See above.
Essential References Not Discussed: I think most related works are cited and discussed.
Other Strengths And Weaknesses: There are many works developing stochastic methods with non-asymptotic convergence rates for nonconvex nonsmooth optimization problems.
Is it possible to show that the proposed methods achieve a non-asymptotic convergence rate for solving stochastic DC problems?
Other Comments Or Suggestions: I found some small typos in the paper:
* In line 209, did you miss a square for the $\delta_n$?
* In assumption 4.1, I think it is better to say $\rho_g$-strongly convex.
* In Lemma 4.2, there is no definition of the function $f_t$, although it is not hard to guess.
* The conclusion and future work section is missing in this paper.
* In Line 708, what is $\alpha_h$? Should it be $\alpha'$?
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your careful reading and valuable feedback. We address your comments as follows:
1. There are some other machine learning problems with a nonsmooth DC structure. It is well known that piecewise linear functions are DC. Since they guarantee both robustness and continuity, they can serve as surrogate loss functions for binary classification. A direct example is calculating the AUROC (area under the ROC curve) of a predictive function $h(\cdot,w)$:
$$AUROC(h(\cdot,w)) = Pr(h(x_+,w) > h(x_-,w)) = E_{x_+ \sim P_+, x_- \sim P_-} [\mathbb{1}(h(x_+,w) > h(x_-,w))],$$
where $P_+$ is the distribution of positive examples, $P_-$ is the distribution of negative examples, and $h(x,w) : \mathcal{X} \to \mathbb{R}$ is a predictive function parameterized by a vector $w \in \mathbb{R}^d$. $\mathbb{1}(\cdot)$ is an indicator function of a predicate.
Let $\ell(w; x, x') = \ell(h(x',w) - h(x,w))$ denote a pairwise surrogate loss for a positive-negative pair $(x, x')$ to approximate $\mathbb{1}(\cdot)$. If $h(x,\cdot)$ and the surrogate loss $\ell(\cdot~; x, x')$ are both piecewise linear, the minimization problem with respect to $w$ can be formulated as a stochastic DC problem.
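As a concrete instance of such a piecewise-linear surrogate (our illustration, not taken from the rebuttal itself), the classic ramp loss admits an explicit difference-of-convex decomposition into two hinge terms:

```latex
% Illustrative DC decomposition (assumption: ramp loss as the surrogate).
% The ramp loss is piecewise linear and bounded, hence a robust,
% continuous surrogate; it is a difference of two convex hinge functions:
\ell_{\mathrm{ramp}}(t) \;=\; \min\{1,\,\max\{0,\,1+t\}\}
\;=\; \underbrace{\max\{0,\,1+t\}}_{g(t)\ \text{convex}}
\;-\; \underbrace{\max\{0,\,t\}}_{h(t)\ \text{convex}}.
```

One can check the identity case by case: both sides equal $0$ for $t \le -1$, $1+t$ for $-1 \le t \le 0$, and $1$ for $t \ge 0$.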
For the time-varying data distribution, Section 6 provides an example of time-varying multivariate normal distributions that satisfy the assumption. Since the Wasserstein-1 distance between common distributions is easy to calculate or control, it is not hard to construct examples of time-varying exponential or uniform distributions that satisfy the assumption. We will add some examples in the final version of the paper. A simple but direct real-world example is a problem with finitely many outliers or finitely many distribution shifts (due to changes in the environment).
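As a small numerical illustration (ours, not part of the rebuttal): for one-dimensional empirical distributions with equally many samples, the Wasserstein-1 distance reduces to the mean absolute difference of the sorted samples, which makes a drifting normal mean easy to track.

```python
import random

def wasserstein1_empirical(xs, ys):
    """W1 between two equal-size 1-D empirical distributions:
    the mean absolute difference of the sorted samples (order statistics)."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

random.seed(0)
n = 20000
# Samples from N(0, 1) and a mean-shifted N(2, 1); for equal variances,
# the population W1 distance is exactly the mean shift |2 - 0| = 2.
p = [random.gauss(0.0, 1.0) for _ in range(n)]
q = [random.gauss(2.0, 1.0) for _ in range(n)]
w1 = wasserstein1_empirical(p, q)
print(round(w1, 2))  # close to 2
```

This is only the 1-D case; the multivariate normal example in the paper's Section 6 requires the general Wasserstein-1 distance, but the same "easy to calculate or control" point applies.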
2. Establishing non-asymptotic rates remains an interesting but challenging problem. The derivation of Theorems 3.5.(ii) and 3.6.(iii) in [1] suggests that proving strict non-asymptotic rates still requires a smoothness assumption on either $g$ or $h$, even in the deterministic setting. Without such assumptions, obtaining non-asymptotic guarantees becomes significantly more difficult. This challenge persists unless a relaxed convergence criterion, such as nearly $\epsilon$-critical points, is considered.
[1] Hoai An Le Thi, Van Ngai Huynh, Tao Pham Dinh, and Hoang Phuc Hau Luu. Stochastic difference-of-convex-functions algorithms for nonconvex programming. SIAM Journal on Optimization, 32(3):2263–2293, 2022.
3. We sincerely appreciate your meticulous attention to detail. We agree that saying "$\rho_g$-strongly convex" is better. For the typos in lines 209 and 708, we will carefully revise them and ensure accuracy in the final version. We will also add a conclusion section. For reference, the definitions of $f_t$, $g_t$, and $h_t$ can be found in line 243 (left).
Thank you again for your valuable feedback. Your insights are greatly appreciated and will help improve the clarity and rigor of our work. | Summary: The authors address the minimization of a function defined as the difference of two convex functions.
Moreover, these two convex functions are expressed as the expectations of random functions.
The authors then propose online estimators based on an adaptive sampling algorithm.
Claims And Evidence: The proofs seem solid.
Methods And Evaluation Criteria: The simulation work is somewhat limited but very promising. However, the methods have not been applied to real data.
Theoretical Claims: The proofs seem true.
Experimental Designs Or Analyses: N/A
Supplementary Material: N/A
Relation To Broader Scientific Literature: The key contribution consists in considering subdifferentiable sets while in the existing literature, authors often consider smooth functions.
Essential References Not Discussed: Not to my knowledge.
Other Strengths And Weaknesses: The proposed methods and results are very interesting.
However, I have a few remarks:
- I believe the discussion on the number of data points to generate should be expanded. The approach appears computationally intensive since, at each step, a larger dataset needs to be simulated than in the previous step. That said, I do agree that the simulations suggest the proposed method is computationally faster than existing methods.
- The paper is somewhat difficult to read, partly due to the complexity of the problem studied and partly because of the large number of technical lemmas in the core of the paper (e.g., Lemmas 3.3 and 4.2). These lemmas do not seem to aid comprehension and instead make the paper heavier. It would be better to move them to the appendix, freeing up space for more in-depth simulations or an application to real data.
- The proofs are challenging to follow, with some steps moving quite quickly.
Other Comments Or Suggestions: It's just a suggestion, but wouldn't the proofs have been simpler (at least to establish the almost sure convergence of the estimators) by using Robbins-Siegmund's theorem?
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your careful reading and valuable feedback. We address your concerns as follows:
1. We appreciate your comments on the simulation work. Our primary goal was to verify the theoretical validity of our method rather than to apply it to real data. The numerical result has demonstrated that our algorithm is efficient and effective in a simple but common problem. Your suggestion is very meaningful, and we plan to apply our algorithm to real and large-scale data in future work.
2. Regarding the sublinearly increasing dataset size, this aspect is inherent to our approach and difficult to avoid, as it ensures the necessary accuracy of the algorithm at each step. The choice of sample size and step size remains a fascinating topic in stochastic optimization. Even in standard smooth SGD without further assumptions, a non-vanishing step size requires a sublinearly increasing sampling size to guarantee convergence, since the variance reduction procedure is unavoidable.
In our paper, the proximal terms $\mu_t$ for each DC subproblem can be pre-selected arbitrarily, as long as they are upper and lower bounded by positive constants. At time $t$, our $O(t^{2+\epsilon})$ sample size for $g$ and $O(t^{1+\epsilon})$ sample size for $h$ match the order of the smooth case. Moreover, our adaptive strategy is also designed to control the sample sizes when the current iterate is far from the critical points. Our sample size is already an almost tight result, thanks to the novel $O(\sqrt{p/n})$ pointwise convergence rate for the SAA of subdifferential mappings.
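A schematic sketch of sublinearly growing sample-size schedules of the stated orders (our simplification with made-up constants; the paper's actual adaptive rule is more involved and also reacts to the current iterate):

```python
# Hypothetical polynomial sample-size schedules of orders O(t^(2+eps))
# for the g-component and O(t^(1+eps)) for the h-component.
EPS = 0.1  # made-up epsilon

def sample_size_g(t):
    # Samples drawn for estimating g's subdifferential at step t.
    return int(t ** (2 + EPS)) + 1

def sample_size_h(t):
    # Samples drawn for estimating h's subdifferential at step t.
    return int(t ** (1 + EPS)) + 1

for t in (1, 10, 100):
    print(t, sample_size_g(t), sample_size_h(t))
```

The per-step growth is polynomial but the exponent for $h$ is one lower than for $g$, which is why the total sampling cost is dominated by the $g$-component.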
3. To improve readability, we will consider moving some of the technical lemmas to the appendix while ensuring that the main ideas remain accessible in the main text. We also recognize that certain proof steps move quickly, and we will provide additional explanations and guidance to enhance clarity. We are also ready to include an additional simulation in our paper.
4. We agree that Robbins-Siegmund's theorem could be used in Theorems 4.8 and 5.3. However, the major part of our proof could not be replaced by this. The convergence of some series, such as the ones on the right-hand side of (18), remains essential; Robbins-Siegmund's theorem cannot simplify these. Regarding Theorem 5.3, we must separately consider iteration steps that satisfy the Summable Condition and the Stepsize Norm Condition, even if the theorem is used.
That being said, we acknowledge Robbins-Siegmund's theorem as an insightful tool for guiding our convergence analysis, and we will add a remark discussing its relevance. However, the proofs themselves would not be significantly simplified by directly applying the theorem.
Thank you again for your thoughtful feedback. We will incorporate these improvements to enhance the clarity and accessibility of our work. | Summary: The paper studies stochastic difference-of-convex (DC) optimization. The analysis accounts for distribution shifts, and for non-smoothness of the components is derived, introducing some non-trivial technical contributions. The obtained algorithm is validated in a numerical experiment.
Claims And Evidence: The paper is extremely well written in my opinion. All claims are motivated and situated with respect to prior work, which makes it rather easy to follow even for non-experts.
Methods And Evaluation Criteria: The proposed methods make sense, and are essentially generalizations of previously discussed work.
Theoretical Claims: I did not closely check the correctness of the claims. I believe they are overall correct, as the proofs are sketched and motivated in the main text in a rather convincing manner, and the main claims generalize prior known results.
Experimental Designs Or Analyses: I did not check the validity of the experiment itself.
Supplementary Material: I only skimmed over some of the mathematical derivations in the appendix.
Relation To Broader Scientific Literature: The authors do a fantastic job in my opinion situating this work with respect to prior works on this topic. Even small technical derivations are compared to prior analogous results, as well as the main ideas.
Essential References Not Discussed: I am not aware of essential references which are missed.
Other Strengths And Weaknesses: As I mentioned, the paper is extremely well written in my opinion.
The technical contributions, even beyond the end results, are quite nice.
In particular, accounting for distribution shifts in the descent lemma by incorporating a Wasserstein distance term (Lemma 4.5) is a very interesting idea, which I have not seen before. This approach is really nice, and can be applied to many other optimization scenarios.
Other Comments Or Suggestions: Additional comments:
- Remark 2 is unclear to me, what does an isomorphic map imply here and why?
Questions For Authors: - Assumption 3.2 confuses me - wouldn't this correspond later to the gradient being Lipschitz? If so, this would require smoothness, which the authors are trying to avoid. I kindly ask the authors to clarify this issue.
Questions to the authors:
- Is the idea of incorporating a Wasserstein distance term in the descent lemma (Lemma 4.5) novel in this work?
- It would be nice to derive non-asymptotic rates, which seems doable via the provided descent lemma (perhaps under further quantifiable assumptions). Is there a clear difficulty in doing so? It is fine to leave this for future work nonetheless.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your careful reading and valuable feedback. We address your questions as follows:
1. **Remark 2:** The main idea is that if there exists an isomorphic mapping between the probability spaces of the random variables $\xi$ and $\zeta$ associated with $G$ and $H$ (e.g., if $\xi$ and $\zeta$ originate from the same probability space, share the same distribution, or even are the same random variable), then we only need to sample the variable with the higher sampling demand. The other variable’s samples can be obtained directly via this mapping, reducing the overall sampling cost without affecting convergence.
2. **Assumption 3.2** concerns only the Lipschitz continuity of the original function, not its gradient. The function itself may not even be differentiable, e.g., $\varphi(x,\omega)=|x-\omega|$ with corresponding $L_{\varphi}=1$. We require $\varphi(\,\cdot,\omega)$ to be Lipschitz continuous in $x$ with a universal constant $L_{\varphi}$, independent of $\omega$. Our result does not assume any smoothness.
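A quick numerical sanity check of this example (our own illustration, not from the paper): by the reverse triangle inequality, $\varphi(x,\omega)=|x-\omega|$ satisfies $|\varphi(x,\omega)-\varphi(y,\omega)| \le |x-y|$ uniformly in $\omega$, i.e. $L_\varphi = 1$, even though $\varphi$ is nondifferentiable at $x=\omega$.

```python
import random

def phi(x, omega):
    # The rebuttal's example of a nonsmooth but 1-Lipschitz function.
    return abs(x - omega)

random.seed(1)
# The bound |phi(x,w) - phi(y,w)| <= |x - y| must hold for every omega
# with the same constant L_phi = 1 (reverse triangle inequality).
worst_ratio = 0.0
for _ in range(10000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    omega = random.uniform(-5, 5)
    if abs(x - y) > 1e-3:  # skip near-coincident pairs (rounding blow-up)
        ratio = abs(phi(x, omega) - phi(y, omega)) / abs(x - y)
        worst_ratio = max(worst_ratio, ratio)
print(worst_ratio <= 1.0 + 1e-9)  # True
```

The worst observed ratio sits at (essentially) 1, attained whenever $x$ and $y$ lie on the same side of $\omega$; the key point is that the constant does not depend on $\omega$.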
3. The idea of using the Wasserstein distance to detect distribution shifts in online or decision-dependent optimization is not new, see [1,2]. However, we are the first to directly incorporate a Wasserstein distance term into the descent lemma and establish almost sure convergence.
[1] Drusvyatskiy D, Xiao L. Stochastic optimization with decision-dependent distributions. Mathematics of Operations Research, 2023, 48(2): 954-998.
[2] Che E, Dong J, Tong X T. Stochastic gradient descent with adaptive data. arXiv preprint arXiv:2410.01195, 2024.
4. Establishing non-asymptotic rates remains an interesting but challenging problem. The derivation of Theorems 3.5.(ii) and 3.6.(iii) in [3] suggests that proving strict non-asymptotic rates still requires a smoothness assumption on either $g$ or $h$, even in the deterministic setting. Without such assumptions, obtaining non-asymptotic guarantees becomes significantly more difficult. This challenge persists unless a relaxed convergence criterion, such as nearly $\epsilon$-critical points, is considered.
[3] Hoai An Le Thi, Van Ngai Huynh, Tao Pham Dinh, and Hoang Phuc Hau Luu. Stochastic difference-of-convex-functions algorithms for nonconvex programming. SIAM Journal on Optimization, 32(3):2263–2293, 2022.
Thank you again for your valuable feedback. We appreciate your insights and will carefully consider them in our revisions. | Summary: This paper proposes algorithms for solving a stochastic DC program in a time-varying setting. Specifically, the distributions used to define stochastic convex components may vary over time and are assumed to converge to the true distributions. The proposed algorithm is a variant of the classic DC algorithm and uses SAA to estimate the current distributions on the fly. The main result is an almost sure convergence guarantee to DC critical points, up to taking subsequences. To prove this result, the authors develop a new upper bound on the estimation error of the SAA scheme for convex subdifferentials in terms of the excess of one set over another. Overall, I think the contribution is interesting and meaningful for solving a difficult stochastic DC program.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes, I have checked some of the proofs in Appendices A and B, and they look good to me.
Experimental Designs Or Analyses: No.
Supplementary Material: I have checked some of the proofs in Appendices A and B, and they look good to me.
Relation To Broader Scientific Literature: The new algorithmic framework in this paper is built on the classic DCA and SAA schemes. The authors propose new theoretical results to determine the number of samples sufficient for almost sure convergence to a DC critical point.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Overall, I think this paper makes good and interesting contributions to DC programming in the modern stochastic setting. I only have the following comments.
- L019, right: I suggest using the term "DC critical point" here rather than "critical point," since the latter is sometimes used interchangeably with "stationary point" and has a totally different meaning compared to a DC critical point.
- L127, right: It seems $\mathbb{P}(\Omega)$ is a set of distributions. Hence, it is not clear to me what the meaning of $\xi \sim \mathbb{P}(\Omega)$ is.
- L160, left: It seems that $g, h$ here are convex extended-real-valued functions, since you need to use the indicator function $i_C:\mathbb{R}^p\to\overline{\mathbb{R}}$ to represent the constraint $x \in C$. However, I cannot find their concrete definitions.
- L160, right: The function $\phi$ should be $\varphi$?
- L280, right: Compared with the convergence conditions in L328, left, these two summable conditions seem a bit stringent. It would be illustrative if a concrete example were discussed that satisfies this summable condition.
- L806: Lemma 4.3 should be Corollary 4.3.
- L869: \bar{z}_{n} should be \bar{z}_{n_t + 1}.
- Some papers are listed in the references without a citation in the main paper, e.g., (Kantorovich, 1958), (Goldstein, 1977), (Geyer, 1994), (Mehta, 2016, 2014), (Liu et al., 2018) and many others.
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your careful reading and valuable feedback. Below, we provide clarifications regarding the comments you raised:
1. **L019, right:** Thank you for pointing this out. In this paper, "critical point" specifically refers to a "DC critical point," following the convention in other DC programming literature. We will add a paragraph in Section 2 to clarify this distinction.
2. **L127, right:** Thanks for pointing out this improper statement. We will change the statement to the following:
*Let $\mathbb{P}(\Omega)$ denote the set of Radon probability measures on $\Omega$, where each measure $P \in \mathbb{P}(\Omega)$ has a finite first moment. That is, $\mathbb{E}_{\xi \sim P}[d(\xi, \xi_0)] < \infty$ for some $\xi_0 \in \Omega$.*
3. **L160, left:** We will modify the definition of $g$ to be extended-real-valued functions so that it covers the indicator function. The function $h$ is real-valued to avoid making $f-g = -\infty$. Thank you again for pointing this out.
4. **L280, right:** These conditions seem to be necessary. In Section 6, we provide an example of online sparse robust regression to illustrate their role. We will add another example directly after the summable assumptions for further illustrations, as suggested by the reviewer.
5. **L160, right; L806; L869; references:** We appreciate your careful attention to detail. We will correct these typographical errors and remove uncited references accordingly.
Thank you again for your valuable feedback. We will carefully incorporate these revisions to improve the clarity and precision of our work. | null | null | null | null | null | null |
Federated Causal Structure Learning with Non-identical Variable Sets | Accept (poster) | Summary: The paper introduces FedCDnv, a novel algorithm for federated causal discovery where clients observe non-identical but overlapping variable sets. A key challenge in this scenario is the spurious dependencies introduced by non-overlapping variables. To address this, the paper proposes a two-level priority selection strategy that aggregates local graphs from each client to form a global causal graph. The paper demonstrates the effectiveness of FedCDnv through extensive experiments on synthetic, benchmark, and real-world datasets, showing improvements over existing methods.
Claims And Evidence: Some claims in the paper are not fully supported by strong evidence. For example, the authors claim that FedCDnv can achieve federated causal discovery while "preserving data privacy" (line 93-94).
This claim seems overstated, since the paper only mentions that "FedCDnv exchanges structural information rather than raw data, protecting data privacy to a certain extent", without any specific design addressing the privacy concerns.
Another claim is that FedCDnv works even "when the sample distributions differ across clients" (Assumption 2.2, lines 101-105). However, the paper does not offer theoretical analysis or empirical results to support this point.
Methods And Evaluation Criteria: Methods:
The idea of aggregating both "good" and "correct" relationships using the concept of "stable relationships" is novel and well-motivated. However, some details need further explanation:
* In Algorithm 1 (line 7) and lines 250-251, the paper claims to integrate all local PAGs into a global graph by taking the union of nodes and edges. However, there is no analysis of this union operation. Consider the following situations:
* If a client learns a wrong edge, it becomes part of the global graph. What effect does this have on the final result?
* When different clients report conflicting directions for an edge, how is the conflict resolved? Is it merged into a bi-directional edge, and how does this impact the outcome?
* In Algorithm 2, lines 9–12, the paper applies an "orientation rule" on the client side. The rationale behind this rule is not sufficiently explained, and further analysis is needed to clarify its effectiveness and correctness.
Evaluation Criteria:
The paper uses False Discovery Rate (FDR) to evaluate the effectiveness of FedCDnv in identifying the definite causal and non-causal relationships in Sec 4.3. It would be helpful to explain why FDR is chosen over other metrics and to also include results on recall for a more complete evaluation.
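To make the concern concrete, here is a minimal Python illustration with hypothetical confusion counts (not taken from the paper): a method can achieve a perfect FDR simply by predicting very few edges, at the cost of low recall.

```python
# Hypothetical confusion counts (illustrative only, not the paper's results).
tp, fp, fn = 2, 0, 8   # a conservative predictor: 2 predicted edges, all correct

fdr = fp / (tp + fp)   # fraction of predicted edges that are false discoveries
rec = tp / (tp + fn)   # fraction of ground-truth edges that were recovered

print(fdr, rec)        # FDR looks perfect while most true edges are missed
```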
Theoretical Claims: There are several issues with proofs of the theorem and lemmas in the paper:
Theorem 3.1: There is an issue with case (2.c). In the provided example, the set A_X^{G} should be {A, Y} rather than {A, B}. This error calls the correctness of this case into question.
Lemma 3.2: This lemma has several problems.
* First, the statement is unclear. It introduces Z_n as a set of "variable pairs" (like <X,Y>), yet later it states the "single node" X \in Z_n. This inconsistency needs to be addressed.
* Second, the proof is not well-explained.
* In case (1), the learned causal relationship A->B aligns with the ground truth and should be definite. However, the lemma claims it is non-definite.
* In case (2), the lemma says that X and Y "might" be adjacent, which suggests uncertainty, yet it does not clarify what happens if they are learned as non-causal.
Overall, the proof of Lemma 3.2 is not convincing.
Lemma 3.3: The proof seems to assume that X and Y are adjacent in the local graph, but this assumption is not mentioned in the lemma statement, which reduces clarity.
Lemma 3.4 appears to be correct.
Experimental Designs Or Analyses: In Assumption 2.2, the paper claims that FedCDnv works when the sample distributions differ across clients. However, there is no empirical evidence to support this claim.
In Section 4.3, the authors use the false discovery rate (FDR) as an evaluation metric for definite relationships. It would be helpful to explain why FDR is chosen over other metrics and to also include results on recall for a more complete evaluation.
The results on performance across different numbers of clients are presented in the appendix. These results could be moved to the main paper, as the number of clients is an important factor in federated learning and deserves more emphasis.
Supplementary Material: All parts of the supplementary material are reviewed.
Relation To Broader Scientific Literature: The paper studies federated causal discovery when clients observe non-identical but overlapping variable sets, which is a novel and well-motivated problem.
The idea of using "stable relationship" to aggregate both "good" and "correct" relationships from local causal graphs to form a global causal graph is novel and effective.
The experimental results indicate that FedCDnv outperforms state-of-the-art methods across varying numbers of nodes and clients, demonstrating the effectiveness of the proposed method.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: There are certain points that need to be clarified in the paper.
* In Section 3.2.1 (lines 179-181), the paper states that: "The first level is to determine whether the adjacency between X_i and X_j is caused by relative latent variables, which is detected by Lemma 3.4". However, Lemma 3.4 only accounts for bidirected edges. What happens if the adjacency between X_i and X_j is not a bidirected edge?
* In Section 3.2.1 (line 200), the definition of the p-value p_{ij}^{c^k'} is unclear. Does it represent the average of all p-values from the independence tests between X_i and X_j across all possible conditioning sets in c_k'?
* In Section 3.2.2 (lines 256-257), a brief description of "the rules described by Zhang" would be helpful. Specifically, it should clarify that these rules are about the orientation of edges in the causal graph.
* In Section 4.1 (line 288), the term "graph size" is vague. It should be explicitly defined as the number of nodes in the graph.
* The pseudocode for Algorithm 3 needs better formatting. The initial value of w_{ij}^{c^k'} is missing, and the first level of PSS is not included in the algorithm.
Other Comments Or Suggestions: N/A
Questions For Authors: 1. How does the method perform when the probability distribution of samples varies across clients?
2. What is the rationale behind using the "union" operation to aggregate the local PAGs into a global graph?
3. The issues mentioned in the Theoretical Claims part need to be addressed. (see the Theoretical Claims part for details)
4. What recall results were obtained when evaluating the performance of the identified definite relationships?
5. What's the performance of FedCDnv compared to the method proposed in [1] by Wang et al. (2023)? (see the Essential References Not Discussed part for details)
6. Could you clarify the first two points mentioned in the Other Strengths and Weaknesses section?
A clear and detailed responses to question 1-5 would help.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: $\textbf{Responses for “Questions For Authors” are as follows.}$
$\textbf{R1}$. Our method handles distributed data with varying sample distributions, where the observed variable sets are non-identical. We also experimentally evaluate the impact of $\delta$ on FedCDnv's performance. A lower $\delta$ indicates fewer overlapping variables and greater variability in the sample distribution. Fig. 12 shows that as $\delta$ decreases from 95% to 60%, FedCDnv's performance drops by approximately 10%.
$\textbf{R2}$. The "union" operation is applied only in the first aggregation phase, where adjacencies and arrowheads of all local graphs are transferred to a global graph. This global graph is then used to update the local graph in each client, without altering their skeletons and can even learn new orientations by using Zhang’s [2008 AI] rules (as illustrated in Fig. 3). The experiments in Figures 4-6 show a significant improvement in F1-orie.
$\textbf{R3-1 for Theorem 3.1}$. Theorem 3.1 assumes that <X,Y> is a non-overlapping variable pair. Under case (2.c), if A o→ X ←o Y o→ B holds, then any dataset observing X will not observe Y, indicating that $A_X^{G}$ cannot be {A,Y}.
$\textbf{R3-2 for Lemma 3.2 (First)}$. Sorry for this misunderstanding. $Z_n$ is the set of variables that appear in non-overlapping variable pairs. We will clarify it in the revised version.
$\textbf{R3-2 for Lemma 3.2 (Second)}$.
For case (1): Since the ground truth is unknown, we must consider both case (1) and case (2). When the condition in Lemma 3.2 is satisfied, either case (1) or case (2) occurs, but we do not know which one occurs, which prevents us from guaranteeing the correctness of the learned relationships. Therefore, Lemma 3.2 claims it is non-definite.
For case (2): Due to the unknown ground truth, as long as there exists any case where the learned relationship is non-definite, then the learned relationship is considered non-definite. Therefore, the situation that "they are learned as non-causal" no longer needs to be considered.
$\textbf{R3-3 for Lemma 3.3}$. The proof does not assume that X and Y are adjacent in the local graph. It only assumes that X and Y are not adjacent in the ground truth. Here, the learned non-causal relationship between X and Y does not satisfy the condition "{A,B} $\subseteq \mathcal{A}_X^G \cup \mathcal{A}_Y^G$" in Lemma 3.3, meaning the premise does not hold, and thus it cannot be discussed in Lemma 3.3.
$\textbf{R4}$. We use FDR to evaluate the reliability of FeddG by quantifying the proportion of false discoveries, where FeddG is the graph extracting only definite relationships from FedG. In contrast, recall is defined as True Positives / (True Positives + False Negatives). Since the denominator (True Positives + False Negatives) remains unchanged before and after extraction, the recall of FeddG (Rec-dC, Rec-dnC) is necessarily lower than (or equal to) that of FedG (Rec-C, Rec-nC). This is expected because FeddG is a subset of FedG, excluding non-definite relationships. The experimental results confirm this trend, as shown in the table below (due to limited space, only a few are shown here).
| nV | Rec-C | Rec-dC | Rec-nC | Rec-dnC |
|-----|----------------------|----------------------|----------------------|----------------------|
| 20 | 0.60869 $\pm$ 0.20393 | 0.41304 $\pm$ 0.28344 | 0.99491 $\pm$ 0.01685 | 0.97344 $\pm$ 0.03836 |
| 40 | 0.35476 $\pm$ 0.03688 | 0.22619 $\pm$ 0.02916 | 0.99567 $\pm$ 0.00638 | 0.98617 $\pm$ 0.00976 |
| 60 | 0.54615 $\pm$ 0.10288 | 0.40769 $\pm$ 0.10383 | 0.99825 $\pm$ 0.00261 | 0.99200 $\pm$ 0.00784 |
| 80 | 0.37650 $\pm$ 0.05856 | 0.23132 $\pm$ 0.07087 | 0.99868 $\pm$ 0.00154 | 0.98734 $\pm$ 0.00373 |
| 100 | 0.58360 $\pm$ 0.04306 | 0.34098 $\pm$ 0.05933 | 0.99883 $\pm$ 0.00158 | 0.98761 $\pm$ 0.00652 |
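The subset argument in R4 can be sketched with hypothetical edge sets (all names illustrative): since recall's denominator is the number of ground-truth edges, extracting a subgraph of the predictions can only lower or preserve recall.

```python
# Minimal sketch with hypothetical edge sets (not the paper's graphs).
def recall(predicted, truth):
    return len(predicted & truth) / len(truth)

truth  = {("A", "B"), ("B", "C"), ("C", "D")}   # ground-truth edges
fed_g  = {("A", "B"), ("B", "C"), ("A", "D")}   # full aggregated graph FedG
fedd_g = {("A", "B")}                           # definite subset FeddG of FedG

assert fedd_g <= fed_g                          # FeddG is a subset of FedG
assert recall(fedd_g, truth) <= recall(fed_g, truth)
```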
$\textbf{R5}$. Thanks for your comment, but there is no [1] that you mentioned.
$\textbf{R6-1 for Weaknesses (lines 179-181)}$. Thanks for your comment. If the adjacency between $X_i$ and $X_j$ is not a bidirected edge but takes forms such as $X_i$ o—o $X_j$, $X_i$ o→ $X_j$ and $X_i$ ←o $X_j$, we provide a condition to address such cases. Specifically, if for every $G_{k_n} \in \{G_{k_n}\}$, there exists $Z \subseteq O_{k_n}$ such that $X \perp Y \mid Z$ holds in $D_{k_n}$, and in every $G_{k_a} \in \{G_{k_a}\}$, the variables in $Z$ are never observed simultaneously ($X_i, X_j \notin Z$), then the conflicting adjacency arises due to non-identical observed variable sets. We will clarify this in the revised version.
$\textbf{R6-2 for Weaknesses (line 200)}$. If $X_i$ and $X_j$ are adjacent in $G_{k'}$, $p_{ij}^{c_k'}$ represents the average p-value from independence tests between $X_i$ and $X_j$ across all possible conditioning sets in $c_k'$. If $X_i$ and $X_j$ are non-adjacent in $G_{k'}$, $p_{ij}^{c_k'}$ corresponds to the p-value associated with the separating set that renders them independent. | Summary: This paper proposes FedCDnv, a novel federated method for learning causal structure where different clients observe non-identical variable sets. It mainly addresses two challenges: 1) spurious dependencies introduced by non-overlapping variable pairs, which may lead to incorrect causal conclusions, and 2) the varying importance of (non-)causal relationships between different variables within a client, which requires a careful aggregation mechanism. It also develops theory to detect spurious dependencies, defining stable relationships as those that are both "correct" and "good" across graphs discovered by multiple clients, and bridging local learning with federated aggregation. The experiments conducted on synthetic, benchmark, and real-world datasets support the claims.
Claims And Evidence: Yes. The paper provides theoretical and empirical support for its claims: the proofs support the detection of spurious dependencies, and the experiments include comparisons with distributed and federated CSL methods on synthetic, benchmark, and real-world datasets.
However, one potential weakness is that the paper does not deeply analyze or discuss the worst-case performance of FedCDnv, i.e., scenarios where FedCDnv underperforms or where its assumptions may not hold.
Methods And Evaluation Criteria: Yes. The proposed methods and the evaluation criteria make sense for the problem of FCD with non-identical variable sets, including multiple benchmark in synthetic and real-world datasets for the specific problem.
Theoretical Claims: Yes, the proofs for the proposed theory appear logically sound: Theorem 3.1 formalizes conditions under which non-overlapping variable pairs yield non-causal relationships, and Lemmas 3.2-3.3 formalize criteria for detecting spurious dependencies. Lemma 3.4 provides a heuristic rule for identifying unobserved confounders, though it might be better labeled as a "proposition" to reflect its heuristic nature.
Experimental Designs Or Analyses: Yes, the experimental setup is sound, as it includes distributed and federated CSL methods and several benchmarks. An improvement would be testing on more real-world datasets to enhance generalizability.
Supplementary Material: Yes. I reviewed the additional related work, definitions, proofs, and the extended experimental results.
Relation To Broader Scientific Literature: This work proposes a federated method for learning causal structure that extends prior work by allowing different clients to observe non-identical variable sets, considering the presence of latent variables, detecting spurious dependencies, and computing the varying importance of (non-)causal relationships between different variables within a client. Prior FCD methods, such as FedPC, FedCSL, and NOtears-ADMM, assume identical variable sets, while distributed methods, such as CDUIOV and CD-MiNi, ignore the potentially incorrect causal conclusions caused by non-overlapping variable pairs. Both gaps make FedCDnv a significant contribution to handling real-world non-identical variable sets.
Essential References Not Discussed: The paper could cite: (1) Alternative methods for handling latent confounders in federated learning, such as "Causal inference with latent variables: Recent advances and future prospectives." (2) Federated algorithms in other domains, such as "scFed: federated learning for cell type classification with scRNA-seq".
Other Strengths And Weaknesses: Strengths:
- The problem studied in this paper is both novel and practically significant. The challenges arising from non-identical variable sets across clients in federated settings are nontrivial and require careful consideration.
- The motivation for the study is clear and compelling, particularly regarding the spurious dependencies caused by non-overlapping variable pairs, which indeed require serious attention.
- The underlying assumptions and theoretical proofs are well-established, and the experimental evaluation provided is thorough, covering two critical aspects comprehensively.
Weaknesses:
- Alg.3 seems a bit misleading. In line 9, $w_{ij}$ is calculated using Eq. (2), which requires scaled p-values. However, it is unclear where these scaled p-values are derived from, and no further detailed explanation is provided.
- Limited real-world dataset evaluation, with most results relying on benchmarks.
- There is no discussion of worst-case performance, making it unclear how FedCDnv behaves under extreme heterogeneity.
Other Comments Or Suggestions: - Lines 273-274 in Page 5 "… FedG as definite ones, obtaining FeddG". The notation could be clarified for better readability.
- The notation of $\textbf{Z}$ and $Z$ in Alg. 2 is somewhat confusing, as $Z$ generally refers to a variable within the set $\textbf{Z}$. Additionally, the presentation in line 4 of Alg. 2 could be improved for greater clarity.
- Line 198R: $\frac{n_k}{n}$ should be corrected to $\frac{n_{k'}}{n}$.
- The capitalization format of section titles is inconsistent. For example, Section 3.4 should be titled "Privacy and Costs Analysis" for consistency. It is recommended that the authors thoroughly review the manuscript to ensure uniform formatting.
Questions For Authors: 1. Does the term “an oracle of conditional independence tests” refer to perfectly accurate CI tests?
2. Could the authors clarify the process of Alg.3?
3. How does the communication cost scale with an increasing number of clients? Have you explored optimizations for reducing overhead?
4. Can FedCDnv be extended to handle intervention data, given that CDUIOV explicitly models interventions?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: $\textbf{Responses for “Questions For Authors” are as follows.}$
$\textbf{Q1}$. Does the term “an oracle of conditional independence tests” refer to perfectly accurate CI tests?
$\textbf{R1}$. Yes, “an oracle of conditional independence tests” refers to fully accurate CI tests.
$\textbf{Q2}$. Could the authors clarify the process of Alg.3?
$\textbf{R2}$. Algorithm 3 is proposed for implementing the designed two-level priority selection strategy (PSS). The inputs include the updated local causal graph Pag$_u^{c_k}$, sample size $n_k$, and stability matrix R$_s^{c_k}$ of each client, while the output is the global causal graph FedG over the integrated variable set $O$.
During aggregation, adjacencies are determined by comparing $val_{ij}$ (for $X_i$ and $X_j$) with 0. The computation of $val_{ij}$ depends on $w_{ij}^{c_k}$ and the sample size weight $w_{c_k}$. Specifically, if the relationship between $X_i$ and $X_j$ is stable, $w_{ij}^{c_k}$ is based solely on $w_{c_k}$. Otherwise, it is derived from the product of $w_{c_k}$ and the $w_{ij}$ value computed by each client (as Eq. (2)). Stability is encoded in the stability matrix, where each entry represents the stability score (2, -2, 1, -1) multiplied by the scaled p-value. Consequently, $w_{ij}$ is computed by dividing each entry by the corresponding stability score.
For orientation aggregation, only arrows are considered. If any local graph contains an arrow X→Y, it is incorporated into FedG as X $\circ\hspace{-0.43em}\rightarrow$ Y.
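A minimal sketch of the adjacency vote described above; the sign convention (positive score supports adjacency), the stability test (|score| = 2), and all names are assumptions made for illustration, not details confirmed by the rebuttal.

```python
# Hypothetical sketch of the val_ij computation sketched in R2.
# Each client contributes: w_ck (sample-size weight) and a stability-matrix
# entry r = score * scaled_p with score in {2, -2, 1, -1}, so the local
# weight recovers as w_ij = r / score. Assumed conventions (illustrative):
# |score| == 2 marks a stable relationship; score > 0 supports adjacency.

def aggregate_adjacency(clients):
    val = 0.0
    for c in clients:
        score, r, w_ck = c["score"], c["r"], c["w_ck"]
        w_ij = r / score                      # scaled p-value contribution
        weight = w_ck if abs(score) == 2 else w_ck * w_ij
        val += weight if score > 0 else -weight
    return val > 0                            # adjacency decided against 0
```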
$\textbf{Q3}$. How does the communication cost scale with an increasing number of clients? Have you explored optimizations for reducing overhead?
$\textbf{R3}$. We have theoretically analyzed the communication cost of the proposed FedCDnv algorithm, which is $O(4md^2)$ and increases linearly with the number of clients. We are aware that the stability matrix R$_s^{c_k}$ contributes to communication costs. In future work, we will explore optimizations to reduce this overhead.
$\textbf{Q4}$. Can FedCDnv be extended to handle intervention data, given that CDUIOV explicitly models interventions?
$\textbf{R4}$. Yes, FedCDnv can be extended to handle interventional data. CDUIOV learns causal structures over the integrated set of variables from interventional datasets across multiple domains. It assumes that within each domain, interventions are performed on an identical set of variables. CDUIOV aims to address inconsistencies in causal relationships caused by unknown intervention targets and non-overlapping variable pairs. Therefore, when each client holds multiple interventional datasets, FedCDnv can integrate these local structures into a global causal graph while preserving data privacy.
$\textbf{Responses for “Weaknesses” are as follows.}$
$\textbf{Q5-1}$. Alg.3 seems a bit misleading.
$\textbf{R5-1}$. Please see R2.
$\textbf{Q5-2}$. Limited real-world dataset evaluation, with most results relying on benchmarks.
$\textbf{R5-2}$. We have conducted experiments using the real-world Sachs dataset, which consists of measurements from 11 phosphorylated proteins and phospholipids in individual cells. The experimental results are presented in Table 1 and Table 6.
In addition, this study also has real-world applications. We found the eICU Collaborative Research Database (eICU-CRD) (link https://physionet.org/content/eicu-crd/2.0/), a real-world dataset available on the PhysioNet website, as a motivating example for applying FedCDnv (See R2 of Reviewer 2 for details).
$\textbf{Q5-3}$. There is no discussion of worst-case performance, making it unclear how FedCDnv behaves under extreme heterogeneity.
$\textbf{R5-3}$. In the experimental section, we analyzed the worst-case performance of FedCDnv under increasing heterogeneity. When the observed variables differ across clients, the data distribution of each client also varies. We experimentally evaluated the impact of different values of $\delta$ on FedCDnv's performance, where a lower $\delta$ indicates fewer overlapping variables and greater variation in sample distributions. Fig. 12 shows that as $\delta$ decreases from 95% to 60%, FedCDnv's performance drops by approximately 10%. | Summary: The paper introduces FedCDnv, a federated causal structure learning algorithm designed for scenarios where clients have non-identical but overlapping variable sets. The method introduces theoretical criteria to distinguish definite causal and non-causal relationships. A two-level priority selection strategy (PSS) is developed to aggregate both “correct” (definite) and “good” (stable) relationships from local causal graphs into a global causal graph.
Claims And Evidence: Yes, the claims (detecting good and correct causal relationships) are backed up by theoretical statements with proofs.
Methods And Evaluation Criteria: Yes, standard causal discovery metrics (e.g., F1) are used for gauging the accuracy of the learned graph; and the datasets used (synthetic and real) make sense for the problem.
Theoretical Claims: Yes, Theorem C.1. No issues were found.
Experimental Designs Or Analyses: Yes, all the experiments in the main text and the parameter-sensitivity experiments in Appendix D.4.
Supplementary Material: Yes, Appendix B, parts of C, and D.4.
Relation To Broader Scientific Literature: The work extends federated causal discovery to non-identical variable sets, addressing gaps in prior works (e.g., FedCSL, Notears-ADMM) that assume identical variables.
Essential References Not Discussed: None, to the best of my knowledge.
Other Strengths And Weaknesses: **Strengths**
1. The problem of FCD with non-overlapping variables is novel and realistic one in practice.
2. Theoretical justifications are provided (e.g.,correctness of identifying definite causal and non-causal relationships).
3. Privacy and communication costs are given.
**Weaknesses**
1. The communication costs of a client $c_k$ scale with $O(d_k^2)$, where $d_k$ is the number of variables observed by the client.
Other Comments Or Suggestions: 1. Figure 1 is never mentioned/referenced in the text.
2. Line 294 Section 4.1 should read "real-**world** data."
3. The error bars in the figures are hardly visible. I suggest either using error bands, or also displaying the whiskers (lines).
4. Tables 1 and 2 contain too many significant digits to be easily legible. I suggest reporting figures to 2 or max 3 decimal places.
Questions For Authors: 1. Could the authors provide a real-world motivating example where FCD is used/needed and there are non-overlapping variables observed by clients?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: $\textbf{Responses for “Questions For Authors" are as follows.}$
$\textbf{Q1}$. Could the authors provide a real-world motivating example where FCD is used/needed and there are non-overlapping variables observed by clients?
$\textbf{R1}$. Thanks for your comment. A real-world example where non-overlapping variables are observed can be found in the eICU Collaborative Research Database (eICU-CRD, link https://physionet.org/content/eicu-crd/2.0/) [1-3]. It is a multi-center intensive care unit database covering over 200,000 admissions to ICUs across the United States between 2014-2015. In the "vitalPeriodic.csv" file, different ICU stays (identified by "patientunitstayid") record partially overlapping physiological variables, as shown below:
| vital-periodic-id | patient-unit-stay-id | sao2 | heart-rate | respiration | cvp | systemic-systolic | systemic-diastolic | systemic-mean | pa-systolic | pa-diastolic | pa-mean | icp-st1 | icp-st2 | icp-st3 |
|-------------------|----------------------|------|------------|-------------|-----|--------------------|---------------------|----------------|-------------|--------------|---------|---------|---------|---------|
| 35511110 | 141945 | 100 | 76 | 17 | | | | | | | | -1 | -0.81 | 0 |
| 35417390 | 141945 | 89 | 87 | 24 | | | | | | | | -1 | -0.69 | 0.1 |
| 28818781 | 142000 | 96 | 108 | 30 | 8 | 134 | 48 | 74 | | | | | | |
| 28839535 | 142000 | 98 | 80 | 20 | 11 | 142 | 50 | 82 | | | | | | |
| 48431259 | 142035 | 98 | 98 | 29 | 17 | 116 | 60 | 76 | 34 | 13 | 23 | | | |
| 48435712 | 142035 | 94 | 92 | 23 | 16 | 116 | 68 | 82 | 34 | 14 | 25 | | | |
In general, clinical practice requires access to up-to-date patient data from hospitals, which is often distributed and privacy-sensitive. Due to regulatory constraints such as HIPAA and institutional policies, hospitals cannot directly share raw and up-to-date patient data for collaborative analysis. This creates a critical need for federated causal structure learning with non-identical variables sets, which allows clients to collaboratively infer causal relationships over integrated variables while keeping data decentralized and secure.
[1] Pollard, T., Johnson, A., Raffa, J., Celi, L. A., Badawi, O., & Mark, R. (2019). eICU Collaborative Research Database (version 2.0). PhysioNet. https://doi.org/10.13026/C2WM1R.
[2] Nakayama LF, Restrepo D, Matos J, Ribeiro LZ, Malerbi FK, Celi LA, Regatieri CS. BRSET: A Brazilian Multilabel Ophthalmological Dataset of Retina Fundus Photos. PLOS Digit Health. 2024 Jul 11;3(7):e0000454. doi: 10.1371/journal.pdig.0000454. PMID: 38991014; PMCID: PMC11239107.
[3] Goldberger, A., Amaral, L., Glass, L., Hausdorff, J., Ivanov, P. C., Mark, R., ... & Stanley, H. E. (2000). PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation [Online]. 101 (23), pp. e215–e220.
$\textbf{Responses for “Other Comments Or Suggestions" are as follows.}$
$\textbf{Q2-1}$. Figure 1 is never mentioned/referenced in the text.
$\textbf{R2-1}$. Thank you for pointing this out. We apologize for the oversight. Figure 1 is used in the fourth paragraph of the introduction to illustrate spurious dependencies, but it was not explicitly referenced in the text. In the revision, we will embed Figure 1 within the fourth paragraph of the introduction.
$\textbf{Q2-2}$. Line 294 Section 4.1 should read "real-world data."
The error bars in the figures are hardly visible. I suggest either using error bands, or also displaying the whiskers (lines).
Tables 1 and 2 contain too many significant digits to be easily legible. I suggest reporting figures to 2 or max 3 decimal places.
$\textbf{R2-2}$. Thanks for your suggestions. We revised the errors and carefully checked the manuscript. In addition, for the error bars, we replaced them with whiskers (lines) to make them clearer. For Tables 1, 2 and 6, we retained three decimal places and revised them accordingly. Thanks again.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing my questions, and in particular for providing a real-world motivating example. I have increased my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful review and for increasing your score. We sincerely appreciate your positive feedback and support. | Summary: This paper investigates federated causal structure learning, aiming to discover causal relationships between variables from data distributed across individual clients while considering privacy concerns. The paper addresses federated causal structure learning with non-identical variable sets and designs an effective strategy to aggregate "correct" and "good" relationships between variables during collaborative learning. Experimental results demonstrate that the proposed method is effective on synthetic, benchmark, and real-world data.
Claims And Evidence: Yes. The claims are clear and convincing.
Methods And Evaluation Criteria: Yes. The proposed methods and evaluation criteria make sense for the problem of federated causal structure learning with non-identical variable sets.
Theoretical Claims: Yes. I have carefully reviewed all the proofs.
Experimental Designs Or Analyses: Yes. I have checked the soundness of all the experimental designs and the experimental analyses are valid.
Supplementary Material: Yes. I have reviewed all the supplementary material.
Relation To Broader Scientific Literature: The key contributions of the paper related to the problem of causal discovery with non-identical variable sets.
Essential References Not Discussed: Yes. The related works are cited and discussed.
Other Strengths And Weaknesses: **Strengths:**
1. This paper proposes a federated causal structure learning method to address the challenge of discovering causal relationships under non-identical variable sets.
2. The effectiveness of the method is validated through experiments on synthetic data, benchmark data, and real-world data, demonstrating its applicability and robustness across various types of data.
**Weaknesses:**
1. Theorems 3.2 and 3.3 only indicate that the relationship between variables X and Y is uncertain, but do not analyze under what conditions their relationship can be determined.
2. How does the proposed method aggregate the results obtained from different servers? The paper discusses taking the union of variable sets and the union of edge sets. When the relationships between two variables differ, such as Xo-oY, Xo->Y, and X->Y, what does the union of edges look like?
Other Comments Or Suggestions: N/A
Questions For Authors: See the weaknesses above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: $\textbf{Q1}$. Theorems 3.2 and 3.3 only indicate that the relationship between variables X and Y is uncertain, but do not analyze under what conditions their relationship can be determined.
$\textbf{R1}$. Thanks very much for your comment. As stated in line 114 of the manuscript, we initially assume that all relationships among overlapping variable pairs are definite (or "correct"). Theorems 3.2 and 3.3 are proposed to detect which of these relationships become non-definite due to the influence of non-overlapping variable pairs. Therefore, the relationships that satisfy the conditions of Theorems 3.2 and 3.3 are non-definite, while all other relationships remain definite.
$\textbf{Q2-1}$. How does the proposed method aggregate the results obtained from different servers?
$\textbf{R2-1}$. Thanks for your comment. There are two aggregation stages.
(1) For the first stage, the server integrates the adjacencies and orientations of local causal graphs learned by all clients into a global graph. For adjacencies, a variable pair is considered adjacent if any client reports the adjacency between them. For orientations (there are three types of orientations: circle 'o', tail '-', and arrow '>'), only the identified arrows are included in the global graph.
(2) For the second stage, the server applies the proposed two-level priority selection strategy to aggregate all updated local graphs. First, for adjacency aggregation: a) First Priority Level: Check whether adjacency conflicts arise due to non-overlapping variable pairs, using Lemma 3.4; b) Second Priority Level: If the conflict is due to statistical errors, the final adjacency is determined by comparing $val_{ij}$ (for $X_i$ and $X_j$) with 0. The computation of $val_{ij}$ depends on two factors: $w_{ij}^{c_k}$ and the sample size weight $w_{c_k}$. Second, for orientation aggregation, the process is similar to stage (1).
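The second-priority adjacency vote can be sketched as follows. This is our own hypothetical illustration of the rule described above, not the paper's definition: each client $c_k$ reports a signed adjacency weight $w_{ij}^{c_k}$ for the pair $(X_i, X_j)$, scaled by a sample-size weight $w_{c_k}$, and the pair is kept adjacent iff the combined score $val_{ij}$ exceeds 0. The exact weighting scheme is an assumption.

```python
# Hypothetical sketch of the second-priority adjacency vote: signed client
# weights are combined with sample-size weights, and adjacency is kept
# iff the combined score val_ij is positive.
def adjacency_vote(client_weights, sample_sizes):
    total = sum(sample_sizes)
    val_ij = sum(w * (n / total) for w, n in zip(client_weights, sample_sizes))
    return val_ij > 0

# Two clients disagree; the larger-sample client's vote dominates.
print(adjacency_vote([+1.0, -1.0], [1000, 100]))  # True
```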
$\textbf{Q2-2}$. The paper discusses taking the union of variable sets and the union of edge sets. When the relationships between two variables differ, such as X $\circ\hspace{-0.4em}-\hspace{-0.4em}\circ$ Y, X $\circ\hspace{-0.43em}\rightarrow$ Y, and X $\rightarrow$ Y, what does the union of edges look like?
$\textbf{R2-2}$. When the relationships between two variables differ across clients, only identified arrows are included. For example, for X $\circ\hspace{-0.4em}-\hspace{-0.4em}\circ$ Y, X $\circ\hspace{-0.43em}\rightarrow$ Y, and X $\rightarrow$ Y, the final relationship between X and Y will be X $\circ\hspace{-0.43em}\rightarrow$ Y.
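The endpoint-mark union rule stated above can be sketched in a few lines. This is a hypothetical illustration, not the paper's code: each client reports an edge as a pair of endpoint marks from {'o' (circle), '-' (tail), '>' (arrow)}, and only identified arrows are kept, with everything else falling back to a circle.

```python
# Hypothetical sketch of the mark-aggregation rule: an identified arrow at an
# endpoint always wins; any other (possibly conflicting) marks stay a circle.
def aggregate_mark(marks):
    return '>' if '>' in marks else 'o'

def aggregate_edge(client_edges):
    """client_edges: list of (mark_at_X, mark_at_Y) pairs from different clients."""
    return (aggregate_mark([m for m, _ in client_edges]),
            aggregate_mark([m for _, m in client_edges]))

# X o-o Y, X o-> Y, and X -> Y aggregate to X o-> Y, as in the example above.
print(aggregate_edge([('o', 'o'), ('o', '>'), ('-', '>')]))  # ('o', '>')
```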
---
Rebuttal Comment 1.1:
Comment: I thank the author for the comprehensive response, particularly the detailed explanation of aggregating multiple servers results into a unified result. The clarifications have effectively addressed all my concerns, and I have increased my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your thoughtful review. We sincerely appreciate your positive comments and the increased score. | null | null | null | null | null | null |
Large Language-Geometry Model: When LLM meets Equivariance | Accept (poster) | Summary: EquiLLM integrates Large Language Models (LLMs) with geometric Graph Neural Networks (GNNs) to improve 3D structure and dynamics prediction. It uses an LLM for invariant feature processing, a GNN for equivariant encoding, and an adapter to ensure equivariance while leveraging external knowledge. Experiments show significant improvements in molecular dynamics, human motion, and antibody design, demonstrating strong generalizability.
Claims And Evidence: See subsequent subsections.
Methods And Evaluation Criteria: See subsequent subsections.
Theoretical Claims: See subsequent subsections.
Experimental Designs Or Analyses: See subsequent subsections.
Supplementary Material: See subsequent subsections.
Relation To Broader Scientific Literature: No.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths:
1. The paper presents an innovative attempt by cleverly integrating pre-trained large language models with equivariant networks and demonstrates the feasibility of the proposed approach.
2. The paper conducts experiments on multiple tasks related to equivariant graph neural networks, proving the applicability of the method across various tasks.
3. Compared to the baseline models provided in the paper, the proposed method demonstrates superior performance.
Weaknesses:
1. The paper selects tasks from three different domains, but it is unclear whether the chosen methods are mainstream approaches within their respective fields. This raises concerns about whether the proposed method has been fairly compared with widely recognized and robust baselines. For example, in molecular dynamics (MD) simulations, the paper employs a temporal equivariant graph network to predict atomic positions in future frames. While this is a valid mathematical modeling choice, a more common approach in MD simulations is to first predict atomic forces at each frame and then compute the next frame’s positions accordingly to better preserve physical consistency. If the paper does not intend to directly predict forces like machine learning force fields in MD, it should provide an explanation for this choice.
2. The choice of baselines may not be comprehensive enough, particularly for domain-specific models (typically listed above the first horizontal divider in tables). For instance, in Table 1, EGNN represents work from 2021—should more recent methods from the past two years, such as Equiformer v2, MACE, etc., be included to strengthen the credibility of the results? Similar concerns apply to other tasks as well.
3. The integration of a large language model inevitably leads to a significant increase in inference time. However, the results section does not provide any data regarding model parameters, training time, or inference time, making it difficult to assess the computational cost of this combination. Given this, in the selection of baselines, it is reasonable for the proposed method to outperform large language models alone (typically listed between the first and second horizontal dividers in tables) since additional information and parameters are introduced. However, in domain-specific tasks such as molecular dynamics prediction and protein structure prediction, there are already many large-scale pretrained models, such as Uni-Mol, AlphaFold, etc. If inference time is not considered a primary factor, should these methods also be included in the comparison?
[1]Liao Y L, Smidt T. Equiformer: Equivariant graph attention transformer for 3d atomistic graphs[J]. arXiv preprint arXiv:2206.11990, 2022.
[2]Liao Y L, Wood B, Das A, et al. Equiformerv2: Improved equivariant transformer for scaling to higher-degree representations[J]. arXiv preprint arXiv:2306.12059, 2023.
[3]Batatia I, Kovacs D P, Simm G, et al. MACE: Higher order equivariant message passing neural networks for fast and accurate force fields[J]. Advances in neural information processing systems, 2022, 35: 11423-11436.
[4]Ji X, Wang Z, Gao Z, et al. Uni-Mol2: Exploring Molecular Pretraining Model at Scale[J]. arXiv preprint arXiv:2406.14969, 2024.
[5]Jumper J, Evans R, Pritzel A, et al. Highly accurate protein structure prediction with AlphaFold[J]. nature, 2021, 596(7873): 583-589.
Other Comments Or Suggestions: NO.
Questions For Authors: 1. In the Equivariant Adapter section, the paper describes \( e_{\phi_m}, \phi_x, \) and \( \phi_h \) in Equation (6) as Multi-Layer Perceptrons (MLPs), but their specific dimensions do not appear to be provided in the main text or appendix. Have the authors considered adding these details in the appendix or releasing the source code to enhance reproducibility?
2. In the ablation study, the paper analyzes the impact of including the Equivariant Encoder, but since the LLM itself is not equivariant, only the invariant part of the output can be fed into the LLM. Can we demonstrate that the invariant features obtained through the Equivariant Encoder truly capture "spatial information" or similar properties that improve LLM predictions, compared to a potential Invariant Encoder? Given that invariant computations are generally more computationally efficient than equivariant ones.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We sincerely thank you for the time and careful consideration you have given to providing detailed and constructive feedback. Your valuable insights have greatly improved both the technical accuracy and clarity of our manuscript. We have meticulously revised the paper to incorporate your suggestions. Below, we respond to each of your comments point by point.
## Responses to Weaknesses
> **Q1: The explanation for this choice.**
In our experiments, we follow the mainstream settings in three different domains. For the MD17 and Human Motion Capture datasets, future-frame prediction is a common setting used in many studies, such as EqMotion [1] and ESTAG. This approach offers a more direct way to assess model capabilities without relying on external solvers. For the antibody design task, we benchmarked against MEAN (ICLR 2023) and GeoAB (ICML 2024), which represent the current mainstream and state-of-the-art approaches in this domain.
> **Q2: Including additional baselines.**
Thank you for your suggestion. Due to the tight time limitation, we have included additional experimental results for only one recent model, Equiformer, on the MD17 dataset. The results in Table F indicate that our EquiLLM still outperforms Equiformer by a large margin. We will include more experiments across the three tasks to further strengthen credibility.
**Table F. The performance of Equiformer on MD17.**
||Aspirin|Benzene|Ethanol|Malonaldehyde|Naphthalene|Salicylic|Toluene|Uracil|
|-|-|-|-|-|-|-|-|-|
|Equiformer|10.13|2.00|1.88|8.05|3.43|5.79|2.09|4.38|
|EquiLLM|**2.391**|**0.732**|**1.031**|**1.671**|**1.453**|**2.162**|**1.178**|**1.060**|
> **Q3: Computational cost & Include the Uni-Mol, AlphaFold, etc.**
We appreciate your comments. As shown in Table G on the antibody design task, our comparative analysis of inference times reveals that EquiLLM requires slightly more computation than state-of-the-art methods (MEAN and GeoAB), but this modest overhead is justified by its substantial accuracy gains. However, it is worth noting that the primary contribution of this paper is not the optimization of computational cost, but rather the integration of pretrained Large Language Models into geometric learning. We excluded large pretrained models (Uni-Mol, AlphaFold) due to their extensive domain-specific pretraining and significantly higher computational costs, which would result in an unfair comparison.
**Table G. The inference time on RAbD.**
||Inference time/s|
|-|-|
|GeoAB|0.0265|
|MEAN|0.0139|
|EquiLLM|0.0539|
## Responses to Questions For Authors:
> **Q1: More details**
Great suggestion! We will include detailed descriptions of the MLP dimensions in the manuscript's appendix and will open-source the code upon paper acceptance.
> **Q2: Do Invariant features capture "spatial information"**
Thank you for your question. In our Equivariant Encoder, equivariant and invariant features interact through message passing and feature updating, with 3D spatial distances explicitly encoded. As established in PAINN (Section 3.3), incorporating distance information across stacked layers implicitly models angular relationships, enabling the output invariant features to inherently capture spatial geometric information.
To validate our design, we have conducted ablation studies by replacing the Equivariant Encoder with two types of invariant encoders: (1) a standard GNN (see Table A, Response to Reviewer c49n) and (2) a canonicalization approach converting equivariant vectors to invariant forms (see Tables D&E, Response to Reviewer M747). Both variants underperformed our original model, confirming the advantage of the equivariant encoder over its invariant counterparts in effectively capturing spatial information.
[1] EqMotion: Equivariant Multi-agent Motion Prediction with Invariant Interaction Reasoning, CVPR2023. | Summary: This paper presents a method for solving equivariant tasks by combining a pre-trained large language model (LLM) with a trained, geometric graph network. The large language model is prompted only with invariant quantities, which come from both a natural language prompt and learned invariant features from the graph network. It outputs invariant quantities, which are fed back into a new equivariant graph network. Only the equivariant networks are trained, while the LLM weights are frozen. They evaluate their method on a molecular dynamics dataset, a human motion capture dataset, and an antibody design dataset.
Claims And Evidence: The claims made regarding experimental performance, relative to the chosen baselines (more on that later), are clearly supported by the reported numbers. However, I found several of the motivating claims to be made without evidence/citation. For example:
“A natural idea is to directly employ LLMs for modeling 3D physical systems. However, this approach fails to yield
satisfactory results in practice.” Are there citations to support this?
“A key limitation is that LLMs are trained to process ordered and discrete text tokens, restricting their ability to directly comprehend unordered and continuous data in 3D space.” Actually, tokenizing 3D structures is an active area of research, and has been deployed successfully in recent papers e.g. ESM3, ProSST, BindGPT, CHEAP, Geo2Seq, etc.
“Therefore, it is non-trivial to integrate the strengths of both LLMs and geometric GNNs while maintaining essential geometric properties.” Canonicalization is a very natural way of achieving this, which has been used together with LLMs for certain applications. Thus, I find this claim too strong. This should also be added as a baseline.
“More significantly, LLMs’ flexibility in prompt engineering enables the development of tailored instructions that better leverage their capabilities, producing outputs more precisely suited to the task.” I believe that this is probably true, but for good scholarship, statements like this should either be phrased as “We speculate, based on our results, that…” or with explicit citations to back up the claim.
"Although the aforementioned methods promote interactions between GNNs and LLMs through various paradigms and yield promising results, they have yet to explore tasks involving 3D structural data, such as 3D structure generation and dynamic trajectory simulation in 3D space.” I believe that this is simply not true; consider e.g. ESM3. Although it uses equivariant attention instead of an equivariant graph network, I think this is tangential to the claim. The authors should tone down the strength of this claim, e.g. perhaps if you restrict to works which freeze pre-trained LLMs this is true (although, with the sheer volume of LLM literature, it is hard to say for absolutely sure).
Methods And Evaluation Criteria: The benchmark datasets do make sense, and they cover a diverse range of tasks.
Theoretical Claims: n/a
Experimental Designs Or Analyses: It seems to me that the lack of other, stronger baselines are the biggest flaw in the experimental design. For example, natural ones include fine-tuning the LLM (without a geometric module), canonicalizing the inputs to the LLM, etc. The authors claim that things like fine-tuning are “too expensive”, but it is also not fair to compare their method (which involves some training computation as well as a pretrained LLM) to methods which have no training compute (pretrained LLM alone without fine-tuning) or to methods which have no access to a pretrained LLM (such as geometric models trained from scratch).
As a useful sanity check, I would recommend computing the equivariance error for each model, as a way of ensuring that there are no implementation bugs in the proposed method.
One ablation I would like to see is, replacing the equivariant graph network with a non-equivariant network (eg a transformer).
Supplementary Material: Yes, all of it (it was not very long).
Relation To Broader Scientific Literature: The lines of work on equivariant architectures, vs language model approaches, are mostly disparate; this paper unifies them in a way that’s conceptually easy to understand.
Essential References Not Discussed: There are several papers that combine language models with equivariant layers that are not discussed. For example, the ESM3 paper trains (from scratch) a masked language model that includes equivariant modules for the 3D structure channel, as well as other channels containing non-structural information (similar to this paper’s task prompt).
Other Strengths And Weaknesses: Strengths:
The proposed method is easy to understand, and it does not seem too hard to implement since only the geometric graph network is trained (not the LLM). It can be adapted to a variety of domains, as shown in the experiments. Also, the performance is quite a bit better than the chosen baselines. I think this merging of LLMs with geometric methods is a valuable direction.
Weaknesses:
The idea itself is simple, and feels like an incremental change relative to the literature — yet the paper is framed as a methods paper, not as an application paper. The baselines/ablations are not a very strong comparison: the pure graph models do not get to benefit from a pretrained language model in any way, whereas the language models are not fine-tuned at all on the task (which is not the case for e.g. Gruver et al 2024, which fine-tunes a pretrained language model for materials generation). It is of course good to compare EquiLLM to graph networks trained from scratch, but stronger baselines are necessary to validate the authors’ specific method. Also, the authors make very strong claims about the novelty of their work, which ignores existing, more complicated methods that train combination language models and geometric modules from scratch, together (eg ESM3); the related work and contextualization of this work is lacking.
Other Comments Or Suggestions: It is straightforward to use pertained, non-equivariant models for equivariant tasks with canonicalization and/or frame-averaging (see e.g. “Equivariant Adaptation of Large Pretrained Models” by Mondal et al 2023). The use of a trained geometric module is strictly more general, so it might perform better, but the authors should check this by comparing their method to canonicalization (perhaps allowing the same computation budget to fine-tune as was used to train the graph networks).
The clarity of the paper could be strongly improved along certain dimensions. For example, it is not made clear which parts of the learning pipeline are actively trained vs pretrained (and fixed) vs fine-tuned, except for a very brief aside on Line 177 that the LLM weights are frozen. This is a very important part of the proposed method and should be made clear from the start.
The related work also needs work. I believe that “Geometric GNNs” is too broad of a category to properly summarize in one paragraph; the authors cite a seemingly random selection of specific papers instead of summarizing overall categories of approaches (and then citing multiple papers for each paradigm). Related work should summarize the state of a field to the extent relevant for the contribution, which the related work currently does not do. Papers such as ESM3 and others, which use language models for structural tasks, are also not adequately cited. Also, ESTAG is one of the main comparison methods, so it should be described in greater detail in the experiments section.
Overall, I think the idea is neat and intuitive, and I'd like to see a more polished version of this paper (with more thorough baselines), published eventually.
Some typos:
*L16, “fall” -> “fail”
*L292: “definite”
Questions For Authors: 1. How can you ensure that there wasn’t data leakage, where the chosen datasets were used to train the LLMs (both the one that is used in your method, and the ones you compare against)?
2. Can the authors please contextualize their experimental results (specifically, the evaluation metrics) by citing the SOTA numbers for each task, with references?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are deeply grateful for the time and effort you have dedicated to offering valuable feedback. We have revised the paper to address all your comments. Below, we respond to each of your points in detail.
**Questions in Claims And Evidence**
> Q1:
In our experiments, we evaluated multiple LLMs (GPT, Gemini, and DeepSeek) by directly inputting 3D systems, all of which performed significantly worse than our method. This finding aligns with CrystalLLM's [1] conclusion that direct fine-tuning of LLMs for 3D structure modeling leads to suboptimal performance across 5/8 evaluation metrics.
> Q2
While our work focuses on equivariant 3D tasks requiring directional vector outputs, the works you mentioned primarily address invariant 3D tasks. That said, we will expand our discussion in the revised manuscript to incorporate recent advances in 3D structure tokenization.
> Q3
We have expanded our discussion of canonicalization in the manuscript and moderate the original claim accordingly.
> Q4
We have revised the claim in our paper in the format as you suggested.
> Q5
Regarding the claim in the Related Work section, our intention was primarily to contrast our EquiLLM with the LLM+GNN approaches mentioned in the same paragraph. In the revised version, we will add a detailed discussion of ESM3 and removed the original claim to ensure a more precise and rigorous presentation.
**Questions in other sections:**
> Q6: Sanity check.
We have computed the equivariance error for each module, and the overall model indeed satisfies equivariance.
> Q7: Ablation study.
We have replaced the equivariant GNN with a standard GNN, with the results shown in Row 2 of Table A in our response to Reviewer c49n. The model exhibits performance degradation, indicating the importance of maintaining E(3) equivariance.
> Q8: Lack of other, stronger baselines.
As requested, we have conducted the following experiments:
1. Using pretrained models with canonicalization: On the MD17 dataset, we first subtract the mean from the coordinates to ensure translational invariance, then perform SVD decomposition for rotational invariance. We directly feed this canonicalized data into GPT-4o-mini, with results shown in Row 2 of Table D. The results demonstrate that while canonicalization indeed improves the model's predictive capability, there remains a considerable performance gap compared to our EquiLLM. This suggests that direct prediction of 3D coordinates remains suboptimal for current LLMs.
Table D. Results of the pretrained models with canonicalization on MD17.
||Aspirin|Benzene|Ethanol|Malonaldehyde|Naphthalene|Salicylic|Toluene|Uracil|
|-|-|-|-|-|-|-|-|-|
|GPT-4o-mini|13.070|9.581|5.011|9.910|35.155|10.627|8.132|9.762|
|GPT-4o-mini+canonicalization|11.783|3.055|4.512|8.916|8.263|9.751|6.364|8.989|
|EquiLLM|**2.391**|**0.732**|**1.031**|**1.671**|**1.453**|**2.162**|**1.178**|**1.060**|
2. Fine-tuning the LLM: Following CrystalLLM, we fine-tune the llama-7b model on MD17. Due to token-length limitations in our prediction task (predicting 10 frames), we select the smallest molecule, Ethanol (3 heavy atoms), for evaluation. We have investigated three experimental settings: (1) 500 samples (the original paper's setup) trained for 10 epochs; (2) 30,000 samples trained for 1 epoch; (3) 30,000 samples trained for 1 epoch with canonicalization. The results in Table E reveal that without canonicalization, the 500-sample and 30,000-sample fine-tuned models perform poorly, lagging behind EquiLLM by two orders of magnitude. Remarkably, when we incorporate canonicalization as suggested, the model's predictive performance improves by a factor of 100, even surpassing GPT-4o-mini. This compelling result demonstrates that the combination of canonicalization with direct LLM fine-tuning is indeed promising and warrants further investigation.
Table E. Results of fine-tuning LLM on MD17.
||Ethanol|
|-|-|
|Setting 1|460|
|Setting 2|457|
|Setting 3|4.446|
|EquiLLM|**1.031**|
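The mean-subtraction plus SVD canonicalization used in the experiments above can be sketched with numpy. This is our own minimal illustration under assumed conventions (e.g. singular-vector sign handling), not the authors' implementation:

```python
# Hypothetical sketch of canonicalization: subtract the centroid for
# translation invariance, then rotate into the frame given by an SVD of the
# centered coordinates for rotation invariance (up to per-axis sign flips).
import numpy as np

def canonicalize(coords):
    """coords: (n_atoms, 3) array of 3D positions."""
    centered = coords - coords.mean(axis=0)          # translation invariance
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T                           # rotation invariance

rng = np.random.default_rng(0)
x = rng.normal(size=(9, 3))
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))          # random orthogonal matrix
a, b = canonicalize(x), canonicalize(x @ q.T)
# The canonical forms of the original and rotated inputs agree up to
# per-axis sign flips of the singular vectors.
print(np.allclose(np.abs(a), np.abs(b), atol=1e-8))
```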
> Q9: The clarity of the paper.
In our current implementation, the LLM module remains pretrained (and fixed), while all other module parameters are learnable and trained from scratch.
> Q10: The related work also needs work.
In the revised manuscript, we will modify the Related Work section by providing a clearer organization, including a comprehensive discussion of ESM3 and other language model-based methods for structural tasks. We will also provide detailed descriptions of ESTAG in the experimental section.
> Q11: Typos & Contextualize their experimental results.
We have implemented all revisions in accordance with your suggestions in the revised manuscript.
> Q12: Ensure wasn’t data leakage
Our study utilizes exclusively 3D structural data, whereas all comparative LLMs were pretrained on textual corpora alone. This fundamental modality difference effectively eliminates the risk of data leakage.
[1] Fine-Tuned Language Models Generate Stable Inorganic Materials as Text, ICLR24 | Summary: This paper puts forward EquiLLM, a strategy to merge large language models (LLMs) with geometric (E(3)-equivariant) graph neural networks (GNNs). The motivation is straightforward: GNNs with built-in physical symmetry can handle 3D data in a rotation-, reflection-, and translation-consistent way, but they typically lack the broader domain insights or contextual knowledge that LLMs are good at capturing. Conversely, LLMs excel at analyzing text and general knowledge, yet they struggle when asked to directly process 3D coordinates or enforce geometric symmetries.
EquiLLM bridges that gap by clearly splitting the workloads:
1. Equivariant Encoder (GNN) – Handles spatial structure, ensuring that rotating or shifting inputs leads to the correct rotation or shift in outputs.
2. Prompted LLM – Receives only “invariant” features and carefully prepared textual prompts (e.g., sequence data, summary statistics, or domain context). This way, the LLM can apply its pretrained knowledge without worrying about coordinate transformations.
3. Equivariant Adapter – Recombines the LLM’s outputs with the GNN’s spatial embeddings. Because the adapter is itself an equivariant module, any coordinate transformations flow through properly.
By cleanly separating invariant and equivariant representations, EquiLLM is able to inject the LLM’s knowledge about, for instance, chemical or biological concepts, while still guaranteeing correctness in handling 3D geometry. Experiments on molecular dynamics, human motion, and antibody design suggest that this setup can outperform using purely geometric networks or purely language-based models.
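The invariant/equivariant split summarized above can be illustrated with a toy numpy sketch (our own illustration, not the authors' code): the frozen "LLM" stand-in only ever sees invariant scalars, vectors flow only through equivariant maps, so the end-to-end model is E(3)-equivariant by construction.

```python
# Toy sketch of the EquiLLM-style split. All names are illustrative.
import numpy as np

def invariant_features(x):
    # Sum of pairwise distances per node: unchanged by rotations/translations.
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return d.sum(axis=1)

def frozen_llm(h):
    # Stand-in for the frozen LLM: any function of invariants stays invariant.
    return np.tanh(h)

def equivariant_adapter(x, s):
    # Scaling centered positions by invariant scalars preserves equivariance.
    centroid = x.mean(axis=0)
    return centroid + (x - centroid) * s[:, None]

def equillm_toy(x):
    return equivariant_adapter(x, frozen_llm(invariant_features(x)))

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 3))
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal transform
# Transforming the input transforms the output identically (zero
# equivariance error), which is the sanity check reviewers often request.
print(np.allclose(equillm_toy(x @ q.T), equillm_toy(x) @ q.T))  # True
```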
Claims And Evidence: Overall, the key quantitative claims—namely that EquiLLM maintains E(3)-equivariance and achieves higher accuracy than both GNN-only and LLM-only baselines—are reasonably backed by the results on molecular dynamics, human motion, and antibody design. The authors provide side-by-side performance tables, ablation studies, and comparisons across multiple datasets. These empirical tests support the conclusion that adding a language model to a geometric GNN can improve 3D prediction accuracy under symmetry constraints.
However, there are a few areas where the evidence leaves some questions:
1 Role of “External Knowledge.”
The paper attributes improvements partly to leveraging LLM “domain knowledge,” but how that knowledge is used is demonstrated only indirectly. While ablations show that prompts help, they do not isolate whether the gains come specifically from knowledge embedded in LLM pretraining or simply from a new trainable pathway (even though the LLM is frozen). A direct test—e.g., tasks that require specialized domain facts only learned from large-scale text—could clarify how much of the improvement is truly “knowledge-driven” rather than architectural flexibility.
2 Independence of Added Model Capacity.
While the LLM’s parameters are frozen, the approach still involves an additional module (the LLM plus the adapter) beyond the geometric encoder. This extra capacity might explain some of the improvement. The authors conduct ablations highlighting the importance of prompting, but comparisons controlling for parameter count could strengthen the argument about how much improvement comes from bridging textual knowledge and 3D GNNs.
3 Formal Proofs of Equivariance.
The paper relies largely on references to prior geometric GNN proofs and the stated separation of invariant vs. equivariant pathways. While that is common in similar research, readers not well-versed in geometric GNNs might want a more explicit derivation.
On the whole, the main findings about quantitative improvements under 3D symmetry constraints are well supported by experiments. Claims regarding “injecting domain knowledge” and attributing gains mainly to that knowledge are plausible but would benefit from more targeted evidence isolating the effect of LLM pretraining.
Methods And Evaluation Criteria: Yes, the paper’s choices of tasks and datasets are sensible for testing a framework that combines LLMs with 3D-equivariant GNNs. The molecular dynamics, human motion, and antibody design settings all demand careful handling of 3D structures under symmetry transformations, which aligns with the paper’s claim about preserving E(3)-equivariance. Plus, each of these tasks benefits from the richer contextual or domain-level reasoning that an LLM can contribute.
The benchmarks—MD17 for small-molecule dynamics, a motion-capture dataset for human skeletal movement, and a standard antibody-design dataset—are representative of real-world scenarios where both spatial invariances and contextual knowledge are key.
Theoretical Claims: The paper primarily cites existing formulations of geometric GNNs that have already established E(3)-equivariance, rather than presenting a fully standalone proof for its combined EquiLLM framework. The core argument is that by strictly separating invariant features (handled by the LLM) from equivariant features (handled by the GNN), the architecture inherits the GNN’s established symmetry properties. Since the paper does not include a detailed, from-scratch proof of how these components integrate to preserve equivariance, there is no step-by-step proof to check in the manuscript itself. Instead, the authors rely on references to standard results in geometric deep learning.
Conceptually, the design appears consistent with known proofs for equivariant GNNs, and nothing in the method obviously breaks those symmetries. However, for a rigorous guarantee, readers would need to rely on both the cited prior proofs and a clear statement about how the LLM’s outputs (restricted to invariant inputs and outputs) blend with the GNN pipeline.
I would suggest the authors do a better job at convincing the reader that their work is theoretically sound and more self-contained.
Experimental Designs Or Analyses: The experimental setup for each of the three application domains—molecular dynamics, human motion prediction, and antibody design—largely follows established practices (e.g., using MD17 for small molecule simulations, a motion capture dataset, and standard antibody-design benchmarks). The baseline models are fairly chosen, and the evaluation metrics (root-mean-squared error for 3D positions, cross-entropy for sequences, etc.) are standard. The inclusion of ablation studies also helps show the effect of prompting and how the LLM interacts with the GNN.
One minor point is that the authors rely on a fixed set of hyperparameters across different architectures; while this is aimed at fairness, some baselines might not be fully optimized. Another is that although the ablation results highlight the model’s different components, they do not completely isolate the effect of LLM “knowledge” versus just having additional trainable modules. However, none of these issues seem to undermine the core claims, and overall the experiments appear consistent and well controlled.
Supplementary Material: N/A
Relation To Broader Scientific Literature: This paper takes ideas from two active research areas—3D-equivariant graph neural networks (GNNs) and large language models (LLMs)—and combines them in a single method called EquiLLM. Existing geometric GNNs preserve important symmetry constraints for molecules or other 3D structures but lack broad domain understanding. Meanwhile, LLMs are trained on large amounts of text-based knowledge but do not naturally handle 3D transformations. EquiLLM addresses this by giving the LLM only invariant or “directionless” information (such as molecular descriptors or statistical summaries) while the GNN processes raw 3D coordinates. The two parts share information through a small adapter that maintains the desired geometric symmetries. Experimental results on molecular dynamics, human motion, and antibody design show that EquiLLM outperforms both purely geometric models and purely language-based models, offering a middle ground that applies each technique where it works best.
Essential References Not Discussed: Some recently published approaches combine large language models with graph-based molecular or protein modeling but do not necessarily enforce strict 3D symmetries. Methods such as MoleculeSTM, MolCA, or Prot2Text show how text-based knowledge can be integrated with structural data. Also, newer “text-to-structure” techniques generate or modify 3D configurations directly from prompts, which might offer additional context. Finally, broader libraries such as e3nn for building E(3)-equivariant networks could situate EquiLLM among related tools for geometric learning. A brief discussion of these works would help readers place EquiLLM in the wider landscape of combining language models with 3D-aware architectures.
Other Strengths And Weaknesses: Strengths:
• The paper’s main contribution—combining an equivariant GNN with an LLM by carefully separating invariant and directional information—feels fresh, especially given the clear synergy with 3D tasks.
• The authors demonstrate the technique on several real-world applications (molecular dynamics, human motion, and antibody design), indicating practical significance.
• The writing, while sometimes concise on certain technical points, is reasonably clear for readers with a background in geometric deep learning, and the experimental setup is straightforward to follow.
Weaknesses:
• The manuscript relies heavily on referencing established proofs for equivariant GNNs. A more direct or step-by-step argument that the combined system remains equivariant would improve clarity for a broader audience.
• While the experimental results are comprehensive, the paper could devote more attention to precisely how LLM knowledge influences outcomes—especially to separate general architectural effects from true “knowledge injection.”
• The ablations show the benefits of prompting but do not fully dissect which aspects of prompt engineering bring the greatest advantages.
Overall, the idea of letting GNNs handle 3D geometry and LLMs handle higher-level contextual information is an interesting hybrid design that could be valuable in a range of domains involving complex spatial structures and domain knowledge.
Other Comments Or Suggestions: n/a
Questions For Authors: 1. How does pretrained knowledge from the LLM specifically boost performance?
2. How is E(3)-equivariance assured once data flows through the LLM?
3. Can you illustrate how “geometry-aware” prompts are formulated to remain invariant?
4. How sensitive is the method to changes in prompts or hyperparameters, especially for the LLM component?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate your recognition of our work! We are deeply grateful for the time and thoughtful effort you have dedicated to offering such detailed and constructive feedback. Your valuable suggestions have significantly enhanced both the scholarly rigor and presentation of our manuscript. Having carefully revised the paper to reflect your comments, we now address each point in detail below.
> **Q1: Role of "External Knowledge" & how LLM knowledge influences outcomes.**
Thank you for raising this point. Directly explaining how the LLM's knowledge affects the results is a challenging task. Here, we indirectly demonstrate, through ablation experiments, that the model's performance suffers significantly without properly designed prompts to activate the LLM's knowledge. Specifically, when removing antigen, light chain, and heavy chain feature descriptions from antibody design prompts (Table C, Row 1), we observe clear performance degradation, highlighting how domain-specific knowledge enhances EquiLLM's geometric modeling capabilities.
> **Q2: Independence of Added Model Capacity.**
We sincerely appreciate your thoughtful review. We have removed the LLM module for further ablation study as you suggested, with results presented in Row 2 of Table C. The model exhibits significant performance degradation, underscoring the critical role of LLM in our framework. We have included these ablation results in the revised manuscript.
**Table C. Further ablation studies on RAbD.**
||AAR|TM-score|RMSD|
|-|-|-|-|
|w/o object feature|38.32%|0.9826|1.76|
|w/o LLM|37.58%|0.9818|1.79|
|w/o prompt1|37.84%|0.9820|1.76|
|w/o prompt2|38.57%|0.9823|1.77|
|w/o prompt3|38.52%|0.9827|1.74|
|EquiLLM|**38.97%**|**0.9830**|**1.73**|
> **Q3: Formal Proofs of Equivariance & How is E(3)-equivariance assured once data flows through the LLM?**
We apologize for any lack of clarity in the current manuscript. To clarify, since the LLM exclusively processes invariant features, its outputs remain strictly invariant. These invariant outputs are then concatenated with the original equivariant features from the encoder through a skip connection, and subsequently processed by the equivariant adaptor. Throughout this data flow, we rigorously maintain E(3)-equivariance. In the revised version, we will add the following contents: (1) a rigorous mathematical proof of the framework's equivariance properties, and (2) a detailed analysis of how data flow maintains E(3)-equivariance throughout the architecture.
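The invariance-to-equivariance argument above can be checked numerically. Below is a minimal, illustrative sketch (not the authors' implementation): `llm_stub` stands in for any function that sees only invariant features, and the final step mimics a skip connection that recombines its output with equivariant coordinates.

```python
import numpy as np

def invariant_features(X):
    # Pairwise distances: unchanged by any rotation/reflection plus translation.
    return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

def llm_stub(D):
    # Stand-in for the frozen LLM: an arbitrary function of invariant
    # inputs necessarily produces invariant outputs.
    return np.tanh(D).sum(axis=1, keepdims=True)

def model(X):
    h = llm_stub(invariant_features(X))  # invariant branch (the "LLM")
    c = X.mean(axis=0)
    # Equivariant recombination: scale the equivariant offsets from the
    # centroid by the invariant scalars (skip-connection analogue).
    return c + h * (X - c)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
t = rng.normal(size=3)                        # random translation

out_then_move = model(X) @ Q + t
move_then_out = model(X @ Q + t)
assert np.allclose(out_then_move, move_then_out)  # E(3)-equivariance holds
```

The check passes because every path from coordinates to the output either ignores orientation entirely (the invariant branch) or commutes with the transformation (the centroid and the offsets from it).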
> **Q4: Which aspects of prompt engineering bring the greatest advantages & How sensitive is the method to changes in prompts.**
Thank you for this valuable comment! We have conducted more detailed prompt ablations on the RAbD dataset to investigate the impact of different prompt components on model performance. For the antibody design task, the object statistical information encompasses two hierarchical levels:
1. **Chain-level features**:
   1. Inter-chain centroid distances (prompt 1)
   2. Maximum residue-residue distances within each chain (prompt 2)
2. **Residue-level features**:
   1. Statistics (max/min/mean) of residue-to-centroid distances per chain (prompt 3)
As shown in Rows 3-5 of Table C, the results demonstrate that chain-level features contribute more significantly to performance improvement compared to residue-level features. We hypothesize that this discrepancy arises because chain-level features provide macroscopic structural information that better facilitates global 3D structure understanding and modeling. These comprehensive ablation results will be included in the revised manuscript.
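For concreteness, the three prompt feature families described above could be computed along these lines (a sketch with illustrative names, not the paper's code):

```python
import numpy as np

def chain_prompt_features(chains):
    """chains: dict mapping chain id -> (n_residues, 3) coordinate array."""
    names = list(chains)
    centroids = {c: chains[c].mean(axis=0) for c in names}
    # Prompt 1: inter-chain centroid distances (chain level).
    inter = {(a, b): float(np.linalg.norm(centroids[a] - centroids[b]))
             for i, a in enumerate(names) for b in names[i + 1:]}
    # Prompt 2: maximum residue-residue distance within each chain.
    max_intra = {c: float(np.linalg.norm(X[:, None] - X[None, :], axis=-1).max())
                 for c, X in chains.items()}
    # Prompt 3: max/min/mean residue-to-centroid distance per chain.
    stats = {}
    for c, X in chains.items():
        r = np.linalg.norm(X - centroids[c], axis=-1)
        stats[c] = (float(r.max()), float(r.min()), float(r.mean()))
    return inter, max_intra, stats

rng = np.random.default_rng(1)
chains = {"H": rng.normal(size=(4, 3)), "L": rng.normal(size=(3, 3))}
inter, max_intra, stats = chain_prompt_features(chains)
# All three families are plain distances, so applying the same rotation
# and translation to every chain leaves the features unchanged.
```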
> **Q5: How "geometry-aware" prompts are formulated to remain invariant.**
Nice question! To guarantee the invariance of geometry-aware prompts input to the LLM, we exclusively employ **distance-based** statistical measures, which are inherently rotation- and translation-invariant. The three prompt types mentioned above are deliberately designed as distance metrics to preserve E(3)-invariance. | Summary: The authors propose EquiLLM – a framework designed to enhance spatial reasoning in 3D structure and dynamics by integrating geometry-aware prompting and equivariant Graph Neural Network layers. Experiments on molecular dynamics, human motion, and antibody design are carried out and show good performance.
Claims And Evidence: Most of the claims are well-supported. However, some arguments are not as convincing.
For instance, the authors write: “One possible solution is to adapt existing multimodal LLM architectures, such as LLaVA (Liu et al., 2024b), by treating 3D structures as a separate modality and simply replacing the image encoder with a geometric GNN. However, this naive adaptation fails to satisfy the E(3)-equivariance requirement.” It seems that what the authors do is simply replace the encoder with an equivariant GNN encoder, so I would say that the authors’ method is built on the LLaVA approach.
Methods And Evaluation Criteria: Appropriate evaluation methods are used in the article. One concern is that the authors only trained the GNN encoders and adaptors. It would be interesting to explore the impact of fine-tuning the LLM layers as well.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Model comparison: it would be great to compare different encoding layers to verify the significance of the equivariance claimed by the authors. Meanwhile, the baseline models are relatively outdated and weak.
In line 361, “To ensure a fair comparison, all hyperparameters (e.g. learning rate, number of training epochs) are kept consistent across our model and all other baselines”. This is not a valid approach since different models would need different hyperparameters to make them function well.
Supplementary Material: Yes, I take into account the supplementary material (Dataset details).
Relation To Broader Scientific Literature: Compared with existing Large Language Models for Sciences, incorporating equivariance GNN encoders is a meaningful attempt. And the empirical results support the motivation.
Essential References Not Discussed: Regarding equivariant GNNs, key references such as [1] should be better discussed/acknowledged in the article.
1. Satorras, Víctor Garcia, Emiel Hoogeboom, and Max Welling. "E(n) equivariant graph neural networks." International Conference on Machine Learning. PMLR, 2021.
Other Strengths And Weaknesses: Strengths: The authors have shown good empirical results, demonstrating efficacy of the proposed method.
Weaknesses: The paper lacks some technical details, such as how the language model is trained.
The language model (GPT-2) used in the experiments is very outdated.
Other Comments Or Suggestions: A comparison with LLMs trained with normal GNN encoders would be great to see the effect of equivariance here.
Questions For Authors: Why is the training size so small?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the time and effort you have devoted to providing detailed and constructive feedback. Your insightful comments have been invaluable in improving both the technical quality and clarity of our manuscript. We have carefully revised our paper to incorporate your suggestions. Below, we address each of your points individually.
> **Q1: Method is built on the LLaVA approach.**
Thank you for your comment. There may be some misunderstandings here—our method does not simply replace LLaVA's encoder with an equivariant GNN encoder, as that would compromise the framework's overall equivariance. Instead, EquiLLM introduces an innovative design, as shown in Fig.1. First, the equivariant GNN encoder extracts both equivariant and invariant features, but only the invariant features are fed into the LLM, unlike LLaVA where the LLM receives all encoder outputs. Then, after LLM processing, the output is concatenated with the encoder's equivariant features via a skip connection and passed to the equivariant adapter module to generate both equivariant and invariant predictions.
> **Q2: The impact of fine-tuning the LLM layers.**
Nice suggestion! We set the LLM's parameters to be trainable and fine-tune the model on the SAbDab dataset. However, the experimental results (Table A, first row) show performance degradation, suggesting that fine-tuning may compromise the original information encoded in the LLM, particularly since the dataset used for fine-tuning is not large enough.
**Table A. EquiLLM with different backbones.**
||AAR|TM-score|RMSD|
|-|-|-|-|
|Finetune LLM|38.57%|0.9819|1.77|
|Normal GNN encoder|32.32%|0.9308|4.14|
|Qwen2.5-3B|**39.04%**|0.9828|1.76|
|Original|38.97%|**0.9830**|**1.73**|
> **Q3: Different encoding layers & normal GNN encoders.**
Thank you for raising this point. We additionally replace the original equivariant GNN encoder with a normal GNN encoder. As shown in Row 2 of Table A, the model exhibits significant performance degradation, demonstrating the importance of maintaining E(3) equivariance when modeling 3D structures.
> **Q4: Baseline models are relatively outdated and weak models.**
Thank you for this constructive comment. We would like to clarify that ESTAG (NeurIPS 2023) remains the SOTA model on MD17, while MEAN (ICLR 2023) and GeoAB (ICML 2024) are also leading methods on the RAbD dataset. That said, we agree that additional baselines could further validate our approach. To address this, we have included the results of Equiformer (ICLR 2023) on MD17 in Table B, where it still significantly underperforms our model.
**Table B. The performance of Equiformer on MD17.**
||Aspirin|Benzene|Ethanol|Malonaldehyde|Naphthalene|Salicylic|Toluene|Uracil|
|-|-|-|-|-|-|-|-|-|
|Equiformer|10.13|2.00|1.88|8.05|3.43|5.79|2.09|4.38|
|EquiLLM|**2.391**|**0.732**|**1.031**|**1.671**|**1.453**|**2.162**|**1.178**|**1.060**|
> **Q5: All hyperparameters are kept consistent.**
Thank you for your comment. Our experimental settings follow the ESTAG paper, using identical parameter configurations across all baseline models to ensure fair comparisons. We also explored various hyperparameter choices for the baselines on MD17, but the performance improvements remained marginal compared to our model's results.
> **Q6: Discuss/acknowledge EGNN.**
Great suggestion! We will provide a more comprehensive discussion of EGNN in the revised version.
> **Q7: Lack of more technical details.**
Thank you for this insightful suggestion. In our EquiLLM framework, the LLM module parameters remain frozen during training. In the revised version, we will provide more technical details and a brief introduction to the specific LLM employed in our work.
> **Q8: The language model is very outdated.**
Thank you for your insightful observation. To address this point, we conducted additional evaluations using the Qwen2.5-3B model (see Table A, Row 3). While it shows marginal improvement in AAR, we observe slight decreases in RMSD and TM-score performance. We hypothesize that the language model's capability remains constrained by limited text-3D structure paired data; otherwise, upgrading the LLM component could yield significant gains. We leave this exploration for future work.
> **Q9: The training size**
Nice question! Our experimental setup primarily follows established conventions in the field. For the MD17 and Human Motion datasets, we adopt the same configurations as the EGMN and ESTAG papers, while for the SAbDab dataset, we maintain the settings used in MEAN and GeoAB. | null | null | null | null | null | null |
Implicit Subgraph Neural Network | Accept (poster) | Summary: This paper proposes a bi-level optimization framework for subgraph-level predictive tasks, where the outer level minimizes the subgraph-level prediction loss and the inner level enforces the fixed-point conditions of the implicit subgraph representations, so that the method does not rely on rigid subgraph definitions.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I checked the assumptions and conclusions of the theoretical claims about all the theorems. The assumption to ensure convergence is the general assumption made in implicit GNNs. I didn't check the detailed proof of Thm. 3.8, but the conclusion intuitively makes sense.
Experimental Designs Or Analyses: I checked the comparison on subgraph classification datasets, and also the ablation studies about the subgraph-level Information and sensitivity analysis. Both looks good to me except the Sec 4.3 about the efficiency analysis.
As far as I know, EIGNN takes a non-negligible amount of time for preprocessing. Could you please also specify the preprocessing time, in addition to the average runtime per epoch?
Supplementary Material: I reviewed Appendix A: Ablation Study on SubGNN.
Relation To Broader Scientific Literature: This paper is a direct application of graph implicit models and bilevel optimization, where the latter enables the usage of the former to flexibly deal with the subgraph representation learning problem. This is a novel strategy since it is the first work to apply GIM to subgraph learning and benefits from the long-range dependencies and flexible problem design.
Essential References Not Discussed: Could you please compare with GEQ (Efficient and scalable implicit graph neural networks with virtual equilibrium, https://drive.google.com/file/d/1u2zJ_LJyIEFOjatUiT2gG1QXVLkTY3_y/view?pli=1)?
It is not for subgraph learning tasks, but it also formulates node classification as a bilevel optimization problem and develops a provably convergent algorithm and "replace gradient descent in the inner loop with fixed-point iteration". Can this framework be adapted to subgraph learning also?
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: Could you please resolve my questions in the Experimental Designs Or Analyses and Essential References Not Discussed?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We appreciate the reviewer's feedback.
---
### Time Consumption
Yes, EIGNN requires a preprocessing step. Our method incorporates a pretraining stage. Below is a comparison of the total runtime (in seconds) for both methods on the PPI_BP dataset. The reported values are the means over 10 runs. We will update the running time results in the paper to reflect your suggestion.
| Method | Preprocessing Time (s) | Training Time (s) | Total Time (s) |
|--------|------------------------|-------------------|----------------|
| ISNN | 248.43 | 807.69 | 1056.12 |
| EIGNN | 1201.84 | 956.54 | 2158.39 |
---
### Compare with GEQ
Thanks for pointing out a missing reference. We think this paper proposes an interesting algorithm that has the potential to be adapted to subgraph representation learning, and we would love to include this method as one of our baselines. However, the authors did not release their code (the GitHub link in their paper does not work). We will reach out to them after the review period and include the paper in our literature review. | Summary: This paper introduces ISNN, the first implicit model for subgraph representation learning, along with a provably convergent bilevel optimization algorithm for training. The proposed ISNN also integrates label-aware subgraph-level information. This paper converts the fixed-point iteration into bi-level optimization to improve the stability of subgraph-level learning tasks.
Claims And Evidence: The authors' claims are somewhat convincing, but the plausibility and reproducibility of the experimental results need further examination.
Methods And Evaluation Criteria: The proposed methodology has research implications for the study of the issue at hand.
Theoretical Claims: I think the authors' proof of Theory 3.8 is essentially correct.
Experimental Designs Or Analyses: The experiments are generally correct; see Weaknesses for details.
Supplementary Material: The authors did not provide supplemental material.
Relation To Broader Scientific Literature: I finished reading this paper and for the time being I did not find the main contribution of the paper to be relevant to the wider scientific literature.
Essential References Not Discussed: I don't think there are any important references that have not been discussed.
Other Strengths And Weaknesses: Strengths:
1. This paper is novel and a good target for research.
2. The article is overall easy to read, although there are some logic problems.
Weakness:
1. The writing logic of the paper is somewhat lacking and lacks clear motivation.
2. This paper uses implicit graph neural networks to solve subgraph learning tasks, but lacks sufficient novelty and contribution.
3. Although the authors give an experimental motivation for using bi-level optimization, the experimental results indicate that smaller gamma values may also lead to instability. Therefore, I still doubt the validity of this motivation and ask whether the authors can provide a more rigorous theoretical analysis.
4. Some symbols in this paper lack sufficient explanation and clarification.
5. It seems feasible to convert the fixed-point iteration into bi-level optimization, and I would like to know the difference in the final convergence between these two methods.
Other Comments Or Suggestions: For more information, see Weaknesses.
Questions For Authors: For more information, see Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s detailed feedback. Below are our responses addressing each concern:
---
## Logic and Motivation Problems
In the revised manuscript, we will include an expanded discussion on why implicit graph neural networks are particularly suited for subgraph learning tasks. We will clarify how capturing long-range dependencies through fixed-point iterations provides a distinct advantage over conventional methods, thereby reinforcing the motivation behind our approach.
---
## Novelty and Contribution of the Paper
While our method builds on implicit graph neural networks, our contribution lies in:
1. **Novel Extensions:** Developing new extensions to these models specifically for subgraph learning.
2. **Label-Aware Integration:** Integrating label-aware subgraph-level information via a novel hybrid graph framework.
3. **Bilevel Optimization Formulation:** Formulating the training as a bilevel optimization problem with convergence guarantees.
These aspects, to our knowledge, are the first to be explored in the subgraph context and lead to significant performance improvements over state-of-the-art subgraph neural networks. We will further elaborate on these contributions in the revision.
---
## Motivation for Using Bi-level Optimization
The objective of implicit models can be naturally formalized as a bilevel optimization problem. The lower-level problem involves finding the fixed-point embeddings for the current setting, and the upper-level problem focuses on minimizing the classification loss given the fixed-point embedding from the lower level. Bilevel optimization provides a different angle to train implicit models, offering computational advantages. The stability of implicit models is a very interesting topic for further exploration, and to the best of our knowledge, no work has investigated this area so far.
---
## Explanation and Clarification of Symbols
We will check and revise the manuscript to ensure that all symbols are clearly defined.
---
## Convergence
Both the implicit differentiation (via fixed-point iteration) formulation and the bilevel optimization formulation often lead to equivalent problems, meaning that optimally solving one also solves the other. However, the resulting algorithms differ in various settings. In our case, there are higher practical computational costs for implicit differentiation, which arise due to the following two differences:
- **Iteration Flexibility:** Bilevel optimization permits the use of minimal fixed-point iterations during the early phases of training—empirically, one iteration per gradient step is often sufficient. In contrast, implicit differentiation requires a relatively larger number of fixed-point iterations at each gradient step to maintain accuracy.
- **Backward Gradient Computation:** Implicit differentiation involves computing backward gradients through an additional fixed-point iteration, which adds extra computational overhead compared to bilevel optimization.
---
We hope these responses address your concerns and clarify the contributions in our work.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the thoughtful reply; considering the opinions of the other reviewers as well, I have decided to raise my original score to 3.
Claims And Evidence: The claim that directly training an implicit GNN on the hybrid graph leads to poor performance is underpinned by the experiments.
Methods And Evaluation Criteria: I am not familiar with subgraph-level graph learning tasks, so I am not sure whether these datasets are good.
Theoretical Claims: I checked the proof in the main paper.
Experimental Designs Or Analyses: The experiment looks valid to me.
Supplementary Material: I did not review supplementary material.
Relation To Broader Scientific Literature: This paper is closely related to a broader scope of implicit and unfolded GNNs, where bilevel optimizations are commonly used.
Essential References Not Discussed: I am not aware of such.
Other Strengths And Weaknesses: The motivation is not clear to me. I would appreciate it if the authors could elucidate why subgraph neural networks are important.
Other Comments Or Suggestions: In Fig 3 (b), the (1) under (S2) should be (2), and its children should be (1), (3), (6).
Questions For Authors: What is the connection between max-zero-one label trick and Weisfeiler-Leman test?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank you for your valuable feedback. Our responses are as follows:
## Motivation
While traditional graph neural networks focus on node-level or entire graph-level representations, many real-world problems require understanding the structure within parts of a graph. Subgraphs often represent meaningful patterns—like communities, motifs, or functional units—that are lost when only considering individual nodes. Moreover, traditional GNNs perform poorly on subgraph-level tasks due to their limited ability to capture localized and complex structural nuances, resulting in weak generalization when applied directly to subgraph classification or related tasks. Therefore, researchers incorporate subgraph information into their models to achieve better generalization.

Previous works only consider structural or membership information of subgraphs, which means they primarily leverage the connectivity patterns or the existence of subgraph membership without integrating additional contextual cues. In contrast, our approach augments this by incorporating label-aware subgraph-level information that enriches the representation. By explicitly modeling both the inherent structure and the associated label information, our method not only differentiates between subgraphs with similar structural features but also captures long-range dependencies and complementary interactions among subgraphs. This holistic integration of node-level and subgraph-level cues ultimately leads to more expressive embeddings and significantly improved performance on subgraph-level predictive tasks.
---
### Mistake in Figure
We thank the reviewer for pointing out this mistake. We will correct it.
---
### Connection between Max-Zero-One Label Trick and Weisfeiler-Leman Test
Max-zero-one labeling enhances subgraph representation learning by augmenting node features based on their membership in subgraphs. It serves as a relaxation of the zero-one trick [1]—which processes each subgraph individually by assigning binary labels. The zero-one trick is capable of producing embeddings with expressiveness equivalent to the $1$-Weisfeiler-Leman ($1$-WL) test, meaning it can distinguish graph structures as well as the $1$-WL test does. Since the max-zero-one trick relaxes these binary assignments by jointly processing multiple subgraphs, it is inherently weaker. Therefore, its discriminative power is limited to that of the $1$-WL test and cannot exceed it.
---
[1] Zhang, Muhan, et al. "Labeling trick: A theory of using graph neural networks for multi-node representation learning." *Advances in Neural Information Processing Systems* 34 (2021): 9061-9073.
---
We hope these responses address your concerns and clarify the contributions and design choices in our work. | Summary: The paper introduces the Implicit Subgraph Neural Network (ISNN), an innovative approach designed to enhance subgraph representation learning. ISNN is the first to use implicit neural network models explicitly for subgraphs, addressing limitations in existing methods, particularly concerning capturing long-range dependencies between subgraphs. The authors formulate subgraph representation learning as a bilevel optimization problem, providing theoretical guarantees for convergence and introducing a computationally efficient training algorithm. Experimental results demonstrate that ISNN significantly outperforms existing state-of-the-art approaches across multiple benchmark datasets.
---
(+) The introduction of implicit neural networks for subgraph representation is original and fills a gap in existing subgraph learning methods.
(+) The paper provides solid theoretical backing for the convergence of their proposed bilevel optimization method.
(+) Extensive experiments on multiple datasets clearly show superior performance in terms of Micro-F1 and AUROC scores compared to existing baselines.
(+) The proposed method demonstrates practical runtime efficiency, addressing scalability concerns.
---
(-) Limited sensitivity analysis and ablation studies on certain critical hyperparameters, potentially affecting the interpretability and broader applicability of the approach.
(-) The effectiveness of various proposed methods for constructing the subgraph-level graph is not deeply explored, as shown by their similar performances, even when random edges are introduced.
(-) The reliance on pretraining for constructing subgraph-level graphs could limit applicability in dynamic or real-time settings.
---
## update after rebuttal
Thanks to the authors for the detailed and clear rebuttal. The new experiments on label-aware subgraph construction are helpful and support your design choices well. The gamma sensitivity results are also useful and show reasonable stability.
I understand the current limitations around dynamic graphs, and I agree it’s a valuable direction for future work. Overall, your responses strengthen the paper, and I still lean toward a weak accept (solid contribution with room to grow).
Claims And Evidence: The claims made regarding performance improvements and computational efficiency are well-supported through clear experimental evidence and theoretical analysis. However, the claims regarding the benefit of explicitly constructed subgraph-level information are less convincing, as experiments suggest similar performance even with random graph construction.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the subgraph classification problem. Utilizing standard benchmarks (PPI-BP, HPO-METAB, HPO-NEURO, and EM-USER) aligns well with community standards, providing convincing comparisons against established baselines.
Theoretical Claims: The correctness of the theoretical claims has been checked, particularly regarding the convergence analysis (Theorem 3.8 and Lemma B.1). No immediate issues were found, and the proofs appear sound and complete.
Experimental Designs Or Analyses: The experimental design is robust, evaluating performance on several real-world datasets and comparing against multiple baselines. The analyses are thorough and valid, with clear performance metrics and adequate runtime comparisons.
Supplementary Material: I carefully checked the supplementary material.
Relation To Broader Scientific Literature: The paper appropriately situates its contributions within existing literature on subgraph neural networks and implicit models. It effectively highlights shortcomings in previous models such as SubGNN, GLASS, and SSNP, clearly articulating how ISNN extends beyond these methods to address specific limitations in capturing subgraph-level information and long-range dependencies.
Essential References Not Discussed: The paper extensively covers relevant literature. However, it could benefit from discussing more works on dynamic or evolving graph scenarios (if available) since the current formulation and pretraining step may be restrictive in such contexts.
Other Strengths And Weaknesses: (+) Strong motivation clearly identifying practical limitations of existing subgraph models.
(+) Clear visualization and description of methodological innovations.
(+) Practical significance and clear demonstration of superior performance.
(-) Limited investigation of the impacts of graph construction techniques.
(-) Potential issues in scalability and practicality for real-time updates due to reliance on pretraining.
Other Comments Or Suggestions: - Additional discussion on the scalability of the model to large-scale dynamic networks would strengthen the paper.
- Clarification on hyperparameter sensitivity analysis, especially regarding γ, could enhance the practical utility of the paper.
Questions For Authors: 1. How sensitive is ISNN to changes in subgraph-level graph construction methods, beyond the initial random comparison?
2. Can you elaborate on the potential applicability or adaptations required for ISNN in dynamic or streaming graph settings?
3. Could you provide further justification or examples where explicit subgraph-level information significantly impacts performance compared to random edge assignment?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We thank you for your valuable feedback. Our responses are as follows:
---
## Sensitivity to Subgraph-Level Graph Construction Methods
In our experiments, we previously compared four subgraph-level graph construction methods—**random**, **neighborhood**, **position**, and **structure**. The goal was to show that these classical methods do not significantly improve classification performance, as they achieve similar results to using random subgraph information. This observation echoes conclusions from prior work like GLASS and SSNP.
To further the comparison, we introduce additional label-aware methods:
- **class-rand:** For each class, we randomly connect *k* pairs of training supernodes.
- **Star:** For each class, we select a centroid supernode and connect it with all other supernodes within that class.
### Experimental Results on HPO-METAB
| Metric | ISNN-rand | ISNN-class-rand | ISNN-Star | ISNN |
|-----------|----------|--------------|---------|-------|
| **F1** | 0.589 | 0.672 | 0.707 | 0.731 |
| **AUROC** | 0.876 | 0.901 | 0.916 | 0.924 |
The **ISNN-class-rand** method improves performance over ISNN-rand by using label information to ensure that subgraphs within the same class have similar embeddings, making classification easier. The **ISNN-Star** method further boosts performance by selecting a centroid supernode for each class and connecting it to all other supernodes, reinforcing intra-class similarity, though it introduces more edges. Our **ISNN** method connects only the top-*k* most distant pairs of supernodes, balancing label-aware benefits with a sparse graph structure.
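As a concrete sketch of the two label-aware constructions above (supernode ids, class groupings, and the centroid choice are illustrative assumptions, not the authors' implementation):

```python
import itertools
import random

def class_rand_edges(supernodes_by_class, k, rng=random.Random(0)):
    # "class-rand": connect k random intra-class supernode pairs per class.
    edges = []
    for nodes in supernodes_by_class.values():
        pairs = list(itertools.combinations(nodes, 2))
        edges += rng.sample(pairs, min(k, len(pairs)))
    return edges

def star_edges(supernodes_by_class):
    # "Star": connect a centroid supernode (here simply the first one)
    # to every other supernode of its class.
    return [(nodes[0], v)
            for nodes in supernodes_by_class.values()
            for v in nodes[1:]]

classes = {"metab-A": [0, 1, 2, 3], "metab-B": [4, 5, 6]}
print(star_edges(classes))             # [(0, 1), (0, 2), (0, 3), (4, 5), (4, 6)]
print(class_rand_edges(classes, k=2))  # 2 random intra-class pairs per class
```

The full ISNN construction described above would instead connect only the top-*k* most distant intra-class pairs, replacing the random sampling step here while keeping the graph equally sparse.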
---
## Additional Hyperparameter Sensitivity Analysis
We further investigate the sensitivity of hyperparameter **gamma** on ISNN.
### Ablation Results on Gamma for PPI_BP
| Gamma | F1 (10 Runs) |
|-------|-----------------|
| 0.001 | 0.713 ± 0.014 |
| 0.01 | 0.726 ± 0.022 |
| 0.05 | 0.670 ± 0.008 |
| 0.1 | 0.655 ± 0.007 |
| 0.5 | 0.654 ± 0.022 |
| 1 | 0.634 ± 0.025 |
### Ablation Results on Gamma for EM_USER
| Gamma | F1 (10 Runs) |
|-------|-----------------|
| 0.001 | 0.869 ± 0.012 |
| 0.01 | 0.808 ± 0.027 |
| 0.05 | 0.853 ± 0.027 |
| 0.1 | 0.913 ± 0.031 |
| 0.5 | 0.857 ± 0.018 |
| 1 | 0.771 ± 0.035 |
These results show that the best performance occurs when gamma is between **0.01 and 0.1**. For example, PPI_BP achieves its best performance at gamma = 0.01, while EM_USER peaks at gamma = 0.1. Gamma values outside this range lead to worse performance, highlighting the importance of using a moderate gamma.
---
## Applicability and Adaptations for Dynamic or Streaming Graph Settings
**ISNN** is designed to capture long-range dependencies via implicit iterations and a hybrid graph framework. To achieve this, the model requires access to the entire graph to compute the fixed-point embedding. Therefore, directly adapting ISNN to dynamic or streaming settings is challenging. However, if a smoothness condition on the embeddings can be ensured (for example, bounding the change in the fixed-point embedding under edge perturbations), the error in approximating the final fixed-point embedding could be controlled. This approach offers a potential pathway to adapt ISNN to dynamic settings, which we plan to investigate in future work.
---
## Justification for Label-aware Subgraph-Level Information Versus Random Edge Assignment
Our approach, **ISNN**, significantly outperforms random edge assignment. Please compare the performance of ISNN on the HPO-METAB dataset in Table 3 against that of ISNN-rand in Table 4 of our manuscript.
To further illustrate the benefit of our label-aware subgraph information, consider the following example: if two subgraphs with different labels are connected through random edge assignment, the GNN is forced to make their embeddings more similar, which complicates the classification task. Our label-aware design, on the other hand, reinforces the inherent differences between subgraphs, thereby facilitating more discriminative embeddings.
---
## Scalability of Our Method
The complexity of a standard GCN is given by:
$$ O(n \cdot d^2 + E \cdot d) $$
where:
- $n$ is the number of nodes,
- $E$ is the number of edges, and
- $d$ is the hidden dimension.
For our method, the complexity becomes:
$$ O((n+s) \cdot d^2 + (E+k) \cdot d) $$
where:
- $s$ is the number of subgraphs, and
- $k$ is the number of additional edges introduced.
Thus, the additional overhead introduced by ISNN is an additive term of:
$$ O(s \cdot d^2 + k \cdot d) $$
Empirically, $k$ can be controlled to remain small, and $s$, which is part of the input, is typically small in most practical subgraph learning tasks. Therefore, ISNN exhibits scalability similar to that of a standard GCN in most practical scenarios.
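As a hedged sanity check of the additive-overhead argument, consider the following term-count comparison; the sizes are illustrative assumptions, not taken from the benchmark datasets:

```python
# Illustrative term counts for the complexity bounds above.
n, E, d = 10_000, 100_000, 64  # nodes, edges, hidden dimension (assumed)
s, k = 100, 500                # subgraphs, extra supernode edges (assumed)

base_cost = n * d**2 + E * d   # O(n*d^2 + E*d): standard GCN
overhead = s * d**2 + k * d    # O(s*d^2 + k*d): additive ISNN cost

print(f"relative overhead: {overhead / base_cost:.2%}")  # under 1% here
```

With these (assumed) sizes the overhead is roughly two orders of magnitude smaller than the base cost, consistent with the claim that ISNN scales like a standard GCN in practice.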
---
We hope these responses address your concerns and clarify the contributions and design choices in our work. | null | null | null | null | null | null |
Causal-PIK: Causality-based Physical Reasoning with a Physics-Informed Kernel
Decision: Accept (poster)
---
Summary: The paper presents Causal-PIK, a novel method for causality-based physical reasoning that leverages a Physics-Informed Kernel within a Bayesian optimization framework. The primary focus is on single-intervention physical reasoning tasks, where an agent must make decisions based on the causal effects of its actions in complex environments.
Causal-PIK incorporates causal reasoning into the decision-making process, helping agents efficiently explore and learn from their interactions with the environment. The findings demonstrate that Causal-PIK significantly outperforms state-of-the-art methods on benchmarks like Virtual Tools and PHYRE, achieving higher success rates while requiring fewer actions to solve tasks. In experiments, Causal-PIK achieved an AUCCESS score of 65.0 on the Virtual Tools benchmark, surpassing previous models by 7 points. On the PHYRE benchmark, it attained an AUCCESS score of 51.3, outperforming the best prior results by 9 points.
The algorithm iteratively updates a Gaussian process to model the scoring function based on previous actions and their outcomes. It selects actions that maximize an Upper Confidence Bound acquisition function, informed by the Physics-Informed Kernel, which reflects both causal effects and action similarities. Overall, Causal-PIK advances the field of physical reasoning in AI by integrating causal insights into the action selection process, demonstrating improved efficiency and effectiveness in solving complex reasoning tasks.
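The loop just described (update a GP on past attempts, then pick the action maximising a UCB acquisition) can be sketched minimally as follows. This is an illustrative stand-in only: a plain RBF kernel replaces the paper's Physics-Informed Kernel, and a toy 1-D score replaces the benchmark scoring function.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel(A, B, ls=0.3):
    # Plain RBF kernel, standing in for the Physics-Informed Kernel.
    return np.exp(-0.5 * ((A[:, None] - B[None, :]) / ls) ** 2)

def score(a):
    # Toy 1-D objective standing in for the puzzle scoring function.
    return 1.0 - (a - 0.7) ** 2

candidates = np.linspace(0.0, 1.0, 201)
X = list(rng.uniform(0.0, 1.0, 3))      # warm-up actions (cf. n_initial)
y = [score(a) for a in X]

for _ in range(10):
    Xa, ya = np.array(X), np.array(y)
    K = kernel(Xa, Xa) + 1e-6 * np.eye(len(Xa))
    Ks = kernel(candidates, Xa)
    mu = Ks @ np.linalg.solve(K, ya)                   # GP posterior mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    ucb = mu + 2.0 * np.sqrt(np.clip(var, 0.0, None))  # UCB acquisition
    a_next = candidates[int(np.argmax(ucb))]           # next attempt
    X.append(float(a_next))
    y.append(score(a_next))

print(f"best action tried: {max(X, key=score):.3f}")   # typically close to 0.7
```

In the actual method the kernel additionally shares information between actions with similar predicted causal effects, so a single rollout updates the belief over many untried actions at once.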
## Update after rebuttal
I appreciate the authors' response, and I will increase my score.
Claims And Evidence: The manuscript claims that the proposed method is based on causality. However, there is no clear definition of the causality it possesses, nor strong evidence that the model really captures causality in a way other models cannot.
Methods And Evaluation Criteria: The use of BO is reasonable to model the iteratively updated action proposal for solving a physical puzzle. The kernel design is grounded in physics and can inform the next choice of action. The evaluation metric (AUCCESS) is from the original PHYRE paper and can reflect model performance. However, the authors could consider including more diverse metrics that more directly reflect the efficiency of solving the puzzles, such as the number of attempts needed to successfully solve a puzzle.
Theoretical Claims: The theoretical formula including the iterative action proposal and the similarity design is correct.
Experimental Designs Or Analyses: The authors validate their method on two physical reasoning tasks, with notable improvement over previous baselines. However, both tasks only consider single-step interventions. To further verify the proposed method, the authors could consider physical reasoning tasks that require multi-step interventions to solve (e.g., I-PHYRE). Besides, the authors could pair BO with a different dynamics prediction model to see whether the improvement is consistent. What if the dynamics prediction is not accurate? Is there any analysis of how prediction errors affect the performance of the proposed method?
Supplementary Material: I have reviewed the supplementary material consisting of one section "Dynamics Model".
Relation To Broader Scientific Literature: Previous related work addresses solving such physical reasoning tasks by either proposing better dynamic prediction models or by designing better action proposal methods. This work lies in the second school. Prior work utilizes RL to propose actions but falls short in terms of efficiency. This work tries to minimize this gap using a physics-informed kernel to update under the BO framework.
Essential References Not Discussed: No other reference should be discussed.
Other Strengths And Weaknesses: The manuscript addresses an important aspect of physical reasoning, that is, learning from feedback. However, the main contribution of this work is to introduce BO to maintain the belief in action proposals. The technical soundness, thus, is somewhat weak, and the experimental results and further analysis are also inadequate.
Other Comments Or Suggestions: The writing is not clear enough. For example, there is confusing and duplicated usage of terms such as "causal similarity" and "action similarity".
Figures 2 and 4 have too much blank area, and Figure 3 is somewhat confusing in conveying important information.
Questions For Authors: The authors should better explain what makes the proposed method a causality-based model. What about the generalization of the physics-informed kernel on other tasks?
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We thank reviewer d7h6 for their thoughtful feedback. We are glad they found the method addresses an important aspect of physical reasoning, that is learning from feedback. We address the reviewer’s comments and will incorporate all of the following discussions in the final draft.
> [d7h6.1] Resistance to noisy causal effect predictions
We appreciate the reviewer’s suggestion regarding the impact of the dynamics model’s accuracy on our method’s performance. To address this, we conducted an additional experiment to analyze the effect of the accuracy of the dynamics model.
Instead of artificially adding more noise to existing predictions, we aimed to demonstrate that the ones we used were already highly noisy. To do this, we trained the dynamics model on tasks from the test templates, ensuring prior exposure to similar puzzles. As a result, the L2 error for object bounding boxes improved to 3.56, compared to 19.3 ± 4.55 when tested on entirely unseen puzzles.
Despite this difference in prediction accuracy, our method achieved an AUCCESS of 45, which is only 4 points higher than the 41.6 ± 9.33 AUCCESS we reported for the case with unseen dynamics. This demonstrates that even with a substantial increase in prediction error, the performance drop is small, indicating that our method remains resilient to noisy dynamic predictions. While improved dynamic predictions can enhance performance, our approach does not rely on perfect predictions, retaining robustness even in the presence of inaccuracies.
> [d7h6.2] Causal-PIK tested on I-PHYRE
Please refer to comment [vdey.2].
> [d7h6.3] Why is the method based on causality
We appreciate the reviewer’s concern regarding our claims about causality and would like to clarify how our Physics-Informed Kernel explicitly captures causal relationships. We define causality as the dependence of an effect on its preceding causes. In our proposed method, this dependence is captured as follows:
* Action-Effect Modeling
Unlike models that rely purely on statistical correlations (e.g., our ablation with an RBF Kernel), our Physics-Informed Kernel is designed to encode causal dependencies between actions and their physical consequences. Instead of clustering actions based on feature-space proximity, our approach evaluates their actual impact on the system. For instance, two actions occurring in different spatial locations may be considered similar if they lead to the same physical outcome, such as maintaining the environment in a stationary state.
* Causal Structure in the Reward Function
Our method explicitly quantifies an action’s effect by normalizing the observed distance by the no-action baseline. This ensures that the kernel measures the true causal influence of an action rather than confounding factors.
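A hypothetical numeric reading of this normalization (the function name and the distance values are ours, purely for illustration):

```python
def causal_effect(dist_after_action, dist_no_action):
    """Goal distance after the action, relative to the no-action baseline.

    Hypothetical illustration of the normalization described above:
    values below 1.0 mean the action moved the system closer to the goal
    than doing nothing; a value near 1.0 means the action had no causal
    influence on the outcome.
    """
    return dist_after_action / dist_no_action

print(causal_effect(30.0, 120.0))   # 0.25: strong causal effect
print(causal_effect(120.0, 120.0))  # 1.0: indistinguishable from no action
```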
> [d7h6.4] Learning from feedback
We appreciate the reviewer's feedback regarding the technical contributions of our work. We believe our technical contributions extend significantly beyond introducing BO to maintain a belief over action proposals. A key innovation in our Physics-Informed Kernel is its ability to generalize knowledge across actions based on their causal effects. Unlike methods that learn only from direct observations, our approach leverages physical reasoning to infer the outcomes of untested actions that are predicted to share the same causal effect as observed ones. This means that from a single rollout, our model updates its belief not just about the executed action, but also about all actions that are predicted to produce a similar physical outcome. This significantly enhances sample efficiency and enables reasoning about alternative scenarios without exhaustive exploration.
We acknowledge that aspects of our presentation could be clarified, and we will refine the manuscript to better articulate these technical contributions in the final draft.
> [d7h6.5] Improving Figures 2, 3, and 4
We appreciate the reviewer’s feedback on how to make Figures 2, 3, and 4 better. We will include these modifications in the final manuscript draft.
> [d7h6.6] Updated PHYRE results for our method (Table 2)
Please refer to comment [jF9J.2].
> [d7h6.7] Unclear writing with duplicate usage of words
We appreciate the reviewer’s feedback regarding unclear portions of the manuscript. We will work on improving the writing such as removing the duplicate usage of causal and action similarity in the final manuscript draft.
> [d7h6.8] Extra evaluation metrics
We appreciate the reviewer’s suggestions and we will include a breakdown of the average success rate per puzzle and the average number of attempts per puzzle in the supplementary material.
> [d7h6.9] Scaling to bigger states and actions and generalizing to more complex scenarios
Please refer to comments [jF9J.1] and [jF9J.3].
---
Summary: The paper attempts to address the challenge of single-intervention physical reasoning tasks. It proposes Causal-PIK, which combines Bayesian optimization and a Physics-Informed Kernel. The method leverages physical intuition and causality to iteratively find optimal actions. Experimental results on the Virtual Tools and PHYRE benchmarks demonstrate that Causal-PIK outperforms previous state-of-the-art approaches: it achieves higher AUCCESS scores and requires fewer attempts to solve complex physical reasoning puzzles.
## Update after rebuttal
I have read the authors' response, which has addressed some of my concerns. I agree in general with the other reviewers that this is good work, and I lean towards acceptance.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: There is no theoretical claims made in this work. So does not apply.
Experimental Designs Or Analyses: Yes. I have checked the experimental designs. I do find many choices of the setup are valid and support their claims.
Supplementary Material: Yes. The entire supplementary material.
Relation To Broader Scientific Literature: The paper extends the existing work of SSUP. But unlike SSUP, it propose a Bayesian Optimization method combined with a physics-informed kernel to learn and suggest new actions on the fly. The method can adapt to the accumulated experience and the results show that this method does help in learning.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths
* The paper fills a gap in the research area as few works have focused on effectively building a model based on past experience for physics tasks. By introducing Causal-PIK, it extends our understanding of computational models for such tasks, offering new insights into how agents can learn from previous attempts to solve complex physical reasoning problems.
* Implementation of the proposed method is well-executed. The design of the Physics-Informed Kernel is reasonable, as it effectively captures important physical intuitions like causal effects and causal similarity between actions. This kernel is crucial for the performance of the overall method, enabling more accurate predictions of action outcomes.
* The paper combines advances in neural networks, such as using the RPIN for the dynamics model, and machine learning methods like Bayesian optimization. This integration allows Causal-PIK to leverage the benefits of both fields, resulting in a more powerful approach that can efficiently search for optimal actions in complex physical environments.
Weaknesses
* The paper does not explicitly state the number of initial data points used in Algorithm 1 for the warm-up phase. It is unclear how many initial data points are needed to reach good performance. Also, the selection of seed data is likely critical: since the method relies on these initial observations to update the model and select subsequent actions, poorly chosen seed data might lead the algorithm astray. Would it be possible to show the impact on performance when the number of seed data points is varied? How would completely random selection impact the final performance?
* The current method may face challenges in scalability. The method is currently tested only on relatively small-scale toy problems like the Virtual Tools and PHYRE benchmarks. In real-world scenarios, physical systems are often high-dimensional, with more complex and nuanced dynamics. I also noticed a recently proposed multi-step challenge similar to PHYRE called I-PHYRE [1]. Would it be easy to apply the proposed method to the new challenge?
[1] Li, Shiqian, et al. "I-PHYRE: Interactive Physical Reasoning." The Twelfth International Conference on Learning Representations.
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We thank reviewer vdey for their thoughtful feedback. We are glad they found the method implementation to be well-executed with a design that effectively captures important physical intuitions. We address the reviewer’s comments and will incorporate all of the following discussions in the final draft.
> [vdey.1] Characteristics and dependence on initial point set
Thank you for pointing out that we did not state the number of initial data points and how we chose these data points. To initialize the GP for both Virtual Tools and PHYRE, we use $n_{initial} = 9$ initial data points. This is the same number of initial data points that our Virtual Tools baseline SSUP (Allen et al., 2020) uses to initialize their method. SSUP found that this value provided a good trade-off between the number of initial points, total attempts required, and convergence time. Since we use the same initialization framework as SSUP, we expect their [parameter analysis](https://www.pnas.org/doi/suppl/10.1073/pnas.1912341117/suppl_file/pnas.1912341117.sapp.pdf) to extend to our results. Importantly, like SSUP, we treat these initial noisy rollouts as warm-up samples that do not count towards the total attempt count.
We choose the $n_{initial}$ initial points following the same approach as SSUP: randomly select a dynamic object from the environment and sample a point from a Gaussian distribution centered at the object's center. As a result, each puzzle attempt has a unique set of random $n_{initial}$ points. This heuristic helps the GP build a noisy prior that includes different areas.
We will incorporate these relevant details in the final draft.
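A minimal sketch of this warm-up sampling, where the object positions (in normalised scene coordinates) and the Gaussian width are illustrative assumptions rather than values from either benchmark:

```python
import numpy as np

rng = np.random.default_rng(0)

# Centres of the dynamic objects in the scene (illustrative values).
object_centers = np.array([[0.2, 0.8], [0.6, 0.3], [0.9, 0.5]])

def sample_warmup_actions(n_initial=9, sigma=0.05):
    # For each warm-up attempt, pick a random dynamic object, then sample
    # an action point from a Gaussian centred at that object's centre.
    idx = rng.integers(0, len(object_centers), size=n_initial)
    return object_centers[idx] + rng.normal(0.0, sigma, size=(n_initial, 2))

warmup = sample_warmup_actions()
print(warmup.shape)  # (9, 2): n_initial two-dimensional action points
```

Each run produces a different random set of n_initial points, which is how the GP builds a noisy prior covering different areas of the scene.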
> [vdey.2] Causal-PIK tested on I-PHYRE
Thank you for the suggestion to consider multi-step reasoning tasks like I-PHYRE. While we acknowledge the value of such tasks, we would like to highlight that I-PHYRE is not necessarily a more challenging problem than our current setting. For example, assuming a time resolution of 0.01s and five removable objects over a 15-second period, the action space in I-PHYRE would be limited to 7,500 possible actions—significantly more constrained than the over 2 million possible actions in our setting for PHYRE.
To the best of our knowledge, no prior work has attempted both benchmarks. While benchmarks like PHYRE and Virtual Tools focus on spatial physical reasoning, I-PHYRE emphasizes time-based physical reasoning. Nonetheless, our method could be adapted to solve such problems by querying Causal-PIK at every time step and incorporating a no-op action choice. While this modification falls outside the scope of our current contribution, we believe it could offer valuable insights for future developments.
> [vdey.3] Scaling to bigger states and actions and generalizing to more complex scenarios
Please refer to comments [jF9J.1] and [jF9J.3].
> [vdey.4] Updated PHYRE results for our method (Table 2)
Please refer to comment [jF9J.2].
---
Summary: This paper proposes a method, Causal-PIK, using Bayesian optimization for causal reasoning via a Physics-Informed Kernel, in order to obtain an expressive posterior distribution over the environment dynamics.
Unlike prior works directly using a learned dynamics model to choose actions, Causal-PIK uses dynamics predictions to instill physical intuition into kernel updates during Bayesian optimization.
A crucial component of the physics informed kernel is causal similarity of actions, capturing how similar actions are based on their ability to cause similar outcomes in the environment.
The proposed method is tested on single-intervention physical reasoning tasks, Virtual Tools and PHYRE, and is shown to beat state-of-the-art methods on these benchmarks.
## Update after rebuttal
Thanks for the rebuttal. I remain positive about the paper and will maintain my score.
Claims And Evidence: The authors highlight the importance of the kernel used for the Gaussian Process and instead of a standard RBF kernel, they develop a kernel that encodes an intuition of causality and physics. They show that this outperforms the RBF kernel.
Methods And Evaluation Criteria: The proposed method is evaluated on two benchmarks: Virtual Tools and PHYRE.
The main baseline on Virtual Tools is the SoTA method SSUP, which samples actions from an object-based prior and simulates the sampled actions to find the best action to try.
On PHYRE, Causal-PIK operates over the full action space, but is also compared against methods that operate on a reduced action space, which simplifies the problem.
The authors show that their method outperforms baselines on these benchmarks. However, the standard deviation in the results are quite high. That's why I am wondering if the results are statistically significant.
Theoretical Claims: n/a
Experimental Designs Or Analyses: The tested benchmarks are valid. My main concern at the moment is the high variance across the results.
Supplementary Material: yes, only one section regarding dynamics model details.
Relation To Broader Scientific Literature: Physical reasoning models, intuitive physics are well-established concepts. Causal-PIK, unlike prior work, uses Bayesian optimization using previous trials to inform future action selection.
Essential References Not Discussed: not to my knowledge
Other Strengths And Weaknesses: Strengths:
- method that can efficiently solve causal intervention tasks with a few attempts via physics-informed Bayesian Optimization
Some weaknesses:
1. The authors first construct initial sets of actions X and scores y to initialize the GP prior using a probabilistic intuitive physics engine. The dependence on this initial dataset is not ideal.
2. Do the authors see any limitations with the current form of the physics-informed kernel?
3. How do the authors expect this method to scale in high-dimensional action/state spaces?
4. How do the authors interpret the high variance in the results presented in Tables 1 and 2? Especially for the Virtual Tools benchmark
Other Comments Or Suggestions: 1. I think Figure 2 can be further improved for clarity, especially the action similarity component.
Questions For Authors: Copying over my questions from the strengths and weaknesses field above:
1. The authors first construct initial sets of actions X and scores y to initialize the GP prior using a probabilistic intuitive physics engine. The dependence on this initial dataset is not ideal. Can the authors comment on this part? How many samples are needed?
2. Do the authors see any limitations with the current form of the physics-informed kernel?
3. How do the authors expect this method to scale in high-dimensional action/state spaces?
4. How do the authors interpret the high variance in the results presented in Tables 1 and 2? Especially for the Virtual Tools benchmark
Code Of Conduct: Affirmed.
Overall Recommendation: 3
---
Rebuttal 1:
Rebuttal: We thank reviewer nzda for their thoughtful feedback. We are glad they consider the chosen benchmarks to be relevant for the task at hand. We address the reviewer’s comments and will incorporate all of the following discussions in the final draft.
> [nzda.1] High variance in the results presented in Tables 1 and 2
The high variance observed in the results presented in Tables 1 and 2, particularly for the Virtual Tools benchmark, can be attributed to the nature of the puzzles themselves. Some puzzles are significantly more challenging, leading to near-zero scores, while others are more easily solvable, creating a broad distribution of results. This effect is not unique to our method—other baselines also exhibit substantial variance for the same reason.
To further illustrate this, in the PHYRE benchmark, the supplemental material (Figure 10 in Bakhtin et al., 2019) provides histograms showing the distribution of the number of actions that solve a given task. These histograms highlight how some tasks are much more difficult than others, reinforcing the idea that the variance is an inherent property of the task design rather than a shortcoming of any particular method.
> [nzda.2] Characteristics and dependence on initial point set
Please refer to comment [vdey.1].
> [nzda.3] Physics-Informed Kernel limitations
We appreciate the reviewers' interest in understanding the limitations of the Physics-Informed Kernel (Causal-PIK). As outlined in the limitations section of the paper, Causal-PIK currently does not share knowledge across tasks. Enabling agents to recognize similarities between tasks and leverage past observations from tasks requiring similar physical reasoning remains an important avenue for future work. By identifying regions of the action space that share underlying dynamics, agents could potentially integrate prior knowledge to solve new tasks more efficiently.
Another limitation arises from the noise introduced by causal effect predictions, which directly impacts performance. Poor predictions can introduce misleading similarities, potentially guiding the agent in the wrong direction. Improving these predictions would enhance the expressivity of the Physics-Informed Kernel. However, our results demonstrate that Causal-PIK remains robust despite this noise, suggesting potential for future sim-to-real transfer. For additional information on how Causal-PIK resists noise in causal effect predictions, we kindly refer the reviewer to comment [d7h6.1].
> [nzda.4] Scaling to bigger states and actions
Please refer to comment [jF9J.1].
> [nzda.5] Improving Figure 2
We appreciate the reviewer’s feedback on the action similarity component of Figure 2. We will make this part of the figure clearer in the final manuscript draft.
> [nzda.6] Updated PHYRE results for our method (Table 2)
Please refer to comment [jF9J.2].
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. I will maintain my current score.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer nzda, thank you again for your thoughtful feedback during this rebuttal period. In our previous responses, we mentioned that we had a human study in progress to better explain and contextualize the complexity of the tasks described in our paper. This study has now concluded, and we would like to provide you with the final scores for completeness. AUCCESS scores for humans on the PHYRE benchmark with the final sample (n = 50) are 36.6 ± 10.2, which is very close to the preliminary scores we reported in the initial reply. Again, participants were given a maximum of 10 attempts (for reference, Causal-PIK @10 has a score of 24.8 ± 9.22).
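For completeness, the AUCCESS metric rewards solving tasks in fewer attempts via a log-weighted average. A minimal sketch of how such a score could be computed (our reconstruction of the PHYRE-style weighting; the function name `auccess` and its argument layout are our own, not the benchmark's API):

```python
import math

def auccess(solved_within_k, max_attempts=100):
    # solved_within_k[k-1]: fraction of tasks solved within the first k attempts.
    # Weights log(k+1) - log(k) decay with k, so early solves count more.
    weights = [math.log(k + 1) - math.log(k) for k in range(1, max_attempts + 1)]
    score = sum(w * s for w, s in zip(weights, solved_within_k))
    return 100.0 * score / sum(weights)

print(auccess([1.0] * 100))  # solving every task on the first attempt -> 100.0
```

Under this weighting, solving everything only on the 100th attempt yields a score well below 1, which is why restricting agents to 10 attempts is a much harder setting.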
To further analyze the variance in scores, we computed individual scores for each puzzle on both benchmarks. The correlation between AUCCESS for human participants and various models across puzzles on Virtual Tools is:
* Ours Causal-PIK @10: r = 0.63 (p = 0.003)
* Ours RBF @10: r = 0.66 (p = 0.001)
* SSUP: r = 0.71 (p < 0.001)
* DQN: r = 0.32 (p = 0.17)
The correlation between scores for humans and models on PHYRE 1B is:
* Ours Causal-PIK @10: r = 0.71 (p < 0.001)
* Ours Causal-PIK @100: r = 0.73 (p < 0.001)
* Ours RBF @10: r = 0.66 (p < 0.001)
* Ours RBF @100: r = 0.64 (p < 0.001)
* Harter et al. @100: r = 0.55 (p = 0.005)
The high correlation in scores between humans and our model, even when restricted to a maximum of 10 attempts per puzzle, suggests high alignment in the types of physical dynamics that were found to be easy or difficult to reason about. Causal-PIK was most correlated with humans across individual puzzles on PHYRE, but slightly less correlated than SSUP on Virtual Tools, although overall AUCCESS was still higher. This may be due to the fact that our model was able to solve several puzzles that humans find very challenging. For instance, on one particular puzzle (Table B), humans scored 0.07 and SSUP only scored 0.04, while Causal-PIK scored 0.31. This lowers the correlation with humans across puzzles, but highlights the overall performance of our method. We hope that these additional analyses provide further insight into the variance in scores that you mentioned.
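The per-puzzle correlations above can be reproduced in a few lines; the arrays below are illustrative placeholders, not the actual study data:

```python
import numpy as np

# Hypothetical per-puzzle AUCCESS scores for humans and a model (illustrative only)
human = np.array([0.07, 0.31, 0.55, 0.80, 0.12, 0.64])
model = np.array([0.31, 0.28, 0.60, 0.75, 0.05, 0.70])

# Pearson correlation across puzzles; a two-sided p-value would come from,
# e.g., scipy.stats.pearsonr(human, model).
r = np.corrcoef(human, model)[0, 1]
print(round(r, 3))
```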
Additionally, statistical analysis using z-tests reveals that Causal-PIK significantly outperforms the baseline SSUP (Allen et al., 2020) on the Virtual Tools benchmark (z = 8.508, p < 0.0001). Likewise, on the Phyre benchmark, Causal-PIK shows substantial improvement over the baseline with comparable action space size (Harter et al., 2020; z = 59.540, p < 0.0001). Notably, when compared to the baseline model operating within an action space 200 times more constrained (Qi et al., 2021), z-test scores (z = –3.170, p = 0.0015) show that Causal-PIK can perform equally well with a much harder task setting.
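As a sketch of the test used above (assuming the reported ± values act as standard errors of independent mean scores; the helper name is ours):

```python
import math

def two_sample_z(m1, se1, m2, se2):
    # z-statistic for the difference of two independent mean scores,
    # with a two-sided p-value from the standard normal CDF.
    z = (m1 - m2) / math.sqrt(se1 ** 2 + se2 ** 2)
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # P(Z <= |z|)
    p = 2.0 * (1.0 - phi)
    return z, p

# Illustrative numbers, not the paper's: a 15-point gap with SE = 2 per side
print(two_sample_z(65.0, 2.0, 50.0, 2.0))
```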
Thank you again for your time and consideration throughout this review process. | Summary: The paper introduces Causal-PIK, a novel approach that integrates a Physics-Informed Kernel with Bayesian Optimization to reason about causality in single-intervention physical reasoning tasks. Experimental results on Virtual Tools and PHYRE physical reasoning benchmarks verify the proposed method could finish the task with fewer actions.
## update after rebuttal
Thank you for the thoughtful rebuttal. I appreciate the effort to address my concerns. I will maintain my original score.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes, though the targeted physical reasoning task seems to be a toy setting with only 3-30 object candidates and 2D coordinates.
Theoretical Claims: There are no theoretical claims.
Experimental Designs Or Analyses: The experimental designs are sound, with good ablation studies on the proposed PIK kernel. The experimental analyses seem valid.
Supplementary Material: There is no Supplementary Material.
Relation To Broader Scientific Literature: The paper designs a physics-informed kernel and uses Bayesian Optimization to reason over causality. It would be helpful if the approach could scale to real-world-level tasks that require more complex state and action spaces.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: Strength: The paper is clearly presented, and the proposed method achieves new sota on two benchmarks.
Weakness: The targeted task of physical reasoning seems to be only a toy setting with a limited action space and an oracle state space; whether the proposed method would be useful on real-world problems remains unknown.
Other Comments Or Suggestions: N/A
Questions For Authors: Is there any evidence the proposed method could scale up to solving physical reasoning problems of more complexity?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank reviewer jF9J for their thoughtful feedback. We are glad they consider our method to be clearly presented with a sound experimental design which includes good ablation studies. We address the reviewer’s comments and will incorporate all of the following discussions in the final draft.
> [jF9J.1] Scaling to bigger states and actions
We appreciate the reviewer's interest in how our approach scales to higher-dimensional action and state spaces. We would like to emphasize that our method already operates in significantly larger action spaces than many baseline approaches. For example, in the Phyre benchmark, our action space comprises 2M possible actions, whereas other baselines consider only 10K.
Increasing the state and action space—such as introducing multiple objects in specific configurations or simultaneously determining both the direction and speed for accurate object throws—does not alter the fundamental kernel equations (2)–(6), meaning the kernel would remain expressive at comparing the immediate effect of these high dimensional actions. Certainly, the search space for BO would increase as these new dimensions are encoded within the state returned by the dynamics model. However, researchers have shown that BO scales robustly with high-dimensional actions (Antonova et al., 2019). We included this information in the limitations section of our manuscript.
> [jF9J.2] Updated PHYRE results for our method (Table 2)
We appreciate the reviewer's feedback which prompted additional experiments. During this process, we discovered and fixed a bug in our AUCCESS score computation for the Phyre benchmark. This correction has resulted in the following updated PHYRE results in Table 2:
* Ours Causal-PIK: 41.6 ± 9.33 (previously reported as 51.3 ± 8.46)
* Ours RBF: 27.7 ± 9.68 (previously reported as 31.99 ± 9.46)
The Virtual Tools results in Table 1 remain the same, as they were not affected by this bug.
Importantly, these corrections do not affect the claims or contributions of our paper. Our approach still achieves state-of-the-art performance, maintaining a significant 10-point margin above the baseline that uses the same size action space. Furthermore, our method performs comparably to baselines that utilize a drastically reduced action space of 10K.
We will incorporate these corrected values in the final manuscript draft.
> [jF9J.3] Tasks lacking complexity
We appreciate the reviewer’s interest in understanding how our method scales to more complex tasks. While the tasks in this study may appear deceptively simple in their 2D form, they are in fact quite challenging—even for humans. Solving these puzzles requires an understanding of the underlying physics involved, such as momentum, balance, geometry, mass, and propulsion. Importantly, agents do not have any details about the environment, such as object density, friction coefficients or material composition, making it impossible to plan the exact solution without active exploration.
To further emphasize the complexity of our current setup, we are conducting an ongoing human study using the Phyre benchmark, which is similar to the study presented in Allen et al. (2020). A total of 50 participants (currently n=17) will be recruited from Prolific and shown one variation of each of the 25 puzzles in a random order. Participants will have 10 attempts to solve each puzzle by using the mouse to draw and place the ball in any valid location in the scene. On each attempt, they will watch the simulation run until it either succeeds, after which they will continue on to the next puzzle, or time out or all objects stop moving, at which point they can try again. If they run out of attempts, then they will also be directed to the next puzzle. Preliminary results show that participants spend about 1.8 minutes on each puzzle. Preliminary AUCCESS scores, which will be added to the manuscript to complement Tables 1 and 2, are as follows:
* Virtual Tools - Humans: 53.25 ± 23 (Causal-PIK @10: 65.0 ± 25.0)
* Phyre - Humans: 34.9 ± 10.72 (Causal-PIK @10: 24.8 ± 9.22)
These preliminary results demonstrate that humans find these puzzles to be very challenging.
A logical progression of our work would involve creating a 3D high-fidelity environment for these tasks. This shift to a more complex setting would still leave the kernel equations unchanged. We would only need to change the dynamics model to one designed for 3D environments, such as those proposed by Xue et al. (2024) [A] or Driess et al. (2023) [B].
[A] Xue, H., Torralba, A., Tenenbaum, J., Yamins, D., Li, Y., & Tung, H. Y. (2024). 3D-IntPhys: towards more generalized 3D-grounded visual intuitive physics under challenging scenes. Advances in Neural Information Processing Systems, 36
[B] Driess, D., Huang, Z., Li, Y., Tedrake, R., & Toussaint, M. (2023, March). Learning multi-object dynamics with compositional neural radiance fields. In Conference on robot learning (pp. 1755-1768). PMLR | null | null | null | null | null | null |
Propagate and Inject: Revisiting Propagation-Based Feature Imputation for Graphs with Partially Observed Features | Accept (poster) | Summary: This paper identifies the problem of having low-variance channels after diffusion with mostly missing values. This happens when the available states are very similar. They propose adding random features to those channels and restarting the diffusion process with these synthetic features and the original low-variance features, which leads to higher-variance features. Experimental results show improvements on downstream tasks.
Claims And Evidence: Claims and Evidence are generally fine.
Methods And Evaluation Criteria: * There is no clear motivation for injecting random node states into existing channels.
* A channel has some meaning, so why would a random feature at a random node make sense?
* To me, the method is more like dropping low-signal channels and then adding new channels with your proposed diffused signal.
* Since your method gives the synthetic features greater influence during diffusion, it already seems to move in this direction, so the original low-variance features may not be needed.
* If the authors are convinced that the low-variance features are beneficial, an experiment should be conducted to compare it to the case when dropping the low-variance features.
* A diffusion process with random features will provide structural information, e.g., which nodes are closely connected and far apart. This is no longer related to the original features.
* Consequently, the proposed method can be seen as a positional encoding that adds the diffused features as additional channels to a graph.
* Thus, I would not see your method as an imputation method but as adding structural information from which any imputation method can benefit.
* Your method is not permutation equivariant as a random node is chosen, which should be noted but is generally fine for the applications that are considered.
* Experiments should, therefore, compare first and foremost with methods for positional encoding, not with imputation methods.
Theoretical Claims: Theoretical claims only concern the convergence of diffusion processes, which is always given.
Experimental Designs Or Analyses: * To me, the experiments do not evaluate the interesting parts of this method.
It would be interesting to:
* Apply FISF to other imputation methods as additional channels.
* Compare the additional channels to other positional encodings.
* Evaluate whether removing low-variance channels matters and how many additional channels with synthetic features improve results.
Supplementary Material: I checked the theoretical part of the Appendix.
Relation To Broader Scientific Literature: The paper relates nicely to literature on imputation methods. Connections to positional encodings are essential but are missing.
Essential References Not Discussed: References to positional encodings are missing, e.g., the following:
Laplacian Positional Encoding: Dwivedi et al., Benchmarking graph neural networks, JMLR 2023.
Random Walk positional encoding: Dwivedi et al., Graph neural networks with learnable structural and positional representations, ICLR 2022.
Other Strengths And Weaknesses: I like the idea of adding structural information to tasks with missing features. There seems to be a lot of potential; I just wish that this paper went a bit deeper.
Other Comments Or Suggestions: -
Questions For Authors: -
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer’s detailed and perceptive comments.
First of all, we would like to clarify that propagation-based imputation methods for graph learning with missing features are designed to assign values to missing features in a way that improves downstream task performance. Accordingly, these **imputation methods preserve the original dimensionality of the feature matrix in their output**. In this context, our work focuses on addressing a limitation of current propagation-based imputation methods identified in this study.
> **Q1.** A channel has some meaning, so why would a random feature at a random node make sense?
Yes, each channel has its own meaning. However, when a channel is filled with nearly identical values, such a low-variance channel contributes little to downstream tasks. Our goal is to **make that channel, which has become uninformative for downstream tasks, useful**. As the reviewer pointed out, while the synthetic feature may not preserve the original meaning of the channel, it helps to restore distinctiveness within the channel, leading to performance improvements in downstream tasks.
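To make the two-stage idea concrete, here is a minimal toy sketch of propagation-based imputation with a synthetic feature on a path graph (illustrative only, not our actual implementation; the graph, observed values, and the fixed synthetic value 5.0 are assumptions for determinism, whereas FISF samples the value randomly):

```python
import numpy as np

def diffuse(adj_norm, x, mask, iters=200):
    # Propagation-based imputation: spread values along edges,
    # re-clamping the observed entries after every step.
    out = x.copy()
    for _ in range(iters):
        out = adj_norm @ out
        out[mask] = x[mask]
    return out

rng = np.random.default_rng(0)
n = 6
adj = np.zeros((n, n))
for i in range(n - 1):                      # a 6-node path graph
    adj[i, i + 1] = adj[i + 1, i] = 1.0
adj_norm = adj / adj.sum(1, keepdims=True)  # row-normalized adjacency

x = np.zeros(n)
known = np.zeros(n, dtype=bool)
known[[0, 5]] = True
x[[0, 5]] = 1.0              # identical observed values -> low-variance channel

before = diffuse(adj_norm, x, known)        # every entry converges toward 1.0

# Inject a synthetic feature at a random unobserved node, then re-diffuse
# using only the observed and synthetic values.
x2, known2 = x.copy(), known.copy()
j = rng.choice(np.flatnonzero(~known))
x2[j], known2[j] = 5.0, True
after = diffuse(adj_norm, x2, known2)
print(np.var(before), np.var(after))        # near-zero vs. clearly nonzero
```

The clamped diffusion drives the channel to a harmonic interpolation of its observed values, so identical observations collapse the variance; the injected value restores distinctiveness between nodes.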
> **Q2-2.** Evaluate whether removing low-variance channels matters.\
> **Q2-1.** If the authors are convinced that the low-variance features are beneficial, an experiment should be conducted to compare it to the case when dropping the low-variance features.
To show that observed features within a low-variance channel are beneficial in addition to the synthetic feature, we conduct additional experiments comparing FISF to FISF-OSF—a variant that performs the final diffusion process in low-variance channels using only synthetic features, without observed features. Table 33 presents the results; **the superior performance of FISF highlights the importance of using low-variance observed features**. FISF leverages the fact that the observed features within a low-variance channel have nearly identical values by preserving and diffusing them during the diffusion process, thereby making use of the remaining feature information.
**Table 33:** https://anonymous.4open.science/r/ICML12446-AF4E/Table%2033.png
> **Q3-1.** I would not see your method as an imputation method but as adding structural information from which any imputation method can benefit.\
> **Q3-2.** Apply FISF to other imputation methods as additional channels.\
> **Q3-3.** Compare the additional channels to other positional encodings.
We agree with the reviewer’s perspective that diffusion with synthetic features can be viewed as a way of adding structural information, and that this approach can also be applied to other imputation methods as additional channels. To address the reviewers’ concerns, **we conduct additional experiments in which FISF is applied to other imputation methods as additional channels** (denoted as FISF$^+$). We **further compare FISF$^+$ with Laplacian Positional Encoding (LPE) [1] and Random Walk Positional Encoding (RWPE) [2]**. Table 34 shows the results. As shown in the table, while LPE and RWPE generally improve the performance of existing imputation methods by providing structural information, **FISF$^+$ consistently achieves the most significant performance improvements**. Unlike positional encodings, our approach can use the feature information preserved in low-variance channels.
**Table 34:** https://anonymous.4open.science/r/ICML12446-AF4E/Table%2034.png
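For reference, a minimal sketch of the random-walk positional encoding compared above (our illustrative reading of Dwivedi et al., 2022; the exact implementation used for the RWPE baseline may differ):

```python
import numpy as np

def rwpe(adj, K=4):
    # Node i's encoding stacks the k-step return probabilities
    # diag((A D^{-1})^k) for k = 1..K.
    rw = adj / adj.sum(axis=0, keepdims=True)  # column-stochastic random walk
    pe, M = [], np.eye(adj.shape[0])
    for _ in range(K):
        M = M @ rw
        pe.append(np.diag(M).copy())
    return np.stack(pe, axis=1)                # shape (num_nodes, K)

# 3-node path graph: the middle node always returns to itself in two steps
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(rwpe(adj, K=2))
```

Such encodings provide purely structural channels, whereas FISF$^+$ additionally retains the feature information preserved in low-variance channels.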
> **Q4.** Evaluate how many additional channels with synthetic features improve results.
In FISF, the number of channels into which synthetic features are injected is controlled by the hyperparameter $r$, and its effect was analyzed in Appendix C.8. However, in the context of FISF$^+$, **we conduct additional experiments to evaluate how many additional channels with synthetic features improve results**. Table 35 shows the results. As shown in the table, **even a small number of additional channels using FISF$^+$ leads to substantial performance improvements**. We further observe that, for each dataset, the optimal number of additional channels tends to lie near the number of channels into which the original FISF injects synthetic features.
**We will cite the insightful references provided by the reviewer [1, 2] and include all the experimental results presented above in the revised manuscript**.
**Table 35:** https://anonymous.4open.science/r/ICML12446-AF4E/Table%2035.png
> **Q5.** Your method is not permutation equivariant as a random node is chosen, which should be noted but is generally fine for the applications that are considered.
We agree that, while our method is not permutation equivariant due to the random node selection, this has not posed any practical issues in the applications considered. We will explicitly note this property in the revised manuscript for clarity.
[1] "Benchmarking graph neural networks." JMLR 2023.\
[2] "Graph neural networks with learnable structural and positional representations." ICLR 2022.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed rebuttal and for conducting additional ablation studies. These seem to be very valuable in confirming the effectiveness of the proposed approach. It is a promising tool for cases when permutation equivariance is not required. I now support accepting this paper and have increased my score to 3.
---
Reply to Comment 1.1.1:
Comment: We are grateful to the reviewer for taking the time to share this thoughtful rebuttal comment. Your insightful suggestions allowed us to further validate the effectiveness of our proposed approach from a different perspective. We sincerely appreciate your decision to raise your score and your support for the acceptance of our paper. | Summary: This paper targets missing data imputation for graph data. The authors highlighted that existing propagation-based methods produce nearly identical values within each channel and they contribute little to graph learning. To resolve this limitation, the authors propose a propagation-based imputation scheme that consists of two diffusion stages. First, the method imputes the data using existing propagation-based methods in which the data obtain the low-variance channels. Then the method removes all the imputed features in the low-variance channels and generates a synthetic feature by injecting random noise into a randomly selected node. Finally, the method diffuses both the observed and synthetic features to produce the final imputed features which have distinct imputed values for those channels. The experiments show that the methods increase the variance of imputed values for different channels so the graph learning tasks like semi supervised node classification and link prediction.
Claims And Evidence: The claims in the paper are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the target problem.
Theoretical Claims: The overall proofs in the supplementary are sound.
Experimental Designs Or Analyses: The experimental designs is comprehensive with different data set and methods are included mainly for semi supervised node classification and link prediction. And effects of hyperparameter and ablation study are included which are detailed in the supplementary.
Supplementary Material: I have review the supplementary material for the theoretical proof and experiment parts.
Relation To Broader Scientific Literature: The study is focus on graph missing data Imputation which is applicable to any domain that have graph data which suffer from missing issues such as social network that contain many missing features for people in the network.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. The writing, visualization, and organization of the paper are clear, and the experiments are comprehensive, covering many datasets and methods. In particular, ablation studies, hyperparameter effects, and scalability analyses are included, which makes the work more solid.
2. Theoretical proofs are derived to show the convergence of the diffusion stages and to explain why channel-wise inter-node diffusion produces similar imputed values in channels where the known features have similar values.
Other Comments Or Suggestions: No
Questions For Authors: No
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s positive evaluation of our work and the absence of noted weaknesses. We thank the reviewer for recognizing that the claims in our paper are supported by clear and convincing evidence, and for highlighting the clarity of the writing, the soundness of the theoretical analysis, and the comprehensiveness of the experiments, including ablation studies and scalability. That said, we noticed that the reviewer’s overall recommendation was “Weak Accept (i.e., leaning towards accept, but could also be rejected).” If there are any remaining concerns or suggestions for improvement that we may have overlooked, we would greatly appreciate your feedback and are ready to address them promptly.
Throughout the rebuttal period, we have conducted the following additional discussions on our proposed method to further strengthen our paper, all of which will be included in the revised manuscript:
* Generalizability across downstream GNN architectures\
(see **Table 30**: https://anonymous.4open.science/r/ICML12446-AF4E/Table%2030.png)
* Ablation study on the use of synthetic features\
(see **Table 32**: https://anonymous.4open.science/r/ICML12446-AF4E/Table%2032.png)
* Description of the algorithm\
(see **Algorithm 1**: https://anonymous.4open.science/r/ICML12446-AF4E/Algorithm%201.png)
* Applicability to existing imputation methods as additional channels\
(see **Table 34**: https://anonymous.4open.science/r/ICML12446-AF4E/Table%2034.png)
* Effect of the number of additional channels with synthetic features on performance\
(see **Table 35**: https://anonymous.4open.science/r/ICML12446-AF4E/Table%2035.png)
We hope that these additions and clarifications help address any remaining uncertainties and reinforce your confidence in the significance of our work. If the reviewer finds our responses satisfactory, we would be sincerely grateful if you would consider revisiting your overall recommendation. | Summary: This paper addresses the issue of missing features in graph data, which hinders the effectiveness of Graph Neural Networks (GNNs). Existing diffusion-based imputation methods often result in low-variance channels, where feature values across nodes are nearly identical, leading to poor performance in downstream tasks. The paper proposes Feature Imputation with Synthetic Features (FISF), a novel imputation scheme that mitigates the low-variance problem by introducing synthetic features. FISF consists of two diffusion stages: pre-diffusion and diffusion with synthetic features. Pre-diffusion identifies low-variance channels using existing methods like PCFI. Then, FISF injects synthetic features into randomly chosen nodes in these channels, followed by a second diffusion stage that spreads the synthetic features to increase variance and improve node distinctiveness.
Claims And Evidence: This paper identifies the low-variance issue in feature imputation and addresses it with synthetic features.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: no theoretical claims
Experimental Designs Or Analyses: yes
Supplementary Material: Yes. The scalability parts.
Relation To Broader Scientific Literature: This may help with the missing-feature issue in graph-structured data. However, missing-data issues have already been well explored.
Essential References Not Discussed: I do not find the missing essential references.
Other Strengths And Weaknesses: Pros:
FISF introduces a new perspective on feature imputation by addressing the low-variance issue with synthetic features.
This approach shows promising results in experiments.
Cons:
This work seems to show its advantage especially at large missing rates, such as 0.995 and 0.999. However, such large missing rates are impractical in real applications.
There is no comparison regarding scalability and efficiency in the main body.
Other Comments Or Suggestions: It is better to provide a description of the algorithm in the paper.
It is highly recommended to put the analysis in the appendix to the main body. The reorganization of the paper will greatly enhance this paper.
Questions For Authors: See Other Strengths And Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the thoughtful questions and valuable suggestions to further improve our work.
> **Q1.** This work seems to show its advantage in especially large missing rate, such as 0.995 and 0.999. However, such a large missing rate is impractical in real applications.
**Our FISF consistently demonstrates superiority across various missing rates ($r_m$), including low $r_m$, as shown in Figure 3 in the manuscript**. As the reviewer mentioned, the performance gain obtained with FISF diminishes as the missing rate decreases. However, this is natural, since a smaller $r_m$ means fewer missing features to impute, making it difficult to achieve a significant improvement solely through the superiority of the imputation method. Nevertheless, FISF consistently shows its effectiveness even at low $r_m$.
Furthermore, **addressing high rates of missing features is an important issue in real-world scenarios**. As data sources become more diverse and abundant, the prevalence of highly incomplete data is also increasing. Consequently, **handling large missing rates has drawn significant attention across various domains**, including semiconductors [1], healthcare [2], and transportation [3], where datasets often exhibit extreme missing rates of 97.5\%, 99.98\%, and 99.99\%, respectively.
[1] "Bayesian nonparametric classification for incomplete data with a high missing rate: an application to semiconductor manufacturing data." IEEE Transactions on Semiconductor Manufacturing (2023).\
[2] "Temporal Belief Memory: Imputing Missing Data during RNN Training." IJCAI 2018.\
[3] "Dynamic adaptive generative adversarial networks with multi-view temporal factorizations for hybrid recovery of missing traffic data." Neural Computing and Applications (2023).
> **Q2-1.** There is no comparison regarding scalability and efficiency in the main body.\
> **Q2-2.** It is highly recommended to put the analysis in the appendix to the main body. The reorganization of the paper will greatly enhance this paper.
We agree that scalability and efficiency are important considerations when evaluating imputation methods. To demonstrate the effectiveness and validity of FISF, we conducted extensive and in-depth analyses. However, due to the strict 8-page limit, it was challenging to include these analyses in the main body in addition to presenting the core results. As noted in Appendix C.5 and C.6, we have already provided a complexity analysis and empirical results demonstrating the scalability of FISF. Since the final versions of accepted papers are allowed one additional page, **we will reorganize the paper by incorporating the analyses from Appendix C.5 and C.6 into the main body** in response to the reviewer’s suggestion. We sincerely appreciate the reviewer’s insightful suggestion and believe that this revision will further strengthen the presentation of our paper. If there are any remaining concerns or suggestions for improvement, we would be happy to receive your constructive feedback and are fully prepared to address any remaining points promptly.
> **Q3.** It is better to provide a description of the algorithm in the paper.
We agree that providing a description of the algorithm is helpful for improving the clarity of the proposed method. In response to the reviewer’s suggestion, **we have written Algorithm 1 and will include it in Section 4 of the revised manuscript**.
**Algorithm 1**: https://anonymous.4open.science/r/ICML12446-AF4E/Algorithm%201.png | Summary: This work identifies a limitation of previous works for learning on graphs with missing features, that being the output channels for feature imputation have low-variance. To solve this problem, the authors diffuse the observed features with injected random noise to produce final imputed features. Their method, FISF, is compared to several baselines on standard node classification tasks.
Claims And Evidence: The empirical results support the claims that FISF reduces the number of channels with low-variance as compared to other methods such as FP and PCFI. Furthermore, on the task setting where node features are removed at high rates, FISF demonstrates superior performance on node classification tasks.
Methods And Evaluation Criteria: The methods are sound and evaluation is standard for this problem.
Theoretical Claims: The paper does not emphasize any theoretical claims. However, the work provides sufficient support for their claim in Sec 4.3 within the appendix.
Experimental Designs Or Analyses: The experimental design is appropriate for the graph learning tasks.
Supplementary Material: I read through Sec. A and B of the appendix, and briefly looked through Sec. C.
Relation To Broader Scientific Literature: Works investigating graph learning using data with missing node features largely depend upon explicitly/implicitly predicting the values of the missing data. This work identifies one of the issues with a naive implementation of feature imputation, where many output channels have low-variance. This understanding will be important in future works.
Essential References Not Discussed: To my knowledge, the relevant literature is discussed.
Other Strengths And Weaknesses: The empirical results are very strong. The addition of theoretical justifications would make this work even stronger. See questions.
Other Comments Or Suggestions: See questions.
Questions For Authors: (1) Is there any theory to corroborate the reason that these low-variance channels are negatively impacting downstream node classification tasks? On a related note, why does the injection of synthetic features improve downstream tasks? Theory on the “why” would make this work incredibly strong.
(2) Why perform channel-wise diffusion? What if all channel features are determined by the synthetic features only? Are there any studies/ablations?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer’s positive feedback on the strength of our empirical results and theoretical justifications. We also appreciate the insightful questions, which provide valuable guidance for further enhancing our paper.
> **Q1.** Is there any theory to corroborate the reason that these low-variance channels are negatively impacting downstream node classification tasks? On a related note, why does the injection of synthetic features improve downstream tasks? Theory on the “why” would make this work incredibly strong.
We would like to respectfully clarify that our claim is not that low-variance channels produced by propagation-based imputation methods negatively impact downstream node classification tasks, but rather that they contribute little to them. **From a theoretical perspective, a zero-variance channel—corresponding to the first eigenvector of the graph Laplacian—is regarded in the literature as an example of zero expressiveness, as it is not useful for discriminating between nodes** [1, 2]. Based on this, we identify the issue that existing propagation-based imputation methods produce channels with near-zero variance in their output. We experimentally confirm that these low-variance channels contribute very little to downstream tasks, as shown in Figure 1b and Appendix C.7 of the manuscript.
We theoretically prove that propagation-based feature imputation produces low-variance channels when all observed features in a given channel have the same value. To prevent a group of nearly identical values within a channel from diffusing only themselves and consequently forming a low-variance channel, we inject a synthetic feature with a randomly sampled value that is likely to differ from the existing known values, allowing it to participate in the diffusion process. As a result, our FISF effectively increases the variance of low-variance channels in its output matrix, as shown in Figure 1a, Figure 11, and Figure 12 in the manuscript. This increase in channel variance leads to significant performance improvements across various downstream tasks and diverse domains, as shown in Figure 3, Table 1, Table 2, and Table 4 in the manuscript. In summary, **synthetic feature injection enables low-variance channels to overcome their lack of distinctiveness and recover expressiveness**.
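The mechanism described in the rebuttal can be sketched on a toy graph. This is not the authors' FISF implementation, only a minimal illustration, assuming a random-walk-normalized path graph and a simple `propagate` helper: when all observed features in a channel share one value, clamped diffusion converges to a near-constant (low-variance) channel, and clamping one extra randomly sampled "synthetic" value raises the variance.

```python
import numpy as np

def propagate(P, x0, mask, iters=200):
    """Diffuse one feature channel over the graph, re-clamping the
    observed entries after every step (as in feature propagation)."""
    x = x0.copy()
    for _ in range(iters):
        x = P @ x
        x[mask] = x0[mask]  # observed features keep their original values
    return x

# Path graph on 6 nodes with self-loops, random-walk normalized.
n = 6
A = np.eye(n)
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
P = A / A.sum(1, keepdims=True)

# A channel whose observed features all share the same value: diffusion
# converges to an (almost) constant channel, i.e. near-zero variance.
mask = np.array([True, False, False, False, False, True])
x0 = np.where(mask, 1.0, 0.0)
plain = propagate(P, x0, mask)

# Inject a synthetic feature: clamp one missing node to a randomly
# sampled value so that it participates in the diffusion.
rng = np.random.default_rng(0)
mask_syn = mask.copy(); mask_syn[3] = True
x0_syn = x0.copy(); x0_syn[3] = rng.uniform()
with_syn = propagate(P, x0_syn, mask_syn)

print(plain.var(), with_syn.var())  # injection raises the channel variance
```

The toy example mirrors the claim above: the plain channel collapses toward a constant, while the injected value forces the harmonic interpolation to vary across nodes.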
Appendix D.1 in the manuscript provides justification for synthetic feature injection, including conceptual explanation and in-depth analysis on channel variance. We will add this discussion to Appendix D.1 of the revised manuscript, interpreting the low-variance problem through the lens of spectral graph theory.
[1] Chung, Fan RK. Spectral graph theory. Vol. 92. American Mathematical Soc., 1997.\
[2] Von Luxburg, Ulrike. "A tutorial on spectral clustering." Statistics and computing 17 (2007): 395-416.
> **Q2**. Why perform channel-wise diffusion? What if all channel features are determined by the synthetic features only? Are there any studies/ablations?
In Appendix C.1 of the manuscript, we conducted an ablation study by adjusting each hyperparameter of FISF and the number of synthetic features injected per channel to analyze the effectiveness of its components. In addition to this study, and to address the reviewer’s concern, **we further conduct an additional ablation study on the use of synthetic features**. We compare the performance of FISF and three of its variants, depending on how synthetic features are injected and utilized within a low-variance channel.
* FISF-A (only synthetic feature injection): A synthetic feature is injected but not used in the diffusion process.
* FISF-B (fully synthetic features): All missing features are directly replaced with randomly sampled synthetic values without any diffusion.
* FISF-C (diffusion only with a synthetic feature): Diffusion is performed using only the synthetic feature, with known features removed.
Table 32 presents the results of semi-supervised node classification. As shown in the table, simply injecting synthetic features, or performing diffusion using only the injected synthetic feature without any observed features within the channel, results in significantly worse performance compared to the original FISF. The reason for performing channel-wise diffusion using both a synthetic feature and the observed features within a channel is that it **enables the output channel to capture both structural information and the feature information from the observed features simultaneously**. First, since the diffusion process is based on the graph structure, the diffusion of the synthetic feature can encode structural information. In addition, by preserving the original values of the observed features during the diffusion process, the feature information can also be retained.
We will include this extended ablation study in Appendix C.1 of the revised manuscript.
**Table 32**: https://anonymous.4open.science/r/ICML12446-AF4E/Table%2032.png
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their clarifications. I would agree now with the understanding that low-variance channels provide little contribution, and synthetic features would enable these low-variance channels to have more discrimination. I suppose a natural follow-up is whether there is a better strategy to choose these synthetic features than sampling from a random distribution, but this would probably be outside the scope of the claims for this work.
---
Reply to Comment 1.1.1:
Comment: **We sincerely thank the reviewer for taking the time to provide this thoughtful rebuttal comment. We are glad to hear that our clarifications have addressed the reviewer's concerns.**
As the reviewer insightfully suggested, exploring alternative strategies for generating synthetic features is indeed a meaningful direction. To this end, **we conduct an extensive ablation study comparing various value assignment strategies for synthetic features**. Since FISF does not involve a learning process during imputation, statistical approaches may serve as the most reasonable alternatives to random sampling. Specifically, we compare the performance of FISF variants using different value assignment strategies, including the max, min, mean, and median of the observed features. We also evaluate a variant called Channel-wise Mean + Std, which statistically determines synthetic feature values on a per-channel basis. The results are presented in Table 36. As shown in the table, the original FISF consistently achieves the best performance. We believe this performance gain stems from the increased diversity across feature channels in the imputed matrix, facilitated by the use of randomly sampled values. We will include this important discussion and the corresponding experimental results on value assignment strategies for synthetic features in the revised manuscript. If the reviewer has any suggestions for a potentially more promising strategy, we would warmly welcome them.
Additionally, we explored the effects of the magnitude of synthetic feature values, as reported in Appendix C.9 of the manuscript.
**If the reviewer's follow-up question has been fully addressed, as well as the previous concerns, we would sincerely appreciate your reconsideration of the overall recommendation**.
**Table 36**: https://anonymous.4open.science/r/ICML12446-AF4E/Table%2036.png | Summary: In this paper, the authors introduce FISF, a novel approach for graph feature imputation. FISF effectively mitigates the low-variance channel problem by strategically injecting synthetic features, thereby enhancing performance in both semi-supervised node classification and link prediction tasks across a wide range of missing rates.
Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes
Experimental Designs Or Analyses: Yes
Supplementary Material: Yes. All parts.
Relation To Broader Scientific Literature: The key contributions of the paper is important to the broader scientific literature.
Essential References Not Discussed: No
Other Strengths And Weaknesses: 1. Through numerous experiments, FISF was evaluated on multiple benchmark datasets. The results show that FISF significantly improves the performance of semi-supervised node classification and link prediction tasks, proving the effectiveness of the method.
2. The convergence of the diffusion stage was theoretically proven.
3. The method is novel as it is the first research to apply synthetic features to imputation.
Other Comments Or Suggestions: 1. All the compared baselines are from before 2023. Since I'm not familiar with this field, are there any other more advanced baselines?
2. Only GCN is used as the backbone. Other graph neural networks, such as GIN, can be considered to verify the structural generalizability of the method proposed by the author.
Questions For Authors: 1. The author has supplemented a lot of experiments. Haven't they considered publishing this research in a journal?
2. The generation methods of missing features only consider two situations: structural missing and uniform missing. Are these two situations common in real-world scenarios?
3. The author uses Grid search to determine the hyperparameters. Since there are quite a lot of hyperparameters and multiple experiments are needed, how long does it actually take to train the model once?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the detailed feedback and insightful questions that help us further improve our work.
> **Q1.** Since I'm not familiar with this field, are there any other more advanced baselines?
We appreciate the reviewer’s thoughtful question. **Before submitting the paper, we conducted a thorough investigation of recent methods and included all state-of-the-art baselines relevant to graph learning with missing features**. We have closely followed recent advances and carefully examined the literature in this area. Propagation-based feature imputation methods, including FP and PCFI, have demonstrated exceptional performance, and researchers have recently focused on extending the existing methods to new applications [1, 2]. **In contrast, our study identifies a key limitation of current propagation-based methods and proposes a solution that effectively addresses this issue**, achieving significant performance improvements. We will incorporate very recent work on propagation-based feature imputation, thereby ensuring that our paper reflects the most up-to-date developments in the field.
[1] "Gene-Gene Relationship Modeling Based on Genetic Evidence for Single-Cell RNA-Seq Data Imputation." NeurIPS 2024\
[2] "Relation-Aware Diffusion for Heterogeneous Graphs with Partially Observed Features." ICLR 2025.
> **Q2.** Other graph neural networks, such as GIN, can be considered to verify the structural generalizability of the method proposed by the author.
To verify the structural generalizability of the proposed method, FISF, **we conduct additional experiments using GIN as the downstream network for the imputation methods**. Table 30 presents the semi-supervised node classification results at a missing rate of 0.995. As shown in the table, **FISF consistently outperforms state-of-the-art methods across all datasets and missing settings, demonstrating its strong generalizability** across datasets, missing settings, and downstream network architectures. We will include this table in Section 5 of the revised manuscript to emphasize this important discussion.
**Table 30**: https://anonymous.4open.science/r/ICML12446-AF4E/Table%2030.png
> **Q3.** The author has supplemented a lot of experiments. Haven't they considered publishing this research in a journal?
We greatly appreciate the reviewer’s encouraging comment. We submitted this work to ICML to receive timely feedback and **to make a prompt contribution to the long-standing topic of missing values in the machine learning community**. To demonstrate the generalizability and validity of our method, we conducted extensive experiments and thorough analyses in the submitted paper.
> **Q4.** Are structural missing and uniform missing common in real-world scenarios?
Structural missing and uniform missing, where the features of randomly selected nodes and randomly selected feature values in the feature matrix are removed, respectively, are categorized as Missing Completely At Random (MCAR) among missingness mechanisms. **MCAR is the most commonly assumed setting in the missing data community** [3, 4]. Our proposed FISF consistently demonstrates its effectiveness under both missing settings.
To further validate the effectiveness of FISF beyond MCAR settings, **we also conducted experiments under Missing Not At Random (MNAR) scenarios in Appendix C.3**. In MNAR, the probability of missingness depends on the unobserved values themselves. For these experiments, we designed two MNAR settings: MNAR-I and MNAR-D. In MNAR-I, the probability that a feature is missing increases as the feature value increases; in MNAR-D, vice versa. Table 5 in the manuscript shows classification accuracy in semi-supervised node classification on the OGBN-Arxiv dataset under MNAR settings. The results reveal that FISF consistently outperforms the baselines across both MNAR settings, thereby **demonstrating its effectiveness even in MNAR scenarios**.
[3] "Gain: Missing data imputation using generative adversarial nets." ICML 2018.\
[4] "Handling missing data via max-entropy regularized graph autoencoder." AAAI 2023.
> **Q5.** How long does it actually take to train the model once?
To address the reviewer's concern, we report the average training and hyperparameter tuning time of the FISF model, measured on a single NVIDIA GeForce RTX 2080 Ti GPU and an Intel Core i5-6600 CPU at 3.30 GHz. As shown in Table 31, **training a single FISF model takes only a few minutes**. Despite using grid search, the efficiency of FISF enables the model for OGBN-Arxiv, which contains 169,343 nodes, to complete hyperparameter tuning in less than a day, while other FISF models require only a few hours. We provide a detailed discussion of the complexity and scalability of FISF in Appendix C.5 and Appendix C.6, respectively. We will add this discussion and table to Appendix C.5 in the revised manuscript.
**Table 31**: https://anonymous.4open.science/r/ICML12446-AF4E/Table%2031.png
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses.
My questions have all been addressed. However, since I'm completely unfamiliar with this field, I will temporarily keep my score unchanged to avoid interfering with the decisions of other reviewers.
---
Reply to Comment 1.1.1:
Comment: We are pleased that our responses have addressed all of the reviewer’s concerns. We sincerely appreciate the reviewer’s insightful feedback, which has contributed to further improving our paper, and their continued support for its acceptance. | null | null | null | null |
Hidden No More: Attacking and Defending Private Third-Party LLM Inference | Accept (poster) | Summary: The paper investigates the vulnerabilities of private inference in large language models (LLMs). Organizations increasingly rely on third-party LLM inference services rather than deploying large models locally, due to resource constraints.
The authors introduce a white-box reconstruction attack called the vocabulary-matching attack (VMA) that can recover user input text from the hidden states of LLMs with high accuracy. The attack leverages the causal ordering of decoder-only transformers and the finite set of tokens in the vocabulary, iteratively guessing the input token by comparing intermediate hidden states.
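The left-to-right recovery loop summarized above can be illustrated on a toy causal layer. This is not the paper's attack implementation; the embedding table, the `causal_layer` function, and the Euclidean distance metric are stand-ins chosen for the sketch. Because each hidden state depends only on earlier tokens, the attacker can recover the prompt one token at a time by trying every vocabulary entry and matching the induced hidden state.

```python
import numpy as np

rng = np.random.default_rng(1)
V, d, T = 50, 16, 8                     # vocab size, hidden dim, prompt length
E = rng.normal(size=(V, d))             # embedding table (known to attacker)
W = rng.normal(size=(d, d)) / np.sqrt(d)

def causal_layer(tokens):
    """Toy causal layer: hidden state t depends only on tokens <= t."""
    e = E[tokens]
    prefix_mean = np.cumsum(e, axis=0) / np.arange(1, len(tokens) + 1)[:, None]
    return np.tanh(prefix_mean @ W)

secret = rng.integers(0, V, size=T)     # user's private prompt
H = causal_layer(secret)                # hidden states seen by the server

# Vocabulary-matching loop: recover tokens left to right, trying every
# candidate and matching the hidden state it induces at that position.
recovered = []
for t in range(T):
    dists = [np.linalg.norm(causal_layer(np.array(recovered + [v]))[t] - H[t])
             for v in range(V)]
    recovered.append(int(np.argmin(dists)))

print(np.array_equal(recovered, secret))  # True
```

The finite vocabulary makes the per-position search exhaustive, which is why the toy recovery is exact rather than approximate.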
To mitigate the vulnerability, the paper introduces Cascade, a multi-party inference scheme based on token-level sharding. Cascade's key idea is to shard hidden states along the token dimension across multiple parties. In this way, no single party can get enough information to reconstruct the user's input. The authors demonstrate that Cascade effectively defends against both their VMA and the existing attack.
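The token-sharding idea can be sketched as a toy partition of hidden states along the token dimension. This is not the actual Cascade protocol, whose $(c, \delta)$-sharding and multi-layer routing are more involved; `token_shard`, `m`, and `alpha` are illustrative names for the sketch.

```python
import numpy as np

def token_shard(T, m, alpha, rng):
    """Partition token positions 0..T-1 into contiguous blocks of size
    alpha and deal the blocks randomly to m parties, so that no single
    party holds a long run of consecutive positions."""
    blocks = [np.arange(i, min(i + alpha, T)) for i in range(0, T, alpha)]
    owner = rng.permutation(len(blocks)) % m   # random block-to-party map
    return {p: np.concatenate([blocks[b] for b in np.flatnonzero(owner == p)])
            for p in range(m)}

rng = np.random.default_rng(0)
H = rng.normal(size=(64, 8))          # hidden states for 64 token positions
shards = token_shard(T=64, m=4, alpha=4, rng=rng)

# Each party p computes only on its slice H[shards[p]]; jointly the
# shards cover every position exactly once.
covered = np.sort(np.concatenate(list(shards.values())))
print(np.array_equal(covered, np.arange(64)))  # True
```

In this toy setup each of the four parties sees only 16 of the 64 positions, which is the sense in which no single party holds enough of the hidden state to run a reconstruction attack.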
## update after rebuttal
Thank the authors for the detailed response. My concern about the white-box access has been addressed. I will raise my rating for the point.
Claims And Evidence: Not always.
Methods And Evaluation Criteria: Could be improved.
Theoretical Claims: Not applicable.
Experimental Designs Or Analyses: Yes.
Supplementary Material: No.
Relation To Broader Scientific Literature: Not sure.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: The paper has natural weaknesses, which may limit its contributions. First, the attack assumes open-weight access, which may be less applicable in proprietary or black-box LLM inference setups. Besides, if white-box LLM access is granted, the adversary can get the input and output directly. There is no need to mount the proposed attack. Therefore, the VMA seems artificial.
Second, while Cascade prevents the VMA and the existing attack, it does not provide cryptographically rigorous guarantees like SMPC. The authors note that Cascade does not secure individual input embeddings and suggest SMPC for this layer. However, embeddings protected by SMPC provide computational indistinguishability, which means that VMA cannot work in this case.
Third, Cascade's security relies on multi-party setups, which is impractical in typical LLM deployment environments, especially when scale shards are needed. Besides, the choice of sharding parameters (e.g., c and δ) needs careful tuning in practice, which may also challenge Cascade's practical applications.
Other Comments Or Suggestions: No.
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your review and for recognizing Cascade's effectiveness against the vocab-matching attack (VMA). We address your comments below.
**W1**
Regarding white-box access: we clarify this differs from open-weights. Our setting involves access to permuted hidden states and weights, but not the corresponding input tokens. We emphasize that the security of schemes [[1]](https://arxiv.org/abs/2405.18744), [[2]](https://arxiv.org/abs/2312.00025) and [[3]](https://arxiv.org/abs/2412.10652) all assume the difficulty of this reversal problem for security. To the best of our knowledge, there is no prior work that demonstrates the insecurity of permuted hidden states.
Furthermore, our attack extends to break the schemes of [2, 3] even in the closed-weights setting. These protocols give permuted model weights and embeddings to the party doing inference, where the embeddings are computed locally by the user. Therefore, the VMA may be applied exactly as mentioned in our submission - except that now the embedding matrix is not known, so the first step of inference in the attack seemingly cannot be done.
However, it is straightforward to bypass this. The adversary can collect the (finite) set of input embeddings in the vocabulary over repeated inference calls, and can perform the VMA by iterating through this set. Decoding of the embeddings to tokens is also possible even if the tokenizer remains private – this essentially constitutes a simple substitution cipher, where each token in the vocabulary is substituted by its embedding vector. This may be easily broken by collecting data over many queries and using straightforward methods such as frequency analysis and contextual modelling.
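The substitution-cipher argument can be made concrete with a toy frequency analysis. This is a sketch under the assumption of distinct token frequencies; the opaque ids stand in for embedding vectors, and a real attack would also need the contextual modelling mentioned above to break frequency ties.

```python
from collections import Counter

# The user's tokens are replaced by opaque embedding ids: a substitution
# cipher. With distinct token frequencies, ranking observed-id counts
# against known corpus statistics recovers the mapping.
plaintext = ("the " * 4 + "cat " * 3 + "sat " * 2 + "mat").split()
secret_map = {"the": 901, "cat": 42, "sat": 7, "mat": 555}   # token -> id
observed = [secret_map[w] for w in plaintext]

lang_rank = [w for w, _ in Counter(plaintext).most_common()]  # attacker prior
obs_rank = [i for i, _ in Counter(observed).most_common()]
guess = dict(zip(obs_rank, lang_rank))
print([guess[i] for i in observed] == plaintext)  # True
```

The recovery here is exact only because every token has a distinct frequency; in practice frequency analysis narrows the candidate set and context resolves the remainder.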
Therefore, our result extends to breaking the proposed schemes of [2] and [3] even in the closed-weight setting. We further note that the proposed scheme of [1] explicitly provides the model weights to the party performing inference, and so corresponds to the open-weights setting.
**W2**
We wish to clarify your statement that: “The authors note that Cascade does not secure individual input embeddings and suggest SMPC for this layer. However, embeddings protected by SMPC provide computational indistinguishability, which means that VMA cannot work in this case.”
First, we assume you mean that Cascade cannot work in this case. We clarify that here we mean using SMPC for the first layer, but then _decoding into plaintext_ after it -- e.g., in additive secret sharing, parties send shards of their additive shares to Cascade nodes, who sum them. In this way, Cascade can be used on the plaintext hidden states from layer 1 onwards. Similarly, SMPC can be applied to any number of initial layers, followed by Cascade on the plaintext token-sharded hidden states for the remaining layers.
Regarding lack of cryptographic guarantees - we are clear in our submission that Cascade is _not_ a cryptographic scheme (e.g. lines 80, 321, 414). However, it is our belief that novel defensive methods are of value to the wider research community if they provide sufficient practical defence, even if they do not have formal guarantees that can be proven. We point to the extensive literature on adversarial attacks and defences of neural networks such as [[4]](https://arxiv.org/abs/2406.05927), [[5]](https://arxiv.org/abs/2302.04638), [[6]](https://arxiv.org/abs/2404.09349), none of which have formal guarantees, yet are used commonly in practice due to their efficacy (e.g. see https://robustbench.github.io/).
**W3**
Regarding the tuning of parameters, we have shown from our experiments that for sufficiently large values of $c$ and $\delta$, good security is obtained. Although _optimum_ performance may indeed be dependent on the use case, we are confident that users may follow our prescribed heuristics of $c \geq 8, \alpha \geq 12$ and $m \geq 4$ and achieve good security in the majority of cases.
Regarding the difficulty of obtaining enough nodes, we point to the success of previous projects in decentralized training and inference ([[7]](https://github.com/learning-at-home/hivemind), [[8]](https://arxiv.org/abs/2209.01188), [[9]](https://github.com/PrimeIntellect-ai/prime)) that received contributions from thousands of distinct participants. As such, we are optimistic that sufficient nodes can be gathered in such a setting in order to enable the use of Cascade.
**Conclusion**
Thank you again for your considered review. We have responded to your points above, where in some cases - such as the open-weights setting - we have shown extensibility of our method to also cover the closed-weights setting, and we have also clarified some misunderstandings, such as the compatibility of SMPC with Cascade. We strongly believe in the value of our work, both to show the insecurity of existing proposed schemes, as well as to offer a potential solution to their insecurity. We would be grateful if you would consider raising your score in light of the above. | Summary: This paper investigates privacy vulnerabilities in third-party LLM inference services, focusing on an open-weight setting. The authors first propose a vocabulary-matching attack, which can recover original user prompts from intermediate hidden states with near-perfect accuracy and remains effective against various permutation-based and noise-based defenses. Then, a multi-party inference scheme that shards hidden states at the token level is proposed, which is robust against the vocabulary-matching attack while maintaining computational and communication efficiency. Experiments on Gemma-2-2B-IT and Llama-3.1-8B-Instruct validate the effectiveness of the attack and defense.
## Update after rebuttal
Some of my concerns have been addressed, but the paper needs a major revamp in organization and writing. I will keep my score.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: This paper is mainly empirical, and not many theoretical results are involved.
Experimental Designs Or Analyses: Yes. The attack evaluation methodology is well-structured, covering different LLM architectures, defenses, and layers, while the security of Cascade is evaluated through $(c,\delta)$-sharding experiments.
Supplementary Material: Yes. I went through all the supplementary material.
Relation To Broader Scientific Literature: 1. The paper builds on prior works on LLM inversion attacks (e.g., Wan et al. (2024), Morris et al. (2023b)), improving attack effectiveness.
2. It challenges the security assumptions of prior permutation-based defenses (Zheng et al. (2024), Yuan et al. (2024), Luo et al. (2024)).
3. The proposed defending mechanism presents a new trade-off between privacy and efficiency, improving over SMPC approaches (Li et al., 2023; Dong et al., 2023b).
Essential References Not Discussed: The literature review is comprehensive enough in my opinion.
Other Strengths And Weaknesses: **Strengths:**
1. The attack method is simple yet highly effective, even against some commonly considered defenses based on permutation and perturbation.
2. Extensive experimental results are presented to validate the effectiveness of both the proposed attack and defense.
**Weaknesses:**
1. The paper is quite difficult to follow, the organization and writing need significant improvement. For example, Figure 1 is presented but not explicitly mentioned in the main text. The key steps of the proposed multi-party inference scheme (i.e., Algorithm 2) should be discussed in detail.
2. The proposed defending mechanism, Cascade, lacks formal security guarantees and relies on empirical evaluation.
3. It seems that the attack/defense suffers from high computational and communication overhead, and it is not clear how it works when scaled to larger models (e.g., Llama-3-70B).
Other Comments Or Suggestions: There is a typo in Algorithm 2, line 10.
Questions For Authors: See my comments about weaknesses.
In addition, it would be helpful if the author could provide some exemplary scenarios in practice, in which a user requires multiple third parties to perform LLM inference as this may incur privacy concerns.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your review. We are glad that you found our attack effective, and found our experimental work to be extensive. We address your comments below.
**W3**
Thank you for raising this important point. We have now run scaling experiments to investigate the effect of model size on the performance of both our attack as well as the defense. We present this below.
| Model Size (num parameters) | Average Attack Time (s) | Vocabulary Size | Model Name |
|----------------|--------------------------|------------------|--------------------------|
| 1B | 49 | 128256 | Llama-3.2-1B-Instruct |
| 2B | 124 | 256000 | Gemma-2-2B-IT |
| 8B | 69 | 128256 | Llama-3.1-8B-Instruct |
| 27B | 304 | 256000 | Gemma-2-27B-IT ($\epsilon = 30$) |
| 27B | 124 | 256000 | Gemma-2-27B-IT ($\epsilon = 40$) |
This table shows the average attack time over 10 decodings for each of the above models, including the optimizations mentioned in Section 4.2.2 of our submission. As can be seen, the attack time does not significantly increase with model size, but is primarily a function of the vocabulary size, as well as the parameter $\epsilon$. Even if $\epsilon$ is not very well chosen, the attack still takes on the order of minutes for perfect decoding of length-100 prompts.
Next, we present model-size-scaling analysis for Cascade:
| Model Size (num parameters) | Mean Runtime (s) | 95% Confidence Interval (s) |
|--------------------------|------------------|------------------------------|
| 110M | 0.7 | 0.62 – 0.74 |
| 335M | 1.3 | 1.24 – 1.46 |
| 1B | 2.6 | 2.33 – 2.96 |
| 7B | 12.7 | 11.07 – 14.07 |
| 13B | 22.7 | 20.58 – 25.99 |
The above numbers are obtained over 100 runs, with $\alpha = 2$. The models for the 110M and 335M sizes are BERT-base and BERT-Large, and for 1B-13B are Llama 2. We see that the runtime grows sublinearly with the number of parameters. We emphasize that our baseline, [[1]](https://arxiv.org/abs/2307.12533), reported a single forward pass on Llama 7B in approximately 300s, so our approach is ~24x faster. Due to computational constraints, we have not yet run on 70B, but we believe the same scaling law will apply as above. We have now updated our local revision to include the above results, and will include those - as well as the 70B result - in our camera ready version.
**W2**
Thank you for mentioning this point. We agree, and we are very clear in our submission that Cascade is not a cryptographic scheme (e.g. lines 80, 321, 414). However, it is our belief that novel defensive methods are of value to the wider research community if they provide sufficient practical defence, even if they do not have formal guarantees that can be proven. We point to the extensive literature on adversarial attacks and defences of neural networks such as [[2]](https://arxiv.org/abs/2406.05927), [[3]](https://arxiv.org/abs/2302.04638), [[4]](https://arxiv.org/abs/2404.09349), none of which have formal guarantees, yet are used commonly in practice due to their efficacy (e.g. see https://robustbench.github.io/).
**Q2**
To be clear, the use of multiple parties in Cascade is to provide additional security via token-sharding. For feasibility of obtaining enough parties, we point to the success of previous projects in decentralized training and inference ([[5]](https://github.com/learning-at-home/hivemind), [[6]](https://arxiv.org/abs/2209.01188), [[7]](https://github.com/PrimeIntellect-ai/prime)) that received contributions from thousands of distinct participants over diverse geographies. As such, we are optimistic that sufficient nodes can be gathered in such a setting in order to provide good security.
**W1**
We agree with all the points you have raised here. We have now included references to Figure 1 in the text, reorganized certain sections such as experimental results and related works, and brought some of the Appendix D details regarding Algorithm 2 to the main body. We also thank you for your eagle-eyed spotting of the typo in Algorithm 2, which we have now fixed.
Thank you once again for your thoughtful and considered review. We strongly believe in the value of our work both to highlight the security inadequacy of permuted hidden states proposed by several existing protocols, and our proposed solution as a potential method to address it. In light of our addressal of your points above, we would be very grateful if you would consider raising your score. Thank you! | Summary: This manuscript explores the field of private inference and proposes a vocabulary-matching attack that exploits hidden states to recover the original input of an LLM. The authors highlight that existing permutation-based and noise-based schemes fail to provide sufficient security against such an attack. To address this vulnerability, the manuscript introduces a token-sharded multi-party inference framework that is crypto-free. Experimental results demonstrate its performance improvements over existing cryptographic methods.
Claims And Evidence: Partially.
See Sec. Other Strengths And Weaknesses for detailed comments.
Methods And Evaluation Criteria: The proposed methods have limited practicality and potential security concerns.
See Sec. Other Strengths And Weaknesses for detailed comments.
Theoretical Claims: The proofs have been checked.
See Sec. Other Strengths And Weaknesses for detailed comments.
Experimental Designs Or Analyses: Experiments were conducted to support the manuscript's performance claims.
However, some experiments are not properly designed or require further clarification.
See Sec. Other Strengths And Weaknesses for detailed comments.
Supplementary Material: No supplementary material has been provided for this submission.
Relation To Broader Scientific Literature: Despite its shortcomings, this manuscript introduces the potential for crypto-free or lightweight crypto-based techniques in developing more efficient privacy-preserving schemes.
Essential References Not Discussed: The reviewer recommends that the authors consider [1], as it relates to the security of the proposed method.
See Sec. Other Strengths And Weaknesses for detailed comments.
[1] Wong, Harry WH, Jack PK Ma, Donald PH Wong, Lucien KL Ng, and Sherman SM Chow. "Learning Model with Error-Exposing the Hidden Model of BAYHENN." In IJCAI, pp. 3529-3535. 2020.
Other Strengths And Weaknesses: This paper introduces an attack method against permutation-based and noise-based defenses while proposing a crypto-free scheme to prevent original input reconstruction from intermediate hidden states.
### Strengths:
+ The manuscript effectively highlights the performance limitations of crypto-based schemes, which hinder their adoption in real-world applications.
+ It identifies potential security risks in existing defense methods.
+ The manuscript is comprehensive, and its experimental results are notable.
### Weaknesses:
However, the reviewer has identified several key issues, which lead to the recommendation to reject the manuscript in its current form.
#### 1. Insufficient Technical Contribution
- While the proposed vocab-matching attack successfully breaks existing permutation-based and noise-based defenses, it primarily relies on an approximation-based brute-force search method. The attack strategy minimizes the distance between current and original input states, which is a straightforward extension of existing methods.
- The proposed defense method is fundamentally based on splitting the hidden states into multiple segments, ensuring no single entity has access to a complete hidden state. However, the idea is intuitive and lacks sufficient technical depth for publication in ICML.
#### 2. Security Concerns and Practical Limitations
- According to [1], exposing intermediate plaintext values (rather than encrypted values) can still lead to privacy leakage. This concern needs to be formally addressed with rigorous proofs.
- While the scheme prevents full exposure of consecutive values in the hidden state vector, parties can still access partial consecutive values. The authors must prove whether this partial exposure can aid in reconstructing parts of the user’s input.
- Regarding the SoftMax layer, many variants use the formulation:
$e^{x_i - \max(x)} / \sum_j e^{x_j - \max(x)}$
Under the current sharding technique, how can the max(x) term be computed without cryptographic operations? If the max operation requires inputs from all parties, would this contradict the manuscript's core design principle, which restricts each party’s access to partial data?
- Given these concerns, the current security proof, which claims that search complexity is sufficiently large to ensure security, is inadequate. The authors must provide a more rigorous, formal security proof with numerical analysis, rather than relying on limited empirical experiments with specific attacks and models.
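To illustrate the softmax concern raised above (a toy sketch, independent of the manuscript's protocol): the $\max(x)$ subtraction exists for numerical stability, and it requires the maximum over the *full* logit row, which no single shard can compute locally.

```python
import math

def softmax_naive(xs):
    """Textbook softmax; overflows for large logits."""
    es = [math.exp(x) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def softmax_stable(xs):
    """Subtract-max softmax; requires the *global* row maximum."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# For large logits the naive form overflows while the stable one is exact.
big = [1000.0, 999.0]
try:
    softmax_naive(big)
    overflowed = False
except OverflowError:
    overflowed = True
assert overflowed
assert abs(sum(softmax_stable(big)) - 1.0) < 1e-12
```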
#### 3. Issues in Presentation and Writing Quality
- The manuscript’s writing is informal, requiring substantial improvement.
- The notation system is inconsistent and incomplete. The authors should provide a comprehensive notation table, rather than defining symbols only when they first appear.
- Avoid using coding-style expressions such as `a[:, R_i, T_k]` without explicit explanation. While these may be common in Python-based implementations, they are not standard in academic writing.
- The organization of the manuscript needs restructuring. While space constraints exist, too many crucial details, including the construction of the inference protocol, are placed in the appendix. It is recommended to integrate "Existing Work" and "Related Work" into a single section for better readability.
- Clarify the conceptual rationale behind the proposed method.
- Figure 1 should be redesigned to better illustrate the scheme. The current version is difficult to interpret unless the reader has prior knowledge of the method.
#### 4. Experimental Design Issues
- While the reviewer acknowledges the significant overhead of MPC-based methods, the manuscript fails to accurately compare the performance gap between crypto-based methods and the proposed approach.
- The experimental setup does not properly account for potential optimizations in cryptographic schemes:
- The manuscript assumes model weights are public while user input is private.
- With this assumption, many expensive MPC operations can be replaced by more efficient alternatives, such as:
- Secure multiplication via Homomorphic Encryption (HE)
- Beaver’s triplets for multiplication
- However, the baseline cryptographic methods (PUMA, MPCFormer) were designed for settings where both weights and inputs are private, leading to an unfair comparison.
- Unless the authors can explicitly clarify and adjust the baselines to match the same setting, the current performance claims remain questionable.
[1] Wong, Harry WH, Jack PK Ma, Donald PH Wong, Lucien KL Ng, and Sherman SM Chow. "Learning Model with Error-Exposing the Hidden Model of BAYHENN." In IJCAI, pp. 3529-3535. 2020.
Other Comments Or Suggestions: N/A.
Questions For Authors: N/A.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your detailed review of our submission. We appreciate that you found our paper effectively identifies potential security risks in existing privacy-preserving schemes and that our work is comprehensive in coverage.
**W4**
We agree with your point. We have now modified the implementation of MPCFormer in Crypten ([[1]](https://arxiv.org/abs/2109.00984)), the SMPC framework they used, to support the use of public weights. Crypten uses Beaver’s triples for matrix multiplication - see Appendix C.1.1 of their paper. For fair comparison, we run on the same setup as Cascade. The table below shows means and 95% confidence intervals over 100 runs:
| **Scheme** | **BERT-Base Runtime (s)** | **BERT-Large Runtime (s)** |
|----------------------------------|----------------------------------|----------------------------------|
| MPCFormer (private weights) | 339.35 [311.58, 396.92] | 1407.26 [1281.12, 1680.31] |
| MPCFormer (public weights) | 49.40 [45.61, 57.29] | 143.88 [131.00, 178.64] |
| Cascade$_{α=2}$ | 0.66 [0.615, 0.738] | 1.33 [1.24, 1.46] |
| Cascade$_{α=4}$ | 0.59 [0.51, 0.69] | 1.57 [1.44, 1.73] |
| Cascade$_{α=8}$ | 0.74 [0.62, 0.96] | 1.58 [1.27, 1.97] |
MPCFormer indeed benefits from a significant speedup from the use of public weights in its protocol - approximately 7-10x. However, this is still 50-100x slower than Cascade. Moreover, we note that the runtime of Cascade does not seem to grow quickly as a function of $\alpha$.
We will also update the result of PUMA. This requires the use of a separate SMPC framework ([[2]](https://github.com/secretflow/spu)). We will have the experimental results for this ready for the final draft of the paper.
**W2**
Thank you for bringing to our attention the work of [[3]](https://www.ijcai.org/proceedings/2020/488). It is a stark reminder of the importance of applying care when making claims about the strength of guarantees provided by a scheme – as with the claimed strength of plaintext permuted hidden states, which we demonstrate are easily decodable. We note that [3] devises a successful attack against BAYHENN, which claimed the same formal privacy guarantees as FHE through a misapplication of the LWE assumption. However, we are clear in our submission that Cascade is _not_ a cryptographic scheme (e.g. lines 80, 321, 414), and therefore do not claim any formal guarantees against e.g. learning-based attacks.
The subtract-max variant of softmax works by calculating the maximum $m^x_k$ over each partial row $a^x_k$ of the attention scores given to AttnNodes, where $x$ is the row and $k$ is the column index of the AttnNode – as well as $v^x_k=expsum(a^x_k-m^x_k)$. Thus in standard softmax, CompNode$_i$ gets $o^x_k=softmax(a^x_k)V_k, w^x_k=expsum(a^x_k)$ for all $k$ and rows $x\in R_i$; and for subtract-max, it gets $o^x_k, m^x_k, v^x_k$.
We analyze the security of standard softmax in Appendix J, showing that it necessitates large ‘gaps’, which $(c,\delta)$ sharding satisfies. We can now also show such gaps are _sufficient_ for first layer security – by showing equivalence to the subset-sum problem over vectors. Due to the 5000 character limit, we will include details of this in our next response. We briefly mention here that the subtract-max variant simply adds 1 to the value of $c$ required for security over standard softmax.
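For concreteness, the merging of per-shard $(o_k, m_k, v_k)$ statistics into the exact global softmax output follows the standard online-softmax identity. A minimal self-contained sketch (illustrative only, with one scalar value per position rather than full value vectors):

```python
import math

def shard_stats(scores, values):
    """Per-shard statistics: (o, m, v) = softmax-weighted partial output,
    shard maximum, and shard exp-sum (relative to the shard maximum)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    v = sum(exps)
    o = sum(e * val for e, val in zip(exps, values)) / v
    return o, m, v

def merge(stats):
    """Combine per-shard (o, m, v) into the exact global softmax output:
    rescale each shard by exp(m_k - M) where M is the global maximum."""
    M = max(m for _, m, _ in stats)
    denom = sum(v * math.exp(m - M) for _, m, v in stats)
    num = sum(o * v * math.exp(m - M) for o, m, v in stats)
    return num / denom

# Sharded computation matches the full-row reference.
scores = [0.5, 2.0, -1.0, 3.0]
values = [1.0, 2.0, 3.0, 4.0]
ref_o, _, _ = shard_stats(scores, values)  # softmax over the whole row
sharded = merge([shard_stats(scores[:2], values[:2]),
                 shard_stats(scores[2:], values[2:])])
assert abs(ref_o - sharded) < 1e-12
```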
**W1**
Our attack is not brute-force; it exploits the properties of attention and a finite dictionary to break permuted hidden states in linear time rather than the exponential time that is assumed in the schemes of [[4]](https://arxiv.org/abs/2405.18744), [[5]](https://arxiv.org/abs/2312.00025) and [[6]](https://arxiv.org/abs/2412.10652). [6] even has a ‘proof’, based on a misapplication of distance correlation theory, that is incorrect; we will include further details in the next response. The attack also requires suitable ‘matching functions’ such as sorted-L1-distance (see lines 234-235). Further, practical success of this attack was not obvious a priori, due to the possibility of many ‘collisions’ of states when matching over all N hidden states - the result of nearly 0 collisions is intriguing in its own right.
Moreover, our proposal for Cascade has not to our knowledge been proposed in prior literature. Even if it is considered intuitive, our extensive experimental coverage of performance and security is, we think, of value to the wider community.
**W3**
Thank you for your points. We agree with your suggestions, and have added a notation table and modified the Python-style notation.
Thank you once again for your thorough feedback. Even if we do not fully align on the value of our work, we feel that your review was particularly thoughtful and of high value. We look forward to following up with our next response to expand on some of the above points in further detail. | Summary: This paper proposes a vocabulary matching attack by exploiting the autoregressive characteristics of the generative model, which can attack the privacy-preserving large language model (LLM) inference framework based on permutation and noise under the assumption that the model parameters are public. At the same time, the author introduces a sharding-based privacy-preserving LLM inference framework Cascade. Cascade uses the token sharding method in the sequence dimension to maintain computational and communication efficiency, while providing security against the proposed attack and previous reversal methods.
## update after rebuttal
I thank the authors for their detailed responses. After reading the responses and comments from other reviewers, I think this paper requires major revisions regarding the writing, the theoretical security discussion, and the practicality of the multi-party setup. Therefore, I kept my score.
Claims And Evidence: Overall, the claims made in this paper are supported by clear and convincing evidence but there are still areas for improvement. In particular,
1. Using only the ROUGE score to measure the security of Cascade is not sufficient. More indicators and evidence are needed to enhance it.
2. Considering only the two attack methods of vocabulary matching and learning is not comprehensive. Experimental results of other attack methods are needed.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-aligned with the problem of privacy-preserving LLM inference. They address both the attack and defense aspects comprehensively using appropriate datasets and metrics. Some additional evaluations could further strengthen the conclusions, but overall, the methods and criteria make sense for the problem at hand.
Theoretical Claims: The security proof in Section 7.2 of the paper is reasonable, but requires more formal treatment, e.g., supplementary cryptographic security analysis.
Experimental Designs Or Analyses: The experimental designs and analyses in the paper are generally sound and valid, particularly for the vocabulary-matching attack. The evaluation of the Cascade defense mechanism is reasonable but could be strengthened with more direct security assessments and additional experimental validations.
Supplementary Material: Yes, I read all the appendices.
Relation To Broader Scientific Literature: The paper makes contributions by both advancing attack methodologies and proposing a practical defense mechanism that addresses demonstrated vulnerabilities, filling gaps in the literature on privacy-preserving LLM inference.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: **Strengths:**
1. This paper proposes an effective attack method for the privacy-preserving LLM inference framework with random permutation and noise mechanism in the scenario where the model parameters are public.
2. This paper proposes a privacy-preserving LLM inference algorithm based on token sharding, which is new to the area.
**Weaknesses:**
1. The effectiveness of the vocabulary-matching attack is mainly demonstrated in the open-weight setting, which may not represent all deployment scenarios; in closed-weight or more restricted environments, the attack will fail.
2. The actual deployment of Cascade in a distributed environment may face challenges that are not fully addressed in this paper, such as network reliability and synchronization issues.
3. In terms of security, Cascade cannot achieve theoretical security, and the security strength is related to the number of nodes. Finding a large number of non-colluding nodes in actual deployments seems difficult to achieve.
Other Comments Or Suggestions: 1. The notation becomes quite dense in sections describing Cascade's implementation. A notation table would help readers keep track of variables and sharding parameters.
2. The algorithm description (Algorithm 1) could benefit from a step-by-step example to illustrate how it works in practice.
3. The security analysis (Section 7.2) would be clearer with a diagram showing how token sharding prevents reconstruction attacks.
Questions For Authors: 1. The attack assumes access to the full model architecture and weights. How would the effectiveness of the attack change in scenarios where the model architecture or weights are not fully known to the adversary? Would the attack still be feasible with partial knowledge?
2. The security analysis assumes certain sharding parameters $(c, \delta)$. How would the security properties change if an adversary could influence or discover these parameters in a real-world deployment?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for your detailed review. We are glad that you found our proposed attack to be effective in the open-weights setting, and that Cascade is novel. We provide responses to some of the points you have raised below.
**W1 + Q1**
A slight extension of our vocab-matching attack can additionally break the schemes of [[1]](https://arxiv.org/abs/2312.00025) and [[2]](https://arxiv.org/abs/2412.10652) in the closed-weights setting. Due to space constraints, please find the details of this in the response to reviewer nbMt.
**W2**
Thank you for raising this important point. We have now tested Cascade in a real life WAN network setting with up to 18 different physical machines, and 72 different logical nodes. Our full results for single layer inference on BERT under these settings is shown below:
| **Scheme** | **BERT-Base Runtime (s)** | **BERT-Large Runtime (s)** |
|----------------------------------|----------------------------------|----------------------------------|
| MPCFormer | 55.320 | 141.222 |
| PUMA | 33.913 | 73.720 |
| Cascade$_{α=2}$ | 0.662 [0.615, 0.738] | 1.331 [1.237, 1.464] |
| Cascade$_{α=4}$ | 0.588 [0.513, 0.688] | 1.572 [1.441, 1.734] |
| Cascade$_{α=8}$ | 0.742 [0.622, 0.962] | 1.584 [1.271, 1.965] |
| Vanilla Inference | 0.091 [0.084, 0.121] | 0.273 [0.200, 0.993] |
The above table shows the average runtimes for Cascade under three choices of $\alpha$. Larger $\alpha$ has more nodes. Mean results are given over 100 runs, and a 95% confidence interval is additionally shown in brackets. Values for MPCFormer and Puma are taken from their respective papers.
We also measured the total communicated bytes in the same setting. Even in the most expensive $\alpha = 8$ setting, Cascade is $\sim150\times$ more efficient in total bytes transferred than the baselines. We conclude that although Cascade will be negatively impacted by poor network conditions, this is true for any SMPC method, and the effect of this will be less deleterious on Cascade than other protocols.
**W3**
We point to the success of previous projects in decentralized training and inference ([[3]](https://github.com/learning-at-home/hivemind), [[4]](https://arxiv.org/abs/2209.01188), [[5]](https://github.com/PrimeIntellect-ai/prime)) that received contributions from thousands of distinct participants over diverse geographies. As such, we are optimistic that sufficient nodes can be gathered in such a setting in order to provide good security.
**Q2**
In fact, our security analysis already assumes the worst case of perfect knowledge of the security parameters $c$ and $\delta$ by the adversary. In practical deployment, the exact sharding scheme may not be known (it can even be changed or randomized continually). We have now made this point more clearly in our local draft and will update it for the camera-ready version.
**Claims 1**
We agree with your assessment. We have now additionally computed BLEU, F1 and token accuracy, as well as provided reconstruction examples in the local draft.
**Claims 2**
Examining the literature, we did not find existing methods of attack that are not learning-based. Are there particular alternative methods that you believe are applicable to this setting?
**Theoretical Claims**
We are clear in our submission that Cascade is _not_ a cryptographic scheme (e.g. lines 80, 321, 414). However, we do examine all possible sources of leakage in Appendix J of our submission.
Moreover, it is our belief that novel defensive methods are of value to the wider research community if they provide sufficient practical defence, even if they do not have formal guarantees that can be proven. We point to the extensive literature on adversarial attacks and defences of neural networks such as [[6]](https://arxiv.org/abs/2406.05927), [[7]](https://arxiv.org/abs/2302.04638), [[8]](https://arxiv.org/abs/2404.09349), none of which have formal guarantees, yet are used commonly in practice due to their efficacy (e.g. see https://robustbench.github.io/).
**Other Comments**
We agree with all of the readability points you have suggested, and have now included them in our local draft. We will make these changes to the camera-ready version as well.
Thank you once again for your thorough review and your insightful comments. If you have any further comments, or if you think our submission can be improved in any way, please let us know. We have made a significant effort to address each of your points and would appreciate it if you would consider raising your score in light of our response. Thank you! | null | null | null | null | null | null |
GPTAQ: Efficient Finetuning-Free Quantization for Asymmetric Calibration | Accept (poster) | Summary: Following the widely used quantization framework GPTQ, this work identifies the problem in GPTQ named symmetric calibration that emerges from the per-layer optimization scheme. To tackle these challenges, this work proposes a unique calibration pipeline based on asymmetric calibration, which fully considers the quantization error and deviation in the output when updating the weights. Concretely, channel parallelization, neuron decomposition, and Cholesky reformulation for matrix fusion are utilized to parallelize the solution. The proposed GPTQv2 is extensively verified across various LLMs and ViTs on multiple tasks, demonstrating remarkable efficiency and effectiveness. It can be a plugin for QuaRot/SpinQuant and improve the performance with minimal overhead.
Claims And Evidence: Most claims made in the submission are supported by clear and convincing evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem. Algorithm1 gives a clear introduction to how GPTQv2 differs from GPTQ. The proposed techniques are well-motivated.
Theoretical Claims: check the correctness of proofs for theoretical claims in the main text.
Experimental Designs Or Analyses: I have checked the soundness/validity of experimental designs and analyses. Some issues include:
- Bitwidth settings are limited to W4A4/W2A4. Considering the need for near-lossless quantization of LLMs/ViTs in some application scenarios, W6A6/W8A8 results would be a plus.
- Missing baselines for ViT quantization. Rotation-based methods such as QuIP can also be applied in ViT quantization, and FrameQuant[1] is omitted.
- Figure 4 (a) seems to be missing bars for latency comparison. The latency overhead of GPTQv2 compared to GPTQ is non-negligible, especially on higher dimensions.
- For SpinQuant, I think it should be categorized as FT-free since it only involves optimizing the rotation but not updating the weights. In addition, the author of SpinQuant mentioned that optimizing rotation is highly efficient, 0.5h for L3-8B in their paper. It could be the difference in hardware system, please double check it.
- It would be interesting to see if the proposed GPTQv2 could be applied to Diffusion Transformers such as Q-DiT [2]. (No need to perform additional experiments, discussion on it would be enough)
[1] FrameQuant: Flexible Low-Bit Quantization for Transformers, ICML 2024
[2] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers, arxiv 2023/CVPR 2025
Supplementary Material: I have checked the codes in the supplementary material and went through the additional experiments/memory analysis in the appendix. The codes help improve the reproducibility confidence and additional results discussing weight-only quantization and rotation are helpful.
Relation To Broader Scientific Literature: This work is built on GPTQ, but the core contribution is novel.
Essential References Not Discussed: Existing work related to quantization and Optimal Brain Surgeon (OBS) are cited.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: Please see the points above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your comments and positive feedback. Please check our response below.
>1. Bitwidth settings are limited to W4A4/W2A4. Considering the need for near-lossless quantization of LLMs/ViTs in some application scenarios, W6A6/W8A8 results would be a plus.
Thank you for this suggestion. We have conducted additional experiments with W6A6 bitwidth on LLaMA2-7B and LLaMA3-8B to evaluate GPTQv2 in near-lossless quantization scenarios. The results are presented below (reporting WikiText-2 perplexity):
| Model | LLaMA2-7B | LLaMA3-8B |
|-------------------|-----------|-----------|
| Pretrained (FP16) | 5.47 | 6.14 |
| OmniQuant | 5.87 | 7.24 |
| QLLM | 5.91 | - |
| DuQuant | 5.53 | 6.27 |
| QuaRot+GPTQ | 5.50 | 6.24 |
| QuaRot+GPTQv2 | 5.49 | 6.21 |
As expected, the improvements in the W6A6 setting are more modest compared to those in lower bitwidth scenarios, since higher precision quantization already preserves most of the model's capabilities, leaving less room for enhancement. Nevertheless, GPTQv2 still consistently outperforms GPTQ across both models.
>2. Missing baselines for ViT quantization. Rotation-based methods, such as QuIP, can also be applied in ViT quantization, and FrameQuant[1] is omitted.
Thanks for letting us know about the FrameQuant paper. As far as we can tell, QuIP and FrameQuant perform weight-only quantization on ViTs, which is less beneficial than for LLM decoding, as ViT inference is compute-bound rather than I/O-bound.
Nevertheless, we test DeiT-S with 2-bit per-channel quantization with or without QuIP incoherence processing. The results are shown below, with * indicating our implementation.
| Method | ImageNet accuracy |
|--------------------|-------------------|
| FrameQuant (r=1.0) | 66.35 |
| QuIP | 65.70 |
| GPTQ* | 57.11 |
| GPTQv2* | 60.58 |
| QuIP + GPTQ* | 65.45 |
| QuIP + GPTQv2* | 68.02 |
>3. Figure 4 (a) seems to be missing bars for latency comparison. The latency overhead of GPTQv2 compared to GPTQ is non-negligible, especially on higher dimensions.
Sorry for the missing histograms. This appears to be a browser-specific rendering problem. If you download our paper, the histograms should be visible when viewing the PDF in Chrome or in standard PDF readers. We will ensure the figure is displayed correctly in all formats in the revised version of the paper.
For the GPTQv2 vs GPTQ latency comparison, we can give a theoretical upper bound here. According to Algorithm 1, the operations of v2 are at most 2x of v1. With sufficiently high dimensions, we can expect the latency may approach 2x. However, in practice, we observe 30%-50% more time in typical LLMs (7B to 405B).
>4. For SpinQuant, I think it should be categorized as FT-free since it only involves optimizing the rotation but not updating the weights. In addition, the author of SpinQuant mentioned that optimizing rotation is highly efficient, 0.5h for L3-8B in their paper. It could be the difference in hardware system, please double check it.
We maintain that SpinQuant should be categorized as finetuning-based rather than finetuning-free for the following reasons:
+ From a computational perspective, optimizing rotation matrices or weights involves the same core operations - both require backpropagation through the network, use of the Straight-Through Estimator (STE), and gradient-based optimization. The fact that the optimization target is a rotation matrix rather than weight values does not reduce the computational requirements.
+ Conceptually, SpinQuant modifies the effective weight representation through rotation optimization. Whether directly updating weights or optimizing transformations applied to weights, both approaches adjust the model's parameters that will be quantized.
Regarding the finetuning time, the discrepancy is due to the SpinQuant authors using 8 A100 GPUs to finetune the rotation matrix. Therefore, in Section 5.3, we explained that “we additionally report the GPU Hours (on one A100) required to run the algorithm. Although in practice, SpinQuant runs on 8 A100 GPUs”.
>5. It would be interesting to see if the proposed GPTQv2 could be applied to Diffusion Transformers such as Q-DiT [2].
Thanks for the reference on transformer-based diffusion models. We noticed that Q-DiT pointed out an important observation that the activation distribution undergoes continuous changes across timesteps. In this case, the activation asymmetry may accumulate not just through layers but through timesteps as well. We expect GPTQv2 will have better performance if $\Delta \mathbf{X}$ can capture more information, as we did in the experiments on quantization order (Appendix B.1). We will discuss this aspect of diffusion models in our paper's related work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response from the authors. The additional results on W6A6 and ViTs are helpful, please consider including them in the revised draft. Most of my concerns are well-addressed, therefore, I would like to increase my score to 4.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for raising the score. We will add the new results to our final version of the paper. | Summary: Authors propose a novel fine-tuning free quantization framework GPTQv2 for LLMs. Authors first analyze the problem of previous "symmetric calibration"using optimal brain compression to derive a close-formed solution, and propose a novel "asymmetric calibration" to take quantization error as well as the accumulated asymmetry error into consideration. Secondly, authors utilize various techniques to parallelize the solution calculation, including channel parallelization, neuron decomposition, and Cholesky reformulation for matrix fusion. Extensive results on various LLMs and datasets reveal the effectiveness of the proposed methods.
Claims And Evidence: All claims are well-explained.
Methods And Evaluation Criteria: I've checked all theoretical and qualitative analysis and claims in this paper. See "Other Strengths And Weaknesses" part of this review for my major & minor concerns about the methodology and equation derivation.
Theoretical Claims: I've checked all theoretical and qualitative analysis and claims in this paper. See "Other Strengths And Weaknesses" part of this review for my major & minor concerns about the methodology and equation derivation.
Experimental Designs Or Analyses: I've checked all experimental settings, comparison and results in this paper. See "Other Strengths And Weaknesses" part of this review for my major & minor concerns about the experimental part.
Supplementary Material: Any possible details in the supplementary material is checked.
Relation To Broader Scientific Literature: All contributions are technical and all datasets used for experiments are open-sourced. Thus no key contributions of this paper related to the broader scientific literature.
Essential References Not Discussed: All necessary references are discussed.
Other Strengths And Weaknesses: ## Major weakness
1. What does the lambda in Eq. 10 represent? Is it a hyper-parameter? If so, why is there a gradient on it?
2. In Table 2, why is the performance improvement over GPTQ more significant for fine-tuning-based quantization methods than for fine-tuning-free ones? It would be better to discuss this phenomenon in more depth.
3. In Table 4, why is the ppl result of GPTQv2' worse than GPTQ while the zero-shot avg result is better than GPTQ, which is counter-intuitive?
4. I am curious about the performance improvement when activation quantization is added before weight quantization. The improvement compared to GPTQ seems more significant under this condition. In my view, the quantized $\tilde{X}$ will introduce noise into $\Delta X$, so it should be worse.
## Minor weakness
1. Seems like the histograms in Figure 4(a) are missing.
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your comments and thorough review. Please check our response to your questions.
>1. What does the lambda in eq.10 represent for? Is it a hyper-parameter? If so, then why there is a gradient on it?
$\lambda$ is the Lagrange multiplier, which is not a hyperparameter. By taking derivatives of the Lagrangian with respect to both $\Delta\mathbf{w}$ and $\lambda$ and setting them to zero, we simultaneously: (1) ensure the quantization constraint is satisfied exactly ($\partial L/\partial \lambda=0$ enforces $\Delta\mathbf{we}_q^\top+\mathbf{w}_q-\hat{\mathbf{w}}_q=0$), and (2) find the local minimum of the objective function. This approach follows standard constrained optimization techniques used in the original OBS and OBQ frameworks.
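For completeness, the standard OBS-style derivation runs as follows (a sketch in classic OBS notation, with $\mathbf{H}$ the Hessian of the layer-wise reconstruction objective):

```latex
% Minimize the quadratic error subject to fixing the q-th weight at its
% quantized value; \lambda is the Lagrange multiplier.
\begin{aligned}
L &= \tfrac{1}{2}\,\Delta\mathbf{w}\,\mathbf{H}\,\Delta\mathbf{w}^\top
     + \lambda\left(\Delta\mathbf{w}\,\mathbf{e}_q^\top + \mathbf{w}_q - \hat{\mathbf{w}}_q\right),\\
\frac{\partial L}{\partial \Delta\mathbf{w}} &= \Delta\mathbf{w}\,\mathbf{H} + \lambda\,\mathbf{e}_q = 0
  \;\Rightarrow\; \Delta\mathbf{w} = -\lambda\,\mathbf{e}_q\,\mathbf{H}^{-1},\\
\frac{\partial L}{\partial \lambda} &= 0
  \;\Rightarrow\; -\lambda\,[\mathbf{H}^{-1}]_{qq} + \mathbf{w}_q - \hat{\mathbf{w}}_q = 0
  \;\Rightarrow\; \lambda = \frac{\mathbf{w}_q - \hat{\mathbf{w}}_q}{[\mathbf{H}^{-1}]_{qq}},\\
\Delta\mathbf{w} &= -\,\frac{\mathbf{w}_q - \hat{\mathbf{w}}_q}{[\mathbf{H}^{-1}]_{qq}}\;\mathbf{e}_q\,\mathbf{H}^{-1}.
\end{aligned}
```

Substituting the solved $\lambda$ back recovers the familiar closed-form weight update whose $q$-th entry equals $\hat{\mathbf{w}}_q - \mathbf{w}_q$, satisfying the constraint exactly.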
>2. In table. 2, why the performance improvement is more significant on fine-tuning quantization methods than fine-tuning free ones, when compared with GPTQ? It would be better to deeply discuss about the phenomenon.
We think the reason finetuning-based quantization performs worse is the need to handle the massive outliers in activations. This issue was only recently addressed, by QuaRot (FT-free) and SpinQuant (FT-based).
>3. In table. 4, why the ppl result of GPTQv2' is worse than GPTQ, while the zero-shot avg result is better than GPTQ, which is counter-intuitive.
This is an excellent observation. Perplexity and zero-shot accuracy measure different aspects of LLM capabilities. Perplexity primarily evaluates next-token prediction on the pre-training distribution (a form of memorization), while zero-shot accuracy tests the model's ability to generalize knowledge to new tasks.
When applying only the second term of our method, we're optimizing for the residual output error from previous layers, which better preserves the model's generalization capabilities at the expense of exact next-token prediction. The full GPTQv2 balances both aspects by combining both terms. This suggests that different quantization objectives might be optimal depending on the downstream task priorities.
>4. I curious about the performance improvement when activation quantization is added before weight quantizations. It seems like the improvement compared to GPTQ is more significant under this condition. In my perspective, the quantized $\tilde{\mathbf{X}}$ will introduce noise into $\Delta \mathbf{X}$, therefore, it should be worse.
Thanks for the question. We kindly refer you to our Algorithm 2 in Appendix C. If the activation quantization is enabled during calibration, we will disable it when computing and caching $\tilde{\mathbf{X}}$ to ensure we always use the FP model activation. This is why the improvement will be more significant.
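To illustrate the caching logic (a minimal sketch with a fake quantizer and a hypothetical `act_quant` flag, not our actual Algorithm 2):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))

def quant_act(x, step=0.25):
    # Crude fake activation quantizer, for illustration only
    return np.round(x / step) * step

def layer_forward(x, act_quant=True):
    if act_quant:
        x = quant_act(x)
    return x @ W.T

x = rng.normal(size=(2, 8))

# Cache the full-precision reference activation X~: activation quantization
# is temporarily disabled for this one forward pass
x_tilde = layer_forward(x, act_quant=False)

# The rest of calibration runs with activation quantization re-enabled,
# so the asymmetric objective always targets the FP activations
x_quant_path = layer_forward(x, act_quant=True)
```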
>5. Seems like the histograms in Figure 4(a) are missing.
Sorry for the missing histograms. This appears to be a browser-specific rendering problem. If you download our paper, the histograms should be visible when viewing the PDF in Chrome or in standard PDF readers. We will ensure the figure is displayed correctly in all formats in the revised version of the paper. | Summary: The authors introduce a new quantization method, GPTQv2. The key innovation here is the development of an asymmetric calibration approach, differing fundamentally from GPTQ, by explicitly aligning the quantized layer's outputs to the original, full-precision activations. They derive a closed-form solution using Optimal Brain Compression principles. Experiment evaluations show substantial improvements in model performance across vision and language tasks.
## Update after rebuttal
I maintain my original score. I am generally satisfied with the authors’ response.
Claims And Evidence: The paper claims to achieve superior performance as compared to GPTQ and supports its claims via various experiments.
Methods And Evaluation Criteria: Chosen evaluation metrics (e.g., perplexity, accuracy on PiQA, HellaSwag, etc.) are appropriate and standard for the field.
Theoretical Claims: Theoretical claims sound reasonable, addressing the shortcomings of the original GPTQ work.
Experimental Designs Or Analyses: Experiments and ablation studies seem methodologically sound and thorough. However, actual hardware numbers would strengthen the work further.
Supplementary Material: NA
Relation To Broader Scientific Literature: GPTQv2 is effectively contextualized against existing methods, clearly highlighting its innovative elements over the original GPTQ algorithm and other finetuning-free quantization methods.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: **Strengths**
- Clear and thorough theoretical foundation with detailed mathematical derivations provided.
- Efficient computational strategies that significantly reduce quantization overhead, making practical implementation feasible.
- Extensive experiments demonstrating clear and consistent performance improvements across different transformer architectures and tasks.
**Weaknesses**:
- Lack of detailed hardware-level deployment and overhead analyses slightly limits practical applicability insights.
Other Comments Or Suggestions: NA
Questions For Authors: NA
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive assessment of our theoretical foundation and experimental results. Regarding your concern about hardware-level deployment and overhead analyses, we would like to clarify that GPTQv2 maintains full compatibility with GPTQ's quantization format since we did not modify the `quant()` function in Algorithm 1. This means that our method can leverage all existing hardware-optimized kernels and infrastructure developed for GPTQ without additional overhead during inference.
Taking the format of the [`AutoGPTQ`](https://github.com/AutoGPTQ/AutoGPTQ) library as an example, for every quantized layer the variables are defined as:
```python
import torch
import torch.nn as nn

class QuantLinear(nn.Module):
    def __init__(self, bits, group_size, in_features, out_features):
        super().__init__()
        self.bits = bits
        self.group_size = group_size
        m, n = in_features, out_features
        # Packed int32 storage for quantized weights and zero points,
        # plus per-group fp16 scales and the group index
        self.qweight = torch.zeros((n, m // 32 * self.bits), dtype=torch.int32)
        self.qzeros = torch.zeros((n // group_size, m // 32 * self.bits), dtype=torch.int32)
        self.scales = torch.zeros((n // group_size, m), dtype=torch.float16)
        self.g_idx = torch.zeros(n, dtype=torch.int32)
```
The same quantization format in GPTQv2 can immediately benefit from specialized kernels like [Marlin](https://github.com/IST-DASLab/marlin) and [ExLLaMA](https://github.com/turboderp-org/exllamav2) without requiring new hardware optimizations. Currently, we are integrating GPTQv2 into popular quantization libraries, and we will expand our hardware-specific analyses in the next version of the paper. | Summary: This paper proposed a modification to the widely-used GPTQ method. The main idea is that instead of minimizing the differences between quant(W)*A and W*A, authors proposed to minimize the differences between quant(W)*A with W*A_fp, i.e. its counterpart in the unquantized model. As in typical PTQ works, "sequential" quantization, i.e. assuming layer 0 to l-1 are quantized while quantizing layer l, is generally considered more effective, because later layers may have a chance to absorb some quantization errors accumulated from quantizing previous layers. Directly matching layer output to counterparts in unquantized model would be closer to the idea of distillation, which usually requires more iterations and data to achieve better results. Interestingly this work showed that for GPTQ, "distillation style" would be a better option than "sequential PTQ style."
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: Yes, Appendix A.1, seems fine.
Experimental Designs Or Analyses: Yes, the choice of models include vision transformers and LLMs, model size ranged from <1B, 7-70B, and 405B is proper. The selection of metrics include ImageNet accuracy, wiki2 perplexity, and a range of 0-shot tasks is valid.
Supplementary Material: Yes, Appendix A, B, and C.
Relation To Broader Scientific Literature: This work is an improvement of a widely-used method, GPTQ, which is a weight-only quantization method that can address memory-bound issue of LLMs.
Essential References Not Discussed: Citations/references are sufficient.
Other Strengths And Weaknesses: Strength:
1. Well written manuscript.
2. Good amount of experimental data, including those additional results in appendix. Achieved meaningful improvement in accuracy/perplexity compared to original GPTQ.
3. Considered implementation efficiency and provide improved formulas so that overall process time would be comparable to original GPTQ.
Weakness:
overall, very nice work. Just a few minor suggestions.
1. The terms "asymmetric calibration" and "symmetric calibration" don't seem very intuitive, and may be a bit confusing given the existing notions of symmetric/asymmetric quantization. In fact, this set of terms is not used much in the manuscript. The authors might consider adding a few sentences around the definition of these terms to strengthen the connection to the main proposed concept, so that readers can grasp the main idea more easily.
2. Since this work is meant to be an improvement on the original GPTQ, it would be beneficial to start the discussion with a comparison to vanilla GPTQ, i.e., weight-only, per-group quantization. The authors could consider moving Appendix B.3/Table 7 into the main manuscript, with a few more examples from different Llama models.
3. Even though (possibly) the majority of researchers still call it GPTQ, the original authors officially published their work at ICLR 2023 under the name "OPTQ". I would not suggest the authors of this work change all the names/acronyms in the manuscript, but out of respect for the choice of the original "GPTQ" authors, maybe include both names at first mention/citation and state that only GPTQ will be used afterward for simplicity.
Other Comments Or Suggestions: please see Weakness above
Questions For Authors: None
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your positive feedback on our manuscript. We appreciate your interpretation of GPTQ as a "sequential PTQ style" method versus the "distillation style" of our GPTQv2. Please see our responses to your specific concerns below:
>1. The terms "asymmetric calibration" and "symmetric calibration" doesn't seem to be very intuitive, and maybe a bit confusing with the symmetric/asymmetric quantization. In fact, this set of terms is not used a lot in the manuscript. Maybe author can consider adding a few sentences around the definition of these terms to enhance the connection between the main proposed concept, so that the readers could grasp the main idea easier.
We agree that our terminology could be clearer. The terms "symmetric" and "asymmetric" specifically refer to the calibration objective: symmetric calibration uses the same input activation $\mathbf{X}$ for both the quantized and full-precision weights, while asymmetric calibration accounts for the input activation discrepancy between quantized weights and full-precision weights. We will add clearer definitions of these terms in the introduction and highlight their conceptual differences.
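In toy form (illustrative shapes and quantizer, not our implementation), the two objectives differ only in which inputs the quantized weights see:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))                     # full-precision weights
X = rng.normal(size=(8, 16))                    # FP-model input activations
X_tilde = X + 0.05 * rng.normal(size=X.shape)   # inputs seen by the quantized model

def quant(w, step=0.5):
    return np.round(w / step) * step            # toy round-to-nearest quantizer

Wq = quant(W)

# Symmetric calibration (GPTQ): both sides see the same input X
sym_err = np.linalg.norm(Wq @ X - W @ X) ** 2

# Asymmetric calibration (GPTQv2): quantized weights see X~, FP weights see X
asym_err = np.linalg.norm(Wq @ X_tilde - W @ X) ** 2
```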
>2. Since this work is meant to be an improvement of the original GPTQ, it would be beneficial to start the discussion with a comparison to vanilla GPTQ, i.e. weight only, per-group quantization. Maybe author could consider moving Appendix B.3/Table 7 into main manuscript, with a few more examples from different Llama models.
Thank you for your suggestion. We agree that demonstrating improvements over vanilla GPTQ (weight-only, per-group quantization) is important since many existing libraries target this setup. In the revised version, we will move Table 7 to the main text and expand it with comprehensive comparisons across multiple LLaMA models. Here are additional results for LLaMA3-8B's perplexity:
| Bitwidth | W4A16-G128 | W3A16-G128 | W2A16-G128 |
|----------|------------|------------|------------|
| GPTQ | 6.71 | 7.91 | 25.24 |
| GPTQv2 | 6.41 | 7.72 | 14.17 |
>3. Even though (possibly) the majority of researchers still calls it GPTQ, the original author officially published their work at ICLR 2023 in the name of "OPTQ". I would not suggest the author of this work to change all the names/acronyms in the manuscript, but in respect of the choice of the original "GPTQ" authors, maybe include both names during first mention/citation and state that for only GPTQ will be used afterward for simplicity reason.
This is an essential problem that we were not aware of. We'll add a footnote in the introduction to clarify the naming issue. Thanks again for your suggestion.
---
Rebuttal Comment 1.1:
Comment: Thank for your clarifications. I would like to keep my assessment unchanged.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your positive feedback. We will clarify them in the final version. | null | null | null | null | null | null |
Boosting Protein Graph Representations through Static-Dynamic Fusion | Accept (poster) | Summary: This manuscript proposes a simple relational heterogeneous GNN model to represent both structural information and molecular dynamics correlations of proteins. It validates the effectiveness of approach in multiple protein graph representation related tasks simultaneously.
Claims And Evidence: I think the claims made by the manuscript have been well verified. There are no obvious flaws to be found.
Methods And Evaluation Criteria: This paper presents a relatively simple relational graph neural network model that addresses protein-related applications. Its technical contribution may be limited from the perspective of the GNN research field, but it has application value for protein prediction tasks.
Theoretical Claims: This paper makes no new explicit theoretical claims.
Experimental Designs Or Analyses: I believe that the experimental work in this paper is worthy of recognition for its validation of the model on multiple protein-related tasks. The downside is that it may lack a current hotspot approach for comparing each task. Since I don't know much about these tasks, I can't give specific examples, but I believe this should be present.
Supplementary Material: N/A
Relation To Broader Scientific Literature: From the GNN research area, with which I am more familiar, the model may offer only a limited application contribution.
In terms of bioinformatics methodology research, this manuscript's proposal to combine protein structural information and molecular dynamics correlations when constructing graph structures has some value.
In terms of protein-related application tasks, current models such as AlphaFold or LLMs may achieve better results, so its contribution deserves further discussion.
Essential References Not Discussed: I think it is clear that this paper is missing some key baseline model discussions. Using only RGCN and RGAT as baseline is clearly insufficient.
Other Strengths And Weaknesses: Strengths
- This paper is original for the intersection of deep learning and the biomedical field. It's a cross-field study with practical contributions.
- The writing in this manuscript is clear and no reading difficulties were found.
Weaknesses
- As mentioned earlier, technological innovation is limited
- As mentioned earlier, there are limitations to the baseline approaches used.
Other Comments Or Suggestions: N/A
Questions For Authors: - Have the authors compared with current advanced models like AlphaFold or LLMs, and analyzed the contributions this manuscript can make?
- This paper may lack a theoretical exposition of the methodology, which could strengthen its validation. Is there any theoretical support for combining protein structure information with molecular motion correlations?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: **To Weakness 1:**
We acknowledge the reviewer's concern about technical novelty. Our straightforward framework bridges static structures and dynamic correlations from molecular dynamics—a growing need as such data becomes increasingly available. The intentional simplicity of our approach is actually advantageous as it:
1) ensures broad applicability across different architectures,
2) facilitates easy adoption, and
3) establishes a clear baseline for future work in this emerging area.
Our primary contribution is demonstrating that integrating these complementary information sources provides consistent benefits across different tasks and architectures. As detailed in our response to Reviewer 2, we've conducted additional experiments with both domain-specific (SS-GNN) and equivariant architectures (EGNN), showing that our approach generalizes beyond simple GNNs.
With AlphaFold having largely solved static structure prediction, the frontier has shifted toward sampling dynamical conformational ensembles and generating molecular trajectories. The EU Horizon project (ID: 101094651) aimed at creating a Molecular Dynamics Data Bank further emphasizes the timeliness of our contribution. Our framework provides a simple yet effective approach to leverage this emerging data, establishing a strong baseline for future research in protein dynamics modeling.
**To Weakness 2:**
We appreciate the reviewer's concern about baseline comparisons. As detailed in our response to Reviewer 2, we have extended our evaluation with additional architectures beyond RGCN and RGAT:
1) We implemented a relational variant of EGNN.
2) We adapted SS-GNN, a domain-specific model for binding affinity prediction.
These experiments demonstrate that our approach offers consistent benefits across different architectural complexity levels and maintains its advantages when integrated with domain-specific models.
**To Question 1:**
Thank you for this question. We would like to clarify that these models address fundamentally different tasks than our approach:
- **Different problem domains:** AlphaFold is designed for protein structure prediction from sequence, while our work focuses on utilizing existing structural and dynamic information to predict protein-related properties. Similarly, protein language models (PLMs) like ESM primarily operate on sequence data, not on integrating dynamics.
- **Complementary research directions:** Our approach is complementary rather than competitive to powerful models like AlphaFold and protein language models (PLMs). As AlphaFold excels in static structure prediction, research is shifting toward dynamic conformations and trajectories, where our framework can provide valuable insights. Integrating PLM embeddings into node features also represents an intriguing avenue for capturing sequence, structural, and dynamic relationships simultaneously.
We believe our work effectively addresses the challenge of integrating molecular dynamics with structural data, laying a foundation for future models that bridge these currently separate areas.
**To Question 2:**
Thank you for this question. Our approach of combining structural and dynamic information has solid theoretical foundations:
- **Graph-theoretic properties:** We conducted network analysis on our Distance and Combined graphs, revealing significant improvements in key graph properties. For atomic-level graphs, the network diameter decreased from 24.44 to 21.32, and the average shortest path length reduced from 9.7 to 8.9 when correlation edges were added. For residue-level graphs, these improvements were even more dramatic, with diameter decreasing from 10.12 to 6.68 and average shortest path length from 4.3 to 3.2. These quantitative metrics demonstrate that correlation edges create critical shortcuts in the graph.
- **Physics-driven graph rewiring:** Our approach can be viewed as a physics-driven graph rewiring method. Such rewiring is known to mitigate over-squashing in GNNs [1]. The correlation edges, derived from actual physical motion relationships, create direct pathways between dynamically coupled but spatially distant regions.
- **Graph curvature analysis:** We analyzed the Ollivier-Ricci curvature of both Distance and Combined graphs. The combined graphs show general increases in positively curved edges (red in the visualization), indicating improved information flow properties. Positive curvature regions are associated with better message passing efficiency in graph neural networks. An example (PDB-ID 2I5J) is here: [https://anonymous.4open.science/r/rebuttal_2025_1-16F1/figure_ricci.png]
These analyses provide theoretical support for why our combined graph approach enhances performance, especially in tasks involving long-range protein interactions.
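The qualitative effect of such shortcut edges can be reproduced on a toy graph with a few lines of standard-library Python (a 12-node chain plus two shortcut edges, standing in for our actual distance and combined protein graphs):

```python
from collections import deque

def all_pairs_bfs(n, edges):
    """Unweighted shortest-path distances from every node via BFS."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    dists = []
    for s in range(n):
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        dists.append(d)
    return dists

def diameter_and_avg(n, edges):
    dists = all_pairs_bfs(n, edges)
    pair = [dists[i][j] for i in range(n) for j in range(n) if i < j]
    return max(pair), sum(pair) / len(pair)

# "Distance" graph: a 12-node chain standing in for consecutive residues
chain = [(i, i + 1) for i in range(11)]
d0, l0 = diameter_and_avg(12, chain)

# "Combined" graph: add correlation edges between distant, coupled nodes
combined = chain + [(0, 6), (3, 11)]
d1, l1 = diameter_and_avg(12, combined)

# Shortcut edges shrink both metrics, mirroring the trend in our analysis
assert d1 < d0 and l1 < l0
```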
**References:**
[1] Attali, H., Buscaldi, D., & Pernelle, N. (2024). *Rewiring techniques to mitigate oversquashing and oversmoothing in GNNs: A survey*.
---
Rebuttal Comment 1.1:
Comment: I couldn't agree more with the shortcomings mentioned by reviewer Au2g in the first point of improvement. Admittedly, this paper is an innovative cross-disciplinary work, but similar research paradigms are already commonplace, and this work's use of deep learning models is not especially inspiring; its contribution to the current ICML conference seems insufficient. I will maintain the score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We recognize the challenges of evaluating interdisciplinary ML work. We'd like to clarify our paper's positioning and address your concerns with new evidence. We hope this may support a more positive rating.
## Application-Driven ML Research Value
Our submission belongs to the **application-driven ML paper track**, for which ICML guidelines explicitly state:
- "Novel ideas that are simple to apply may be especially valuable"
- "Originality need not mean wholly novel methods. It may mean a novel combination of existing methods to solve the task at hand"
The value of our contribution lies not in proposing a new model, but in providing an easy-to-use yet effective framework for creating better protein representations that consistently improve performance across diverse tasks and architectures, which also introduces the first application of relational GNNs to directly process protein MD data.
## New Experimental Validation
To address the concern about limited baselines raised by you and Reviewer Au2g, we have made our best effort to validate our approach across diverse architectures:
- **Invariant GNN**: RGCN and RGAT (in original submission)
- **Equivariant GNN**: Relational EGNN (https://arxiv.org/abs/2102.09844)
- **Graph Transformer**: Relational GPS (https://arxiv.org/abs/2205.12454)
- **Domain-specific model**: SS-GNN (https://doi.org/10.1021/acsomega.3c00085), a specialized model for binding affinity prediction
All Results: https://anonymous.4open.science/r/rebuttal_2025_1-16F1
Across all architectures, Combined Graph consistently outperforms Distance Graph. The fact that we could implement and evaluate these within a short timeframe demonstrates our method's simplicity and ease of use, while the consistent performance improvements confirm its effectiveness.
## Analysis of Equivariant vs. Invariant Architectures
We've conducted an analysis comparing invariant (RGCN, RGAT) and equivariant (R-EGNN) architectures across tasks and graph types. Here's a summary of which architecture performs best for each combination:
| Task | Distance Graph | Correlation Graph | Combined Graph|
|-|-|-|-|
| Atomic Adaptability | R-EGNN | R-EGNN | RGCN|
| Binding Site Detection | R-EGNN | R-EGNN | R-EGNN|
| Binding Affinity | R-EGNN | RGAT | RGCN|
This provides several insights:
1. **Equivariant advantage for distance graph**: R-EGNN works best on Distance Graph as it naturally uses distances to modulate message passing, making it well-suited for Distance Graph.
2. **Task-specific architecture selection**: Binding site detection shows the most consistent benefit from equivariant architectures, likely due to its regular graphs (all nodes represent Cα atoms), making 3D spatial relationships particularly important.
3. **Architecture design implications**: For Combined Graph, RGCN often outperforms R-EGNN, suggesting our preliminary implementation of R-EGNN, where we process distance and correlation graphs with separate EGNNs and then merge their outputs, may not be optimal. The design of fusion mechanisms that effectively utilize both static and dynamic information in equivariant architectures is a valuable direction for future work.
## Theoretical Support
- **Graph properties**: Adding correlation edges creates shortcuts and changes key graph properties:
| Graph Level | Metric | Distance | Combined|
|-|-|-|-|
| Atomic | Diameter | 24.4 | 21.3|
| Atomic | Avg. Shortest Path | 9.7 | 8.9|
| Residue | Diameter | 10.1 | 6.7|
| Residue | Avg. Shortest Path | 4.3 | 3.2|
- **Physics-driven graph rewiring** and **graph curvature analysis**: Please see our first rebuttal.
## Emerging Research Direction
With AlphaFold2 having largely solved static protein structure prediction, the research frontier has shifted toward generating dynamic protein structures. Recent landmark studies illustrate this trend:
- **Generative modeling of MD trajectories** generates MD trajectories directly (https://arxiv.org/abs/2409.17808)
- **Conformational ensemble generation** produces protein structures of different dynamic states (https://doi.org/10.1101/2024.12.05.626885)
The Molecular Dynamics Data Bank project further reflects the growing abundance of MD data (https://mddbr.eu/about/). With the rapid growth of these research directions and MD data availability, **methods like ours that effectively utilize dynamic information will become increasingly valuable**.
## Summary
From an application-driven ML perspective, our contribution offers significant value: a simple, easy-to-use, and broadly applicable approach that effectively enhances protein graph representations by fusing static and dynamic information. The consistent performance improvements across diverse tasks and architectures, connection to emerging ML research frontiers and computational biology needs, and insights into architecture selection make our work a meaningful contribution to both the ML and biology communities. | Summary: The authors propose to integrate structural and dynamic distance-based features into relational graph neural networks to predict local and global properties of 3D protein biomolecules. The authors' experiments are comprehensive and informative, and this work outlines a notable gap in the literature on protein representation learning. Nonetheless, the depth of the authors' methodological contributions is quite limited, which makes this work still seem preliminary.
## Update after rebuttal:
The authors have addressed my main concern regarding the novelty and impact of this work. As such, I am comfortable with my current score of "Accept".
Claims And Evidence: The claims made by the authors are clear and convincing thanks to their repeat experiments.
Methods And Evaluation Criteria: The authors' evaluation criteria are clear and well-founded.
Theoretical Claims: The authors do not make any notable theoretical claims.
Experimental Designs Or Analyses: The validity of the authors' experimental designs is sound.
Supplementary Material: I've reviewed each of the authors' supplementary materials.
Relation To Broader Scientific Literature: The authors adequately establish their work in the existing body of protein representation learning literature.
Essential References Not Discussed: No essential references were omitted as best as I can tell.
Other Strengths And Weaknesses: **Strengths:**
- The authors point out an important gap in the protein representation learning literature.
- The authors conduct several key experiments to demonstrate the utility of including dynamic (i.e., molecular dynamics-derived) information for protein representation learning.
- The authors' experiments, including their metrics and dataset splits, are standardized and easily interpretable.
**Points for improvement:**
- The authors' proposed methodological advances (i.e., adding dynamics-based edges into existing *invariant* relational graph neural network models) are somewhat limited in my view. The dynamics-driven insight is important, but the authors' experiments lack a depth of characterization of how far the benefits of such dynamics information extend beyond simple relational graph neural networks. More specifically, the paper currently reads as if the authors directly took the MISATO dataset, performed a simple integration of its features into existing relational graph neural networks, and then ran a bunch of experiments (which are important nonetheless). For a workshop paper, this would offer outstanding value for readers, though as a full conference submission, I believe more experimental depth is needed to provide readers (and the research community broadly speaking) with lasting value through this work.
- Similar to the first point above, the authors only study *invariant* graph neural networks, though it has been shown for many years now that certain types of *equivariant* graph neural networks can deliver notable performance benefits for protein representation learning [1].
- Some of the authors' (biomolecular) graph construction details are omitted, such as how ligands (i.e., small molecules) are integrated into the authors' static-dynamic graph construction processes.
**References:**
[1] Jamasb, A. R., Morehead, A., Joshi, C. K., Zhang, Z., Didi, K., Mathis, S. V., ... & Blundell, T. L. Evaluating Representation Learning on the Protein Structure Universe. In The Twelfth International Conference on Learning Representations.
Other Comments Or Suggestions: I'd highly suggest the authors include experiments with more than relatively simple relational graph neural network architectures such as RGCN and RGAT. Instead, the authors may consider also experimenting with relational graph transformers such as those of [1]. More importantly in my view, however, the authors should consider relational variants of *equivariant* graph neural networks such as those of [2], since proteins can inherently be seen as 3D point clouds with node and edge features.
**References:**
[1] Diao, C., & Loynd, R. (2022). Relational attention: Generalizing transformers for graph-structured tasks. arXiv preprint arXiv:2210.05062.
[2] Satorras, V. G., Hoogeboom, E., & Welling, M. (2021, July). E (n) equivariant graph neural networks. In International conference on machine learning (pp. 9323-9332). PMLR.
Questions For Authors: - How do the authors construct their input graphs for protein-ligand binding affinity prediction? Do they omit the ligands in such graphs, or do they extend their static-dynamic graph construction algorithm to ligand molecules in this setting? If the latter is the case, what are the details of this topology construction?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: **To Weakness 1:**
We appreciate the reviewer's feedback. While our approach appears straightforward, its significance lies in bridging the gap between static structural information and dynamic behavior in protein representation.
To address concerns about experimental depth and generalizability, we've conducted additional experiments:
- **Domain-specific architectures**: We evaluated our approach on SS-GNN (Zhang et al., 2023, https://doi.org/10.1021/acsomega.3c00085), a specialized model for binding affinity prediction. We maintained all hyperparameters and featurization exactly as reported by the original authors, only replacing the GNN component with RGCN. The results show consistent improvements when using our Combined Graph approach [Results: https://anonymous.4open.science/r/rebuttal_2025_1-16F1/ss-gnn_binding_affinity_dist5.0_corr0.70.png.png https://anonymous.4open.science/r/rebuttal_2025_1-16F1/ss-gnn_binding_affinity_dist8.0_corr0.60.png.png]
- **Equivariant architectures**: As mentioned in our response to Weakness 2, we've also tested our approach with a relational EGNN variant.
Our primary contribution is demonstrating that integrating static and dynamic information provides consistent benefits across different tasks and architectures. The intentional simplicity of our approach facilitates easy integration with various models and establishes an effective baseline for future research in protein dynamics modeling - particularly important as molecular dynamics data becomes increasingly available.
**To Weakness 2:**
We thank the reviewer for this comment. We originally focused on topological information (without explicit coordinates) to isolate the impact of our core contribution - the integration of dynamic correlation information. This simpler setup allowed us to directly evaluate the benefit of our graph representation approach. However, we agree that equivariant GNNs are important for protein representation learning.
To address this concern, we've now implemented a simple relational variant of EGNN (Satorras et al., 2021) and conducted additional experiments across two tasks. These experiments show that our approach generalizes beyond invariant GNNs and delivers benefits when applied to equivariant architectures as well:
1. **Binding Site Detection**: Our combined graph approach shows substantial improvements over the distance-only baseline across all metrics (+14.49% in F1 score, +23.78% in AUCPR). The correlation graph alone underperforms the distance graph, but when combined, we see consistent improvements, suggesting effective integration of both information types. [Results: https://anonymous.4open.science/r/rebuttal_2025_1-16F1/relational-egnn_binding_site_detection.png.png]
2. **Atomic Adaptability Prediction**: For this inherently dynamic property, the correlation graph alone shows remarkable improvements over the distance baseline (+11.97% average improvement across all metrics). Interestingly, the combined graph performs similarly to the distance graph rather than outperforming both individual graphs. We attribute this to limitations in our preliminary relational EGNN implementation, which may not optimally fuse information from different relation types. Nevertheless, these results still validate that dynamic information captured in the correlation graph provides valuable signal for predicting motion-related properties. [Results: https://anonymous.4open.science/r/rebuttal_2025_1-16F1/relational-egnn_atomic_adaptability.png.png]
3. **Binding Affinity Prediction**: Experiments are still ongoing.
These experiments demonstrate that our approach extends beyond simple invariant GNNs to more sophisticated equivariant architectures. While our simple relational EGNN implementation shows mixed results for combining information types, the experiments consistently confirm the value of dynamic information for protein property prediction. With further architectural refinements, we believe the complementary nature of static and dynamic information can be more effectively leveraged in equivariant networks.
**To Weakness 3:**
Ligands are only included in the binding affinity prediction task (not in atomic adaptability prediction or binding site detection). In protein-ligand complexes, both protein and ligand atoms are treated consistently - they are part of the same correlation and distance matrices, to which thresholds are applied to construct the adjacency matrix. The only distinction is a binary attribute specifying whether each atom belongs to the ligand or protein. We can add these details to an updated version of the manuscript.
**To Question 1:**
Please see our response to Weakness 3.
**To Other Comments Or Suggestions:**
We thank the reviewer for these valuable suggestions. As mentioned in our response to Weakness 2, we have implemented a relational variant of EGNN, and we agree that further exploration of relational transformers would be valuable future work.
---
Rebuttal Comment 1.1:
Comment: I'd like to thank the authors for their insightful rebuttal. Based on their latest results with a relational equivariant graph neural network (EGNN) implementation, I'd like to increase my score from a "Weak reject" to a "Weak accept", to signal that I believe the contributions of this work have notably improved with the benchmarking of both invariant and equivariant representation learning improvements. To further improve this manuscript, I'd recommend the authors include either a qualitative or quantitative analysis of the (theoretical or empirical) benefits of equivariant vs. invariant representation learning for static-dynamic protein graphs (e.g., highlighting when, why, or how one type of learning may be better than another). This would enhance the depth of contributions this paper offers for the machine learning community.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the positive feedback and for acknowledging the value of our experiments with equivariant GNNs.
## New Experimental Results
**Binding Affinity Prediction with Relational EGNN:**
https://anonymous.4open.science/r/rebuttal_2025_2-AB23/relational-egnn_binding_affinity.png
**Results with Relational Graph Transformers (Relational GPS):**
Results for the three tasks: https://anonymous.4open.science/r/rebuttal_2025_1-16F1/relational-gps_atomic_adaptability.png, https://anonymous.4open.science/r/rebuttal_2025_1-16F1/relational-gps_binding_affinity.png, https://anonymous.4open.science/r/rebuttal_2025_1-16F1/relational-gps_binding_site_detection.png
We chose GPS as it represents a widely-used graph transformer baseline with an official implementation in PyTorch Geometric that can be easily modified into a relational variant by replacing its local message passing layer with RGCN (https://arxiv.org/abs/2205.12454).
Both Relational EGNN and Relational GPS results show consistent trends: while distance and correlation graphs show varying performance across different tasks, the combined graph consistently delivers the best results.
## Analysis of Equivariant vs. Invariant Representation Learning
Following your recommendation, we've compared invariant (RGCN, RGAT) and equivariant (R-EGNN) architectures across our three tasks and graph types. Here's a summary of which architecture performs best for each combination:
| Task | Distance Graph | Correlation Graph | Combined Graph |
|------|----------------|-------------------|----------------|
| Atomic Adaptability Prediction| R-EGNN | R-EGNN | RGCN |
| Binding Site Detection | R-EGNN | R-EGNN | R-EGNN |
| Binding Affinity Prediction| R-EGNN | RGAT | RGCN |
Based on these results, we can offer several insights:
1. **Overall architecture comparison**: While performance varies across tasks, R-EGNN generally outperforms invariant models in most scenarios. This is expected as EGNN preserves rotational and translational symmetries of the molecular structure, which is crucial for protein modeling.
2. **Equivariant advantage for distance graphs**: R-EGNN consistently performs best on distance graphs across all three tasks. This makes sense architecturally, as EGNN explicitly uses coordinates and distances to modulate the message passing process, making it particularly well-suited for distance-based representations.
3. **Correlation graph and equivariance**: For correlation graphs, R-EGNN still shows advantages in two tasks. While correlation edges explicitly encode long-range dependencies, equivariance/symmetry also implicitly models certain long-range relationships. The connection between these approaches is subtle and deserves further exploration.
4. **Task-specific behavior**: Binding site detection shows the most consistent benefit from equivariant architectures across all graph types. This is likely due to the more regular graphs (all nodes represent Cα atoms, although belonging to different amino acids), making 3D spatial relationships particularly important.
5. **Combined graph performance**: For combined graphs, RGCN outperforms R-EGNN in two out of three tasks. This suggests that our preliminary implementation of relational EGNN, where we process distance and correlation graphs with separate EGNN models and then merge their outputs, may not optimally integrate information from different relation types. The design of fusion mechanisms that effectively leverage both static and dynamic information in equivariant architectures is a valuable direction for future work.
The analysis reveals which architectural properties are best suited for different protein graph representations, providing valuable insights for future protein representation research. These new insights will be included in our revised manuscript.
## Concluding Remarks
We are grateful for the reviewer's suggestions which have significantly enhanced the quality of our manuscript. Through our original experiments and these new additions, we have now validated our static-dynamic fusion approach across a comprehensive range of architectures:
- Invariant GNNs: RGCN and RGAT
- Equivariant GNNs: Relational EGNN
- Graph Transformers: Relational GPS
- Domain-specific architectures: Relational SS-GNN
The fact that we could implement and evaluate these diverse architectures within a short timeframe highlights a key strength of our approach: its simplicity, ease of use, and consistent effectiveness. Our framework is designed to be easily integrated with various backbone architectures while reliably delivering performance improvements.
As noted in the ICML guidelines for application-driven ML submissions, "novel ideas that are simple to apply may be especially valuable." Our work exemplifies this principle by providing a straightforward yet effective approach to integrating static and dynamic information of proteins that can be easily adopted by the broader research community.

---
Summary: The paper introduces a novel graph representation technique that integrates both static structural information and dynamic correlations from molecular dynamics (MD) trajectories for enhanced protein property prediction. This technique combines relational graph neural networks (RGNNs) with a dual approach:
- **Distance-Based Graph**: Captures spatial proximity using Euclidean distances between nodes.
- **Correlation-Based Graph**: Derives motion correlations from MD trajectories, highlighting dynamically coupled regions that may be spatially distant.

The Combined Graph integrates these two sources of information, allowing the model to leverage both structural constraints and dynamic interactions.
Claims And Evidence: Yes. The authors claim that their approach provides superior performance across three tasks: Atomic Adaptability Prediction, Binding Site Detection and Binding Affinity Prediction.
Methods And Evaluation Criteria: Yes. The authors utilize Relational Graph Convolutional Networks (RGCN) and Relational Graph Attention Networks (RGAT) as baseline models. Experimental results are presented with statistical significance over multiple runs.
Theoretical Claims: No. This is not a theoretical paper.
Experimental Designs Or Analyses: Yes. The datasets are split using sequence clustering to prevent information leakage. Comprehensive ablation studies are conducted to demonstrate the benefit of combining static and dynamic graphs.
Supplementary Material: Yes. The supplementary material includes implementation details, hyperparameters, dataset preparation, and extensive experimental results.
Relation To Broader Scientific Literature: The paper builds upon recent advancements in protein representation learning, graph neural networks, and MD simulations. It effectively combines concepts from static graph-based methods (e.g., structure-based GNNs) and dynamic analysis (e.g., dynamical network analysis, mutual information).
Essential References Not Discussed: dynamical surface representation methods, like [1].
[1] Sun, D., Huang, H., Li, Y., Gong, X., & Ye, Q. (2023). DSR: dynamical surface representation as implicit neural networks for protein. Advances in Neural Information Processing Systems, 36, 13873-13886.
Other Strengths And Weaknesses: Strengths:
1. Innovative Graph Representation: The combination of static and dynamic features offers a more comprehensive view of protein behavior.
2. Robustness of Results: The improvements are consistent across different architectures and evaluation metrics.
Weaknesses:
1. The approach may be computationally intensive due to the incorporation of MD trajectories and complex graph processing.
2. How is the dataset built, i.e., how is the dynamic information obtained? In your setting, it seems that you obtain the complexes and dynamics via the Amber20 software package, i.e., the dynamics are obtained by software simulation instead of real wet-lab experiments. How do you evaluate the correctness of the dataset?
Other Comments Or Suggestions: None.
Questions For Authors: 1. Lines 124-125: why are the thresholds set to 0.6 and 0.3?
2. See the weaknesses above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---
Rebuttal 1:
Rebuttal: **Essential References Not Discussed**
> Dynamical surface representation methods, like [1]. [1] Sun, D., Huang, H., Li, Y., Gong, X., & Ye, Q. (2023). DSR: dynamical surface representation as implicit neural networks for protein. Advances in Neural Information Processing Systems, 36, 13873-13886.
Thank you for pointing out this relevant reference. We will include this citation in our revised manuscript and discuss how our approach relates to dynamical surface representation methods.
**Weakness 1:**
> The approach may be computationally intensive due to the incorporation of MD trajectories and complex graph processing.
The computational cost of our approach can be divided into three steps:
1. **Generating molecular dynamics simulations.** Our approach utilizes the ever-growing availability of MD data rather than generating new simulations. For example, the recently funded EU Horizon project (Grant agreement ID: 101094651) aims at creating a Molecular Dynamics Data Bank similar to the existing protein data bank (PDB).
2. **Data processing and preprocessing.** Various publicly available Python packages exist for analyzing MD trajectories, such as PyTraj which we used in our implementation. While computing correlation matrices requires significant computational resources, this is a one-time preprocessing step. We efficiently parallelized this computation across 50 jobs on Intel Xeon Platinum processors (36 cores each), completing all correlation matrices for the dataset in approximately 30 minutes. Once computed, these matrices are stored and reused without additional overhead.
3. **Model training.** In our experiments, training with the Combined Graph requires only 15-30% more time compared to using the Distance Graph alone, which we consider an acceptable trade-off given the significant performance improvements observed across all tasks.
We argue that our integration of molecular dynamics information through correlation graphs is computationally efficient compared to methods that might incorporate entire molecular dynamics trajectories. By essentially compressing the complex dynamical behavior into correlation edges, we achieve a balance between capturing essential dynamic information and maintaining computational tractability.
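To make step 2 concrete, the dynamic cross-correlation that such trajectory analysis produces can be sketched in plain NumPy. This is an illustrative sketch, not our PyTraj pipeline; the function name and the (frames, atoms, 3) array layout are assumptions:

```python
import numpy as np

def dynamic_cross_correlation(coords):
    """Normalized cross-correlation of atomic displacements.

    coords: (T, N, 3) array of atom positions over T trajectory frames.
    Returns an (N, N) matrix with entries
    C_ij = <dr_i . dr_j> / sqrt(<|dr_i|^2> <|dr_j|^2>),
    where dr is each atom's displacement from its mean position.
    """
    disp = coords - coords.mean(axis=0, keepdims=True)  # (T, N, 3)
    # Time-averaged dot products of displacement vectors
    cov = np.einsum("tix,tjx->ij", disp, disp) / coords.shape[0]
    sd = np.sqrt(np.diag(cov))
    return cov / np.outer(sd, sd)
```

Entries near +1 or -1 flag strongly (anti-)correlated motion between atom pairs, which is exactly the signal that the correlation graph later thresholds into edges.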
**Weakness 2:**
> How is the dataset built, i.e., how is the dynamic information obtained? In your setting, it seems that you obtain the complexes and dynamics via the Amber20 software package, i.e., the dynamics are obtained by software simulation instead of real wet-lab experiments. How do you evaluate the correctness of the dataset?
While the structures of the complexes have been experimentally determined through X-ray crystallography, the dynamics data was indeed collected through simulations. The main reason being that there are no wet-lab experiments that yield time-resolved atomic positions. Note there are methods such as NMR spectroscopy that can provide some information on dynamics, but the availability of these data is highly limited as the respective experiments are challenging.
The molecular dynamics data generated by MISATO follow the current state-of-the-art and have been extensively validated on experimental data (https://www.nature.com/articles/s43588-024-00627-2). This validation process ensures that the simulated dynamics reasonably approximate real protein behavior, providing confidence in the reliability of our approach.
**Question 1:**
> Lines 124-125: why are the thresholds set to 0.6 and 0.3?
These thresholds are indeed hyperparameters, and there's no theoretical method to determine their optimal values directly. For the distance thresholds (4.5Å for atomic-level and 10Å for residue-level), we adopted widely established values from the literature as stated in line 145, left column: "These thresholds are widely used in protein modeling: the 4.5Å threshold captures meaningful atomic interactions (Bouysset & Fiorucci, 2021), while the 10Å threshold is commonly adopted for residue-level contacts (Gligorijevic et al., 2021b)".
As we mention in the paper at line 127, right column: "These thresholds are chosen to maintain similar graph sparsity, thereby achieving a fairer comparison when either Correlation or Distance Graph is used."
Specifically, we conducted an analysis on a subset of proteins to determine correlation thresholds that would yield graphs with comparable sparsity (similar average node degree and edge count) to the distance-based graphs. This approach ensures that any performance differences between Distance and Correlation graphs stem from the fundamental information encoded by different edge construction methods (spatial proximity versus dynamic correlation), rather than from differences in graph density.
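As an illustration of the thresholding described above, here is a minimal sketch of how the two matrices could be turned into a relational edge set (a hypothetical helper in plain NumPy; our actual implementation details may differ):

```python
import numpy as np

def build_relational_edges(dist, corr, dist_thresh=4.5, corr_thresh=0.6):
    """Threshold distance/correlation matrices into two edge types.

    dist, corr: (N, N) symmetric matrices (distances in Angstrom,
    correlations in [-1, 1]). Returns (edge_index, edge_type), where
    edge_type 0 marks distance edges and 1 marks correlation edges.
    """
    n = dist.shape[0]
    off_diag = ~np.eye(n, dtype=bool)                    # no self-loops
    dist_adj = (dist < dist_thresh) & off_diag           # spatial proximity
    corr_adj = (np.abs(corr) > corr_thresh) & off_diag   # coupled motion
    src_d, dst_d = np.nonzero(dist_adj)
    src_c, dst_c = np.nonzero(corr_adj)
    edge_index = np.stack([np.concatenate([src_d, src_c]),
                           np.concatenate([dst_d, dst_c])])
    edge_type = np.concatenate([np.zeros(len(src_d), dtype=int),
                                np.ones(len(src_c), dtype=int)])
    return edge_index, edge_type
```

Tuning `corr_thresh` so that both relations yield comparable edge counts is the sparsity-matching analysis described above.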
---
Rebuttal Comment 1.1:
Comment: 1. Comparisons with protein-dynamics-related works: strengths, differences, etc.
2. "While the structures of the complexes have been experimentally determined through X-ray crystallography, the dynamics data was indeed collected through simulations. The main reason being that there are no wet-lab experiments that yield time-resolved atomic positions." This confuses me, as the dynamics do not seem to be real, which is a tricky problem. Since the dynamics are derived from simulations rather than real experimental observations, research based on these dynamics is inherently limited in its practical validity.
---
Reply to Comment 1.1.1:
Comment: Thank you for your review. We noticed your evaluation changed from "weak accept" to "weak reject" on April 6, based on concerns about the use of simulation-derived dynamics. We believe this important epistemological question deserves a thorough response and hope our clarification will address your concerns and justify reconsidering your evaluation to a more positive rating.
### Current limitations of experimental techniques
We must clarify that **currently, no experimental technique can provide continuous atomic-resolution trajectories of protein dynamics**:
- X-ray crystallography or cryo-EM provides atomic-resolution structures but only static snapshots under specialized conditions, such as crystals or frozen states
- NMR spectroscopy provides valuable dynamic insights but is constrained to smaller proteins, indirect structural inference, and cannot yield explicit atomic trajectories
- Ultrafast spectroscopy achieves remarkable temporal resolution yet provides limited structural detail and cannot generate continuous atomic trajectories
In contrast, MD simulations are indispensable precisely because they provide continuous, atomic-level trajectories unavailable from experiments. MD simulations are not only used in academic research but have become critical tools in pharmaceutical and chemical engineering, where they enable the prediction of molecular behavior that cannot be directly observed.
### Scientific validity of simulation-derived data
The scientific validity of research based on simulation-derived data has been firmly established. Consider these examples:
The 2013 Nobel Prize in Chemistry was awarded for "the development of multiscale models for complex chemical systems" – specifically recognizing MD simulation used in our research. This award acknowledges that simulations effectively capture essential physical and chemical processes despite not being direct experimentation.
The 1998 Nobel Prize in Chemistry was awarded for density-functional theory (DFT) and computational methods in quantum chemistry, which enable predicting molecular properties through calculations rather than direct measurement.
Recently, the 2024 Nobel Prize in Chemistry was awarded for AlphaFold2, a computational model that predicts protein structures with accuracy comparable to "real experimental observations" – demonstrating that computational approaches can generate highly reliable results.
These examples illustrate the broader scientific consensus on simulation-derived data. For more examples, we refer to a special issue that examines how computation has empowered numerous Nobel Prize-winning discoveries by making complex systems computable and providing insights inaccessible to direct experimentation (https://www.nature.com/collections/ggidgjfffi).
Thus, although simulation-derived data inherently reflect theoretical models, their extensive validation, widely recognized scientific impact, and widespread acceptance provide strong confidence in their practical validity.
### MD simulations as ground truth for ML research
MD simulations are now widely accepted as the ground truth for numerous ML research in natural science:
- Generative models of MD trajectories (https://arxiv.org/abs/2409.17808) use MD simulations as the gold standard for training and evaluating models that predict molecular motion
- Conformational ensemble generation (https://doi.org/10.1101/2024.12.05.626885) develops deep learning systems that can generate protein structure ensembles, which are then compared against MD simulations as the reference standard
- Machine learning force fields (https://doi.org/10.1021/acs.chemrev.0c01111) are developed and compared against classical force fields used in MD simulations, with MD providing the benchmark data for assessing their performance
These examples demonstrate that MD simulations serve as established benchmarks against which ML methods are evaluated.
## Regarding comparison with protein dynamics related works:
This is important. We will discuss DSR as suggested and also compare with established approaches like Gaussian Network Model, which are limited by their coarse-grained nature and reliance on harmonic approximation.
In contrast, our approach automatically captures correlations at all scales while maintaining compatibility with powerful relational GNNs.
## New experiments
We have conducted extensive experiments that further validate our static-dynamic fusion approach:
- Equivariant GNN: A relational variant of EGNN (https://arxiv.org/abs/2102.09844)
- Graph Transformer: A relational variant of GPS (https://arxiv.org/abs/2205.12454)
- Domain-specific architecture: SS-GNN (https://doi.org/10.1021/acsomega.3c00085), a specialized model for binding affinity prediction
Results: https://anonymous.4open.science/r/rebuttal_2025_1-16F1
Across all architectures, Combined Graph outperforms Distance Graph, demonstrating the simplicity, ease of use, and consistent effectiveness of our method. | null | null | null | null | null | null | null | null |
---

AutoCATE: End-to-End, Automated Treatment Effect Estimation
Paper Decision: Accept (poster)

Summary: In this paper, the authors have developed and released an AutoML library, called AutoCATE, for the automated selection and hyperparameter tuning of meta-learners for CATE estimation. They divide the CATE development pipeline into evaluation, estimation, and ensembling phases, where evaluation corresponds to choosing a proxy risk, estimation corresponds to the steps used for training CATE learners (including hyperparameter tuning), and ensembling uses the proxy risk from the evaluation stage to find the best learners from the estimation stage, which can consist of an ensemble of learners. They conducted extensive experiments on four benchmark datasets (IHDP, ACIC, Twins, and News) to study different design choices for the three stages of AutoCATE.
Claims And Evidence: 1. They claim AutoCATE is an AutoML solution for CATE, but it is limited to meta-learner approaches only and does not consider many other learners based on trees or neural networks, such as causal forest, TARNet, and SNet.
Methods And Evaluation Criteria: 1. As discussed above, the analysis is limited to meta-learners only, and does not consider other CATE learners.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The experimental design looks okay to me, as they have provided detailed experiments to study the three phases of the proposed framework, i.e., evaluation, estimation, and ensembling, where they study the effect of using different approaches for risks, learners, and baselearners. However, some things were not clear, as I did not find proper definitions for them, e.g., AllMeta and AllBase.
Supplementary Material: I did not check this part as this was not needed.
Relation To Broader Scientific Literature: 1. As per my understanding, some work on automated CATE already exists, as also referenced by the authors. However, the authors' discussion in the related work focuses on the conclusions of the existing auto-CATE work, which were dataset/setting specific. The authors should have directly focused on their own approach and on how it adds value to the literature. Is it just another paper/codebase for developing an auto-CATE pipeline?
Essential References Not Discussed: As discussed above, the discussion is focused only on meta-learners and their automated training, which obviously does not cover all CATE methods in general.
Other Strengths And Weaknesses: Strengths:
1. This paper discusses an important challenge of hyperparameter tuning and training, faced in the causal inference literature for CATE estimation. They develop auto-ml solution for CATE.
2. The paper is generally clear and well-organised into clear heading and subheadings with relevant discussions.
3. Overall experimental analysis covers all aspects of the proposed framework, considering 4 benchmark datasets.
4. Authors have ensured the credibility of the work by releasing the code.
Weaknesses:
1. As discussed above, the authors did not clearly place their contribution against the literature: some work on auto-CATE already exists, and they did not clarify how their proposal differs from the existing ones; instead, they focused on the findings of the existing works.
2. The auto-CATE discussion is limited to meta-learners only, which does not cover many other categories of CATE learners. So, in my opinion, the contribution is not sufficient for this venue.
## Update after the rebuttal and follow-up discussion
Dear authors, thanks for your response. I am convinced with your response to Related Work and Limited data points. So, I am raising my score from 2 to 3. All the best!
Other Comments Or Suggestions: 1. To find a risk measure in Stage 1 (Evaluation), you are training CATE learners, including hyperparameter tuning, which is what is required in the standard case for a CATE learner. So, it appears to be a chicken-and-egg problem to me. How do you make choices in the first stage? Moreover, is it possible to repeat the 3 phases until convergence to get better estimates?
2. Will your framework work in limited-data settings? You make choices for risk estimators on the validation data, which is a smaller part of the given dataset, and this is likely to lead to unstable estimates if the choices are made on a small dataset.
3. Moreover, your AutoCATE framework seems computationally expensive to me. Comment on this and add it as a limitation, if needed. Also, I did not notice a significant improvement from combining multiple risks in the evaluation. So, is it useful compared to the additional computation/complexity it adds?
Questions For Authors: As discussed above, my concerns relate, first, to the contributions of this work as compared with the literature and, second, to AutoCATE being limited to meta-learners only.
Code Of Conduct: Affirmed.
Overall Recommendation: 3

---
Rebuttal 1:
Rebuttal: Thank you for your thorough review!
___
## Related work
While prior work automates parts of CATE estimation, no approach–to our knowledge–provides an end-to-end automated framework. Existing work only automates part of our approach (e.g., learning pseudo-outcomes with AutoML). In contrast, AutoCATE extends automation across the entire pipeline, integrating model selection, preprocessing, and ensembling. In doing so, we address novel challenges like balancing data for training and validation or using multi-objective optimization. Our empirical study is also considerably more extensive, covering diverse datasets and revealing new insights into model selection.
We will revise our paper to better highlight these distinctions.
## Metalearners
AutoCATE is a general-purpose framework for CATE estimation, not limited to metalearners. While we use them for their flexibility, our key contribution is automating selection, tuning, and validation—critical for all CATE methods. Other approaches (e.g. TARNet) can easily be integrated in this framework. In future versions, we aim to support custom models through a user-friendly API.
Similar to general AutoML frameworks (e.g. FLAML), which do not incorporate each supervised ML algorithm, AutoCATE focuses on structured automation within an extensible search space. This remains a fundamental challenge, regardless of specific estimators.
## Clarity: AllMeta and AllBase
These terms define search spaces in our automated ML pipeline for CATE validation and estimation. AllMeta includes all metalearners in our search (S, T, DR, X, R, RA, Lo, Z, U, F), covering all known techniques in the literature. AllBase consists of the nine base learners in our framework (see Figure 1). BestMeta and BestBase are selected subsets based on performance—metalearners are chosen from experiments and the baselearners are known to work well for tabular data.
We will clarify these in the revised paper—thank you for pointing this out! If anything else is unclear, we would be happy to clarify further.
## AutoCATE stages
In Stage 1, AutoCATE constructs pseudo-outcomes as proxies for ground truth CATE. These are derived from metalearners, but not full CATE learners themselves: e.g., we use the DR-Learner’s pseudo-outcome directly, but without fitting the final model to predict it. This step is also performed on _validation_ data, not training data.
Stage 2 trains a CATE estimator on training data and evaluates it using pseudo-outcomes from Stage 1. As evaluation depends on a risk measure, it logically precedes estimation, avoiding a chicken-and-egg problem. Both stages involve choices not dictated by data (e.g., which risk measure to use), which we study systematically.
Repeating stages until convergence is not straightforward, as estimation and evaluation are independent stages with distinct data splits. While evaluation informs estimator selection, estimators do not directly influence evaluation. Nevertheless, exploring these interactions (e.g., by sharing information in their optimization) could be an interesting avenue for future work.
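To make the Stage 1 construction concrete, here is a sketch of the standard doubly-robust pseudo-outcome underlying the DR-Learner example above (variable names are illustrative; in AutoCATE the nuisance predictions are fitted on the validation data):

```python
import numpy as np

def dr_pseudo_outcome(y, t, mu0, mu1, e):
    """Doubly-robust pseudo-outcome, used as a proxy CATE label.

    y: observed outcomes; t: binary treatment indicator;
    mu0, mu1: outcome-model predictions under control/treatment;
    e: propensity scores P(T=1 | X).
    """
    mu_t = np.where(t == 1, mu1, mu0)
    # Inverse-propensity-weighted residual plus plug-in effect estimate
    return (t - e) / (e * (1.0 - e)) * (y - mu_t) + mu1 - mu0
```

When the nuisance models are exact, the residual term vanishes and the pseudo-outcome reduces to `mu1 - mu0`; more generally, it stays consistent for the CATE if either the outcome models or the propensity model is correct, which is what makes it a usable validation target without ground-truth effects.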
## Limited data
We agree that limited data can be a challenge for CATE validation. We emphasize the importance of having sufficient data for validation to mitigate such issues (as highlighted in Figure 3). We also study cross-validation (see Figure 7 and Table 5), which can help to stabilize the estimates for smaller datasets. Empirically, we observe good results for the IHDP data with only 672 instances. Some baselearners (e.g. linear regression) can perform reasonably well with very little data.
That said, we acknowledge that for very small datasets, other approaches may be more appropriate. We will include a more detailed discussion on scenarios where our framework may not be optimal in the conclusion. We refer to the response to reviewer NSJK, where we enlist more limitations.
## Computational complexity
Efficiency is an important consideration in our framework. AutoCATE runs fairly quickly on small to moderate-size datasets, often completing in minutes locally (see Table 9). AutoCATE also allows for trade-offs by limiting search space, reducing trials, or choosing faster learners (see Figure 9). Further improvements could be made based on more efficient search and pruning algorithms. Nevertheless, we agree that computational complexity is a potential limitation and will discuss it more in depth in the revision.
## Multiple risk measures
We provide an initial exploration of combining risk measures. While no combination consistently outperforms the best single measure, we see significant potential for further development (see Figure 13e). Though not yet optimal, it provides a solid foundation for future improvements in both performance and efficiency. Although this approach can increase computational complexity, reusing components like propensity scores helps mitigate this impact.
___
Thank you once again! Please let us know if there are any remaining concerns.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal.
1. **Related work**: Unfortunately, only proper citations and comparison/discussion will convince me about the value added by your paper. Your discussion of related work is not based on comparing the approach of existing works for hyperparameter tuning against yours rather on their conclusions.
2. **Metalearners**: metalearners form only one direction for ITE estimation. If your framework is generic then you should have considered few examples from other research directions of ITE.
3. *Limited data*: How much of the 672 IHDP instances is used as validation data? Training models on the validation split, as compared to the rest of the data, seems to lead to unstable results. So, this framework will work when you have sufficiently large datasets.
My concerns about the correct placement of this work w.r.t. the literature, and the limited applicability of the framework to only a subset of ITE methods, remain unaddressed, so I will keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response!
___
## Related work
We believe there may be a misunderstanding: to our knowledge, the problem of automated CATE has _not_ been addressed in prior work. Existing research only addresses **specific components**, such as tuning nuisance models, but none propose an end-to-end automated system for CATE estimation.
We categorize the literature on CATE methods into estimation and evaluation (see Section 2.2). For _estimation_, many methods exist, but practical guidelines for their tuning and implementation are lacking. For _evaluation_, we provide a more detailed discussion below.
### Comparison with CATE evaluation
This line of work has a different focus: how to evaluate a model for CATE estimation. Importantly, none of these works aim to automate CATE estimation end-to-end. Instead, they evaluate a fixed pool of CATE estimators with a fixed pool of evaluation measures and compare their performance. Nevertheless, we compare the components of these works in the table below.
Our work differs significantly from these efforts. Only Mahajan et al. (2018) (partially) _automate evaluation_ by tuning the nuisance models with AutoML. However, they do not address how long to tune these models or how to automatically find the best CATE estimator. In contrast, AutoCATE fully automates the entire process—estimation, evaluation, and ensembling—integrating model selection, hyperparameter tuning, preprocessing, and ensembling.
Moreover, while these works fix the estimator pool and overlook _tuning efficiency_, we analyze performance as a function of tuning iterations. This enables us to explore trade-offs in estimator quality under finite compute, a critical practical concern not addressed in earlier studies. In addition, the _search space_ in our work is far more extensive than in previous studies, with 2,187 pipeline configurations (excluding hyperparameters), based on 9 baselearners and 10 metalearners—many of which combine different baselearners, an approach rarely considered in prior research.
|Paper|Estimation search|Evaluation search (nuisance model)|Search efficiency considered?|Baselearners|Metalearners|
|-|-|-|-|-|-|
|Schuler et al. (2018)|Pre-specified pool of estimators|Exhaustive grid search|No|2|3|
|Mahajan et al. (2018)|Pre-specified pool of estimators|FLAML|No|5|7|
|Curth & van der Schaar (2023)|Pre-specified pool of estimators (with underlying nuisance models tuned using grid search)|Exhaustive grid search|No|2|5|
|Doutreligne & Varoquaux (2023)|Pre-specified pool of estimators|Random search|No|2|3|
|AutoCATE |Random search|Random search|Yes|9|10|
### Why this matters
Our core innovation lies in framing CATE estimation as a _system design_ problem—optimizing the configuration of a general protocol that can be applied in a range of practical scenarios. As such, our empirical analysis (Section 4) goes beyond existing work by evaluating the influence of a much _wider range of design choices_ across a wide variety of settings (four benchmark families, spanning binary and continuous outcomes, as well as different sizes and dimensionalities), which no prior study has systematically examined.
We hope this clarification resolves the reviewer’s concern. We will revise the paper to include this discussion and make these distinctions more explicit.
___
## Metalearners
While exploring alternative CATE estimation methods would be valuable in future work, the goal of our paper is to develop a **general-purpose framework for the automated selection, tuning, and evaluation of CATE estimators**, rather than benchmarking every class of estimator.
Metalearners are a natural candidate to _validate the usefulness of our framework_ as they enable a very large search space of 2,187 estimator pipelines without considering hyperparameters. This way, we are able to empirically validate our primary contributions and rigorously analyze key design choices, including optimization trials, ensembling strategies, and model selection approaches. In our opinion, these questions are _independent_ of the considered CATE estimator classes.
Similarly, related work on CATE evaluation (e.g., Schuler et al., Mahajan et al., Curth & van der Schaar) generally also focuses exclusively on metalearners.
We hope this clarifies that our choice for metalearners reflects a design decision aligned with the paper's scope and contributions, not a limitation of the framework.
___
## Limited data
The IHDP dataset has 672 total instances, which we split into training and validation sets. _How much data should we allocate for validation?_ We explicitly study this in Figure 3. Results are fairly robust when allocating 30–70% to validation. Performance drops at the extremes (e.g., 10% or 90%), as expected. Again, cross-validation could help improve results in these settings—see Figure 7 and Table 5.
We will discuss this limitation in the revision (see our answer to reviewer NSJK).
___
Thank you for your time and effort during this review process! | Summary: This paper presents AutoCATE, an automated, end-to-end framework for estimating Conditional Average Treatment Effects (CATE). The core motivation is that while ML methods have made significant advancements in causal inference, their adoption remains limited due to the complexities in pipeline selection, hyperparameter tuning, and validation.
To address these issues, the authors propose framing the problem as a counterfactual Combined Algorithm Selection and Hyperparameter (CASH) optimization and develop AutoCATE, a framework that integrates evaluation, estimation, and ensembling into a single automated solution. The framework searches across various ML models, metalearners, and hyperparameters to optimize CATE estimation.
Claims And Evidence: Claim 1: AutoCATE is “the first end-to-end, automated solution” tailored for CATE estimation. The authors provide an overview of existing libraries (e.g., CausalML, EconML) and show that, while these offer various metalearners or partial automation, they do not perform a comprehensive search across risk measures, model architectures, hyperparameters, and ensembling.
Evidence: The paper includes a table comparing software packages, demonstrating that other libraries focus on some metalearners or on certain tuning aspects, whereas AutoCATE addresses the entire pipeline.
Claim 2: Jointly optimizing evaluation methods and ML pipelines boosts performance over standard “predict the observed outcome” baselines and over conventional single metalearner approaches.
Evidence: Empirical results indicate that simply optimizing based on the observed-outcome (µ) risk can lead to suboptimal CATE predictions, whereas using specialized risk measures (e.g., T-risk, DR-risk) aligns model selection with the actual causal objective. Experiments also show improved accuracy (in terms of √PEHE or Qini-based metrics) when comparing AutoCATE to typical T-/S-Learners that only tune on one group outcome at a time.
Claim 3: Metalearners like T-, DR-, and RA-Learners tend to achieve competitive or best performance on average.
Evidence: Through ablation studies, the paper shows consistent strong results for T-, DR-, and RA-Learners, whereas others (like U- or R-Learners) can produce outlier performance in some data sets. This is measured through extensive random-search trials and is illustrated with results in tables and plots.
Overall, the major claims are around: (1) the novelty of a fully automated pipeline specifically for CATE, (2) the demonstrated importance of risk measures aligned with the causal objective, and (3) empirical benefits of ensembling. The evidence rests on thorough experiments across four well-known semi-synthetic causal benchmarks
Methods And Evaluation Criteria: - The paper includes multiple metalearners (S-, T-, Lo-, DR-, RA-, etc.) and a large suite of baselearners (random forests, gradient boosting, MLPs, etc.). Metalearners are combined with different pseudo-outcome-based risk measures (e.g., T-risk, DR-risk, kNN-risk) to evaluate the pipeline on held-out or cross-validation data.
- Evaluation metrics for final performance include √PEHE (for measuring overall error on the true potential outcomes in synthetic data) and AUQC (for ranking-based tasks).
- The approach is sensible for observational data with confounding: it respects standard assumptions (unconfoundedness, overlap). The use of pseudo-outcomes or IPW-based risk measures is standard in causal inference, but the novelty lies in systematically searching across them and ensembling.
From a methodological perspective, the authors’ approach to separating the “evaluation pipeline” from the final “estimation pipeline” is carefully designed: it includes data splitting so that the pseudo-outcome (or other risk) is learned on a separate portion, then used to guide model selection. This is appropriate for CATE tasks where the ground truth is never observed for each individual’s counterfactual.
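The separation described above can be made concrete with the doubly robust (DR) pseudo-outcome: on held-out data, a candidate CATE model is scored against the pseudo-outcome rather than the unobservable individual effect. A minimal numpy sketch with toy data and oracle nuisance estimates for clarity:

```python
import numpy as np

def dr_pseudo_outcome(y, t, e, mu0, mu1):
    """Doubly robust pseudo-outcome; its expectation is the CATE under
    unconfoundedness and overlap."""
    return mu1 - mu0 + t * (y - mu1) / e - (1 - t) * (y - mu0) / (1 - e)

def dr_risk(tau_hat, y, t, e, mu0, mu1):
    """Score a candidate CATE model against the pseudo-outcome on held-out data."""
    phi = dr_pseudo_outcome(y, t, e, mu0, mu1)
    return float(np.mean((tau_hat - phi) ** 2))

# Toy check with oracle nuisances and noiseless outcomes: phi equals the true CATE.
n = 500
rng = np.random.default_rng(0)
e = np.full(n, 0.5)                    # randomized treatment
t = rng.binomial(1, e)
mu0, mu1 = np.zeros(n), np.ones(n)     # true outcome surfaces, so tau = 1
y = np.where(t == 1, mu1, mu0)         # noiseless observed outcomes
risk_true = dr_risk(np.ones(n), y, t, e, mu0, mu1)   # candidate = true CATE
risk_zero = dr_risk(np.zeros(n), y, t, e, mu0, mu1)  # candidate = no effect
```

With perfect nuisances and no noise, the true-CATE candidate attains zero DR-risk while the no-effect candidate does not, which is the alignment property model selection relies on.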
Theoretical Claims: The work frames the search problem in a counterfactual CASH setting and cites known results about pseudo-outcomes converging to the true treatment effect under standard assumptions (e.g., DR-/R-Learners). However, the paper does not provide new proofs or theoretical derivations. Instead, it references known theoretical results (e.g., asymptotic unbiasedness of DR, properties of R-Learners).
No obvious errors appear in the sketches of theory or in the references to established results. The theoretical claims are primarily restatements of known properties (e.g., each metalearner’s consistency under standard conditions).
Experimental Designs Or Analyses: - The experiments span four widely used semi-synthetic data sets in CATE research: IHDP, ACIC, Twins, and News. They vary in size, dimensionality, data-generating processes, and outcome type (binary vs. continuous).
- The evaluation metrics (e.g., √PEHE, AUQC) are standard, and each data set has a known “ground truth” effect or can approximate it, so the experimental design is appropriate.
- Model comparisons systematically vary key design choices, risk measures, baselearners, data splits, and provide results in tables and line plots.
- An ablation study approach is used to highlight which parts of AutoCATE (like using T-risk vs. DR-risk or enabling feature selection vs. not) matter most.
- Overall, the experimental setup is sound. The sample sizes and repeated runs (e.g., 50–200 trials, multiple random seeds) mitigate random variation.
A potential limitation is that the paper relies heavily on semi-synthetic data, so real-world complexities (e.g., non-stationarity, unobserved confounders) may not always arise. The authors acknowledge this and suggest future applications to purely real-world data.
Supplementary Material: The authors include an extensive appendix that details the metalearners, risk measures, ablation studies, and additional results (e.g., effect of number of cross-validation folds). They also provide a comparison table of available CATE software packages, plus some usage examples. I have not identified any gaps between the main paper and the supplementary information: the appendices appear to comprehensively support the main results.
Relation To Broader Scientific Literature: - In AutoML, the authors draw parallels to general-purpose tools like AutoGluon, H2O AutoML, or FLAML, pointing out that standard AutoML focuses on conventional supervised tasks (e.g., classification, regression) and does not address the challenges of CATE estimation (lack of ground truth, confounding, etc.).
- For CATE methods, they discuss standard metalearners such as the T-, S-, X-, DR-, R-Learners, and highlight that prior work compares them in isolation or tunes them partially, but has not combined them in one broad pipeline that includes “evaluation pipeline search.”
- They position their approach as bridging these two areas: AutoML + CATE-specific validation.
This situates the paper well in an emerging area of interest, applying automated search for valid causal effect estimation.
Essential References Not Discussed: The paper cites many standard references (e.g., R-, DR-, X-Learners) and relevant AutoML works (FLAML, H2O, etc.). One might also compare with the EconML approach, which implements an “R-risk ensembling.” The authors do mention EconML, but they might elaborate on how exactly EconML’s ensemble compares with the new stacking approach. Nonetheless, the references in the paper are quite thorough, and there do not appear to be major missing lines of prior research.
Other Strengths And Weaknesses: Strengths:
- The idea of decoupling an “evaluation pipeline” with multiple risk measures from the “estimation pipeline” and then ensembling is a strong conceptual contribution that addresses known challenges in model selection for causal inference.
- The thorough ablation studies reveal which metalearners and which risk measures typically excel, giving the community new insights.
- The open-source release (in Python) lowers the barrier to entry for robust causal effect estimation, potentially encouraging broader adoption.
Weaknesses:
- Real-world data sets that contain unobserved confounders or strong domain shifts are not tested extensively; thus, the framework’s performance under such violations remains open.
- The search space can become large and computationally expensive (especially for high-dimensional data), though the authors do note ways to tune time or limit certain baselearners.
- The paper does not propose new theoretical results for metalearners, it mainly integrates known approaches. However, the overall pipeline is novel, so this does not detract from the paper’s main contributions.
Other Comments Or Suggestions: See the above weaknesses section.
Questions For Authors: Are there heuristics that shrink the space automatically based on intermediate findings?
You discuss combining multiple risk measures (e.g., DR + T + kNN). Did you consider dynamic weighting or meta-learning that adaptively emphasizes certain pseudo-outcomes depending on early performance signals?
Some results show that simple top-k ensembles can outperform advanced stacking in certain data sets. Could there be reasons (e.g., overfitting to pseudo-outcomes) why stacking underperforms? Are there strategies to mitigate it?
Do you have plans to incorporate domain knowledge or observational data diagnostics (e.g., checking positivity/overlap) more directly into AutoCATE’s workflow (e.g., automatic filtering of extreme propensity scores)?
Ethical Review Flag: Flag this paper for an ethics review.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful review–this is highly appreciated!
## Real-world data and violations
We agree that these are crucial considerations. While we use real-world data to validate AutoCATE as much as possible (e.g., Twins and the uplift data in Appendix D.4), we acknowledge that more real-world validation would be useful. Unfortunately, we are limited by the fact that the CATE is unknown in real data.
We agree that considering violations of identifiability assumptions is important. We have added a synthetic experiment varying selection bias (γ) and found that while performance declines with higher bias, more optimization trials can partially mitigate the effect (we refer to the response to Reviewer NSJK). Even with strong overlap violations (γ > 10), AutoCATE remains competitive. We will update the paper to clarify the data-generating process and these findings. Finally, we also point out AutoCATE’s good performance on IHDP, which also contains overlap violations [1].
More generally, we recognize the need for improving robustness to such violations and will include this limitation in the revised version.
## Domain knowledge and observational data diagnostics
Thank you for the insightful suggestion. We agree that incorporating domain knowledge and data diagnostics would be valuable, and we would love to support such features in future releases. AutoCATE allows for trimming extreme pseudo-outcomes (e.g., from very small propensity scores), though this was not included in our paper. We also agree that checking overlap/covariate balance and filtering propensity scores would be useful additions.
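For concreteness, the propensity trimming mentioned above can be as simple as clipping; the [0.01, 0.99] bounds here are illustrative, not AutoCATE defaults.

```python
import numpy as np

def clip_propensities(e, lo=0.01, hi=0.99):
    """Trim extreme propensity scores before forming IPW-style pseudo-outcomes.

    The bounds are illustrative; suitable values depend on the application.
    """
    return np.clip(e, lo, hi)

e = np.array([0.001, 0.3, 0.999])  # near-violations of overlap
clipped = clip_propensities(e)     # extremes pulled to the bounds
```

Clipping bounds the inverse-propensity weights, trading a small bias for a large variance reduction in the resulting pseudo-outcomes.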
While sensitivity analyses for hidden confounders are appealing, they may fall outside the scope of our automated approach. Instead, we aim for AutoCATE to be complementary to other packages (e.g. DoWhy), which address other aspects of causal inference. Our main goal is _automating_ CATE estimation, a gap we see in current tools. Nevertheless, we see many remaining challenges for truly supporting practical adoption of these methods.
## Computational complexity and heuristics
We have designed AutoCATE with efficiency in mind, using parallelization via Optuna, efficient ML implementations via scikit-learn, and configurable constraints. Computational times for different datasets are summarized in Table 9.
Nevertheless, there are several opportunities to improve AutoCATE’s speed. We currently use a naive random search algorithm without heuristics. As our search is implemented with Optuna, more advanced search algorithms and pruning strategies can easily be used instead. Finally, different metalearners have widely varying time complexities (Figure 9). Future research could try to use these discrepancies to further optimize the search.
In the revised version, we will stress the complexity of AutoCATE more explicitly as a limitation.
## Combining risk measures
Thank you for these insightful suggestions!
We explore various static strategies for combining risk measures (e.g., averaging, ranking) and compare them empirically in Table 3. While we analyse correlations between risk measures (Figure 13a), we do not yet use this information, though it seems promising.
Our current work focuses on the feasibility of combining risk measures using simple approaches. While the suggested dynamic approaches are really exciting directions, their implementation is challenging–key issues include the lack of ground truth and the variance of risk measures. While tackling these challenges is beyond the scope of this work, we see them as important future research directions and hope our framework can serve as a foundation for further advancements.
## Stacking
Why might stacking underperform? Without a ground truth CATE, validating and tuning stacking weights is challenging. Stacking may overfit pseudo-outcomes with high variance or outliers. The squared error loss is sensitive to outliers, and alternatives like Huber loss could improve robustness. Stacking also requires more training data, and with limited data, its complexity may hurt generalization.
To improve stacking, we could try to filter pseudo-outcomes based on reliability, possibly using risk measure agreement. Multi-objective stacking could create more generalizable models. Using loss functions like Huber loss could mitigate noisy outcomes. Further research into tailored stacking and ensembling for CATE estimation is needed. Limiting stacking to top models could also improve stability.
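To illustrate the Huber-loss idea, here is a hedged sketch of stacking candidate CATE predictions against a noisy pseudo-outcome using scikit-learn's `HuberRegressor`; all data and candidate models are synthetic stand-ins, not AutoCATE's implementation.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(0)
n = 400
tau = rng.normal(1.0, 0.5, n)              # unobserved "true" CATE (toy)
preds = np.column_stack([                  # three synthetic candidate models
    tau + rng.normal(0, 0.2, n),           # good candidate
    tau + rng.normal(0, 0.5, n),           # mediocre candidate
    rng.normal(0, 1.0, n),                 # uninformative candidate
])
pseudo = tau + rng.normal(0, 0.5, n)       # noisy pseudo-outcome target
pseudo[:10] += 10.0                        # inject outliers

# The Huber loss down-weights the outlying pseudo-outcomes when learning weights.
stacker = HuberRegressor().fit(preds, pseudo)
ensemble = stacker.predict(preds)
```

Compared with squared-error stacking, the Huber loss limits the influence of the injected outliers on the learned combination weights.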
___
Thank you again for your detailed and thoughtful review. We agree that there is much work ahead, and while some suggestions are beyond the scope of this paper, we are happy to consider any specific approaches you feel should be included for acceptance within the remaining time.
___
[1] Curth, A., Svensson, D., Weatherall, J., & van der Schaar, M. (2021). Really doing great at estimating CATE? A critical look at ML benchmarking practices in treatment effect estimation. NeurIPS. | Summary: The authors propose a pipeline for automating the several design choices required for CATE estimation; from preprocessing datasets to different risk measures for model selection. The pipeline is divided into three stages corresponding to the following three questions; what risk measure should be used for model selection, what CATE estimators should be trained, and finally how should be select over the trained CATE estimators and combine them for better generalization. The authors conduct experiments on widely used benchmarks and present interesting insights regarding the numerous design choices in CATE estimation.
## Update after rebuttal
Thanks for the rebuttal! My concerns have been addressed and I want to retain my rating for acceptance.
Claims And Evidence: **Strengths**
- The authors have done a really good job at empirically validating all the design choices involved with the proposed framework AutoCATE! The scale of the empirical study is quite comprehensive; experiments involve a variety of meta-learners, base-learners, and risk measures. This makes their findings interesting and significant for practitioners and future work, and their software package should also make it easy for practitioners to adopt the proposed pipeline.
**Weaknesses**
- My concerns are mostly regarding the claims and insights regarding the end-to-end automation part of AutoCATE.
- Regarding the experiment in section 5.5, the authors should follow the AutoML procedure to tune the S/T-Learner (Mahajan et al. 2023) instead of manual grid search. This would ensure stronger baselines and a fair comparison with them. Similarly, the authors can construct meta-learners with nuisance models trained via AutoML (Mahajan et al. 2023), and that could serve as an alternative set of CATE estimators for the experiment in section 5.3 (Estimation) as well. For example, the BestBase estimator currently involves a manual search over a grid of different algorithms and hyperparameters, but this could be automated via AutoML.
- I am not sure what the main conclusions are from the experiments with combined risk measures. The authors did not experiment with many combinations, and only considered combining T & DR risk and different T risks. So the experiments are not exhaustive, which makes it hard to interpret what the main trend should be and what recommendations can be made. A similar comment applies to ensembling with multiple risk measures; I think the strategy of combining risk measures is the most novel aspect of the work, so analyzing it in depth would make the paper stronger.
Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the problem at hand. All the baselines and benchmarks used in this paper are widely used in causal inference.
Theoretical Claims: There are no theoretical claims in the paper.
Experimental Designs Or Analyses: Yes, I checked the soundness/validity of all the experiments in the paper, and the experiment design doesn't have any flaws.
Supplementary Material: Yes, I checked all parts of the supplementary material.
Relation To Broader Scientific Literature: This work builds upon the prior literature [1, 2, 3] on empirically analyzing model selection strategies for CATE estimation. Model selection in causal inference is a challenging task due to the fundamental challenge of unobserved counterfactual potential outcomes, which led to several heuristics being proposed for this task without much clarity on when a certain strategy must be used. Hence, prior empirical studies [1, 2, 3] empirically analyzed these model selection strategies, often finding that several strategies can be optimal. This paper further builds upon this, by proposing novel ways of combining multiple model selection strategies and also extensively analyzing the role of other factors like dataset preprocessing that was missed by earlier works.
References
- [1] Schuler, Alejandro, Michael Baiocchi, Robert Tibshirani, and Nigam Shah. "A comparison of methods for model selection when estimating individual treatment effects." arXiv preprint arXiv:1804.05146 (2018).
- [2] Curth, Alicia, and Mihaela Van Der Schaar. "In search of insights, not magic bullets: Towards demystification of the model selection dilemma in heterogeneous treatment effect estimation." In International conference on machine learning, pp. 6623-6642. PMLR, 2023.
- [3] Mahajan, Divyat, Ioannis Mitliagkas, Brady Neal, and Vasilis Syrgkanis. "Empirical analysis of model selection for heterogeneous causal effect estimation." arXiv preprint arXiv:2211.01939 (2022).
Essential References Not Discussed: No, I believe all essential references have been discussed to the best of my knowledge. The authors have written a detailed related works section.
Other Strengths And Weaknesses: **Strengths**
- Authors introduce several novel components. To the best of my knowledge, analyzing the role of dataset preprocessing and dataset splits for training/evaluation in CATE estimation has not been done in prior works. Further, the authors experiment with novel strategies for model selection with multiple risk measures and ensembling of CATE estimators.
- The paper overall is well written and organized which makes it easy to follow and understand main results. The experiment results are clearly presented with good discussion around them. I especially like their comparisons with the findings from prior benchmarking studies for CATE model selection.
Other Comments Or Suggestions: - It would be nice to have statistics regarding the scale of the empirical study before section 5; like how many risk measures, how many meta-learners and base-learners for estimation, how many estimators are included for the model selection study, etc.
Questions For Authors: For major questions, please refer to the the "claims and evidence" sections above.
- How are the multiple risk measures combined? Do we take the average of risk measures, like average of T and DR risk in the experiments?
- How do the authors obtain the best meta-learner or best base-learner? Is it based on how well they fit the observational data?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you for your time and effort in reviewing our work!
## Obtaining the best meta/base-learner and training nuisance models with AutoML
Please allow us to clarify our approach. AutoCATE follows an AutoML-based procedure to tune ML pipelines at multiple stages: first, to optimize risk measures for model selection, and second, to construct optimal CATE estimators by searching over meta- and base-learners. We obtain the best CATE estimator–i.e., a metalearner constructed with one or more baselearners–in the second stage, based on the risk measure selected in the first stage. As the reviewer suggests, this indeed ensures that we pick the CATE estimator that best predicts the (observational) validation data.
Our design ensures a fully automated, end-to-end approach that integrates evaluation, estimation, and ensembling within a single framework (see Section 4.5). Importantly, all components of a metalearner (i.e., each baselearner) are tuned automatically and simultaneously, ensuring the best possible downstream performance. We believe that this is one of the key strengths of our work compared to previous approaches!
We hope that these additions clarify our approach. We will update the paper to make this aspect more clear. Nevertheless, if there are any remaining questions, please do let us know.
## Tuning the S- and T-Learner
Regarding the tuning of S-/T-Learners, this is done based on a random search with the number of trials equal to AutoCATE's, allowing for a fair comparison. As such, we do not use a manual grid search for either AutoCATE or the benchmarks. Mahajan et al. (2023) use AutoML separately for evaluation and for learning the nuisance models underlying CATE estimators. As neither S- nor T-Learners use nuisance models, we believe that our approach is similar to the one in Mahajan et al. (2023). Additionally, they do not consider the impact of only using a limited number of optimization trials, which is an important consideration in practice. Nevertheless, if there are distinctions we have missed, we would be happy to look into them and adjust the experiments.
We will revise our manuscript to better clarify the reasoning behind and training procedure for the benchmarks.
## Combining risk measures
Thank you for these thoughtful questions. We appreciate the opportunity to clarify our approach and conclusions.
### How are risk measures combined?
We explore multiple strategies for combining risk measures (see also Appendix B.5), including:
- Averaging normalized risk measures (as in Table 1b). This approach corresponds to the reviewer’s suggestion.
- Averaging rankings of risk measures to improve robustness to outliers.
- Euclidean distance to the origin (best possible performance).
- Selecting all Pareto-optimal points.
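A minimal numpy sketch of three of these combination strategies (Pareto selection omitted); this is illustrative, not AutoCATE's exact implementation, and the risk matrix is hypothetical.

```python
import numpy as np

def combine_risks(R, strategy="mean"):
    """Combine a (models x measures) risk matrix into one score per model.

    Assumes each risk measure varies across models (non-constant columns).
    """
    Rn = (R - R.min(axis=0)) / (R.max(axis=0) - R.min(axis=0))  # min-max normalize
    if strategy == "mean":    # average of normalized risks
        return Rn.mean(axis=1)
    if strategy == "rank":    # average of per-measure rankings (robust to outliers)
        return np.argsort(np.argsort(R, axis=0), axis=0).mean(axis=1)
    if strategy == "euclid":  # distance to the ideal point (the origin)
        return np.linalg.norm(Rn, axis=1)
    raise ValueError(strategy)

R = np.array([[0.2, 0.1],   # model A: low on both measures
              [0.9, 0.8],   # model B: high on both
              [0.1, 0.9]])  # model C: the measures disagree
best = {s: int(np.argmin(combine_risks(R, s))) for s in ("mean", "rank", "euclid")}
```

On this toy matrix all three strategies select model A, but they can disagree when risk measures conflict more strongly, which is where the choice of strategy matters.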
### What are the main conclusions?
Our experiments serve as an initial exploration into combining risk measures. While our hypothesis was that relying on multiple risk measures could enhance robustness and performance, we find that no strategy consistently outperforms using a single T-risk. However, results indicate that this approach is promising, and further research—such as leveraging risk measure correlations (Figure 14a) or advanced ensembling—could improve performance. Therefore, we believe there is evidence that this novel approach provides a fruitful and promising direction for future research on CATE estimator validation. By highlighting the potential of this approach and providing a foundation for future research with our software package, we hope to encourage the community to further explore these ideas.
We will update our paper to clearly state the main conclusions.
## Scale of the empirical study
Thank you for this great suggestion! We agree that summarizing the scale of our empirical study upfront helps highlight our contributions, and we will incorporate this more explicitly in the revised manuscript.
In the main body, we present experiments across a total of 247 distinct datasets, across four benchmark families, spanning binary and continuous outcomes, as well as different sizes and dimensionalities. Additionally, we include additional experiments on synthetic data for this rebuttal. In the appendix, we also explore AutoCATE on two uplift datasets.
AutoCATE’s full search space consists of 2,187 possible pipelines (3 feature selection × 3 scaling × 27 meta/base-learner configurations × 9 base learners), excluding hyperparameters (Appendix B.3). It incorporates 8 different risk measures. A full overview of its configuration options can be found in Appendix B.6.
We appreciate the reviewer’s suggestion and will ensure this information is presented more explicitly in the paper.
___
Once again, thank you for your detailed review! While we hope that these responses address your concerns, please let us know if there are any remaining points. We would be happy to engage further if needed.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal! My concerns have been addressed and I want to retain my rating for acceptance. | Summary: The paper presents AutoCATE, an automated framework for CATE estimation, optimizing model selection, tuning, and validation via counterfactual Combined Algorithm Selection and Hyperparameter (CASH) optimization. It unifies evaluation, estimation, and ensembling, automating key design choices for improved generalization. Experiments on benchmarks show AutoCATE outperforms existing methods, and it is released as open-source software for broader adoption.
Claims And Evidence: The authors claim that AutoCATE surpasses conventional CATE estimation methods and provide empirical evaluations on benchmark datasets that show AutoCATE achieving superior results compared to existing approaches.
Methods And Evaluation Criteria: Yes, the evaluation criteria make sense, however, it could be expanded to cover more scenarios (see section “Questions For Authors”).
Theoretical Claims: The paper makes no theoretical claims.
Experimental Designs Or Analyses: Yes, the experimental designs or analyses are valid.
Supplementary Material: I briefly looked at the appendix.
Relation To Broader Scientific Literature: NA
Essential References Not Discussed: NA
Other Strengths And Weaknesses: - The paper is well-written and easy to follow.
Other Comments Or Suggestions: - Not sure if ICML is the right venue for this paper, as it lacks theoretical contributions (e.g., new proofs, formal guarantees, or novel optimization formulations).
Questions For Authors: - Besides these well-known datasets, why not use fully synthetic datasets to systematically control the degree of selection bias and covariate shift? This would allow for precise evaluation of AutoCATE's robustness across different levels of bias and distribution shift, ensuring clearer insights into its generalization capabilities.
- To what extent can users intervene in the automated process? Are there provisions for customizing or overriding certain steps in the pipeline to tailor it to specific needs or preferences?
- What are the known limitations or potential failure cases of AutoCATE? Are there scenarios where it might not be the optimal choice for CATE estimation?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review!
## Robustness to selection bias
We agree that synthetic data allows for precise control over selection bias and covariate shift, enabling a more systematic evaluation.
We have added a synthetic experiment where we vary the degree of selection bias (controlled by the parameter $\gamma$). Our results indicate that while AutoCATE’s performance degrades as selection bias increases, increasing the number of search trials helps mitigate this effect. Even under strong overlap violations ($\gamma > 10$), AutoCATE can still achieve good performance.
We also compare AutoCATE to benchmark models across different bias levels. The results confirm that AutoCATE consistently performs competitively with each baseline in settings with moderate bias and remains relatively robust under extreme bias.
We will update the paper to include a detailed explanation of the data-generating process (DGP) and expand on these findings.
Thank you for this valuable suggestion—we believe this addition strengthens our evaluation and provides clearer insights into AutoCATE’s generalization capabilities.
#### _Synthetic data: setup_
|Gamma|0|1|10|100|1000|
|-:|:-:|:-:|:-:|:-:|:-:|
|Extreme propensities ([0, 0.01) and (0.99, 1]) (%)|0.0|0.1|72.0|97.0|99.7|
#### _Synthetic data: $\sqrt{\text{PEHE}}$ (SE) for AutoCATE with different number of evaluation and estimation trials_
|Gamma|0|1|10|100|1000|
|-:|:-:|:-:|:-:|:-:|:-:|
|__5 trials__|0.49 (0.03)|0.53 (0.02)|0.88 (0.12)|0.93 (0.12)|0.95 (0.12)|
|__10 trials__|0.20 (0.03)|0.25 (0.02)|0.46 (0.07)|0.54 (0.08)|0.58 (0.11)|
|__50 trials__|0.14 (0.01)|0.15 (0.03)|0.35 (0.04)|0.52 (0.06)|0.44 (0.09)|
#### _Synthetic data: $\sqrt{\text{PEHE}}$ (SE) for AutoCATE and benchmarks with 50 evaluation and estimation trials_
|Gamma|0|1|10|100|1000|
|-:|:-:|:-:|:-:|:-:|:-:|
| __S-LR__|1.23 (0.06)|1.23 (0.06)|1.25 (0.07)|1.25 (0.07)|1.25 (0.07)|
| __T-LR__|0.08 (0.01)|0.09 (0.01)|0.23 (0.07)|0.23 (0.08)|0.24 (0.08)|
| __S-RF__|0.51 (0.03)|0.59 (0.03)|0.82 (0.04)|0.83 (0.04)|0.82 (0.04)|
| __T-RF__|0.42 (0.02)|0.46 (0.03)|0.98 (0.06)|0.70 (0.07)|0.70 (0.06)|
| __S-GB__|0.39 (0.02)|0.45 (0.03)|0.72 (0.03)|0.75 (0.04)|0.75 (0.05)|
| __T-GB__|0.37 (0.02)|0.40 (0.03)|0.56 (0.05)|0.55 (0.04)|0.56 (0.05)|
| __AutoCATE__|0.14 (0.01)|0.15 (0.03)|0.35 (0.04)|0.52 (0.06)|0.44 (0.09) |
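Since the full data-generating process is deferred to the revised paper, the sketch below only illustrates the general mechanism at play: a parameter gamma scaling a logistic propensity score drives the share of extreme propensities. The logistic model, covariate design, and all names here are illustrative assumptions, not the authors' actual DGP.

```python
import numpy as np

rng = np.random.default_rng(0)

def extreme_propensity_share(gamma, n=100_000, d=5):
    """Share of units with propensity outside [0.01, 0.99].

    Hypothetical DGP: treatment is assigned by a logistic model in a
    standardized linear score of the covariates; gamma scales the
    degree of selection bias (gamma = 0 is a randomized trial, e = 0.5).
    """
    X = rng.normal(size=(n, d))
    score = X @ np.ones(d) / np.sqrt(d)     # score is approximately N(0, 1)
    z = np.clip(gamma * score, -700, 700)   # avoid overflow in exp
    e = 1.0 / (1.0 + np.exp(-z))            # propensity score
    return float(np.mean((e < 0.01) | (e > 0.99)))

for gamma in [0, 1, 10, 100]:
    print(f"gamma={gamma:>3}: {extreme_propensity_share(gamma):.1%} extreme propensities")
```

Under such a mechanism, the share of extreme propensities rises sharply with gamma, mirroring the pattern in the setup table above.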
## Customizability
AutoCATE is designed to be highly customizable yet easy to use. Users can specify the search space (preprocessors, metalearners, baselearners) and set design parameters (evaluation protocols, trial numbers, ensemble methods). Domain experts can also fine-tune risk measures, evaluation metrics, and validation procedures. The interface follows scikit-learn conventions for intuitive use, with configuration details available in Appendix B.6. Future versions will include an API for custom algorithms, enabling users to integrate their own CATE estimation and evaluation methods.
## Limitations
While AutoCATE is designed to be broadly applicable, we acknowledge that no method is universally optimal for all CATE estimation scenarios.
As such, there are certain settings where AutoCATE may be less suitable:
- Very small datasets (n<50), where model selection based on pseudo-outcomes may be unreliable. For IHDP, we achieve good performance with only n=672 instances in the training set.
- Large datasets with constrained compute, where an AutoML-based approach may be too computationally expensive.
- Scenarios requiring strong domain knowledge integration, where a fully customized pipeline may be preferable.
- Data that requires extensive preprocessing, such as raw image or text data.
- Settings with fairness or regulatory constraints, where automated model selection may need additional safeguards.
- Cases violating our causal assumptions, such as strong violations of overlap or hidden confounders.
- Cases considering a different setting, such as instrumental variables or time series/panel data settings.
We agree that clarifying these limitations will strengthen the paper and will update the manuscript accordingly. Thank you for this valuable suggestion. If the reviewer thinks of other limitations that we have missed, we would be happy to also include those.
## Theoretical contributions
While our paper does indeed not present new formal proofs or theoretical guarantees, it aligns with ICML's emphasis on application-driven research, as highlighted in the call for papers. ICML encourages innovative techniques and problems motivated by real-world needs, with an emphasis on reproducible experiments and sound analysis, rather than mandatory theoretical components. Many influential ICML papers prioritize empirical insights and conceptual innovations, and we believe our work contributes to this tradition by addressing a significant practical challenge in causal inference.
___
Thank you for your time and effort in reviewing our work! Please let us know if you have any remaining concerns.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing my comments.
Given the empirical nature of this work, it is crucial to explicitly state its limitations within the paper.
I will maintain my recommendation. | null | null | null | null | null | null |
PILAF: Optimal Human Preference Sampling for Reward Modeling | Accept (poster) | Summary: This paper introduces PILAF (Policy-Interpolated Learning for Aligned Feedback), a novel sampling strategy for iterative/online DPO. The authors show that with this new sampling algorithm, the gradient of the loss function matches the KL-regularized objective function, and they further provide asymptotic analysis of DPO with PILAF. For the experiments, they implement PILAF with some empirical approximation and evaluate it both with iterative and online DPO, where it consistently outperforms baseline methods (XPO and Best-of-N).
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: I checked Theorem 4.1~4.3. They look good to me.
Experimental Designs Or Analyses: The experiments look good to me. However, I suggest that the authors add VPO in "Value-Incentivized Preference Optimization: A Unified Approach to Online and Offline RLHF" by Cen et al. to the baselines because it can also be considered as a variant of DPO.
Supplementary Material: I checked the proofs of the theorems.
Relation To Broader Scientific Literature: The key contribution is the proposed new sampling algorithm for DPO, which has better empirical performance than XPO and best-of-N on the HH-RLHF dataset. However, this contribution is limited because it is specifically designed for DPO, whose performance is not SOTA on most benchmarks. The authors can try whether this sampling strategy is also useful in other algorithms like IPO, KPO and PPO. If this sampling strategy is shown to be useful universally, this work would have a greater impact.
Essential References Not Discussed: "Value-Incentivized Preference Optimization: A Unified Approach to Online and Offline RLHF" by Cen et al. also proposes a variant of DPO, which has similar structures to XPO. The authors should also add this work into discussion.
Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: None.
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review, particularly for recognizing the strength of both the theoretical and experimental parts, and for checking the proofs of all theoretical results.
1. > "Value-Incentivized Preference Optimization". The authors should also add this work into discussion.
We thank the reviewer for bringing this up. Please note that we did cite Cen et al. around line 621. As we discussed there, their method does not modify the sampling scheme; rather, it adds a regularization term to the DPO objective to encourage departure from the calibration samples. Therefore, we keep this reference in the appendix and focus on those papers - more directly comparable to our work - that modify the sampling scheme in the main body of our paper. We can relocate this distinction in related works to the main body of the paper if you would consider that it adds clarity to our exposition.
2. > Add VPO to the baselines.
At the reviewer’s suggestion, we have added VPO as a baseline in the online setting. Due to the need to generate responses in all experiments, completing the full VPO comparison is infeasible within the rebuttal period. Therefore, we are adding this baseline for the online setting and plan to complete the comparison for the iterative setting in the camera-ready version. We report the figure at [anonymous URL](https://drive.google.com/file/d/11wfcQT07dxy0IpXuIcdRUrtRBoPPuwXD). The results show that PILAF outperforms VPO, even after a small hyperparameter search for VPO (details can be found in the provided link).
3. > However, this contribution is limited because it is specifically designed for DPO, whose performance is not SOTA on most benchmarks. The authors can try whether this sampling strategy is also useful in other algorithms like IPO, KPO and PPO.
We would like to clarify that our motivation—and the scope of our theoretical and experimental contributions—is not limited to DPO, but rather to reveal and address a fundamental misalignment that pervades the two‑phase RLHF framework. A prevailing assumption in recent RLHF work is that on‑policy data generated during training constitutes “good” alignment data. However, we demonstrate that—even when using on‑policy samples—the alignment process remains suboptimal.
Specifically, RLHF consists of two sequential phases: first, preference data are collected and used to extract human values and train a reward model— via maximum‑likelihood estimation either explicitly (as in PPO) or implicitly (as in DPO, IPO, and KPO)—and second, the learned reward model guides policy optimization. Our main theorem shows that this two‑phase design and the MLE optimization of the reward creates the misalignment between the update gradient and policy gradient maximizing the true human values. Although this issue affects both PPO and DPO equally, we present our theorem in the context of DPO for clarity. We articulate this motivation in the Introduction and provide a detailed discussion of its implications for PPO in Appendix G. By the same reasoning, our theoretical principle extends to IPO; however, because IPO modifies the optimization objective (and thus the gradient), its optimal sampling scheme differs from that derived for DPO. For clarity and focus, we leave a detailed analysis of IPO and its corresponding optimal sampling scheme to future work.
The contribution of our work is multifaceted. First, we rigorously identify and characterize the misalignment problem inherent to the two‑phase RLHF framework—an issue largely overlooked by prior work. Second, we develop a comprehensive, assumption‑light theoretical analysis that directly yields an optimal sampling strategy; unlike Cen et al. (VPO), our approach makes no restrictive assumptions (e.g., reward‑model linearity) and therefore holds under very general conditions. This requires substantial theoretical innovation. Third, we empirically validate PILAF's effectiveness on modern large language models, demonstrating significant and consistent improvements over existing baselines. Consequently, our contribution extends far beyond proposing a new algorithm: it offers a universal, theory‑driven perspective on addressing misalignment for RLHF.
Finally, we respectfully disagree that DPO is not state‑of‑the‑art. Recent studies confirm that the principal driver of performance in RLHF is whether an algorithm incorporates online data generation. When implemented online, DPO matches PPO’s performance as reported by [Noukhovitch et al. 2025, Tang et al. 2024]. Accordingly, we evaluate our method in both iterative and online settings.
Noukhovitch, Michael, et al. "Asynchronous RLHF: Faster and More Efficient Off-Policy RL for Language Models." ICLR 2025.
Tang, Yunhao, et al. "Understanding the performance gap between online and offline alignment algorithms." arXiv preprint arXiv:2405.08448 (2024).
Please let us know if our responses address your concerns.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response.
1. I was suggesting including more experiments about applying the new sampling mechanism in other algorithms like PPO to verify its generality.
2. As for the performance of online DPO and PPO, Tang et al's work didn't really implement PPO and study its performance. Noukhovitch et al's work showed that online DPO is slightly better than a baseline PPO implementation on GSM8K. However, I noticed some recent, more extensive comparisons (https://github.com/RLHFlow/Online-DPO-R1) showing that PPO is still the strongest algorithm on more datasets. I guess there are some controversies going on, so I just suggest that the authors can try this sampling mechanism in PPO too. If it works out, this would make a greater impact.
Overall, I decided to maintain my score. | Summary: The paper "PILAF: Optimal Human Preference Sampling for Reward Modeling" introduces Policy-Interpolated Learning for Aligned Feedback (PILAF), a novel sampling strategy designed to improve reinforcement learning from human feedback (RLHF), particularly in reward modeling for aligning large language models (LLMs) with human values.
Claims And Evidence: See weaknesses.
Methods And Evaluation Criteria: 1. The authors simply use HH-RLHF dataset, which is not enough to validate their conclusion. It would be better to execute on more benchmarks.
2. The approach is online iterative DPO, but they simply compare with vanilla DPO, which is unfair, because online DPO outperforms the vanilla one. The authors are suggested to compare their algorithm with more online methods, such as general online DPO [1,2,3].
Besides, the works [1,2] are actually online DPO instead of simply iterative DPO since they collect samples generated from the trained policy and get them labeled by a preference oracle.
[1] Xiong, W., Dong, H., Ye, C., Wang, Z., Zhong, H., Ji, H., Jiang, N., and Zhang, T. Iterative preference learning from human feedback: Bridging theory and practice for rlhf under kl-constraint. In Forty-first International Con- ference on Machine Learning, 2024.
[2] Dong, H., Xiong, W., Pang, B., Wang, H., Zhao, H., Zhou, Y., Jiang, N., Sahoo, D., Xiong, C., and Zhang, T. Rlhf workflow: From reward modeling to online rlhf, 2024.
[3] Guo, S., Zhang, B., Liu, T., Liu, T., Khalman, M., Llinares, F., Rame, A., Mesnard, T., Zhao, Y., Piot, B., et al. Direct language model alignment from online ai feedback. arXiv preprint arXiv:2402.04792, 2024.
Theoretical Claims: The theoretical analysis is a little confusing, and it is hard to grasp the intuition for why they use the sampling strategies $\pi^-$ and $\pi^+$. The authors are encouraged to clarify the insights more clearly instead of just listing theorems and equations that seem disconnected from the algorithms. As it stands, it is hard to find connections between the theorems and the experiments.
Experimental Designs Or Analyses: See Methods And Evaluation Criteria.
Supplementary Material: No.
Relation To Broader Scientific Literature: They provide realistic ways for online exploration for RLHF.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: No.
Other Comments Or Suggestions: See comments above.
Questions For Authors: Please answer the questions above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time in providing the review.
1. > HH-RLHF not enough. More benchmarks.
Please allow us to put our work into more context. Our contributions extend beyond simply empirically validating a new sampling algorithm. Rather, first, we identify and rigorously characterize a previously overlooked misalignment problem in RLHF. Second, we develop a general theoretical framework—free of restrictive assumptions—that directly informs a principled solution for addressing the problem with guarantees. Third, we translate these insights into T‑PILAF, a practical algorithm which we validate at scale using 8B LLMs and the HH‑RLHF dataset. Together, we believe the novel theoretical results and large‑scale experiments convincingly demonstrate (1) the existence of misalignment in standard RLHF, (2) how it can be addressed in a first‑principles manner, and (3) the effectiveness of our method in practice. For further discussion of our contributions, please refer to response #3 to Reviewer kgC8.
We believe the strength of our proposed algorithm lies in its theoretical grounding, provably solving the alignment problem that we expose. As such, we feel that validation on a prominent benchmark, HH-RLHF, with a large LLM and a significant amount of compute at our disposal, is convincing evidence.
2. > The approach is online iterative DPO, but they simply compare with the vanilla DPO, which is unfair, because online DPO outperforms the vanilla one. The authors are suggested to compare their algorithm with more online methods, such as the general online DPO [1,2,3]
Following the definitions in [1,3], we distinguish between iterative and online data collection as follows: iterative sampling generates all preference data from the policy network at the beginning of each iteration, whereas online sampling produces preference data continuously, with each new batch drawn from the current policy network.
Importantly, we are not comparing vanilla DPO against these modes; instead, we hold the underlying RLHF iterative/online framework fixed and vary only the sampling strategy. Concretely, in both the iterative and online setups, every method collects preference data from the policy network—iteratively in the iterative setup (Section 6.1) and batch‑by‑batch in the online setup (Section 6.2)—with the sole difference being how those samples are generated. This distinction is detailed in the implementation paragraphs and via different values of $n_t$ in Algorithm 1. The term *Vanilla* denotes only the sampling method, as shown in Table 1. Thus, we are ensuring a fair comparison exactly as the reviewer suggested.
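The iterative-versus-online distinction can be sketched schematically. The stubs below are illustrative placeholders (a policy is just an integer version counter, and `dpo_update` stands in for a gradient step), not the paper's implementation:

```python
def sample_preference_pair(policy):
    # Stub: in practice, generate two responses from `policy` and have an
    # oracle label them; here we only record which policy version sampled it.
    return {"sampled_from": policy}

def dpo_update(policy, batch):
    # Stub: in practice, one DPO gradient step on `batch`;
    # here the policy version counter just advances.
    return policy + 1

def iterative_dpo(policy, num_iters, pairs_per_iter):
    """Iterative: all preference data within an iteration comes from the
    policy frozen at the start of that iteration."""
    sources = []
    for _ in range(num_iters):
        data = [sample_preference_pair(policy) for _ in range(pairs_per_iter)]
        sources += [d["sampled_from"] for d in data]
        for batch in data:
            policy = dpo_update(policy, batch)
    return policy, sources

def online_dpo(policy, num_steps):
    """Online: every new pair is drawn from the current, just-updated policy."""
    sources = []
    for _ in range(num_steps):
        batch = sample_preference_pair(policy)
        sources.append(batch["sampled_from"])
        policy = dpo_update(policy, batch)
    return policy, sources

print(iterative_dpo(0, 2, 3)[1])  # [0, 0, 0, 3, 3, 3] -- frozen per iteration
print(online_dpo(0, 6)[1])        # [0, 1, 2, 3, 4, 5] -- refreshed per batch
```

The printed traces make the difference concrete: iterative sampling reuses one policy snapshot per iteration, while online sampling refreshes the sampling policy after every batch.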
3. > Besides, the works [1,2] are actually online DPO instead of simply iterative DPO since they collect samples generated from the trained policy and get them labeled by a preference oracle.
We adopt the definitions of “online” and “iterative” sampling exactly as presented in the referenced papers. In particular, collecting samples from a fully trained policy at each iteration is referred to as iterative DPO in [1, 2].
4. > The theoretical analysis is a little confusing and it is hard to get the intuitions why they use such sampling strategy \pi- and \pi+. The authors are suggested to clarify the insights more clearly instead of just listing theorems and equations that seems distinct from the algorithms. Now, it's hard to find connections between the theorem and experiments.
To provide an intuition for the misalignment problem, note that DPO implicitly defines the reward as $r_\theta(x,\vec y) = \beta \cdot \log\left(\frac{\pi_\theta(\vec y \mid x)}{\pi_{\mathrm{ref}}(\vec y \mid x)}\right)$ which is trained via maximum‑likelihood estimation. When training with preference data generated by $\pi_\theta$, the optimization is biased. Reviewer PDrJ also summarizes this aptly as "it may not generalize well to reflect true preferences because the sampled comparisons do not represent the broader preference landscape." Nonetheless, this intuition does not prescribe a concrete sampling strategy for correcting the misalignment.
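As a purely numerical illustration of this implicit reward (the log-probabilities and beta below are made-up values, not taken from the paper):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def implicit_reward(logp_policy, logp_ref, beta):
    """DPO's implicit reward: r_theta(x, y) = beta * log(pi_theta(y|x) / pi_ref(y|x))."""
    return beta * (logp_policy - logp_ref)

beta = 0.1
r_a = implicit_reward(logp_policy=-12.0, logp_ref=-14.0, beta=beta)  # = 0.2
r_b = implicit_reward(logp_policy=-13.0, logp_ref=-12.5, beta=beta)  # = -0.05
# Bradley-Terry probability that y_a beats y_b under the implicit reward:
print(round(sigmoid(r_a - r_b), 4))  # 0.5622
```

The maximum-likelihood objective pushes this probability toward the observed labels on the sampled pairs only, which is where the bias from sampling solely from $\pi_\theta$ enters.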
Instead, we use theoretical analysis to reveal the root cause and derive a principled solution. By comparing Equations 14 and 15a, we demonstrate how the gradient produced by standard sampling diverges from the true alignment gradient. Following Reviewer qMsM’s suggestion, we have added a lemma in the main text that explicitly shows this discrepancy. These insights directly inform T‑PILAF, a sampling algorithm that leverages $\pi^-$ and $\pi^+$, to realign the empirical gradient with its theoretical counterpart (as explained in line 245 left). PILAF is then the practical instantiation of T‑PILAF, and our experiments validate its effectiveness. This seamless integration of theory and practice is precisely what Reviewer PDrJ praised: “these theoretical results motivate the practical design of PILAF.”
Please let us know if our responses address your concerns. | Summary: This paper investigates strategies to leverage interpolated response sampling for improving human preference data collection and reward modeling in RLHF. The authors propose a Policy-Interpolated Learning for Aligned Feedback PILAF method that generates response pairs by interpolating between a reference policy and the current policy to better align reward model training with the true preference objective; then, they develop a practical version of PILAF and evaluate it in iterative and online DPO training setups. They find significant gains in reward model performance, alignment quality, and sample efficiency compared to Vanilla sampling and Best-of-N sampling methods.
Claims And Evidence: I think the mathematical proof and experiments together provide a reasonable justification for the effectiveness of the proposed method. Although more analysis could strengthen the connection, the current results are generally convincing.
Methods And Evaluation Criteria: The idea of interpolated response sampling for improving reward modeling in RLHF makes sense for the problem. Most existing work focuses on sampling responses directly from the current policy or using simple heuristics, but these approaches cannot effectively align reward model learning with the true human preference objective. This could limit the efficiency and quality of preference data. This paper introduces a method that interpolates between a reference policy and the current policy to generate more informative and aligned comparisons, as well as significantly improves reward model quality and sample efficiency.
The workflow is well-structured, as it builds on theoretical insights T-PILAF and adapts them into a practical algorithm PILAF to make the method applicable to real-world RLHF pipelines. This approach balances exploration and exploitation during response sampling and further enhances the alignment between reward model training and human preferences.
Theoretical Claims: The authors provide two main parts of theoretical claims.
(1) The authors formalize the oracle objective that an ideal reward model should be optimized to reflect true human preferences. They analyze how standard response sampling strategies, e.g., sampling only from the current policy, cause a gradient misalignment between the reward model's learning objective and the oracle objective. This misalignment means that even if a reward model fits the data it sees, it may not generalize well to reflect true preferences because the sampled comparisons do not represent the broader preference landscape.
(2) The authors propose T-PILAF. This framework generates response pairs by interpolating between a reference policy and the current policy. They prove that this interpolation mechanism aligns the reward model's gradient with the oracle gradient, thus correcting the bias from single-policy sampling. The proof shows how interpolated sampling balances exploration and exploitation. In this way, the reward model training is more statistically efficient and better aligned with the underlying preference function. Overall these theoretical results motivate the practical design of PILAF.
Experimental Designs Or Analyses: The experiments are extensive, with detailed analysis of the results. These experiments validate the effectiveness of PILAF in improving reward model quality, sample efficiency, and alignment performance and demonstrate the robustness and scalability of the method across different training setups, iterative and online DPO, and model sizes. The evaluation includes comparisons to strong baselines including Vanilla, Best-of-N, and Hybrid sampling, and covers both quantitative metrics, reward model performance, KL divergence, and training dynamics.
Supplementary Material: I reviewed the appendix in the supplementary material, including the additional explanations on theoretical formulations and experimental settings, but there may be parts I missed.
Relation To Broader Scientific Literature: This paper is related to preference-based RL, reward modeling in RLHF, and sampling-based data efficiency methods. The main differences are that (1) it introduces interpolated sampling to address gradient misalignment (against standard preference data collection), (2) it provides a formal analysis of gradient alignment (against heuristic sampling approaches), and (3) it focuses on optimizing sample efficiency through theory-grounded methods rather than relying solely on empirical strategies.
Essential References Not Discussed: Not found.
Other Strengths And Weaknesses: Thank you for taking the time to write and submit this work. The key strengths that I have observed are as follows:
1. I really appreciated the effort that the authors put into the related works section. They clearly did their research into the relevant domains and introduced me to new papers as well.
2. Their preliminary and motivation works section was very clear. I especially appreciated how they took the time to lay out the theoretical formulation and explain the connection between reward modeling gradients and preference sampling in detail.
3. Figure 1 is very clear. it illustrates the core idea of this method.
Overall, the authors bring up an interesting problem of aligning reward model training with true human preference gradients. In reward modeling, this would be an interesting setting to see how we can optimize preference data collection strategies to improve alignment given the constraints of human annotation cost and model sample efficiency.
Other Comments Or Suggestions: The authors may consider adding an ablation study to analyze whether the gains from PILAF are truly due to its interpolated sampling mechanism and gradient alignment rather than other confounding factors. For example, comparing PILAF with versions that remove or vary the interpolation component would help validate this core contribution.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for appreciating our work, especially its development from theory to algorithm design. We are sincerely pleased that the reviewer acknowledged the misalignment problem we identified, found the combination of mathematical proofs and experiments to provide a reasonable justification, and considered our experimental results to be extensive.
1. > The authors may consider adding an ablation study to analyze whether the gains from PILAF are truly due to its interpolated sampling mechanism and gradient alignment rather than other confounding factors. For example, comparing PILAF with versions that remove or vary the interpolation component would help validate this core contribution.
Thank you for the suggestion. Following it, we added two ablation studies to isolate the contributions of PILAF’s interpolation and extrapolation components. Each component was replaced individually with vanilla sampling, yielding two baselines: one with $({\vec y}^a, {\vec y}^b)=(\pi_\theta^+, \pi_\theta)$ (ablation of the interpolation component) and one with $({\vec y}^a, {\vec y}^b)=(\pi_\theta, \pi_\theta^-)$ (ablation of the extrapolation component). We denote these ablation variants as PILAF-extrapolate and PILAF-interpolate, where one response is obtained via vanilla sampling and the other via extrapolation or interpolation, respectively. Due to time constraints, we completed these ablations only for the online setup; we plan to extend this to the iterative setting in the camera-ready version.
We include the figures at [anonymous URL](https://drive.google.com/file/d/11wfcQT07dxy0IpXuIcdRUrtRBoPPuwXD). Our theory suggests that the two sampling responses should come from different distributions in order to yield a controlled difference that the model can effectively learn from. Both ablation variants introduce such differences and outperform vanilla sampling. However, the variant with only interpolation (combined with vanilla sampling for the other response) performs much worse than full PILAF, highlighting the importance of the extrapolation response. The PILAF-extrapolate variant achieves slightly worse final results, and its convergence is much slower (each dot in our figure represents one evaluation after 50 steps). Overall, these ablation results confirm our theoretical prediction that the full PILAF algorithm is the best performing approach.
We thank the reviewer again for their time and constructive feedback.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors’ response. I will keep the score and I do like this work. | Summary: This paper introduces a sampling strategy for collecting human preference data in RLHF (specifically, DPO) setting. It aims to align preference-based reward modeling with the true (oracle) objective by interpolating between the current and reference policies during response generation. Theoretical analysis shows that the proposed aligns gradients of the true (oracle) objective in the first order, making training more consistent and efficient. The authors validate this in both iterative and online DPO settings, demonstrating the effectives regard the DPO loss.
Claims And Evidence: In general, the motivation of this paper is clear and the claims and analysis make sense.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes I have checked Theorem 4.1 and 4.2, which look fine to me.
Experimental Designs Or Analyses: Yes, although the experiments on DPO is not comprehensive compared to other papers, it supports the claim that the algorithm better aligns with the DPO loss regarding the true objective.
Supplementary Material: Yes, I looked at the proof.
Relation To Broader Scientific Literature: Related to general alignment of LLMs.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: A strength is that this paper studies an important aspect of RLHF, namely how to better sample data in the online or iterative DPO setting.
One of the weaknesses is that it is mentioned that "We show that uniform sampling from the current policy, as is common, leads to misaligned gradients of the two objectives". However, in the theoretical development, I only saw Theorem 4.1 showing the result for gradient alignment; there is no formal result on the claim that uniform sampling results in misaligned gradients. It would not be hard to show a theoretical result like this.
Other Comments Or Suggestions: N/A
Questions For Authors: I have the following questions, and am willing to increase my score if properly addressed:
1. I have a question regarding section 2.3. It is mentioned that the true goal for DPO should be (6). However, the common DPO setting is not very different from (6). It basically replaces $r*$ in (6) by the learned $r_{\theta}$. In other words, the common DPO setting approximates the goal (6) with some errors. I think there should be some results or statement on the analysis between the previous objective and the current objective (6).
2. It is not described how $y^a$ and $y^b$ are turned into $y^w$ and $y^l$. This is problematic when looking at Eqn (3).
3. How is the optimal policy $\pi^*$ defined in Equation (9)?
4. In lines 294 and 295, I think it is better to provide a formal statement with proofs or derivations in the appendix.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s insightful comments, which have helped improve the presentation of the misalignment problem. We are also glad that the reviewer enjoyed our theoretical analysis and empirical validation.
1. > uniform sampling - misaligned gradients
We thank the reviewer for raising this point — a clear, formal statement helps improve our presentation. The discrepancy arises from the difference in gradient formulations between Eqns 14 and 15a.
To make notations concise, we introduce the following shorthands: $\Delta r^* := r^*(x, y^a) - r^*(x, y^b)$, $\Delta r_{\theta} := r_{\theta}(x, y^a) - r_{\theta}(x, y^b)$ and $g := \nabla_{\theta} r_{\theta}(x, y^a) - \nabla_{\theta} r_{\theta}(x, y^b)$.
**Lemma C.2.**
$\nabla_{\theta} J(\pi_{\theta}) = \frac{1}{2 \beta} E_{y^a, y^b \sim \pi_{\theta}(\cdot | x)} [ ( \Delta r^* - \Delta r_{\theta} ) g]$.
**(Corollary of) Lemma C.3.** For the vanilla response sampling scheme, $\nabla_{\theta} L(\theta) = -E_{y^a, y^b \sim \pi_{\theta}(\cdot | x)} [(\sigma(\Delta r^*)-\sigma(\Delta r_{\theta}))g]$.
These two gradients share a similar structure. The key difference is $\Delta r^* - \Delta r_{\theta}$ for $\nabla_{\theta} J(\pi_\theta)$ and $\sigma(\Delta r^*) - \sigma(\Delta r_{\theta})$ for $\nabla_{\theta} L(\theta)$.
To correct for this mismatch, T-PILAF adjusts the response sampling distribution: It reweights the pairwise response sampling so that the density ratio between the vanilla scheme and T-PILAF approximates the derivative $\sigma'(\Delta r_\theta)$. This bridges the gap between the non-linear sigmoid differences and the linear reward differences, leading to better gradient alignment during training.
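To illustrate the reweighting numerically, here is a minimal sketch (our illustration with made-up reward gaps, not taken from the paper) showing that dividing the sigmoid difference by $\sigma'(\Delta r_\theta)$ recovers the linear difference $\Delta r^* - \Delta r_\theta$ up to second-order terms:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # Derivative of the sigmoid: sigma'(x) = sigma(x) * (1 - sigma(x)).
    s = sigmoid(x)
    return s * (1.0 - s)

# Hypothetical reward gaps: true gap dr_star, current model's gap dr_theta.
dr_theta = 0.4
dr_star = 0.5

# Vanilla sampling weights the gradient direction g by the sigmoid difference;
# the oracle objective J weights it by the plain reward difference.
vanilla_weight = sigmoid(dr_star) - sigmoid(dr_theta)
oracle_weight = dr_star - dr_theta

# Reweighting by 1 / sigma'(dr_theta) -- the density-ratio correction the
# rebuttal describes -- matches the oracle weight to first order.
corrected = vanilla_weight / sigmoid_grad(dr_theta)
```

Here `corrected` is within about 1% of `oracle_weight` (0.1), consistent with the first-order alignment argument above.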
2. > Q1: Section 2.3
Let us clarify the root cause of the misalignment. The true objective for both PPO and DPO is to optimize the expected return under the true reward function $r^*$ in (6). In PPO, a reward model $r_\theta$ is trained from human preferences and then used to guide policy optimization. In DPO, $r_\theta$ is learned via $r_\theta(x,y) = \beta \log\frac{\pi_\theta(y | x)}{\pi_{\mathrm{ref}}(y | x)}$. In both frameworks, the final policy’s performance depends on how well $r_\theta$ approximates $r^*$, as you noted.
Prior work assumes that on‑policy samples from $\pi_\theta$ suffice for improving both the policy and its reward model. We show this is incorrect: gradients from standard on‑policy sampling do not align with the true policy gradient under $r^*$, limiting $r_\theta$’s ability to close the approximation gap. As Reviewer PDrJ perfectly summarizes: "it may not generalize well to reflect true preferences because the sampled comparisons do not represent the broader preference landscape." This applies to both PPO and DPO; we present the result in the DPO setting for clarity.
In contrast, our T‑PILAF sampling scheme is designed to align the empirical gradient with the true policy gradient under $r^*$. This ensures each update of $r_\theta$ approximates $r^*$ in the optimal first-order direction. Our statistical results also show T-PILAF minimizes variance. All of our results focus on comparing the empirical objective with the true objective in (6), showing precisely how vanilla sampling produces misaligned gradients when updating the reward model.
3. > How to turn $y^a$ and $y^b$ into $y^w$ and $y^l$.
Before Eqn (1), we noted that $y^w$ and $y^l$ were human-annotated as the preferred and unpreferred responses, respectively. Eqn. (1) introduced the commonly used Bradley-Terry (BT) model, and we stated explicitly that the BT assumption was adopted throughout this paper.
4. > Definition of $\pi^*$ in eqn 9
The ground-truth reward function in the BT model (Eqn 1) is denoted by $r^*$. Then, the notation $\pi^*$ refers to the optimal policy that maximizes the value function $J(\pi)$, as in Eqn (6):
\begin{align*}
\pi^*(y | x) = \frac{1}{Z(x)} \pi_{ref}(y | x) \exp\Big(\frac{1}{\beta} r^*(x,y)\Big),
\end{align*}
where $Z(x)$ is a partition function that ensures $\pi^*(\cdot | x)$ sums to 1.
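As a toy illustration (our sketch, with made-up numbers for a single prompt $x$), $\pi^*$ can be computed on a discrete response set by reweighting $\pi_{ref}$ with the exponentiated true rewards and normalizing:

```python
import numpy as np

# Hypothetical discrete setting: 4 candidate responses for a fixed prompt x.
pi_ref = np.array([0.4, 0.3, 0.2, 0.1])   # reference policy pi_ref(y | x)
r_star = np.array([1.0, 2.0, 0.5, 3.0])   # true rewards r*(x, y)
beta = 1.0

# pi*(y | x) = pi_ref(y | x) * exp(r*(x, y) / beta) / Z(x),
# where Z(x) is just the sum of the unnormalized weights.
unnorm = pi_ref * np.exp(r_star / beta)
pi_star = unnorm / unnorm.sum()
```

Note that the optimal policy need not put the most mass on the highest-reward response: with these numbers, response 1 (reward 2.0, reference mass 0.3) ends up most likely because $\pi_{ref}$ tempers the reward maximization.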
5. > line 294 and 295, proofs.
Following the suggestion, we add the derivation:
**Proof:**
Starting from
$\pi_\theta^+(y_t | x, y_{1:t-1}) = \frac{1}{Z(x, y_{1:t-1})} \pi_\theta(y_t | x, y_{1:t-1}) \big(\frac{\pi_\theta(y_t | x, y_{1:t-1})}{\pi_{ref}(y_t | x, y_{1:t-1})}\big)^\beta$,
we rewrite it as:
$\pi_\theta^+(y_t | x, y_{1:t-1}) \propto \exp\big((1+\beta)\log \pi_\theta(y_t | x, y_{1:t-1}) - \beta \log \pi_{ref}(y_t | x, y_{1:t-1})\big)$.
Define the logits:
$h_\theta(y_t | x, y_{1:t-1}) = \log \pi_\theta(y_t | x, y_{1:t-1})$,
$h_{ref}(y_t | x, y_{1:t-1}) = \log \pi_{ref}(y_t | x, y_{1:t-1})$.
Then
$\pi_\theta^+(y_t | x, y_{1:t-1}) \propto \exp((1+\beta) h_\theta(y_t | x, y_{1:t-1}) - \beta h_{ref}(y_t | x, y_{1:t-1})).$
Normalizing over all $y_t$ leads to the softmax form:
$\pi_\theta^+(\cdot | x, y_{1:t-1}) = \mathrm{softmax}((1+\beta) h_\theta - \beta h_{ref})$.
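The identity can also be checked numerically. A small sketch (ours, with random toy distributions) comparing direct normalization of $\pi_\theta (\pi_\theta / \pi_{ref})^\beta$ with the softmax-of-logits form:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 0.5

# Hypothetical next-token distributions over a 6-token vocabulary.
pi_theta = rng.dirichlet(np.ones(6))
pi_ref = rng.dirichlet(np.ones(6))

# Direct form: normalize pi_theta * (pi_theta / pi_ref)**beta.
direct = pi_theta * (pi_theta / pi_ref) ** beta
direct /= direct.sum()

# Softmax form: softmax((1 + beta) * h_theta - beta * h_ref), h = log pi.
logits = (1 + beta) * np.log(pi_theta) - beta * np.log(pi_ref)
via_logits = np.exp(logits - logits.max())
via_logits /= via_logits.sum()
```

The two computations agree because $\pi_\theta (\pi_\theta / \pi_{ref})^\beta = \exp((1+\beta)\log\pi_\theta - \beta\log\pi_{ref})$ before normalization.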
Please let us know if our responses address your concerns. | null | null | null | null | null | null |
Star Attention: Efficient LLM Inference over Long Sequences | Accept (poster) | Summary: This paper proposes Star Attention, which improves the LLM inference efficiency by sharding attention across multiple hosts.
Claims And Evidence: Please see **Other Strengths And Weaknesses**.
Methods And Evaluation Criteria: Please see **Other Strengths And Weaknesses**.
Theoretical Claims: Not applied here.
Experimental Designs Or Analyses: Please see **Other Strengths And Weaknesses**.
Supplementary Material: Yes, I have checked all of them.
Relation To Broader Scientific Literature: Please see **Other Strengths And Weaknesses**.
Essential References Not Discussed: Please see **Other Strengths And Weaknesses**.
Other Strengths And Weaknesses: **Strengths**:
1. The paper is easy to follow, with clear writing and presentation.
2. Evaluation results are good.
**Weaknesses**:
1. The main concern I have with this paper is the lack of system performance analysis of the proposed method. While the authors present comprehensive algorithmic results over various benchmarks, a more detailed performance breakdown at the kernel level would make this method more convincing. Moreover, since sequence parallelism depends on the hardware setup, such as the GPU accelerator and interconnect (PCIe/NVLink), a more fine-grained analysis for different settings would also provide insights.
2. The literature survey is also not comprehensive.
For instance, there are many other works that focus on KV cache optimization from architecture/system angles [1-3].
3. The authors should also further discuss its compatibility with existing parallelism methods [4-5] and popular LLM serving frameworks [6-7].
4. The title of the paper should emphasize '**distributed LLM inference**', as the proposed method demands a multi-GPU/node setup to work.
[1] ALISA: Accelerating Large Language Model Inference via Sparsity-Aware KV Caching, ISCA 2024.
[2] InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management, OSDI 2024.
[3] FlashInfer: Efficient and Customizable Attention Engine for LLM Inference Serving, Arxiv 2025.
[4] Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism, Arxiv, 2019.
[5] GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism, NeurIPS 2019.
[6] Efficient Memory Management for Large Language Model Serving with PagedAttention, SOSP 2023.
[7] SGLang: Efficient Execution of Structured Language Model Programs, NeurIPS 2024.
Other Comments Or Suggestions: Please see **Other Strengths And Weaknesses**.
Questions For Authors: Please see **Other Strengths And Weaknesses**.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank Reviewer dKGV for their detailed and insightful feedback. Below, we respond to the concerns regarding system performance analysis, literature coverage, compatibility, and presentation.
### **1. System Performance Analysis:**
- While we agree that kernel-level profiling and analysis across hardware configurations (e.g., PCIe vs. NVLink interconnects) can provide valuable insights, our primary focus in this work is to introduce and validate the Star Attention algorithm from an algorithmic and end-to-end performance standpoint. We view detailed kernel-level profiling and hardware-specific optimization as an exciting avenue for future work that could further enhance deployment efficiency.
- All experiments were conducted under consistent hardware and software environments for Star Attention and the baselines, ensuring that the reported relative speedups (e.g., up to 11x on Llama-3.1-8B-Instruct for long contexts) fairly capture the algorithmic advantage.
- Notably, Star Attention’s phase 1 avoids inter-host communication entirely, unlike Ring Attention. Even assuming an infinitely fast interconnect, global attention still incurs computation proportional to the total number of tokens per host, while Star Attention reduces this via localized blockwise attention in phase 1.
### **2. Literature Survey (KV Cache Optimization):**
- Thank you for the helpful references [1–3]. We will revise the related work section to include and discuss these references.
- Based on the reviewer's feedback, we conducted new comparisons between Star Attention and other sparse KV cache methods such as StreamingLLM and MInference, on long-context inference tasks. *Star Attention outperforms these alternatives in both accuracy and scalability*, as shown below:
**Table B: Accuracy on RULER (Llama-3.1-8B-Instruct)**
| Methods | 16K | 32K | 64K | 128K | Average |
| :----------------- | :---: | :---: | :---: | :---: | :-----: |
| *Full Attn. (Baseline)* | *92.22* | *87.53* | *84.79* | *76.31* | *85.21* |
| StreamingLLM | 74.76 | 48.56 | 26.2 | 30.77 | 45.07 |
| MInference | **93.27** | 86.54 | **84.86** | 58.17 | 80.71 |
| **Star Attention** | 91.27 | **88.70** | 83.37 | **74.41** | **84.44** |
We also evaluated Star Attention on the InfiniteBench benchmark. Given its broad coverage across multilingual and programmatic tasks, we include this for further evidence of Star Attention’s generalization:
**Table C: Accuracy on Infinite Bench (Llama-3.1-8B-Instruct)**
| Methods | En. Sum | En. QA | En. MC | En. Dia | Zh. QA | Code. Debug | Math. Find | Retr. PassKey | Retr. Num | Retr. KV | Avg. |
| :----------------- | :-----: | :----: | :----: | :-----: | :----: | :---------: | :--------: | :-----------: | :-------: | :------: | :---: |
| *Full Attn. (Baseline)* | *31.91* | *25.92* | *69.43* | *21.5* | *31.95* | *16.75* | *24.29* | *99.15* | *99.66* | *60* | *48.06* |
| StreamingLLM | 30.15 | 10.15 | 41.05 | 8.5 | 22.38 | 8.63 | 17.71 | 2.71 | 5.93 | 0 | 14.72 |
| MInference | 31.04 | 22 | 63.76 | 14.5 | 28.7 | 5.33 | **27.43** | 56.78 | 77.12 | 14 | 34.07 |
| **Star Attention** | **31.85** | **25.92** | **69** | **22** | **30.37** | **24.37** | 26.29 | **93.22** | **96.27** | **45.8** | **46.51** |
### **3. Compatibility with Parallelism Methods and Serving Frameworks:**
- **Model Parallelism [4, 5]:** Star Attention is complementary to tensor and pipeline parallelism. These can be applied within each host, while Star Attention governs how input context is distributed across hosts via sequence parallelism. No assumptions are made about the intra-host setup.
- **Serving Frameworks [6-7]:** Star Attention is compatible with modern LLM serving frameworks. For instance, memory management techniques like PagedAttention [6] can be applied within each host to efficiently handle the local KV cache. Because Star Attention distributes context blocks across hosts and isolates them during Phase 1, such memory-optimized serving layers can operate independently within each node. Similarly, the two-phase structure of Star Attention aligns well with structured execution frameworks like SGLang [7], and can potentially be integrated into such systems without requiring fundamental changes to their scheduling or runtime semantics.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. However, I will maintain my score. | Summary: The paper introduces StarAttention, a sparse attention method for encoding long-context by distributing chunks of context over GPUs. Unlike Ring Attention, Star Attention uses only local (in-chunk) attention for the majority of the context, allowing for a substantial speedup. Each block attends only to itself and an anchor block; the query at the end of the long input then attends over all input chunks, using a lazy softmax accumulation. Star Attention substantially reduces latency at a small performance cost; ablations show that maintaining an anchor block of meaningful context and limiting the number of total chunks is important to maintaining performance.
Claims And Evidence: Yes; however, I think the last line of the abstract could be worded more carefully-- it currently almost seems to imply that memory requirements are *also* reduced by 11x. I think this is a miscommunication and not an overclaim.
Methods And Evaluation Criteria: Yes; I think evaluating on RULER and BABILong is reasonable, although it would be a bonus to also see results on a less synthetic task.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes; I think RingAttention is an appropriate comparison point, and using Flash Attention for each is appropriate.
Supplementary Material: I read the appendices.
Relation To Broader Scientific Literature: I think this is a nice contribution to efficient long context; while there have been a number of prior works that encode context in chunks and then do some kind of aggregation by retrieval, fusion, or overlapping, this is (to my knowledge) the first work to apply the attention sink + local context method to efficiently prefill a long input's KV cache.
Essential References Not Discussed: I think there could be more discussion of streaming methods for long context that evict the middle cache, keeping only the sink + local context-- these are conceptually adjacent, although they do differ from Star Attention in that they fully evict the middle cache. Some notable examples would be [StreamingLLM](https://arxiv.org/abs/2309.17453) (which is already cited), [LM-infinite](https://arxiv.org/abs/2308.16137), and [InfLLM](https://arxiv.org/abs/2402.04617).
[TurboRAG](https://arxiv.org/abs/2410.07590) may also be relevant, although it also differs significantly from this setting-- in particular, I believe they train their model to adapt to sparse attention patterns from stacking reused KV caches from pre-encoded documents.
Other Strengths And Weaknesses: I think the ablation of what should go in the context block (and whether its positional IDs matter) is interesting and useful! I appreciated the analysis.
Other Comments Or Suggestions: (Related to the questions below) I think more understanding of how many blocks can be used before performance breaks down would be helpful. The settings proposed (generally 4 context blocks, with an anchor block of the same size as the context blocks) seems like a reliable setting, but it would be nice to understand how much this method is robust to varying the block size.
Questions For Authors: Q1. Given a fixed context length, how does the performance vary with block size? Given a fixed block size (e.g. 16k tokens), how many blocks can you add before performance severely declines? There is some discussion of this (and it seems that generally less blocks is better), but how sharp is the dropoff? It would be helpful to understand how dramatic this performance dropoff is.
Q2. You state that a larger anchor block size is critical to maintaining accuracy, and so the anchor block should be the same size as the context block. Is there any benefit to having the anchor block *larger* than the context block? (e.g. if I can only process 32k tokens on each single parallel worker, is it best to have 16k anchor + 16k context, or would 24k anchor + 8k context be better?).
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer EMsM for their insightful comments and suggestions. We address each point below:
### **1. Clarity of Abstract Wording:**
Thank you for pointing out the ambiguity in the abstract's final sentence. You are correct that the “up to 11x” improvement specifically refers to inference speed and throughput, not memory. While Star Attention does reduce memory usage (due to sharded attention and localized KV caching), the 11x figure quantifies speedup. We will revise the abstract to explicitly state that the 11x gain pertains to speed/throughput, and mention memory reduction separately, to prevent misinterpretation.
### **2. Evaluation Benchmarks:**
We agree that results on real-world or less synthetic benchmarks are valuable. In response, we have extended our evaluation to include InfiniteBench, a long-context benchmark with diverse and challenging tasks. Additionally, we also added comparisons to some of the other sparse attention methods as well such as StreamingLLM and MInference. The results are summarized in the table below.
**Table B: Accuracy on Infinite Bench (Llama-3.1-8B-Instruct)**
| Methods | En. Sum | En. QA | En. MC | En. Dia | Zh. QA | Code. Debug | Math. Find | Retr. PassKey | Retr. Num | Retr. KV | Avg. |
| :----------------- | :-----: | :----: | :----: | :-----: | :----: | :---------: | :--------: | :-----------: | :-------: | :------: | :---: |
| *Full Attn. (Baseline)* | *31.91* | *25.92* | *69.43* | *21.5* | *31.95* | *16.75* | *24.29* | *99.15* | *99.66* | *60* | *48.06* |
| StreamingLLM | 30.15 | 10.15 | 41.05 | 8.5 | 22.38 | 8.63 | 17.71 | 2.71 | 5.93 | 0 | 14.72 |
| MInference | 31.04 | 22 | 63.76 | 14.5 | 28.7 | 5.33 | **27.43** | 56.78 | 77.12 | 14 | 34.07 |
| **Star Attention** | **31.85** | **25.92** | **69** | **22** | **30.37** | **24.37** | 26.29 | **93.22** | **96.27** | **45.8** | **46.51** |
### **3. Discussion of Related Work:**
Thank you for suggesting the additional relevant references. We will amend the related work section in the revised manuscript to include a more detailed discussion and comparison with related methods.
### **4. Impact of Block Size and Number of Blocks (Q1):**
- Figure 5 illustrates accuracy as a function of block size, holding the total sequence length fixed. We observe that larger blocks result in better approximation to global attention and higher accuracy.
- Figure 6 and Table 5 (Appendix C.1) explore scaling behavior with a fixed block size (e.g., 32K) while increasing sequence length up to 1M tokens. We find that Star Attention retains up to 90% of full attention accuracy even at 1M tokens, with up to 17x speedup.
This suggests a graceful degradation, not a sharp drop-off. The method remains robust up to at least 1M tokens, especially with larger blocks.
### **5. Size of Anchor Block relative to the Block Size:**
Since Phase 1 uses causal attention (as in decoder-only LLMs), anchor blocks cannot extend beyond the preceding context block without violating causality. Setting the anchor size equal to the context block size ensures maximal usable context per block while preserving autoregressive constraints, effectively bringing Star Attention’s receptive field closer to that of full attention.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response! I read the response and the other reviews, and I will maintain my positive rating. | Summary: This paper propose star-attention which combines a streamingllm attention for the prefill stage and a dense attention for the decoding stage. Specifically, the author implement the streamingllm pre-fill with blocks, where the computing are partioned across the query dimension. The sink and local blocks are packed and distributed, this implementaion, compared to ring-attention requires less communication and thus avoid the ring style data transfer. For the decoding stage, the query are distributed and lse and other inner states are transfered back. The experimental results show that it achieve about 95% accuracy while being upto 11x faster than ring-attention.
Claims And Evidence: -
Methods And Evaluation Criteria: The strength of this paper:
1. This paper propose a solid sparse attention approach that does pre-fill sparsely while does decoding densely. And experimental results demonstrate the effectiveness of this approach.
2. The proposed method is especially suitable for distributed system where the block-wise strategy can be applied directly for this method.
The limitations of this paper:
1. The proposed star-attention is simply streamingllm plus a dense bottom window. It is very similar to vanilla streamingllm and exactly the same as tri-shape attention. In general, the proposed algorithm does not provide novel knowledge to the community.
2. The authors fail to justify the motivation for dense decoding versus sparse pre-fill, unlike tri-shape attention, which provides a good reason for why the decoding phase requires complete access to all history tokens.
3. The authors fail to include reasonable baselines to compare with. Star-attention, as a lossy attention variant, should be compared against, for example, streamingllm, MInference, or the more recent flex-prefill, instead of against ring-attention.
4. The authors only report performance on the synthetic benchmark RULER, which may fail to test all aspects of star-attention. They should also report metrics on more comprehensive test sets such as Infinite-bench, SCBench, etc.
Theoretical Claims: -
Experimental Designs Or Analyses: -
Supplementary Material: -
Relation To Broader Scientific Literature: -
Essential References Not Discussed: -
Other Strengths And Weaknesses: -
Other Comments Or Suggestions: -
Questions For Authors: 1. What is the motivation for dense query processing and sparse pre-fill?
2. How well does star-attention perform against other sparse attention baselines? (See above.)
3. How well does star-attention perform on comprehensive long-context benchmarks? (See above.)
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer dnhh for their feedback and acknowledge their points regarding novelty, motivation, baselines, and benchmarks. We address these points below:
### **1. Novelty and Relation to Prior Work:**
- While Star Attention draws inspiration from prior work like StreamingLLM and attention sinks, it introduces a distinct two-phase distributed inference architecture. Unlike StreamingLLM, which struggles with long-context retention and is not inherently distributed, Star Attention processes the context in parallel without inter-host communication in Phase 1 and leverages global attention only during decoding. This results in both accuracy preservation and significant latency reduction.
- Regarding “tri-shape attention,” we were unable to identify a specific paper with that terminology. If you are referring to a particular method, we would appreciate a citation to better contextualize and compare it in the final version.
### **2. Motivation for Sparse Prefill vs. Dense Decoding:**
- In Phase 1 (sparse prefill), attention is localized, as context tokens generally require only local neighborhood interactions. This allows efficient processing via distributed blockwise attention. In contrast, during query encoding and decoding (Phase 2), the tokens must integrate information from the entire context, necessitating dense attention. This design reflects practical needs in long-context tasks and allows us to optimize for both throughput and accuracy.
- Conversely, during the query processing and response generation phase (decoding phase), the query tokens and subsequent generated tokens often require access to information scattered throughout the entire preceding context to formulate an accurate response. Therefore, employing dense global attention in Phase 2, accessing the full cached KV state from Phase 1, is crucial for preserving the model's understanding and generation capabilities, particularly for tasks requiring synthesis of information from distant parts of the context.
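To make the Phase 2 aggregation concrete, here is a minimal numerical sketch (our illustration with hypothetical helper names, not the authors' implementation): each host returns its local attention output together with a log-sum-exp statistic of its raw scores, and the query host merges these partial results into exact global attention.

```python
import numpy as np

def local_attention(q, K, V):
    # Per-host primitive: attention output for one query over a local KV
    # block, plus the log-sum-exp (lse) of the block's raw scores.
    scores = K @ q
    m = scores.max()
    w = np.exp(scores - m)
    out = (w[:, None] * V).sum(axis=0) / w.sum()
    lse = m + np.log(w.sum())
    return out, lse

def combine(parts):
    # Query-host aggregation: each block's lse determines its share of the
    # global softmax mass, so the weighted sum equals global attention.
    outs, lses = zip(*parts)
    lses = np.array(lses)
    g = np.logaddexp.reduce(lses)
    weights = np.exp(lses - g)
    return sum(w * o for w, o in zip(weights, outs))

# Toy check: 12 KV pairs split across 3 "hosts" of 4 tokens each.
rng = np.random.default_rng(0)
q, K, V = rng.normal(size=4), rng.normal(size=(12, 4)), rng.normal(size=(12, 4))
merged = combine([local_attention(q, K[i:i+4], V[i:i+4]) for i in range(0, 12, 4)])
```

Because only one output vector and one scalar per block travel to the query host, this merge is communication-light relative to exchanging full KV caches.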
### **3. Comparison with Sparse Attention Baselines:**
- In response to the reviewer’s feedback, we extended our baseline comparisons to include StreamingLLM and MInference. Results are shown in Table A.
**Table A: Accuracy on RULER (Llama-3.1-8B-Instruct)**
| Methods | 16K | 32K | 64K | 128K | Average |
| :----------------- | :---: | :---: | :---: | :---: | :-----: |
| *Full Attn. (Baseline)* | *92.22* | *87.53* | *84.79* | *76.31* | *85.21* |
| StreamingLLM | 74.76 | 48.56 | 26.2 | 30.77 | 45.07 |
| MInference | **93.27** | 86.54 | **84.86** | 58.17 | 80.71 |
| **Star Attention** | 91.27 | **88.70** | 83.37 | **74.41** | **84.44** |
- The results show that Star Attention outperforms the other sparse KV methods on average. Furthermore, due to the distributed nature of Star Attention, since there is no inter-block communication during phase 1, the latency at a 128K sequence length is equivalent to that of processing a 64K sequence (32K block + 32K anchor), since each host processes its block in parallel.
- Unlike methods such as MInference, which may require offline analysis to determine optimal sparsity patterns, Star Attention can be applied directly to most pretrained Transformer models without model-specific tuning or preprocessing.
### **4. Evaluation on Comprehensive Benchmarks:**
- In response to the reviewer’s suggestion, we expanded our evaluation to include InfiniteBench—a more comprehensive benchmark suite for long-context performance. As shown in Table B, Star Attention outperforms other sparse baselines across diverse tasks, demonstrating its robustness beyond synthetic datasets like RULER and BABILong.
**Table B: Accuracy on Infinite Bench (Llama-3.1-8B-Instruct)**
| Methods | En. Sum | En. QA | En. MC | En. Dia | Zh. QA | Code. Debug | Math. Find | Retr. PassKey | Retr. Num | Retr. KV | Avg. |
| :----------------- | :-----: | :----: | :----: | :-----: | :----: | :---------: | :--------: | :-----------: | :-------: | :------: | :---: |
| *Full Attn. (Baseline)* | *31.91* | *25.92* | *69.43* | *21.5* | *31.95* | *16.75* | *24.29* | *99.15* | *99.66* | *60* | *48.06* |
| StreamingLLM | 30.15 | 10.15 | 41.05 | 8.5 | 22.38 | 8.63 | 17.71 | 2.71 | 5.93 | 0 | 14.72 |
| MInference | 31.04 | 22 | 63.76 | 14.5 | 28.7 | 5.33 | **27.43** | 56.78 | 77.12 | 14 | 34.07 |
| **Star Attention** | **31.85** | **25.92** | **69** | **22** | **30.37** | **24.37** | 26.29 | **93.22** | **96.27** | **45.8** | **46.51** |
We thank the reviewer again for raising important points. Their comments led us to incorporate broader baselines and more diverse evaluations, which we believe have significantly strengthened the work. We look forward to incorporating these additions more prominently in the camera-ready version. | Summary: This paper presents Star Attention, a novel two - phase block - sparse approximation algorithm for efficient LLM inference over long sequences. The self - attention mechanism in Transformer - based LLMs has quadratic complexity, making long - sequence inference costly and slow. Star Attention addresses this issue by dividing the inference process into two phases. In Phase 1, the context is partitioned into blocks and distributed across multiple hosts. Each host computes local attention within its assigned block, which reduces the attention complexity from quadratic to linear. In Phase 2, the query is broadcast to all hosts, and global attention is computed at a designated query - host by aggregating local attention results. This approach enables the context length to scale linearly with the number of hosts. Experiments on Llama - based models show that Star Attention can achieve up to 11x faster inference speed compared to Ring Attention while maintaining 95 - 100% of the accuracy.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: yes
Experimental Designs Or Analyses: yes
Supplementary Material: no
Relation To Broader Scientific Literature: no
Essential References Not Discussed: no
Other Strengths And Weaknesses: Strengths
1. Star Attention significantly speeds up LLM inference on long sequences. It manages to achieve a remarkable speedup, up to 11 times faster than the baseline in some cases. This is a huge improvement, especially considering the increasing demand for processing long-context data in applications like large-scale document analysis and multi-document summarization. For example, in the experiments with Llama-based models, it clearly outperforms the Ring Attention baseline in terms of inference time.
2. Despite the significant speed improvement, Star Attention can preserve 95-100% of the accuracy of global attention. This means that it doesn't sacrifice much in terms of the model's ability to understand and process the input accurately. Whether it's in simple retrieval tasks or more complex question-answering tasks, it can still provide reliable results.
3. The two-phase design is really smart. By separating context encoding and query encoding, it takes advantage of the characteristics of different parts of the input sequence. The use of anchor blocks in Phase 1 helps to manage the attention spikes and approximate global attention, which is a great way to optimize the attention mechanism. Also, the distributed softmax algorithm in Phase 2 enables efficient global attention computation without excessive communication overhead.
4. It is compatible with most Transformer-based LLMs trained with global attention. This means it can be easily integrated into existing models without the need for complex fine-tuning. This makes it very practical and convenient for researchers and engineers who want to improve the performance of their LLM-based systems.
Weaknesses
1. Although anchor blocks play a crucial role in Star Attention, there are still some aspects that need further exploration. For instance, the exact reason why the anchor block size needs to be equal to the context block size for optimal performance is not fully understood. Also, the relationship between the position and content of the anchor block and the model's performance could be studied more deeply. This lack of understanding may limit the further optimization of the algorithm.
2. In more complex tasks like Multi - Hop Tracing and some types of Question Answering tasks, Star Attention shows a slight decline in performance. These tasks require the model to have a deeper understanding of the context and often need inter - block communication. Since Star Attention lacks effective inter - block communication during context encoding, it struggles to perform as well as in simpler tasks. This means that there are still limitations when applying Star Attention to tasks that demand high - level context comprehension.
3. The performance of Star Attention is highly dependent on the block size. While setting the block size to one-quarter of the total sequence length seems to work well in most cases, using smaller blocks on longer sequences leads to accuracy degradation. This restricts the flexibility of the algorithm in different scenarios. For example, in some real-world applications where the sequence length may vary unpredictably, it may be difficult to choose the optimal block size.
Other Comments Or Suggestions: no
Questions For Authors: see weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 1 | Rebuttal 1:
Rebuttal: We appreciate the reviewer x5MA's constructive feedback. Below, we address the identified weaknesses regarding anchor blocks, performance on complex tasks, and block size dependency:
### **1. Role and Configuration of Anchor Blocks:**
- **Anchor Block Size:** The performance peak when the anchor block size equals the context block size can be attributed to Star Attention’s design constraints. Since Phase 1 uses causal attention (as in decoder-only LLMs), anchor blocks cannot extend beyond the preceding context block without violating causality. Setting the anchor size equal to the context block size represents the maximum usable context while respecting this constraint — effectively bringing Star Attention closer to full attention in terms of receptive field.
- **Anchor Block Content and Position:** As shown in Section 4.1, using the first context block (`c1`) as the anchor yields optimal performance. The rationale is that Phase 2 performs global attention; therefore, Phase 1 context blocks should attend to anchor tokens representative of the initial sequence context that the full model would observe in a global attention scenario. Our analysis in Section 4.1 further indicates that if the anchor block content is fixed to `c1`, variations in the position IDs assigned to these anchor tokens during local attention computation have a minimal impact on overall performance.
### **2. Performance on Complex Tasks:**
- Star Attention, like other sparse attention mechanisms, trades off some accuracy for substantial inference speedups. Across a range of tasks — including question answering (Figure 7) — it retains 95–100% of dense attention performance while delivering substantial inference speedups.
- For tasks requiring complex reasoning and inter-block context (e.g., Multi-Hop Tracing), we acknowledge that the absence of direct cross-block attention in Phase 1 introduces challenges. Despite this, Star Attention maintains up to 93% of the dense attention accuracy on these tasks (Figure 7).
- To assess generalization on more complex, real-world tasks, we evaluated Star Attention on InfiniteBench, a diverse benchmark covering multilingual QA, retrieval, math, code debugging, summarization and more. As shown in Table B, Star Attention matches or closely tracks full attention across all categories and outperforms prior sparse inference baselines by a significant margin.
**Table B: Accuracy on InfiniteBench (Llama-3.1-8B-Instruct)**
| Methods | En. Sum | En. QA | En. MC | En. Dia | Zh. QA | Code. Debug | Math. Find | Retr. PassKey | Retr. Num | Retr. KV | Avg. |
| :----------------- | :-----: | :----: | :----: | :-----: | :----: | :---------: | :--------: | :-----------: | :-------: | :------: | :---: |
| *Full Attn. (Baseline)* | *31.91* | *25.92* | *69.43* | *21.5* | *31.95* | *16.75* | *24.29* | *99.15* | *99.66* | *60* | *48.06* |
| StreamingLLM | 30.15 | 10.15 | 41.05 | 8.5 | 22.38 | 8.63 | 17.71 | 2.71 | 5.93 | 0 | 14.72 |
| MInference | 31.04 | 22 | 63.76 | 14.5 | 28.7 | 5.33 | **27.43** | 56.78 | 77.12 | 14 | 34.07 |
| **Star Attention** | **31.85** | **25.92** | **69** | **22** | **30.37** | **24.37** | 26.29 | **93.22** | **96.27** | **45.8** | **46.51** |
- Improving the performance on tasks requiring deeper inter-block reasoning in the early encoding stages remains an important direction for future work.
### **3. Dependency on Block Size and Flexibility:**
- The dependency on block size reflects a common trade-off in blockwise attention: smaller blocks enable faster inference but may limit context aggregation, especially in longer sequences. Our experiments (Figure 5) show that setting the block size to ~1/4 of the total sequence length achieves a strong balance of accuracy and speed. However, the block size is user-configurable: users can choose smaller blocks to prioritize inference speed, accepting slightly more accuracy degradation.
- For variable-length or real-time inputs, users can set the block size based on the maximum expected sequence length or system constraints. As shown in Figure 6 and Table 5, even with smaller blocks (1/8, 1/16, 1/32 of the sequence), Star Attention retains up to 90% of dense attention performance while delivering up to 17× speedups — allowing users to flexibly tune accuracy vs. latency based on application needs. | null | null | null | null | null | null |
REINFORCE Adversarial Attacks on Large Language Models: An Adaptive, Distributional, and Semantic Objective | Accept (poster) | Summary: The paper demonstrates that existing LLM jailbreak defenses significantly underestimate model vulnerability due to non-adaptive attack objectives. By adopting a reinforcement learning-based approach, adversarial attacks can become more effective and adaptive, posing a greater challenge for safety alignment efforts. The authors suggest that adaptive attack objectives should be the standard for future robustness evaluations of LLMs.
Claims And Evidence: Claim: Existing jailbreak attacks fail to capture true model vulnerability as they do not adapt to model-specific responses.
Evidence: Figure 2 shows that affirmative-response attacks often produce harmless completions, whereas REINFORCE successfully generates harmful responses.
Claim: The proposed method achieves substantially higher ASR than standard attacks.
Evidence: Llama 3 8B: ASR 26% → 68% (Table 1). Circuit-breaker defense: ASR 2% → 22% (Table 3). Further improvement to 61% ASR with better attack seeding.
Claim: The attack optimizes the full response distribution, not just a fixed affirmative phrase.
Evidence: Section 2 formulates the attack as reinforcement learning, optimizing expected harmfulness rather than likelihood maximization.
Claim: REINFORCE can enhance multiple jailbreak attack methods.
Evidence: REINFORCE-GCG and REINFORCE-PGD consistently outperform their baselines (Tables 1 & 2). The sampling strategy adapts dynamically, leading to more effective attacks.
Methods And Evaluation Criteria: The methods and evaluation criteria are well-chosen for the problem, ensuring strong empirical comparisons. However, broader comparisons (e.g., against LLM-generated attacks) and alternative evaluation strategies (e.g., human judges or diverse decoding methods) could strengthen the robustness of the findings.
Theoretical Claims: The paper presents several theoretical claims related to adversarial attacks on LLMs, framed through reinforcement learning (RL) and distributional optimization.
Eq. (1) defines the adversarial objective as maximizing the expected reward over responses.
Eq. (6) establishes equivalence between this optimization and the RL value function.
Eq. (7) applies the policy gradient theorem, showing that optimizing for harmful outputs can be done using REINFORCE.
The theoretical claims are correctly derived and well-supported by standard reinforcement learning principles. While the mathematical formulations are sound, the paper lacks formal guarantees on sample efficiency, judge reliability, and convergence. These aspects could be explored in future work.
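The score-function derivation behind Eq. (7) can be checked numerically in a toy setting (purely illustrative, not the paper's attack): the "policy" below is a softmax over three hypothetical responses, the per-response rewards are invented stand-ins for a harmfulness judge, and the REINFORCE estimate of the gradient of the expected reward is compared against the closed-form gradient $p_i(R_i - \mathbb{E}[R])$.

```python
import math, random

random.seed(0)

# Toy policy: softmax over three candidate "responses" (logits are made up),
# with fixed rewards standing in for a harmfulness judge's scores.
logits = [0.5, -0.2, 0.1]
reward = [0.0, 1.0, 0.3]

exps = [math.exp(z) for z in logits]
p = [e / sum(exps) for e in exps]
expected_r = sum(pi * ri for pi, ri in zip(p, reward))

# Closed-form gradient of E[R] w.r.t. the logits: p_i * (R_i - E[R]).
exact = [pi * (ri - expected_r) for pi, ri in zip(p, reward)]

# REINFORCE estimate: average of R(y) * grad log p(y), where for a softmax
# policy d log p(y) / d logit_i = 1{i = y} - p_i.
K = 200_000
est = [0.0, 0.0, 0.0]
for _ in range(K):
    y = random.choices(range(3), weights=p)[0]
    for i in range(3):
        est[i] += reward[y] * ((1.0 if i == y else 0.0) - p[i])
est = [e / K for e in est]

assert all(abs(a - b) < 5e-3 for a, b in zip(est, exact))
```

The estimator needs only samples from the model plus reward evaluations, which is why the paper's objective can be optimized without a differentiable judge-to-token path.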
Experimental Designs Or Analyses: The paper presents a well-structured experimental design to evaluate the effectiveness of the proposed REINFORCE-based adversarial attack. The paper benchmarks against state-of-the-art attack methods (GCG and PGD) using the standard affirmative-response objective. It evaluates both non-defended and defended LLMs (e.g., Llama 3 with circuit breakers). While comparisons with existing attacks are robust, no experiments compare against non-gradient-based attacks, such as those using generative models (e.g., adversarial LLM-generated prompts). The reliance on greedy decoding alone ignores the stochastic nature of LLM responses.
Supplementary Material: Skimmed the appendix.
Relation To Broader Scientific Literature: See above
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: See above
Other Comments Or Suggestions: N/A
Questions For Authors: See above
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for the suggestion and plan to investigate theoretical guarantees in future work.
## Alternative evaluation strategies
While our objective might utilize false positives of the judge instead of triggering actually harmful behavior, such cases clearly do not appear systematically in our experiments. The reader can check this by going through the examples provided (Appendix D and E). We will extend our statement in the right column of lines 296-298 to better convey this point and revise the limitations (Section 6, lines 386-394, right column).
## Further baselines
Due to the policy-gradient approach, the evaluation of our objective focuses on gradient-based attacks. Additionally, HarmBench's results show that GCG is superior in terms of ASR to non-gradient-based attacks that, e.g., use a generative model. To the best of our knowledge, no other attack has been shown to be considerably stronger than GCG on HarmBench, which we consider to be a state-of-the-art jailbreak benchmark. Hence, we did not include further comparisons in the submission since these baselines would certainly perform worse than GCG with affirmative objective. Nevertheless, we consider following the suggestion in a revised version of the paper.
## Greedy decoding
We agree that the reliance on greedy decoding has limitations (e.g., see Scholten et al. 2024, as referenced in our paper). However, the greedy evaluation is the default (GCG, HarmBench, ...), and except for a handful of exceptions, virtually all papers studying LLM jailbreaks rely on the greedy evaluation. Hence, we have decided to stick to the convention of evaluating the greedy generation for comparability to other works. We will consider adding an experiment in a revised version of the paper studying this distributional perspective.
We kindly ask for clarification on any remaining concerns. | Summary: This paper addresses the challenge of jailbreaking large language models (LLMs) – i.e. crafting adversarial prompts that make an aligned (safety-trained) model produce disallowed or harmful content. The authors point out a key limitation in current adversarial prompt attacks: they typically optimize a static objective such as maximizing the likelihood of a particular “affirmative response” (a fixed harmful reply prefix). This static objective doesn’t adapt to the attacked model’s actual behavior and treats the model output as if it were a single target sequence. As a result, prior attacks often succeed in forcing the model to begin with a harmful-looking phrase, yet the model may still refuse or derail thereafter (yielding a benign completion). Such non-adaptive attacks can overestimate the model’s robustness because a high likelihood of a fixed trigger phrase doesn’t guarantee a genuinely harmful outcome.
The paper proposes REINFORCE Adversarial Attacks, a novel adaptive, distributional, and semantic optimization objective for generating adversarial prompts. Instead of focusing on one predetermined “bad” answer, their method explicitly optimizes the expected harmfulness of the model’s entire output distribution. They cast adversarial prompt search as a reinforcement learning (RL) problem: the prompt is treated as a policy (initial state) that influences the distribution of outputs, and the goal is to maximize a reward measuring harmful content in the output.
The authors integrate their RL-based objective into two state-of-the-art jailbreak attack algorithms: Greedy Coordinate Gradient (GCG) and Projected Gradient Descent (PGD) attacks. Empirical results show that the proposed REINFORCE-based objective leads to substantially higher attack success rates (ASR) compared to the conventional static objective.
Claims And Evidence: Claim 1: Static “affirmative response” objectives are flawed and non-adaptive, leading to overly optimistic estimates of robustness. The authors assert that existing attacks which maximize the likelihood of a fixed target response do not adequately test a model’s true vulnerabilities. They support this claim with a compelling anecdotal example (Fig. 2 and accompanying text) where the baseline attack indeed finds a prompt that makes the model start with the desired forbidden phrase (“Sure, here’s how to…”) yet the model’s continuation is not actually harmful.
Claim 2: The proposed REINFORCE-based objective is adaptive, distributional, and optimizes the true probability of harmful outputs. The authors devote Section 2–3 to formalizing this claim. They treat finding an adversarial prompt x̃ as maximizing the expected reward $E_{y\sim P_\theta(\cdot|x̃)}[ \text{Reward}(y,x̃) ]$.
Claim 3: The new attack objective yields substantially higher attack success rates (ASR) on current LLMs, revealing greater vulnerability. This is an empirical claim, and the paper provides strong experimental evidence to support it. In Tables 1–2, results on five different models consistently show large ASR gains with the REINFORCE objective compared to the baseline “Affirmative” objective.
On the whole, the paper’s claims are well-supported. The combination of theoretical justification, quantitative results, and qualitative examples makes for a convincing argument. The only slight gap is that the paper doesn’t deeply analyze why one model (Llama 2 7B under PGD) didn’t improve.
Methods And Evaluation Criteria: The experimental methodology is solid. The use of a strong benchmark (HarmBench), multiple models, and direct comparison to known attack baselines makes the results meaningful. The authors were careful to keep comparisons fair (same hyperparameters, etc.) and to document any deviations. The evaluation criteria (ASR via a judge model) is appropriate for the task and was applied uniformly. One might suggest minor improvements, like including more prompts or multiple random restarts to measure variance, but given the resource-intensive nature of these attacks, the choices made are quite reasonable. The evidence provided – in the form of tables and examples – is directly tied to the stated methods and metrics. Overall, the methods and evaluation are well-aligned with the problem of jailbreaking LLMs, and they credibly demonstrate the value of the proposed approach.
Theoretical Claims: This paper is mainly empirical and does not provide theoretical results for the main algorithms (PGD and GCD) proposed.
Experimental Designs Or Analyses: The experimental design is thorough and the analyses are generally sound, successfully supporting the paper’s conclusions.
1. Model Selection and Generality: The authors tested their attacks on a diverse set of models (five different LLMs plus a defended variant). This breadth is commendable as it demonstrates the attack’s robustness across model families and sizes.
2. Consistency and Repetition: For each model, they evaluate on 50 prompts with both the baseline and new attack, ensuring a direct side-by-side comparison. Because each prompt attack is quite involved, they didn’t do repeated trials on the same prompt (which could measure stochastic variance). However, given the large improvements, it’s unlikely that variance would overturn their conclusions.
3. Outlier Analysis: The only case where the new attack did not outperform was PGD on Llama2 7B (18% vs 18% ASR). The authors highlight this as “the only exception”. They don’t provide a deep analysis in the paper about why it remained unchanged. It raises curiosity: Llama2 7B did see improvement under GCG (38→62%), so why would PGD not improve?
4. Ablation and Sensitivity: The paper does not present extensive ablation studies on the components of their approach (like the effect of sample size K, or the biased sampling strategy vs purely random sampling, etc.). They mention in Appendix C some adjustments (e.g. not using the random sample for candidate selection in GCG to save time), which implies they tried variants for efficiency. This suggests they did some hyperparameter search for things like the number of samples K, the inclusion of $y_{\text{seed}}$, etc., on separate prompts. It would be interesting to see those ablation results, but they are not included (likely due to space or because they felt it was straightforward). The absence of detailed ablation does not critically harm the paper—the main narrative is well-supported by the straightforward baseline vs. new comparisons. However, it leaves some questions unanswered, like how important is the biased sampler or how sensitive is the attack to the initial seed prompt. The authors do give one piece of analysis in the circuit-breaker experiment: by changing $y_{\text{seed}}$ from the affirmative phrase to a more harmful one (from a successful base model attack), they dramatically improved results (22%→61%).
Supplementary Material: Overall, the supplementary materials provided are comprehensive and helpful. They include additional examples, technical details, and resources that would have cluttered the main paper but are valuable for a deep understanding.
Relation To Broader Scientific Literature: This paper sits at the intersection of adversarial machine learning, natural language generation, and AI safety. Its contributions should be viewed in the context of several lines of prior work: LLM jailbreaks/adversarial prompts, adversarial training/evaluation in NLP, and controlled text generation via optimization. The authors do a good job situating their work among recent studies, though there are a few older relevant works that could also be acknowledged.
1. Advances over Prior Jailbreak Attacks: The authors reference numerous recent papers on jailbreaking or prompt attacks for LLMs, including Zou et al. (2023), Perez et al. (2022), Wen et al. (2023), Liu et al. (2024), Zhu et al. (2023), Geisler et al. (2024), Guo et al. (2024). These works collectively indicate a surge of interest in automatically finding prompts that cause misbehavior. For instance, Zou et al. (2023) introduced the GCG method used as a baseline here, and Geisler et al. (2024) introduced the PGD attack – both are cited and directly built upon.
2. Connection to Adversarial ML (Robustness Evaluation): The concept of adaptive vs non-adaptive attacks is well-known in the adversarial ML literature for classification models. The authors reference Carlini & Wagner (2017), who famously pointed out that defenses must be evaluated against adaptive attacks (where the attacker knows the defense), otherwise one can get a false sense of security. However, the connection to older adversarial attacks (like those in computer vision) is not discussed.
3. Comparison to Prior Results: The paper doesn’t explicitly compare its results to prior jailbreak success rates from other papers (except the ones they re-implemented like GCG, PGD).
4. Connection to RLHF: the REINFORCE algorithm is also used in RLHF for LLMs. However, these works are not mentioned in the paper.
Essential References Not Discussed: See above.
Other Strengths And Weaknesses: 1. Clarity of Problem Statement: The paper clearly identifies a concrete problem – the static nature of existing jailbreak objectives – and explains it in intuitive terms (the example of the model just because it said “Sure, …” doesn’t mean it actually gave a harmful answer). The introduction uses simple language to explain why non-adaptive attacks can be misleading.
2. Computational Intensity and Practicality: One weakness is that the method, as presented, is computationally expensive and requires white-box access.
3. Lack of Defense or Mitigation Discussion: The paper is focused on attacks and doesn’t propose any defenses or mitigations.
4. Missing Discussion on Judge Robustness: As mentioned, the method’s success hinges on the judge. If the model found a “trick” to say something harmful in a way the judge doesn’t recognize, the attack might succeed from a human perspective but not be counted. Or vice versa, it might fool the judge into thinking a safe response is harmful (less likely, but possible). The authors did not report any instances of misclassification by the judge (which suggests the judge did well). A weakness is that they did not explicitly validate the judge’s decisions with human oversight (perhaps assuming HarmBench did that).
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: ## Mitigation
The robustness literature suggests that only systematic methods like adversarial training actually help. For adversarial training, the attack effectivity is key for actual improvements (e.g., see [Kolter and Madry, 2018](https://adversarial-ml-tutorial.org/adversarial_training/) arguments via Danskin's theorem). Hence, powerful attacks should translate to powerful adversarial training and an effective mitigation strategy. We will add a discussion.
## More prompts
Following the suggestion, we report results on 200 prompts (instead of 50 prompts):
For GCG:
| | Affirmative | REINFORCE *(ours)* |
|---|:---:|:---:|
| Gemma 1.1 2B | 0.57 | **0.88** |
| Gemma 1.1 7B | 0.63 | **0.87** |
| Llama 2 7B | 0.32 | **0.56** |
| Llama 3 8B | 0.35 | **0.73** |
| Vicuna 1.5 7B | 0.86 | **0.95** |
For PGD:
| | Affirmative | REINFORCE *(ours)* |
|---|:---:|:---:|
| Gemma 1.1 2B | 0.56 | **0.82** |
| Gemma 1.1 7B | 0.54 | **0.84** |
| Llama 2 7B | 0.17 | **0.22** |
| Llama 3 8B | 0.57 | **0.69** |
| Vicuna 1.5 7B | 0.87 | **0.94** |
## Confidence intervals
In relation to the suggestion, we will include Clopper-Pearson intervals to show statistical significance. For example, for GCG on Gemma 1.1 2B, the 90%-confidence intervals are [0.49, 0.63] (affirmative) vs. [0.82, 0.92] (REINFORCE).
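For concreteness, such an interval can be reproduced with a short stdlib-only sketch (illustrative, not the authors' evaluation code): Clopper-Pearson bounds found by bisection on the exact binomial CDF rather than via a beta quantile, applied to the 200-prompt GCG counts for Gemma 1.1 2B (0.57 vs. 0.88 ASR).

```python
from math import comb

def binom_cdf(k, n, p):
    # P(X <= k) for X ~ Binomial(n, p), exact via math.comb
    return sum(comb(n, i) * p**i - 0 if False else comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.10):
    # Exact two-sided (1 - alpha) confidence interval for a binomial
    # proportion; the CDF is monotonically decreasing in p, so bisect.
    def solve(f, target):
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if f(mid) > target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    lower = 0.0 if k == 0 else solve(lambda p: binom_cdf(k - 1, n, p), 1 - alpha / 2)
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p), alpha / 2)
    return lower, upper

# 90% intervals for GCG on Gemma 1.1 2B, 200 prompts:
aff = clopper_pearson(114, 200)   # affirmative objective, ASR 0.57
rnf = clopper_pearson(176, 200)   # REINFORCE objective, ASR 0.88
assert aff[1] < rnf[0]            # the intervals do not overlap
```

The non-overlapping intervals support the statistical-significance claim above; small differences from the quoted endpoints can arise from rounding of the reported ASR.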
## Ablation
We did not include ablations that we thought were not insightful. For example, excluding the random sample for candidate selection did not impact the performance much.
Regarding the sampling, we refer to the ablation study in Table 4. We did not experiment with more samples than we have in our experiments, except for including the initial greedy response, which did not help.
To study the impact of $\mathbf{y}_{\text{seed}}$, we ran experiments using the concurrent/very recent AdvPrefix (Zhu et al., 2024) using GCG and reporting ASR@512:
| | Affirmative | Affirmative | REINFORCE *(ours)* | REINFORCE *(ours)* | REINFORCE *(ours)* |
|---|:---:|:---:|:---:|:---:|:---:|
| $\mathbf{y}_{\text{seed}}=$ | $\mathbf{y}_{\text{affirmative}}$ | $\mathbf{y}_{\text{advprefix}}$ | $\mathbf{y}_{\text{affirmative}}$ | $\mathbf{y}_{\text{advprefix}}$ | $\mathbf{y}_{\text{history}}$ |
| Llama 3 8B | 0.35 | 0.70 | 0.73 | **0.81** | - |
| + Circuit breaker | 0.02 | 0.14 | 0.23 | 0.48 | **0.50** |
$\mathbf{y}\_{\text{affirmative}}$ is HarmBench's target, and $\mathbf{y}\_{\text{history}}$ the generation of a previously successful attack on Llama 3 8B w/o defense. Having a better seed $\mathbf{y}\_{\text{seed}}$ clearly helps. However, our REINFORCE objective further reinforces attack efficacy.
## Further citations
We are happy to incorporate further references.
Due to the vast body of robustness literature, we would appreciate further pointers for particularly relevant "older" works.
We do reference Ahmadian et al., 2024 (REINFORCE for RLHF) but are happy about further pointers to relevant works.
## Comparison to Prior Results
Due to the policy-gradient approach, the evaluation focuses on gradient-based attacks. Additionally, HarmBench's results show that GCG is superior in terms of ASR to non-gradient-based attacks that, e.g., use a generative model like PAIR. To the best of our knowledge, no other attack has been shown to be considerably stronger than GCG on HarmBench (state-of-the-art jailbreak benchmark). Hence, these baselines would even perform worse than GCG with affirmative objective.
## Computational cost
While each attack step is more expensive, our REINFORCE-GCG obtains a better ASR-runtime tradeoff (e.g., Figure 3). Hence, our REINFORCE-GCG either achieves the same ASR in less time or obtains a higher ASR, given equal compute. We think it is a promising direction for future work to further study techniques for lowering the computational cost. Some ad hoc strategies could be speculative decoding (Leviathan et al., 2023) or tree-based attention (Cai et al., 2024) to avoid duplicate computations. For a better overview, we will add detailed breakdowns of the time cost of REINFORCE-GCG.
## White-box access
In Figure 4, we investigate an application of our objective *without using gradient information*. Similarly to other works (e.g., Andriushchenko et al.), we instead apply uniformly random perturbations and then select the best candidate. From the dashed blue bar with the solid blue bar, it follows that also attacks w/o gradient information benefit from our objective.
## Judge Robustness
We agree that "reward hacking" is one of the potential drawbacks. Thus, we include random responses that are deemed harmful in Appendix D and E. While our objective might utilize false positives of the judge instead of triggering actually harmful behavior, such cases clearly do not appear systematically in our experiments. The reader can check this by going through the examples provided. We will extend our paper in that regard.
We kindly ask for clarification on any remaining concerns. | Summary: The paper "REINFORCE Adversarial Attacks on Large Language Models: An Adaptive, Distributional, and Semantic Objective" presents a novel approach for adversarial attacks on large language models (LLMs). Traditional optimization-based adversarial attacks rely on maximizing the likelihood of a predefined affirmative response, which often does not translate to truly harmful completions. The authors introduce an adaptive and semantic optimization approach that leverages the REINFORCE policy-gradient method. This approach optimizes over the distribution of model responses rather than relying on a static target. The proposed method significantly improves attack success rates for jailbreak algorithms like Greedy Coordinate Gradient (GCG) and Projected Gradient Descent (PGD), demonstrating its efficacy in evading safety mechanisms in LLMs. The paper provides extensive empirical validation, showing that the REINFORCE objective enhances attack success rates, including against the circuit breaker defense in Llama 3, increasing the ASR from 2% to 50%.
Claims And Evidence: The primary claims made by the paper are:
The affirmative response objective is inconsistent and can lead to overestimated robustness.
The proposed REINFORCE objective is adaptive, distributional, and semantic, making it more effective for adversarial attacks.
The method significantly improves attack success rates for existing jailbreak algorithms (GCG and PGD).
The REINFORCE objective successfully bypasses state-of-the-art safety mechanisms, including circuit breakers in Llama 3.
These claims are well-supported by empirical results. The authors provide detailed comparisons showing that their method consistently outperforms baseline attacks across various LLMs. The increase in ASR for models such as Llama 3 8B (from 35% to 73%) and with the circuit breaker defense (from 2% to 50%) strongly supports their assertions.
Methods And Evaluation Criteria: The proposed method uses reinforcement learning (specifically the REINFORCE algorithm) to optimize adversarial prompt crafting. The evaluation criteria include:
Attack Success Rate (ASR), measured across multiple LLMs.
The effectiveness of attacks against standard and advanced defenses.
Comparison with state-of-the-art jailbreak methods (GCG and PGD).
Ablation studies to analyze the impact of different sampling strategies.
The chosen evaluation benchmarks (e.g., HarmBench) and experimental setups are appropriate for assessing the effectiveness of adversarial attacks.
Theoretical Claims: The paper presents a theoretical formulation of adversarial attacks on generative models and derives an attack objective using REINFORCE. The correctness of the mathematical formulations and their application to reinforcement learning are well-grounded in existing literature. The authors reference foundational works (e.g., Williams, 1992) to support their approach.
Experimental Designs Or Analyses: The experimental design is robust, with evaluations conducted on multiple LLMs, including Llama 2, Llama 3, Gemma, and Vicuna. The use of diverse benchmarks and comparative analysis with existing jailbreak techniques strengthens the findings. However, some areas, such as sensitivity to hyperparameters and different attack settings, could be explored further.
Supplementary Material: The supplementary material includes additional experimental details, ablation studies, and example attack cases. These materials enhance the reproducibility and credibility of the work.
Relation To Broader Scientific Literature: The work aligns with existing research in adversarial attacks on LLMs and extends previous methods by incorporating reinforcement learning-based optimization. It builds upon prior works in adversarial robustness, jailbreak attacks, and policy-gradient methods. The findings are relevant to both security researchers and those working on LLM alignment and safety.
Essential References Not Discussed: The paper cites most relevant works in adversarial robustness and jailbreak attacks. However, additional discussion on interpretability and mitigation strategies for adversarial prompts could further contextualize the contributions.
Other Strengths And Weaknesses: Strengths:
Introduces an innovative, theoretically grounded attack objective.
Demonstrates significant improvements over existing jailbreak methods.
Provides extensive empirical validation across multiple models and defenses.
Strong methodological rigor with reinforcement learning integration.
Weaknesses:
The reliance on LLM-as-a-judge evaluations may introduce biases in measuring attack success.
Limited discussion on potential mitigations for adversarial attacks.
Computational overhead for the REINFORCE optimization process.
Other Comments Or Suggestions: It would be beneficial to explore the implications of these attacks on commercial models with additional safety guardrails.
Further discussion on ethical considerations and responsible disclosure of adversarial methods would strengthen the paper.
Future work could examine real-world deployment scenarios and adaptive defenses against REINFORCE-based attacks.
Questions For Authors: How does the computational cost of REINFORCE-based attacks compare to standard jailbreak methods in real-world scenarios?
Have you tested the method on closed-source models like OpenAI GPT-4 or Claude to assess generalizability?
How sensitive is the attack success rate to hyperparameter tuning in REINFORCE optimization?
What countermeasures do you propose for mitigating the effectiveness of your attack method?
Could the REINFORCE framework be adapted to enhance LLM safety rather than bypassing it?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and the numerous suggestions! We will address the points made in a revised version of the paper. Next, we elaborate on some of the points and answer the questions.
## The reliance on LLM-as-a-judge evaluations may introduce biases in measuring attack success.
While our objective might utilize false positives of the judge instead of triggering actually harmful behavior, such cases clearly do not appear systematically in our experiments. The reader can check this by going through the examples provided (Appendix D and E). We will extend our statement in the right column of lines 296-298 to better convey this point.
## How does the computational cost of REINFORCE-based attacks compare to standard jailbreak methods in real-world scenarios?
While each attack step is more expensive, our REINFORCE-GCG obtains a better ASR-runtime tradeoff (e.g., Figure 3). Hence, our REINFORCE-GCG either achieves the same ASR in less time or obtains a higher ASR when given equal compute. We think it is a promising direction for future work to further study techniques for lowering the computational cost. Some ad hoc strategies could be speculative decoding (Leviathan et al., 2023) or tree-based attention (Cai et al., 2024) to avoid duplicate computations. For a better overview, we will add detailed breakdowns of the time cost of REINFORCE-GCG to a revised version of the paper.
## Have you tested the method on closed-source models like OpenAI GPT-4 or Claude to assess generalizability?
We did not investigate attacking closed-source models due to the lack of resources. We leave such studies open for future work.
## How sensitive is the attack success rate to hyperparameter tuning in REINFORCE optimization?
We study the most critical hyperparameters in Table 4, namely the samples used. Beyond that, we did not particularly tune other hyperparameters since the ASR was not very sensitive to changes in them.
## What countermeasures do you propose for mitigating the effectiveness of your attack method? & Could the REINFORCE framework be adapted to enhance LLM safety rather than bypassing it?
From the vast literature on adversarial robustness, there is limited hope that vulnerabilities can be effectively mitigated beyond systematic methods like adversarial training. For adversarial training, attack effectiveness is key to actually improving robustness (e.g., see [Kolter and Madry, 2018](https://adversarial-ml-tutorial.org/adversarial_training/) arguments via Danskin's theorem). Hence, powerful attacks should translate to powerful adversarial training and an effective mitigation strategy. We will add such a discussion in a revised version of the paper.
Claims And Evidence: The claims are supported by convincing evidence.
Methods And Evaluation Criteria: The proposed method directly addresses the identified limitation of the affirmative attack objective having blind spots. Both the approach and the analysis are well-aligned with the problem, making the methodology and evaluation criteria appropriate for the task.
Theoretical Claims: I have reviewed the mathematical formulation at a high level, and it appears correct. However, reinforcement learning formulations are not my area of expertise.
Experimental Designs Or Analyses: Experiment designs seem valid and sound.
Supplementary Material: No.
Relation To Broader Scientific Literature: The paper proposes a modification to the attack objective for existing gradient-based jailbreak attacks such as GCG and PGD. While recent work in this area has largely focused on enhancing attacks by adding features (e.g., improving stealth [1]) or reducing computational costs by eliminating gradient requirements [2], this method takes a different approach. It proposes a direct improvement for attack success rate by refining the optimization objective itself.
Essential References Not Discussed: There are existing methods that optimize to avoid rejection responses (e.g., "I'm sorry, I cannot answer") [3], rather than directly optimizing for affirmative responses. In relation to Figure 1, these approaches cover a portion of the shaded grey area and should be discussed to provide a more comprehensive comparison of optimization objectives in adversarial attacks.
Other Strengths And Weaknesses: Strengths:
- Clear writing that effectively communicates the methodology and findings.
- Strong mathematical foundation, providing a well-supported theoretical basis for the approach.
- Convincing experimental results demonstrating the effectiveness of the proposed method.
Weaknesses:
- Incomplete experimental settings: One key application of GCG is generating a universal adversarial suffix—does the proposed method reduce computational cost or improve attack success rate (ASR) in this setting? Additionally, while less critical, evaluating the transferability of suffixes across models would add further insight.
- Lack of ablation studies: How was the clamping value of the seed determined? An ablation study examining its impact would strengthen the empirical analysis.
- Limited comparison with recent attacks: While the experiments demonstrate improvements over GCG and PGD, a broader contextualization against more recent jailbreak attacks (e.g., [1,2,3], doesn't have to be these specifically) would provide a clearer picture of where this method stands in jailbreak performance.
References:
[1] Liu, Xiaogeng, et al. "Autodan: Generating stealthy jailbreak prompts on aligned large language models." arXiv preprint arXiv:2310.04451 (2023).
[2] Paulus, Anselm, et al. "Advprompter: Fast adaptive adversarial prompting for llms." arXiv preprint arXiv:2404.16873 (2024).
[3] Chao, Patrick, et al. "Jailbreaking black box large language models in twenty queries." arXiv preprint arXiv:2310.08419 (2023).
Other Comments Or Suggestions: No.
Questions For Authors: Q1: Given that computational cost is a major limitation of gradient-based methods, how could this approach be incorporated into a non-gradient-based method like [3]? A discussion on potential adaptations or extensions would be valuable in understanding the broader applicability of this method.
Ethical Review Flag: Flag this paper for an ethics review.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for the thorough feedback!
## Existing methods that optimize to avoid rejections
We thank the reviewer for pointing out works that avoid rejections. While we have already referenced the mentioned work, we have not explicitly discussed this alternative objective. In a revised version of the manuscript, we will include this in our discussions.
## Does the proposed method reduce computational cost or improve attack success rate (ASR) in this setting?
Our REINFORCE-GCG obtains a better ASR-runtime tradeoff (e.g., Figure 3). Hence, our REINFORCE-GCG either achieves the same ASR in less time or obtains a higher ASR, given equal compute resources.
## Lack of ablation studies: How was the clamping value of the seed determined?
We observed that the judge often evaluates an affirmative response as non-harmful. However, to provide guidance to the attack and to have the regular affirmative objective as a special case of our objective, we decided to clamp it to some small constant.
As long as the LLM generates benign responses, the seed will dominate the guidance towards harmful behavior. Once the LLM generates harmful responses, the actual harmful responses dominate. This is especially true due to the rather binary behavior of the HarmBench judge (usually either returns ~0 or ~1). Consequently, the exact value is of minor importance, and we did not include an ablation since we thought it was not interesting enough.
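The clamping behavior described here can be sketched as follows. This is a minimal illustration, not the paper's exact estimator: the `clamp` value and the normalized weighting scheme are assumptions made for the sketch.

```python
def reinforce_weights(judge_scores, seed_index, clamp=0.05):
    # Hypothetical sketch: weight each sampled response by its judge reward,
    # clamping the affirmative seed's reward from below. While all responses
    # are benign (rewards ~0), the clamped seed dominates the guidance; once
    # a genuinely harmful response scores ~1, that response dominates instead.
    scores = list(judge_scores)
    scores[seed_index] = max(scores[seed_index], clamp)
    total = sum(scores) or 1.0  # guard against an all-zero corner case
    return [s / total for s in scores]
```

For instance, with all-benign rewards `[0, 0, 0]` and the seed at index 0, all weight collapses onto the seed; once another response scores near 1.0, it receives the largest weight, matching the "seed dominates until harmful responses appear" behavior described above.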
## Limited comparison with recent attacks
Due to the policy-gradient approach, the evaluation of our objective focuses on gradient-based attacks. Additionally, HarmBench's results show that GCG is superior to attacks [1,3] in terms of ASR. To the best of our knowledge, no other attack has been shown to be considerably stronger than GCG on HarmBench, which we consider to be a state-of-the-art jailbreak benchmark. Hence, we did not include further comparisons in the submission since these baselines would certainly perform worse than GCG with affirmative objective. Nevertheless, we consider following the suggestion in a revised version of the paper.
## Computational cost of gradient
It is somewhat of a misconception that the gradient calculation is costly (in a GCG-style attack). Recall that GCG does one forward+backward pass to generate 512 mutations based on the gradient information. Thereafter, the cross-entropy loss w.r.t. the affirmative objective is calculated for all 512 mutations/candidates to determine the best mutation. Thus, the gradient calculation usually accounts for well below 5% of the total runtime.
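The cost structure described above can be illustrated with a toy stand-in for the LM loss. All names, the quadratic surrogate loss, and the constants below are illustrative assumptions, not GCG's actual implementation; the point is that step (1) is a single gradient computation while step (3) performs one loss evaluation per candidate.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, SUFFIX_LEN, NUM_CANDIDATES, TOP_K = 50, 8, 512, 16

# Toy stand-in for the target LM's loss: quadratic in the one-hot suffix.
W = rng.normal(size=SUFFIX_LEN * VOCAB)

def loss(one_hot):                       # one_hot: (SUFFIX_LEN, VOCAB)
    return float((one_hot.ravel() @ W) ** 2)

def grad(one_hot):                       # analytic gradient of the toy loss
    return (2.0 * (one_hot.ravel() @ W) * W).reshape(SUFFIX_LEN, VOCAB)

suffix = rng.integers(0, VOCAB, size=SUFFIX_LEN)

# (1) ONE gradient computation (forward + backward in the real attack).
g = grad(np.eye(VOCAB)[suffix])

# (2) Cheap: sample NUM_CANDIDATES single-token mutations from the top-k
#     gradient coordinates (tokens expected to lower the loss).
candidates = []
for _ in range(NUM_CANDIDATES):
    pos = rng.integers(SUFFIX_LEN)
    tok = rng.choice(np.argsort(g[pos])[:TOP_K])
    cand = suffix.copy()
    cand[pos] = tok
    candidates.append(cand)

# (3) Expensive: one loss evaluation (a forward pass) per candidate --
#     this, not the single gradient pass, dominates the runtime.
losses = [loss(np.eye(VOCAB)[c]) for c in candidates]
best = candidates[int(np.argmin(losses))]
```

With a real LLM, step (3) is 512 forward passes versus the single forward+backward pass of step (1), which is why the gradient itself is a small fraction of the total cost.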
## Usage of our objective is non-gradient-based methods
In Figure 4, we investigate an application of our objective *without using gradient information*. Similarly to other works (e.g., Andriushchenko et al.), we instead apply uniformly random perturbations and then select the best candidate. From the comparison of the dashed blue bar with the solid blue bar, it is clear that other optimization-based approaches that do not use gradient information would benefit from our objective. We will discuss this more prominently in a revised version of our paper. | null | null | null | null | null | null |
Componential Prompt-Knowledge Alignment for Domain Incremental Learning | Accept (poster) | Summary: Domain Incremental Learning (DIL) is crucial for processing data across different domains while maintaining previously acquired knowledge, but current prompt-based methods suffer from misalignment issues when integrating knowledge from different domains. The authors identify that this problem stems from random positioning of knowledge components within prompts, which can lead to interference when irrelevant components are combined. To address this, they introduce KA-Prompt, a novel method that focuses on component-aware prompt-knowledge alignment during the training process. The approach works in two phases: first establishing alignment between new and old prompts through initial structure configuration, and then preserving this alignment through dynamic identification of relevant prompts and adaptive consistency constraints. Through extensive testing on DIL benchmarks, KA-Prompt demonstrates significant improvements over existing methods, showing the effectiveness of their component-aligned approach.
## Update After Rebuttal
The experiments provided in the rebuttal demonstrate that reusable knowledge mining can capture richer and more accurate semantic representations. However, the claimed phenomenon that “the semantic partial knowledge of objects in the new prompts is continuously reinforced during incremental learning” does not appear to have been demonstrated. Moreover, the contribution of the proposed prompt fusion mechanism is ambiguous, since the individual prompts are already superior. Therefore, I decided to keep my score.
Claims And Evidence: Yes. This paper randomly shuffles different components of prompt during prompt fusion, and the resulting fluctuations demonstrate that the previous fusion method did not achieve knowledge alignment.
Methods And Evaluation Criteria: The proposed method is meaningful for the domain incremental learning it focuses on. The paper provides evidence that KA-Prompt enhances the ability to integrate cross-domain knowledge in continuous learning. However, further analysis is needed regarding the mitigation of knowledge mismatch in the prompt component.
Theoretical Claims: This article does not involve theoretical proofs.
Experimental Designs Or Analyses: The authors conducted experiments on four benchmark datasets and compared against state-of-the-art prompt-based incremental learning methods. The overall experimental design is sound.
Supplementary Material: Yes. Supplementary materials are mainly related codes.
Relation To Broader Scientific Literature: This paper improves on previous work in two ways:
1. It optimizes the initialization mechanism of the newly introduced prompt. Instead of directly using the prompt obtained from the previous task as the initialization, a suitable prompt is searched for among the existing prompts to initialize the new prompt.
2. Different from previous work that directly merges the components in the corresponding positions in the prompt, this article aligns knowledge before merging, thereby reducing the ineffective merging of unrelated knowledge.
Essential References Not Discussed: In my understanding, the important references have already been discussed.
Other Strengths And Weaknesses: Strengths:
1. The figures and tables in this paper are clearly represented.
2. The paper provides a detailed and clear description of the limitations of the previous work and the motivation for the proposed methodology.
Weaknesses:
1. Insufficient analysis of experiments.
2. Some parts of the proposed method lack definition.
Other Comments Or Suggestions: None.
Questions For Authors: I have some concerns about this paper:
1. As shown in Figure 3, KA-Prompt needs to maintain a prompt pool in addition to the prompt set. Is this a requirement specific to KA-Prompt or is it already present in the baseline method? The additional training overhead associated with prompt pools should be discussed.
2. Does the aligned prompt get updated during training? If so will these updates have an effect on the prompt parameter in Reusable Prompt Memory?
3. Following up on the previous question, is the purpose of Historical Prompt Online Aligning to align the new prompt to the historical prompt pool, or vice versa?
4. The ablation study does not seem to indicate on which dataset it was performed. Ablation studies on more datasets can further demonstrate the contribution of the proposed module.
5. Can the misalignment illustrated in Figure 2 and the optimization resulting from the proposed method be further analyzed in the form of a heat map visualization?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback and recognition. Below are our responses, which we hope effectively address your concerns.
**Q1-1: Requirement of maintaining a prompt pool.**
(1) Maintaining a prompt pool is not a specific requirement of KA-Prompt but is already present in the baseline method.
(2) A prompt pool is fundamental to prompt-based continual learning methods, enabling the retention of historical knowledge with negligible storage overhead. All compared state-of-the-art prompt-based approaches, including L2P, S-Prompts, DualPrompt, CODA-Prompt, CPrompt, and C-Prompt, incorporate a prompt pool.
**Q1-2: Additional training overhead**
(1) Maintaining a prompt pool does not introduce additional training overhead since stored prompt sets are frozen after being added and are not further optimized.
(2) In our KA-Prompt, frozen prompts from the pool are selected to guide cross-domain componential knowledge alignment in the Historical Prompt Online Aligning (HPOA) branch. This branch shares nearly the same computational cost as the New Prompt Training branch, introducing only an additional forward pass. However, the HPOA branch contributes a **2.28%** improvement when applied to the baseline model on the ImageNet-R benchmark, as shown in Fig. 5 of our main paper.
**Q2: Aligned prompt update**
(1) The aligned prompts are updated during training to encode knowledge from new domains.
(2) The prompt parameters in the Reusable Prompt Memory remain frozen during the aligned prompt update, ensuring they are unaffected during training.
**Q3: Purpose of Historical Prompt Online Aligning module**
The Historical Prompt Online Aligning (HPOA) module aims to align the new prompts to parts of historical prompts in the prompt pool. Specifically, as new prompts are updated, HPOA dynamically matches them to the closest historical prompts, assigns fusion weights, and fuses them. The fused prompts are then fed into the ViT model, ensuring cross-domain componential alignment throughout training.
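A minimal sketch of this matching-and-fusion idea, assuming cosine similarity for matching and a softmax over similarities for the fusion weights (neither is confirmed to be KA-Prompt's actual formulation; all names and shapes are illustrative):

```python
import numpy as np

def align_and_fuse(new_prompts, prompt_pool, temperature=0.1):
    """Toy sketch: match each new prompt token to the frozen historical
    prompts via cosine similarity, turn the similarities into fusion
    weights with a softmax, and return the weighted fusion."""
    def normalize(x):
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

    sim = normalize(new_prompts) @ normalize(prompt_pool).T  # (N_new, N_hist)
    w = np.exp(sim / temperature)
    w /= w.sum(axis=1, keepdims=True)                        # fusion weights
    return w @ prompt_pool, w                                # fused prompts

rng = np.random.default_rng(0)
new_prompts = rng.normal(size=(4, 16))    # 4 new prompt tokens, dim 16
prompt_pool = rng.normal(size=(10, 16))   # 10 frozen historical tokens
fused, weights = align_and_fuse(new_prompts, prompt_pool)
```

In this sketch, only `new_prompts` would receive gradient updates during training; the pool stays frozen, mirroring the rebuttal's description that historical prompts are not further optimized.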
**Q4: Ablation study**
(1) By default, module ablation studies in the main paper are conducted on ImageNet-R. This has been explicitly stated in the revised version.
(2) Additional ablation studies on ImageNet-Mix and DomainNet are provided in Fig. G of https://anonymous.4open.science/r/ICML-31/FigG-Ablation.png.
Specifically, the results show that:
(a) Our prompt initialization strategy (Reusable Knowledge Mining, $\boldsymbol{f}_R + \boldsymbol{f}_G$) significantly outperforms the existing methods (Wang et al., 2023a), due to the improved historical knowledge utilization capacity.
(b) Our online alignment design ($\boldsymbol{f}_A$) consistently improves model performance by strengthening knowledge alignment during training, thereby enhancing cross-domain fusion compatibility at test time.
(c) When all our modules are used together, the model performance is further improved since the historical knowledge utilization is improved during both training and testing.
**Q5: Heat map visualization**
Thanks for your suggestion! We have visualized the attention maps of prompt tokens at different stages, along with the fused tokens, in Fig. A of https://anonymous.4open.science/r/ICML-31/FigA-Heatmap.png. The results show that each prompt token (component) captures a semantic part of objects.
(1) Due to semantic misalignment among tokens, the fused prompt tokens in C-Prompt fail to precisely capture object-specific information, preventing the model from fully leveraging discriminative features.
(2) Our method improves component-wise alignment, enabling the fused prompt token to effectively capture object features. This results in a **4.73%** Average accuracy improvement in Tab. 1 of our main paper.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their reply, which addressed most of my concerns. Regarding the heat map provided by the authors in their rebuttal, in addition to the fused prompts, KA-Prompt is also significantly better on the prompts of individual domains compared to C-Prompt. This seems to naturally lead to better attention performance in the fused prompt. Can this phenomenon be explained, and the advantages of the fusion approach proposed in this paper illustrated?
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable feedback. We hope the following responses address your concerns:
**Q1: Advantages in prompt learning for individual domains**
Our KA-Prompt generates significantly better visualization results for individual domain prompts compared to C-Prompt. The key advantages of our KA-Prompt are as follows:
(1) C-Prompt learns prompts **independently** across domains, disregarding the accumulation of semantic knowledge throughout continual learning. This limits its ability to retain and leverage previously acquired knowledge during new prompt learning.
(2) In contrast, our *Reusable Knowledge Mining* mechanism actively incorporates semantic knowledge from previously learned domains into new prompts. As a result, the semantic partial knowledge of objects in the new prompts is **continuously reinforced** during incremental learning. This process enables our learned prompts to capture richer and more precise semantic representations than those of C-Prompt.
**Q2: Advantages in prompt fusion**
Our KA-Prompt achieves more effective cross-domain knowledge utilization during fusion compared to C-Prompt. The key advantages of our KA-Prompt are as follows:
(1) As demonstrated in our visualization results in Fig. A of https://anonymous.4open.science/r/ICML-31/FigA-Heatmap.png, the prompts of C-Prompt on individual domains can capture object-specific semantic information such as the wing of a fighter jet, the head and tail of an airplane, the cabin of a boat, and the steel cables and deck of a bridge.
However, due to semantic misalignment across different domains, **knowledge conflicts** arise during prompt fusion. As a result, the fused prompt of C-Prompt often fails to precisely capture the semantic regions of objects, leading to limited cross-domain knowledge utilization and degraded performance.
(2) In our KA-Prompt, prompt tokens at the same position across different domains encode highly relevant semantic information, significantly improving **knowledge compatibility** during prompt fusion. For instance, in the upper sample of Fig. A(a) of https://anonymous.4open.science/r/ICML-31/FigA-Heatmap.png, Token1, Token2, Token3, and Token4 of different domains primarily encode the body, head, vertical fin, and missile of a fighter jet, respectively. As a result, the fused prompt tokens effectively capture discriminative object regions, enabling more efficient utilization of accumulated semantic knowledge across domains and leading to a **4.73%** average improvement across four DIL benchmarks. | Summary: This paper focuses on the domain incremental learning (DIL) task and identifies component-wise misalignment between domain-specific prompts as a key factor that leads to conflicting knowledge integration and degraded predictions in prompt-based DIL methods. To address this issue, the authors propose the Componential Prompt-Knowledge Alignment (KA-Prompt) approach, which introduces a dual-phase framework to enhance component-wise alignment, thereby improving knowledge utilization during both training and inference. Extensive experimental results on four DIL benchmarks demonstrate that KA-Prompt achieves promising improvements compared to state-of-the-art methods.
Claims And Evidence: Yes
Methods And Evaluation Criteria: Yes
Theoretical Claims: This paper does not provide a theoretical claim or proof. However, the challenges identified in existing methods are intuitive and are effectively demonstrated through well-designed experiments.
Experimental Designs Or Analyses: Yes. I have checked the experimental designs and analyses in Figures 1, 2, 4, 5, 6, 7, 8, 9, and Tables 1, 2, 3. These experiments are sound and comprehensively demonstrate the motivation, effectiveness, and efficiency of the proposed method.
Supplementary Material: Yes, I have reviewed the source code in the supplementary material. It is well-organized and includes clear running instructions.
Relation To Broader Scientific Literature: This paper provides a novel solution to enhance cross-domain knowledge utilization during both testing and sequential training. The proposed ideas have the potential to inspire further research in areas involving domain shifts and incremental learning, such as transfer learning and class/task incremental learning.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Paper strength:
1. The manuscript is well-organized and clearly written, making complex concepts accessible to a broad audience. The visual representations are well-designed and effectively illustrate the motivation, methodological designs, and effectiveness of the proposed method.
2. A deep understanding of prompt-based DIL technologies is demonstrated through a comprehensive literature review. The identification of component-wise misalignment between domain-specific prompts, supported by well-designed experiments, is insightful and offers inspiration for further research in handling domain shifts.
3. The proposed method is sound and innovative. First, the Greedy Prompt Search module provides a novel, low-cost solution to improve the utilization of historical prompts, which could be highly beneficial to incremental learning and transfer learning communities. Second, the Historical Prompt Online Aligning module dynamically matches historical prompts and adopts an adaptive strategy to constrain the prompt alignment, effectively addressing the componential knowledge misalignment identified in the paper.
4. Extensive experiments on multiple benchmarks are conducted, with the proposed method demonstrating notable improvements in the DIL task. Additionally, sufficient ablation studies are included, verifying that the proposed designs effectively achieve the claimed objectives.
Paper weakness:
1. Some experimental results require further analyses. For instance, in Figure 4(a), KA-Prompt and the C-Prompt baseline underperform Dual-Prompt, CODA-Prompt, and CPrompt in the first domain. The reasons for this phenomenon should be discussed.
2. Certain methodological details are not sufficiently introduced. Specifically, if the Reusable Prompt Memory is empty, the Memory Matrix should also be empty. In this case, does the Assignment Matrix $S^*$ become a $[(t−1)\times N_p]\times N_t$ null matrix? The authors should clarify how such a case is handled, either in the main paper or in the appendix.
3. The ViT model in Figures 1(b) and 3 should be illustrated with a consistent shape to improve clarity.
Other Comments Or Suggestions: Figure 7 is a crucial experiment that verifies the effectiveness of the proposed approach in addressing knowledge misalignment compared to the existing methods. It is recommended that this figure be moved to the main paper.
Questions For Authors: This paper presents an new DIL approach with extensive evaluations. The presentation and experiments are comprehensive, I have a few minor concerns regarding the experimental analyses and implementation details for special cases. Please address these issues, as outlined in the paper weaknesses section, during the rebuttal.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer’s constructive feedback and recognition. We hope the following responses effectively address your concerns.
**W1: Performance analysis**
(1) Both KA-Prompt and the C-Prompt baseline exhibit lower performance on the initial domain compared to other prompt-based methods due to the BEMA design proposed in C-Prompt. BEMA is a batch-wise antiforgetting strategy that prevents the forgetting of cross-domain shared parameters (e.g., classifier). Since BEMA constrains new knowledge learning, and the randomly initialized shared parameters lack semantic knowledge, it limits the initial performance of both KA-Prompt and C-Prompt, particularly on small-scale datasets like ImageNet-R.
(2) To further analyze this effect, we removed BEMA during the first domain learning, and the results are presented in Fig. E of https://anonymous.4open.science/r/ICML-31/FigE-ImageNet-R.png. The findings show that: (a) Without BEMA, our model achieves performance comparable to state-of-the-art methods on the first domain, confirming that BEMA is the primary factor contributing to the degraded initial performance. (b) After learning from 15 domains, our KA-Prompt (w/o initial BEMA) achieves **66.54±0.68**, which is 0.03% higher than our results in the paper (w/ initial BEMA). This marginal performance improvement occurs because some discriminative knowledge is shared across domains. Thus, even if the initial domain is insufficiently trained, knowledge from later domains can still enhance its performance within our framework. (c) Overall, KA-Prompt achieves **4.08%** and **4.11%** improvements on ImageNet-R with and without initial BEMA, respectively, verifying its effectiveness in consolidating long-term knowledge. These improvements stem from our knowledge alignment design, which enhances cross-domain prompt compatibility, enabling more efficient utilization of learned knowledge and facilitating both positive forward and backward transfer.
**W2: Methodological details**
Your understanding is correct.
(1) At the beginning of Greedy Prompt Search, the Reusable Prompt Memory is empty, and the Memory Matrix $S^M\in \mathbb{R}^{0\times N_t}$ is an empty matrix. The column-wise max is then taken over this empty matrix, yielding a $1\times N_t$ null matrix. Consequently, the Assignment Matrix $S^*$ becomes a $[(t−1)\times N_p]\times N_t$ null matrix.
(2) We have incorporated these methodological details into our paper for improved clarity.
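The edge case above can be made concrete with a small sketch (the function name is illustrative; the shapes follow the rebuttal's description — an empty memory matrix of shape $(0, N_t)$ falls back to a $1\times N_t$ zero row):

```python
import numpy as np

def column_wise_max(S_M, N_t):
    # Illustrative helper: when the Reusable Prompt Memory is empty, the
    # memory matrix has shape (0, N_t); reducing over an empty axis is
    # undefined, so fall back to a 1 x N_t null (zero) row.
    if S_M.shape[0] == 0:
        return np.zeros((1, N_t))
    return S_M.max(axis=0, keepdims=True)

first_task = column_wise_max(np.empty((0, 4)), 4)   # empty memory -> zeros
later_task = column_wise_max(np.array([[1., 5., 2.], [3., 0., 4.]]), 3)
```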
**W3: Unify the component shapes**
Thank you for this valuable suggestion. We have modified the shape of the ViT model in Fig. 1(b) to match Fig. 3, ensuring overall consistency. The revised version is illustrated in Fig. F in https://anonymous.4open.science/r/ICML-31/FigF-Modify.png.
**Comments: Moving Figure 7 to main paper**
Thanks for the valuable suggestions and appreciation of our experimental designs. We have moved Fig. 7 to the main paper. Additionally, the following experimental settings and quantitative analyses are included:
To demonstrate that componential prompt-knowledge alignment mitigates knowledge conflicts, we conducted an ablation study by shuffling prompt components under different conditions. The experiments were performed on the final domain of the DomainNet benchmark, where all learned domain prompts were used for prompt matching.
As shown in Fig. 7 (a), C-Prompt (Liu et al., 2024a) experiences performance improvement in Shuffle-B/C/D/E due to its randomly learned componential structure, which tends to be suboptimal in its original form. In contrast, KA-Prompt consistently degrades when prompt components are perturbed, indicating its intrinsic alignment structure. Despite the misalignment noise introduced by random shuffling, the worst performance of KA-Prompt (Fig. 7 (b) Shuffle-E) remains **6.4%** higher than the best performance of C-Prompt (Fig. 7 (a) Shuffle-B). This demonstrates that our greedy prompt search algorithm effectively extracts generalizable knowledge across domains, significantly enhancing adaptation to new domains. | Summary: This paper introduces a novel component-based prompt knowledge alignment method, KA-Prompt, for Domain Incremental Learning (DIL). Its key contribution lies in addressing the cross-domain prompt misalignment problem, which is claimed to be a major limitation of existing prompt-based DIL methods, such as C-Prompt. The proposed framework consists of two core mechanisms: Reusable Knowledge Mining (ΨM), which selects and initializes new prompts based on relevant past knowledge, and Aligning-guided New Prompt Learning (ΨL), which dynamically maintains component alignment across domains. Experimental results on four benchmarks (DomainNet, ImageNet-R, ImageNet-C, ImageNet-Mix) demonstrate that the proposed method outperforms state-of-the-art approaches.
Claims And Evidence: In this paper, the authors show that shuffling the order of prompt components leads to significant variations in accuracy, which they attribute to prompt misalignment. Although the shuffling-based ablation experiment (Fig. 7) provides some preliminary insights, the paper does not offer sufficient theoretical analysis to explain the importance of prompt order and the negative impact of prompt misalignment on model performance. Therefore, more experiments are needed to further validate the actual impact of prompt misalignment and eliminate potential experimental biases. To strengthen the credibility of this claim, it is recommended that the authors add more experiments, conduct broader validations, and quantify the specific effects of prompt misalignment on model performance.
Methods And Evaluation Criteria: The proposed method and evaluation criteria, including the use of benchmark datasets like DomainNet and ImageNet-Mix, are meaningful for addressing the challenges in Domain Incremental Learning (DIL) and enhancing cross-domain knowledge transfer.
Theoretical Claims: No, the paper does not involve theoretical claims.
Experimental Designs Or Analyses: The experimental design of this paper aims to verify the effectiveness of KA-Prompt in Domain Incremental Learning (DIL) tasks through comparisons with multiple baseline methods, ablation studies, and hyperparameter analyses. Overall, the experimental design is relatively comprehensive. However, there are some potential shortcomings that require further investigation: The motivation section suggests that prompt misalignment is a major performance bottleneck of C-Prompt, but the ablation study (Fig. 7) only preliminarily validates this hypothesis by shuffling prompt components, without further quantifying the specific impact of misalignment on model representations. Additionally, the scale of the experiments is not sufficient to rule out potential experimental biases.
Supplementary Material: This section introduces the KA-Prompt algorithm, which includes Reusable Knowledge Mining and Aligning-guided New Prompt Learning. The ablation study briefly compares KA-Prompt with C-Prompt when prompt components are shuffled. Additionally, it presents visualization results based on DomainNet and ImageNet-Mix and demonstrates KA-Prompt's computational efficiency.
Relation To Broader Scientific Literature: The core contribution of this paper lies in optimizing C-Prompt, with the main contribution being the resolution of prompt misalignment in cross-domain learning to enhance cross-domain prompt fusion. C-Prompt (Compositional Prompting) is a prompt-based Domain Incremental Learning (DIL) method that aims to adapt to cross-domain tasks by learning compositional prompts.
Essential References Not Discussed: The authors discuss and compare a wide range of related methods.
Other Strengths And Weaknesses: One of the main contributions of this paper is to explicitly identify component-level misalignment as a key issue in prompt-based Domain Incremental Learning (DIL). Previous works, such as C-Prompt and CODA-Prompt, primarily focused on selecting and fusing relevant prompts but largely overlooked the potential knowledge interference caused by the random positioning of prompt components across different domains.
The main weaknesses of this paper are as follows:
1. Overall, the core contribution of this paper lies in optimizing C-Prompt rather than proposing an entirely new framework, making it more of an incremental improvement with limited novelty. The primary contribution is addressing the prompt misalignment issue in cross-domain learning to enhance prompt fusion across different domains. However, it remains based on the existing C-Prompt structure, with its methodological innovation mainly reflected in localized improvements rather than introducing a new paradigm for incremental learning.
2. The authors propose that different prompt orders lead to significant variations in accuracy, thereby introducing the issue of prompt misalignment. While the shuffle-based ablation study (Fig. 7) provides some preliminary insights, the paper lacks sufficient theoretical analysis to explain the importance of prompt order and the negative effects of prompt misalignment. More experiments are needed to further validate the actual impact of prompt misalignment and rule out potential experimental biases. To enhance the credibility of this claim, the authors are encouraged to conduct broader experiments and quantify the specific impact of prompt misalignment on model performance.
3. Additionally, the paper does not provide a formal analysis or complexity evaluation of the greedy search for reusable knowledge (ΨM). Since greedy algorithms often lead to suboptimal solutions, analyzing its approximation guarantees would strengthen the theoretical depth of the proposed method.
Other Comments Or Suggestions: Please refer to the weaknesses.
Questions For Authors: 1.What constitutes a "component" within a prompt? How is semantic consistency ensured across components in different domains? If the definition of components within prompts is vague or semantically unclear (e.g., if the components are simply vector representations without strict semantic binding), then the effectiveness of the proposed component-level alignment may face fundamental issues.
2.Does component alignment really alleviate conflicts? The paper assumes that aligning prompt components will inevitably reduce knowledge conflicts; however, this assumption has flaws: does the alignment of components necessarily lead to alignment at the semantic or knowledge level? Is it possible that structural alignment does not result in actual semantic alignment?
3.The authors did not provide sufficient qualitative or visual evidence to demonstrate that component-level alignment achieves the expected semantic fusion effect.
4.How does the greedy search ensure that the selected set of prompts is truly globally optimal? The paper fails to provide enough evidence to verify whether such locally optimal solutions are sufficient, or whether they might lead to unstable performance or degradation in large-scale, multi-domain conditions. The paper does not explore whether using a greedy algorithm could introduce significant performance fluctuations, nor does it provide comparisons with other heuristic algorithms to justify the choice of a greedy approach.
5.Could the fusion of multiple prompts cause negative transfer across domains? In cases where there are significant differences in prompt knowledge between domains, does this fusion always lead to positive gains? The paper lacks an analysis and discussion of how the fusion mechanism might trigger negative transfer.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for the valuable feedback and comments. We hope the following responses address your concerns.
**W1: Contributions on framework**
The C-Prompt baseline corresponds to our New Prompt Training branch. We have made two key designs to form a brand-new DIL framework:
(1) Reusable Knowledge Mining (RKM) mechanism. Unlike existing methods that randomly initialize new prompts, RKM actively searches for old prompts containing shared knowledge between the new and all old domains, significantly improving new domain adaptation and cross-stage knowledge alignment.
(2) Historical Prompt Online Aligning (HPOA) branch. HPOA introduces an online search and re-weighting based prompt fusion strategy to mitigate cross-stage knowledge drift during training, effectively improving the utilization of multi-domain knowledge.
**W2: Quantization of misalignment**
To quantify misalignment, we have traversed the cross-stage prompt token orders to obtain the performance of the optimal alignment. As shown in Fig. 7, C-Prompt exhibits a 0.64% degradation compared to the optimal alignment, indicating suboptimal order learning. In contrast, our method achieves **0.57-0.79%** higher performance than alternative orderings, verifying that it successfully attains optimal alignment.
**W3, Q4: Complexity evaluation and theoretical analyses**
The objective of the RKM module at stage $t$ is formulated as follows: given $m$ training samples and $n=(t-1)×L_s$ old prompts, select a subset $S$ of $k=L_s$ old prompts that maximizes the coverage $\sum_{i=1}^{m}\max_{p_j\in S} s(x_i,p_j)$, where $s(x_i,p_j)$ denotes the similarity between a training sample $x_i$ and a prompt $p_j$.
(1) The complexity of our greedy search algorithm is $O(mnk)$, far more efficient than exhaustive search over all size-$k$ subsets, which requires $m\binom{n}{k}=O(mn^k)$ operations.
(2) The optimization of RKM can be approximated as a k-medoids problem [1] by considering old prompts as special training samples (since m>>n, this approximation does not affect the theoretical conclusion). According to Theorem 4.4 of [1], in a one-way search setting, the error bound of our solution $E(θ^*)$ relative to the optimal solution $E(θ)$ satisfies:
$E(θ^*)≤(1+\frac{2k}{n+m})E(θ)$. Since $m+n \gg k$, this bound is close to 1, so our greedy search yields near-optimal solutions under different conditions and leads to stable performance.
[1] PAMAE: Parallel k-Medoids clustering with high accuracy and efficiency. SIGKDD, 2017.
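As an illustration, a greedy coverage-style selection consistent with the iterative procedure described here (evaluating only the knowledge not yet covered by the selected prompts) can be sketched as follows. This is a hypothetical sketch, not the authors' implementation; `greedy_prompt_search` and the toy similarity matrix are assumptions, and higher similarity is treated as better.

```python
import numpy as np

def greedy_prompt_search(S, k):
    """Greedily select k prompt indices maximizing similarity coverage.

    S: (m, n) similarity matrix between m training samples and n old prompts.
    Hypothetical sketch of the coverage objective described in the rebuttal,
    not the authors' actual code.
    """
    m, n = S.shape
    best = np.zeros(m)             # current best similarity per sample
    selected = []
    for _ in range(k):
        # marginal gain of adding prompt j: only samples it improves count
        gains = np.maximum(S, best[:, None]).sum(axis=0) - best.sum()
        gains[selected] = -np.inf  # never pick the same prompt twice
        j = int(np.argmax(gains))
        selected.append(j)
        best = np.maximum(best, S[:, j])
    return selected

# Toy example: prompt 2 covers all samples moderately, prompt 0 adds
# the most new (uncovered) knowledge afterwards.
S = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5],
              [0.0, 0.0, 0.9]])
print(greedy_prompt_search(S, 2))  # → [2, 0]
```

Each step costs one pass over the $m \times n$ similarity matrix, matching the $O(mnk)$ total complexity claimed in the rebuttal.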
(3) We have conducted experiments on ImageNet-R to compare our method with existing approaches:
|Method|Previous-Domain [a]|Most-Similar [b]|Greedy (Ours)|
|-|-|-|-|
|Avg-ACC|63.55±0.46|64.32±0.48|**65.36**±0.52|
[a] initializes new prompts using those from the previous domain.
[b] selects prompts with the highest similarity scores in a single search step for initialization.
The results show that our greedy search-based reusable knowledge mining strategy (Greedy) consistently outperforms [a] and [b] with improvements of **1.81%** and **1.04%**, respectively.
Both [a] and [b] suffer from insufficient utilization of old knowledge due to limited knowledge relevance within adjacent domains and overlooking of knowledge in lower-similarity prompts, respectively.
In contrast, our approach evaluates the unique knowledge that has not been included in the selected prompts iteratively, effectively improving the utilization of old knowledge.
**Q1: Definition of components and semantic consistency**
(1) The components refer to the tokens of each prompt. As shown in Fig. 2, each prompt, e.g., $p_t^1$, contains 4 tokens (i.e., components), each encoding a distinct aspect of object knowledge.
(2) In DIL, the categories of different domains are identical, thus the object parts are semantically consistent across domains.
**Q2, Q3: Do alignment alleviate conflicts**
We have visualized the attention maps of prompt tokens at different stages, along with the fused tokens, in Fig. A of https://anonymous.4open.science/r/ICML-31/FigA-Heatmap.png. The results show that each prompt token (component) captures a semantic part of objects.
(1) Due to semantic misalignment among cross-domain prompt tokens, the fused prompt tokens in C-Prompt fail to precisely capture object-specific information.
(2) Our method improves component-wise alignment, enabling the fused prompt token to effectively encode discriminative features of objects. This results in a **4.73%** increase in Average accuracy, as reported in Tab. 1.
**Q5: Influence of prompt fusion**
Fig. D in https://anonymous.4open.science/r/ICML-31/FigD-Pos-Transfer.png shows the testing performance trends for each domain. The results indicate that 2 out of 14 old domains exhibit performance reduction after continual training. This arises from data imbalance: some domains contain only a few training samples, leading to biased knowledge during fusion. Nevertheless, our method achieves significant improvements on 12 out of 14 old domains, indicating that it effectively achieves positive knowledge transfer.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the exhaustive reply. After carefully reviewing the authors' rebuttal, all my concerns have been sufficiently addressed, including the novelty, claimed problem, theoretical guarantee, and effectiveness of the proposed approach. Overall, this paper focuses on the practical domain incremental learning task and discovers the existence of the prompt misalignment problem. Then, an effective approach, KA-Prompt, is proposed to address the claimed problem. Abundant quantitative, qualitative, and theoretical results and analyses are provided to demonstrate the significance of the proposed approach. Consequently, I am willing to raise my score.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer 8nTg
We sincerely appreciate your thoughtful feedback and the time you dedicated to reviewing our work. Your insightful comments have been invaluable in refining our presentation and strengthening the manuscript. We are grateful for the opportunity to clarify our approach and truly appreciate your recognition of our work.
Best regards,
Authors | Summary: The paper addresses the challenge of DIL. The authors identify a limitation in existing prompt-based methods: component-wise misalignment between domain-specific prompts leads to conflicting knowledge integration and degraded predictions. To address this, they propose KA-Prompt, a method that enforces component-wise knowledge alignment across domains. KA-Prompt operates in two phases: (1) Initial Componential Structure Configuring, where a set of old prompts containing relevant knowledge is mined via greedy search to initialize new prompts, ensuring reusable knowledge transfer and intrinsic alignment; and (2) Online Alignment Preservation, which dynamically identifies target old prompts and applies adaptive componential consistency constraints as new prompts evolve.
Claims And Evidence: No. Please refer to "other strengths and weaknesses" for detail.
Methods And Evaluation Criteria: No. Please refer to "other strengths and weaknesses" for detail.
Theoretical Claims: No proofs.
Experimental Designs Or Analyses: Yes. Please refer to "other strengths and weaknesses" for detail.
Supplementary Material: Yes.
Relation To Broader Scientific Literature: The authors effectively build on prior work in prompt-based learning, such as C-Prompt and CODA-Prompt, while addressing a critical limitation (component-wise misalignment) that has not been previously explored in depth.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: Strengths
1. Extensive experiments on multiple benchmarks demonstrate the superiority of KA-Prompt over state-of-the-art methods.
2. Although I am not entirely convinced by the motivation behind this paper, the issue of how to share knowledge across different domains is indeed a critical problem in DIL. This paper offers a new approach to addressing this challenge.
Weakness:
1. The authors claim in their contributions that "We reveal that component-wise misalignment in prompts limits their cross-domain knowledge integration and utilization capacity." However, the paper does not provide any experimental or theoretical evidence to support this claim. Attention mechanisms in Transformers are inherently permutation-invariant, and methods like Visual Prompt Tuning (VPT) can operate without positional encoding. Therefore, it is unclear why misalignment would occur in the first place. The authors need to provide a more rigorous justification for this claim.
2. The motivation for the paper, as illustrated in Figure 2, is based on two assumptions: (1) different components typically encode distinct types of knowledge, and (2) independently learned prompts exhibit misalignment in componential knowledge, leading to the fusion of irrelevant knowledge during inference. However, these assumptions are not supported by any experiments or references to prior work. Specifically, the depiction of different tokens representing different parts of an airplane in Figure 2 is confusing and lacks empirical validation.
3. The necessity of the Greedy Prompt Search module is questionable. As I understand it, this module computes the similarity between training samples and all prompt keys, then selects the most similar prompts for initialization. Even if the similarity is computed across all training samples, the computational cost would still be less than performing a single forward pass on the training set. The authors should justify the need for this module more clearly.
4. The paper does not adequately address the issue of catastrophic forgetting in DIL. In fact, the authors' approach may increase the risk of forgetting. By using prompts from old tasks to initialize new tasks and encouraging similarity between new and old prompts during Historical Prompt Online Aligning, the model may incorrectly select new prompts for old task data during inference, exacerbating forgetting.
5. The experimental setup is not clearly defined, particularly in Equation 12. It is unclear whether $a_{T,i}$ is evaluated on the test data of the i-th domain only or on the test data of all previous domains. This ambiguity needs to be clarified to ensure the reproducibility and validity of the results.
Other Comments Or Suggestions: Figure 2 is somewhat overly cluttered, making it difficult to grasp the key points.
Questions For Authors: 1. Compared to C-Prompt, the motivation for this paper is based on the concept of "misalignment." How is misalignment defined, and how is alignment measured and evaluated? What metrics or experiments are used to determine whether alignment has been achieved?
2. How is the final classifier set up? Is there a shared classifier across all domains, or does each domain have its own separate classifier? This distinction is crucial for understanding the model's ability to generalize across domains.
3. Given the weaknesses identified, how does the proposed method address the issue of catastrophic forgetting in incremental learning? Specifically, how does the method ensure that knowledge from old tasks is not overwritten or forgotten when learning new tasks?
4. Are the prompts from old tasks updated during the Online Aligning process? If so, how does the method ensure that the updated prompts perform better on old tasks compared to the original prompts?
5. Why are the results for S-Prompt missing in Figures 8 and 9? Based on my experimental experience, S-Prompt is effective in reducing forgetting, yet it is notably absent from these comparative results. Could the authors explain this omission and provide the missing results?
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thanks for the valuable feedback. We hope our responses address your concerns.
**W1: Misalignment's occurring and definition**
(1) Misalignment occurs during cross-stage prompt fusion in DIL. In Fig. 2, each prompt (e.g., $p_t^1$) consists of 4 tokens, each encoding distinct partial knowledge of objects. *Misalignment* refers to the disorder of partial knowledge within prompt tokens of different stages. When fusing prompts from multiple stages, this cross-stage token-level misalignment introduces semantic conflicts in the fused prompt. Then, the sub-optimal fused prompt is fed to the Attention layer, leading to degraded performance.
(2) Misalignment does not occur in VPT because it is designed for static training data, where all prompts are optimized jointly rather than incrementally.
(3) Our method explicitly enhances cross-stage prompt alignment, ensuring that the fused prompt retains a coherent representation of semantics. These high-quality fused prompts thereby lead to improved test performance.
**Q1: Measuring alignment**
(1) To measure alignment, we have traversed different token orders across stages to identify the configuration that yields the highest model performance, which we define as the optimal alignment, as shown in Fig. 7.
(2) The results show that the learned prompt token order in C-Prompt baseline exhibits a performance degradation of 0.64% compared to the optimal alignment, verifying the presence of misalignment.
(3) In contrast, the prompt token order learned by us consistently outperforms alternative orders by 0.57–0.79%, demonstrating that our approach effectively achieves optimal alignment.
**W2: Validation of misalignment**
We have visualized the attention maps of prompt tokens at different stages, along with the fused tokens, in Fig. A of https://anonymous.4open.science/r/ICML-31/FigA-Heatmap.png. The results show that each domain-specific prompt token learns a semantic part of objects. Compared to C-Prompt which introduces semantic misalignment in prompt tokens from distinct domains, the fused prompt tokens in our method effectively preserve part-level information, enabling the model to fully exploit discriminative features.
**W3: Necessity of the Greedy Prompt Search (GPS)**
(1) GPS is necessary because (a) the shared knowledge between new and old domains is distributed across various old prompts, and (b) the knowledge of some old prompts overlaps significantly. A naive selection based solely on high similarity scores in a single search would often lead to redundant prompt selection while overlooking relevant knowledge present in lower-similarity prompts. This results in insufficient utilization of old knowledge. Please refer to Reviewer **8nTg-W3, Q4** for more experimental analyses.
(2) The computational cost of GPS is significantly less than performing a single forward pass on the training set since it primarily involves simple matrix addition and subtraction.
**W4,Q3,Q4: Catastrophic forgetting**
Our method inherits the anti-forgetting capacity of prompt learning and does not significantly risk forgetting:
(1) In our Historical Prompt Online Aligning (HPOA) module, old prompts are frozen, ensuring that knowledge from previous tasks is not overwritten or forgotten.
(2) The prompt selection for old tasks is minimally influenced by HPOA. This is because prompt selection relies on prompt keys, and new prompt keys are only trained by minimizing their distance to new data features.
(3) In Tab. 2, domains are trained sequentially from left to right. Our KA-Prompt performs 1.04% below the C-Prompt baseline on the first domain, while outperforming C-Prompt from the second domain onward and obtaining a **4.25%** Average accuracy improvement across all domains. These results show that our approach achieves a better balance between acquisition and forgetting.
**W5: Metrics setup**
$a_{T,i}$ is evaluated on the test data of the i-th domain. Eq. 12 measures the final model’s performance across all previous domains by computing the average performance over them. We have carefully clarified these details in the revised version.
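As a concrete illustration of the metric as clarified (a hypothetical helper, not code from the paper):

```python
def average_accuracy(acc_matrix):
    """Average accuracy after the final training stage T.

    acc_matrix[t][i] = accuracy a_{t,i} on the test set of domain i
    after training on domain t (0-indexed); row t has t+1 entries.
    The metric averages the final row over all seen domains (Eq. 12).
    """
    final = acc_matrix[-1]
    return sum(final) / len(final)

# Two-domain toy run: after stage 2 the model scores 0.8 on domain 1
# and 0.7 on domain 2, giving an Average accuracy of 0.75.
print(average_accuracy([[0.9], [0.8, 0.7]]))  # → 0.75
```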
**Q2: Classifier setup**
The final classifier is shared across all domains, consistent with most prompt-based methods. We have clarified this in the revised version.
**Q5: Visualization of S-Prompt**
(1) To maintain consistency in the color-method pairing relations across Fig. 4 (a)(b), Fig. 8, and Fig. 9, only top-8 methods ranked by Average accuracy are chosen for performance visualization, where S-Prompt is not included.
(2) In Fig. B-1, and Fig. B-2 of https://anonymous.4open.science/r/ICML-31/FigB-S-Prompts.png, we have added the performance curves of S-Prompt. The results show that our KA-Prompt effectively outperforms existing methods during long-term learning.
**Comments: Simplify Fig. 2**
To highlight the misalignment phenomenon, a simplified illustration of Fig. 2 is shown in Fig. C of https://anonymous.4open.science/r/ICML-31/FigC-Simplify.png.
---
Rebuttal Comment 1.1:
Comment: Thank you for the author's response, which has addressed some of my concerns.
- Regarding "Each encoding distinct partial knowledge of objects": What evidence supports this claim?
- About forgetting: The paper states that "New prompt keys are only trained by minimizing their distance to new data features," but there is no mechanism to ensure that old task data does not become closer to the new keys.
- On the shared classifier: The paper mentions "The final classifier is shared across all domains." Could you elaborate on this? For example, if each domain has C classes, is the shared classifier a single C-class classifier, or a C×D classifier (where D is the number of tasks)?
- Regarding S-Prompt's surprisingly low performance: Before the authors pointed it out, I hadn’t noticed that the reported performance of S-Prompt in the paper was so low—contrary to common expectations. I previously tested S-Prompt’s official code (https://github.com/iamwangyabin/S-Prompts) on DomainNet using an ImageNet-1K pretrained model with shallow VPT, achieving around 50% accuracy easily. And Table 2 reports only 8% accuracy on Quickdraw, which is highly counterintuitive. Since the paper uses a stronger pretrained model and likely deeper VPT, performance should theoretically be better. Due to the lack of released code, I have doubts about the reliability of the experiments. Could the authors explain why the reported performance is worse?
- Additionally, the number of trainable parameters for S-Prompt in the appendix seems unreasonable. Based on my understanding, S-Prompt should only train task-specific prompts and classifiers, so the parameter count should not be that high.
- Since CDDB is used as a domain incremental dataset in S-Prompt, does these paper also report comparative results on CDDB?
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback. We sincerely appreciate the opportunity to address your concerns.
**Q1: Prompt knowledge**
(1) As illustrated in Fig. A of https://anonymous.4open.science/r/ICML-31/FigA-Heatmap.png, the prompts of both KA-Prompt and C-Prompt are capable of capturing object-part semantic information. For example, prompts are sensitive to object parts such as the wing and head of a fighter jet, the head and tail of an airplane, the body and cabin of a boat, and the steel cables and deck of a bridge. These observations suggest that different prompt tokens focus on distinct object parts. Furthermore, compared to C-Prompt, our KA-Prompt exhibits stronger partial knowledge encoding in Fig. A, attributed to our knowledge alignment design which incrementally accumulates and reinforces semantic representations across domains.
(2) From the perspective of the attention mechanism, each image token corresponds to an object part, while prompt tokens exhibit varying degrees of similarity to these image tokens. Then, prompt-token pairs with higher semantic relevance are assigned higher attention weights. Consequently, prompt tokens have a substantial influence on the representation of relevant object parts, supporting the claim that they encode distinct partial knowledge.
**Q2: Distance between old data and new prompts**
Thank you for the insightful suggestion! We have evaluated the prompt matching accuracies on old domains after training on the final domain. Due to time constraints, we conducted experiments on two smaller benchmarks, ImageNet-R and ImageNet-C, each containing 15 domains. Matching accuracy was computed on the first 14 domains after training was completed on the 15th.
The results below show that KA-Prompt outperforms C-Prompt by **0.1%** and **2.44%**, respectively, verifying that our method effectively avoids mismatching.
|Benchmark|ImageNet-R|ImageNet-C|
|-|-|-|
|C-Prompt|36.03±0.37|78.18±0.52|
|KA-Prompt|**36.13**±0.18|**80.62**±0.22|
It is true that no extra constraints on the distances between old data and new prompts keys are introduced in our KA-Prompt compared to the C-Prompt baseline. This old data matching accuracy improvement is attributed to the following:
(1) In baseline methods like C-Prompt, prompt keys of different domains are typically initialized from a common random distribution. Therefore, existing methods adopt a **common-to-specific** learning procedure when training prompt keys across domains. However, since the common initial distribution can be significantly distinct from the domain-specific distribution, these methods suffer from unstable training and under-convergence, increasing the risk of mismatched prompt selection during inference.
(2) In contrast, although our KA-Prompt also adopts a **common-to-specific** learning paradigm, new prompt keys are initialized from the most semantically similar keys of prior domains. Thus, prompt learning is easier and exhibits improved stability, yielding higher inter-domain discriminability. Consequently, prompt matching accuracy on old data improves even without additional constraints.
**Q3: Classifier**
Our classifier follows the setting of the C-Prompt baseline. Specifically, if each domain has $C$ classes, the shared classifier is a single $C$-class classifier.
**Q4: S-Prompt**
(1) The results of S-Prompt were reproduced about four months ago. Notably, the official codebase of S-Prompt provides only the executable version of S-liPrompt, which incorporates additional language guidance. When switching the network setting from *slip* to *sip*, we encountered a runtime error: *TypeError: resolve_pretrained_cfg() got an unexpected keyword argument 'kwargs'*. This issue has also been reported by other researchers, but, to our knowledge, no official solution has been provided.
(2) To proceed, we modified the environment and dependencies to generate the results shown in our paper. It is possible that some hidden parameter mismatches contributed to the degraded performance. Nevertheless, according to the officially reported results of both S-Prompt and C-Prompt, our approach consistently achieves state-of-the-art performance. Besides, note that the other compared methods in this paper are executable by following the official instructions.
**Q5: Parameters**
The reported number of trainable parameters for S-Prompt in our paper follows the statistics provided in the official C-Prompt paper. Specifically, it reflects the cumulative number of trainable parameters across 15 domains (including prompts and classification heads). In contrast, many prior works only report per-domain statistics of prompts, which may have caused confusion.
**Q6: CDDB**
When evaluating on the deepfake DIL benchmark, CDDB, our KA-Prompt achieves **75.58%** Average Accuracy, outperforming S-Prompt (74.51%) by **1.07%**. This result demonstrates KA-Prompt’s adaptability to real-world DIL settings. | null | null | null | null | null | null |
Generative Human Trajectory Recovery via Embedding-Space Conditional Diffusion | Accept (poster) | Summary: This paper proposes a conditional diffusion-based method for human trajectory recovery from incomplete or missing data. The authors aim to address the limitations of existing methods in capturing complex spatial-temporal dependencies and handling irregular sampling in human mobility data. DiffMove first transforms trajectory locations into the embedding space, performs denoising in this space, and then recovers missing locations through an embedding decoder. Experiments on two real-world mobility datasets, Foursquare2 and Geolife, demonstrate that DiffMove outperforms state-of-the-art baselines.
Claims And Evidence: The authors mention: "In such scenarios, traditional methods typically provide a biased deterministic imputed trajectory. However, with a generative approach to inference, a set of imputed trajectory locations can be generated through sampling or various averaging techniques on imputation samples." However, without explicit guidance, diffusion models still follow the data distribution, making it difficult to generate irregular results.
The novelty of this work is limited. There is a lot of diffusion-based research in trajectory prediction and human motion generation, which all follow a similar fundamental approach. This work merely adds one-hot encoding to handle discrete data conversion. However, in essence, it does not introduce significant innovation regarding the use of diffusion models for trajectory recovery.
Methods And Evaluation Criteria: The experiments are mainly conducted on two specific datasets, Foursquare2 and Geolife. It is not clear how well the model will perform on other types of mobility datasets with different characteristics, such as different sampling frequencies, data distributions, or geographical regions. This limits the generalization ability of the model and needs further exploration.
Theoretical Claims: The method is technically sound
Experimental Designs Or Analyses: The diffusion-based methods used for continuous trajectory recovery can be slightly modified to adapt to the discrete setting in this paper. Using this as a baseline would better highlight the contributions of this work.
The baselines are outdated and lack representativeness.
Supplementary Material: The supplementary materials include implementation details and code.
Relation To Broader Scientific Literature: This work does not show a significant difference from previous studies.
Essential References Not Discussed: It is necessary to discuss the differences compared to previous diffusion-based trajectory prediction methods.
Bae, Inhwan, Young-Jae Park, and Hae-Gon Jeon. "Singulartrajectory: Universal trajectory predictor using diffusion model." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
Mao, Weibo, et al. "Leapfrog diffusion model for stochastic trajectory prediction." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023.
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: Add the references for the baselines to the table.
Questions For Authors: - How to account for changes in trajectory length?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thanks for your detailed feedback.
Claim&Evidence, Rela To Literature:
We introduce critical innovations that distinguish our work from existing studies:
a) Handling Discrete Locations via Embedding-Space Diffusion. Existing diffusion models for trajectories (e.g., DiffTraj) focus on generating continuous GPS coordinates or simulating synthetic mobility. In contrast, human trajectory recovery involves sparse discrete locations (e.g., check-ins), which cannot be directly modeled by continuous-value diffusion (sparse GPS numerical points are not enough for such training).
As mentioned in Sec 4.1, line 190, one-hot embedding is just one of the methods that can be applied (users are welcome to substitute another; it is not our main research focus). We transform discrete locations into embeddings, perform denoising in this latent space to preserve spatial relationships, and decode the embeddings back to discrete IDs using an explicit matching process (Sec 4.3), all within a stable end-to-end training procedure (as readers familiar with diffusion will recognize, this is non-trivial: the model is not only catering to the diffusion MSE loss).
b) Conditional Diffusion with Spatial-Temporal Guidance: DiffMove explicitly incorporates historical trajectories and spatial-temporal dependencies through three novel modules: 1) the Spatial Conditional Block, which combines our new TGGNN (for transition patterns) and cross-attention (for periodicity) to model complex mobility dynamics; 2) the Target Conditional Block, which fuses temporal length and historical trajectory embeddings to guide imputation targets; and 3) the Denoising Network Block.
c) To our knowledge, no existing study has designed diffusion models that integrate the diffusion-step t embedding with the learning of graph-based spatial transitions, as detailed in the Sec 4.2 TGGNN (a figure is also added at the link **https://anonymous.4open.science/r/A01D/README.md**).
Method & Eval Criteria:
Foursquare and Geolife datasets with variations (Table 6) serve as standard benchmarks in trajectory recovery research. DiffMove’s design inherently supports adaptability to diverse mobility data:
1. Handling Diverse Mobility Patterns: Foursquare (urban check-ins) and Geolife (normal trajectories) already represent fundamentally different scenarios (sparse POIs vs. normal GPS).
2. Generalization Ability to Varied Missing Ratios: In Table 6, the Distance-metric performance of DiffMove with an 80% missing ratio even outperforms TRILL with a 40% missing rate. This demonstrates resilience to extreme sparsity, a key challenge across datasets.
3. Flexible Preprocessing: As noted in Appendix A.8, DiffMove partitions regions into arbitrary geo region sizes (e.g., 0.25 km²) and adapts to variable time intervals. This flexibility ensures applicability to datasets with varied spatial/temporal granularity.
Exp Design Or Analyses:
Such comparisons would be inappropriate for our problem setting:
Fundamental Task Mismatch: Continuous trajectory diffusion models (e.g., DiffTraj) generate GPS coordinates or simulate synthetic mobility. In contrast, human trajectory recovery deals with discrete locations (e.g., sparse IDs), requiring modeling of transitions between categorical IDs and of periodicity; this cannot be directly modeled by continuous-value diffusion (sparse GPS numerical points are not enough for such training).
Our baselines (AttnMove, PeriodicMove, TRILL) are widely recognized and remain the standard benchmarks for this problem setting. Regarding a more recent baseline, we discussed TERI (Chen et al., VLDB 2024), cited in our related work. We set up a common problem setting, in which our model still outperforms the SOTA baseline, as shown below.
|Dataset|Methods|Recall|Dataset|Methods|Recall|
|-------|-------|------|-------|-------|------|
|Foursquare|PeriodicMove|0.3125|Geolife|PeriodicMove|0.4199|
||TRILL|0.3227||TRILL|0.4721|
||TERI|0.3355||TERI|0.4922|
||DiffMove|0.3600||DiffMove|0.5173|
Essential References:
There is a task mismatch: these works are primarily designed for trajectory prediction in images (more specifically, a computer-vision trajectory problem), aiming to generate future movement in continuous space (x, y coordinates in the image, using the dense CV datasets ETH and UCY). In contrast, our work specifically addresses trajectory recovery for human mobility across locations (a real-world geographical problem, not trajectories in images), where trajectories are represented as discrete location IDs requiring historical-periodicity and spatial-transition modeling.
Q1:
Our method accounts for variations in trajectory length through a standardized data preprocessing pipeline (Appendix A.8). Specifically, we discretize each day into a fixed number N of time slots, where the time interval is a configurable parameter. Trajectories shorter than N are padded, ensuring that every trajectory is represented uniformly. This approach enables our model to handle variable-length trajectories effectively. | Summary: The paper proposes DiffMove, a conditional diffusion-based method for human trajectory recovery that leverages embedding denoising.
Claims And Evidence: I am confused about the research questions or challenges raised in this paper. I have listed them in detail in the question section.
Methods And Evaluation Criteria: Two datasets are useful, but the baselines are not new to this work.
Theoretical Claims: It seems that only Equation 5 needs to be derived in the main text, and the rest is a description of the method. However, I am confused about how Equation 5 is derived from Equation 3, and the paper does not seem to give the derivation process.
Experimental Designs Or Analyses: Yes; for example, baselines, datasets, parameter-setting experiments, etc. I list the details in the weakness part.
Supplementary Material: Yes, I read all parts.
Relation To Broader Scientific Literature: This paper mainly applies the diffusion model to the field of human trajectory recovery, and it is unclear how it is related to the broader scientific literature.
Essential References Not Discussed: This paper lacks many discussions or experimental comparisons of related works. I list them in the below weakness section in detail.
Other Strengths And Weaknesses: Strength:
1. DiffMove effectively captures both spatial transitions and periodicity patterns in human mobility, improving trajectory recovery accuracy.
2. The embedding-space conditional diffusion framework enables better handling of missing locations by incorporating uncertainty and historical trajectory dependencies.
3. Experiments demonstrate significant performance improvements over baselines.
Weakness:
1. “Second, existing methods lack systematic mechanisms for handling irregular data sampling from incomplete check-ins. Most of their deterministic approaches could not effectively capture the inherent uncertainty in human mobility.”
I believe some studies have already started addressing this issue, such as:
[1] Zhuang, Zhuang, et al. "TAU: trajectory data augmentation with uncertainty for next POI recommendation." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 20. 2024.
However, although related works are still relatively few, this can indeed be considered a technical innovation. At least, the authors should thoroughly discuss relevant research and clearly explain the differences between their approach and existing studies.
2. Section 4.3 is very difficult to understand.
The authors should first introduce Figure 2A to ensure that readers at least understand the data flow. Additionally, I do not clearly understand how e_0^{ob} and e^{hist} in Figure 2B are combined with the defined graph structure, nor do I see related equations. The authors primarily describe the methodology using textual explanations rather than mathematical formulations, which makes it easy for readers to become confused. Furthermore, TGGNN appears to be a new method proposed in this paper, yet it is not separately presented in detail but rather mixed in with other sections, making it difficult to understand.
3. The selected baselines are not new.
For example, in the field of human trajectory recovery, at least the following works should be considered:
[1] Si, Junjun, et al. "TrajBERT: BERT-based trajectory recovery with spatial-temporal refinement for implicit sparse trajectories." IEEE Transactions on Mobile Computing 23.5 (2023): 4849-4860.
[2] Long, Wangchen, et al. "Learning semantic behavior for human mobility trajectory recovery." IEEE Transactions on Intelligent Transportation Systems 25.8 (2024): 8849-8864.
[3] Wang, Jinming, et al. "TrajWeaver: Trajectory Recovery with State Propagation Diffusion Model." arXiv preprint arXiv:2409.02124 (2024).
Additionally, DeepMove is a classic model for the next POI prediction task. You could also include some state-of-the-art POI prediction methods, such as:
[1] Zhuang, Zhuang, et al. "TAU: trajectory data augmentation with uncertainty for next POI recommendation." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 20. 2024.
[2] Wu, Junhang, et al. "Where have you gone: Category-aware multigraph embedding for missing point-of-interest identification." Neural Processing Letters 55.3 (2023): 3025-3044.
Moreover, some trajectory generation methods could also be considered as SOTA baselines:
[1] Zhu, Yuanshao, et al. "Difftraj: Generating GPS trajectory with diffusion probabilistic model." Advances in Neural Information Processing Systems 36 (2023): 65168-65188.
[2] Wang, Jiawei, et al. "Large language models as urban residents: An LLM agent framework for personal mobility generation." arXiv preprint arXiv:2402.14744 (2024).
In summary, there are various baseline options available, and the current selection of baselines in the paper is not sufficiently up-to-date.
4. The interpretability of the experimental results is insufficient.
The authors could enhance interpretability by adding some visual analyses of the experimental results.
Other Comments Or Suggestions: 1. A strange word "de-facto" in line 77. I think human trajectory recovery methods are unrelated to any laws.
2. Line 319 and all captions of figure except figure 1 lack period.
3. All equations lack punctuation.
4. Eq 1, 2, 3 should preferably be in a separate line
Questions For Authors: 1. See weakness.
2. In the Abstract:
"Though promising, they encounter limitations in capturing complex spatial-temporal dependencies in low-sampling trajectories."
What does this sentence mean? Does it refer to the following issue:
"First, they struggle to capture intricate spatial-temporal dependencies – the interplay between spatial relationships (proximity and spatial transitions between locations) and temporal patterns (sequential dependencies or periodicity of behaviors in historical trajectories)."
However, I don't think this should be considered a problem, as many downstream tasks in trajectory data, such as next location recommendation, have already addressed this issue, for example:
[1] Yang, Song, Jiamou Liu, and Kaiqi Zhao. "GETNext: trajectory flow map enhanced transformer for next POI recommendation." Proceedings of the 45th International ACM SIGIR Conference on research and development in information retrieval. 2022.
[2] Rao, Xuan, et al. "Graph-flashback network for next location recommendation." Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining. 2022.
Do their methods still fail to solve the aforementioned problem, or is there another unresolved issue? Or is it that many existing methods for Human Trajectory Recovery have overlooked this aspect, even though it has been emphasized in related downstream tasks? Based on the subsequent sections, I also suspect that this issue might arise from the use of diffusion models. If that is the case, is using a diffusion model really necessary? Would other generative models also encounter this problem? I strongly suggest that the authors add a figure in the introduction section to clarify the research problem, as I currently find it somewhat confusing.
3. I have some confusion regarding the definition of a missing location. I couldn't find the total number of time slots. Let me give an example: suppose there are 24 time slots in a day, and the input consists of tuples in the form of (location ID, time slot), such as {(0, 1), (1, 1), (2, 1), (3, 3)}.
Does the paper consider the location for time slot 2 to be missing? Additionally, if there are multiple check-ins at different locations within the same time slot, are they all retained? This design seems somewhat unusual, as a user could stay at location 2 for more than an hour. Should distance factors also be considered?
4. Would it be possible to experiment with other generative models to evaluate the necessity of using a diffusion model?
5. I noticed in the appendix that all \lambda values are set to 1. Does this mean that all loss components contribute equally? However, intuitively, I believe their contributions should not be the same.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback.
W2:
They are shown as the first two equations in Eq. 6. This treats trajectory embeddings as sessions: $e_0^{ob}$ and $e^{hist}$ are integrated using session-based graph methods (Xu et al., 2019), as explained in Appendix A.1. We will revise Section 4.2, and a schematic diagram of the TGGNN module (yes, a newly proposed method) has been added at the link below to enhance intuitive understanding. **https://anonymous.4open.science/r/A01D/README.md**
W1&W3:
Our work focuses on free-space trajectory recovery, where the data may consist of either regular location-grid IDs or sparse, irregular check-ins, and data points are not constrained by roads; this special setting differs from the listed papers.
TAU addresses a different problem (next-POI prediction) from our work, which focuses directly on trajectory recovery. TrajBERT and TrajWeaver either operate under different assumptions (e.g., leveraging road-network constraints, regular grid IDs only, or continuous GPS) or target slightly different objectives, e.g., next-POI prediction or trajectory generation (creating new trajectories from scratch). TrajWeaver works directly with continuous GPS coordinates, and the datasets used are smooth taxi trajectories (points lying on road networks) collected in Xi'an and Chengdu. These differences highlight a key distinction in problem settings. The problem setting proposed by our baselines AttnMove, PeriodicMove, and TRILL has been widely used in the literature, ensuring fair comparisons.
Given that most of the listed models do not yet provide source code, for a recent baseline we discussed TERI (Chen et al., VLDB 2024), cited in our related work. We set up a common problem setting, built the necessary data pipeline to fit TERI, and got the model running; our model still outperforms the SOTA baselines in this setting, as shown below.
|Dataset|Methods|Recall|Dataset|Methods|Recall|
|-------|-------|------|-------|-------|------|
|Foursquare|AttnMove|0.2975|Geolife|AttnMove|0.3920|
||PeriodicMove|0.3125||PeriodicMove|0.4199|
||TRILL|0.3227||TRILL|0.4721|
||TERI|0.3355||TERI|0.4922|
||DiffMove|0.3600||DiffMove|0.5173|
W4:
While we agree that visualizations can provide intuitive insights, the denoising process in our method occurs at the embedding level rather than in the direct data space (e.g., raw trajectory points or maps). Visualizing these high-dimensional embeddings would offer very limited interpretability. Instead, the effectiveness is shown quantitatively through our results in Tables 1, 2, and 6 and Figs. 3-9.
Theoretical Claims:
Eq. 5 can indeed be derived from Eq. 3 (refer to page 4 of the cited CSDI); the derivation was omitted due to the space limit, and we will add it to the revised manuscript.
Q2 & Q4:
It refers to the limitations of deterministic trajectory recovery methods in capturing the full complexity of spatial-temporal dependencies in low-sampling scenarios. Next-location recommendation methods do not address the challenge of reconstructing entire trajectories with many missing points. Our choice of a diffusion model is driven by its iterative refinement of noisy inputs, which naturally generates a distribution of plausible trajectories rather than a single deterministic outcome. Other generative models such as VAEs or GANs usually suffer from issues like mode collapse, unstable training dynamics, and difficulty in accurately modeling complex spatial-temporal dependencies. Our diffusion-based approach leverages a probabilistic framework that not only captures the inherent uncertainty in the data but also better models the interplay between spatial relationships and temporal patterns. **We include an intro schematic at the link in W2**.
Q3:
Time slot 2 would be treated as missing. It is important to emphasize that our framework is **not inherently tied to the 30-minute/1-hour interval** and can easily adapt to other interval lengths, **such as 10 minutes or even finer resolutions**, as a preprocessing parameter. Regarding multiple check-ins within the same time slot, we aggregate them, e.g., keeping the most frequent check-in (as adopted in prior works). Repeated check-ins across consecutive slots are considered observed for each slot. Distance factors are considered in the evaluation metrics. This setting was proposed by prior baselines such as AttnMove. Spatial relationships are instead modeled by our Spatial Conditional Block.
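To make the slot semantics concrete, here is a minimal sketch (our own illustration, not the paper's code) of mapping check-in tuples to a fixed-slot sequence, marking empty slots as missing and aggregating multiple check-ins per slot to the most frequent location, as described above:

```python
from collections import Counter

def slots_from_checkins(checkins, num_slots=24):
    """Map (location_id, time_slot) check-in tuples to a per-slot sequence.

    Slots with no check-in are marked None (i.e., missing); multiple
    check-ins in one slot are aggregated to the most frequent location,
    following the preprocessing adopted in prior works such as AttnMove.
    """
    per_slot = {}
    for loc, slot in checkins:
        per_slot.setdefault(slot, []).append(loc)
    return [
        Counter(per_slot[s]).most_common(1)[0][0] if s in per_slot else None
        for s in range(num_slots)
    ]

# Using the reviewer's example shape: slot 1 has repeated check-ins at
# location 0, so it aggregates to 0; slots 0, 2, and 4 are missing.
traj = slots_from_checkins([(0, 1), (1, 1), (0, 1), (3, 3)], num_slots=5)
```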
Q5:
The two losses serve complementary purposes, balancing predictive accuracy and model generalization. Experimentally, we printed these losses during training and observed no large difference in scale. We have not treated the weight choice as a more advanced optimization problem (which is not our main focus); our setting of equal weights is also inspired by Gong et al. (2022), referenced in line 302, which demonstrated the utility of similarly balanced weighting in diffusion models applied to NLP.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. Most of my concerns have been addressed, so I will increase my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback and for increasing the score. We greatly appreciate your thoughtful review and are glad that all of your concerns have been addressed. Your suggestions have been invaluable in refining our manuscript. Thank you again for your constructive input and support. | Summary: This paper introduces DiffMove, a conditional diffusion-based model for recovering missing locations in sparse human mobility data. By converting discrete trajectory locations into a continuous embedding space, DiffMove effectively denoises and reconstructs missing locations through an embedding decoder. The model captures spatial and temporal dependencies using modules including the Spatial Conditional Block, which leverages graph neural networks and attention mechanisms, and the Target Conditional Block, which extracts knowledge from historical trajectories. Experiments on Geolife and Foursquare datasets show that DiffMove outperforms leading methods, achieving an average 11% improvement in recall rate. Although highly robust, the model could benefit from additional visualizations and efficiency analyses.
Claims And Evidence: Not all.
The authors claim that the diffusion model can enhance performance in complex, irregular, and uncertain scenarios, providing several examples to support this assertion. However, more evidence is needed to substantiate these claims, such as performing case studies or conducting specific experiments in these scenarios. The current experiments are insufficient to clearly demonstrate the limitations of existing methods and the superiority of the proposed model, which is crucial for justifying the motivation behind this research.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: There are no proofs or theoretical claims in the paper.
Experimental Designs Or Analyses: Yes, the existing experimental design is sound and valid.
Supplementary Material: Yes, the authors include additional details on data processing, model design, and other experiments such as the efficiency and scalability study in the supplementary material.
Relation To Broader Scientific Literature: This paper builds upon previous work on human trajectory recovery and the study of diffusion models. It presents the first work to design spatial-temporal conditional diffusion models for the human trajectory recovery task, achieving significant improvements and complementing the existing literature. However, this paper shares some similarities with existing works in technical designs such as CSDI [1], RNTrajRec [2], and Diff-RNTraj [3].
[1] Tashiro, Yusuke, et al. "CSDI: Conditional score-based diffusion models for probabilistic time series imputation." Advances in Neural Information Processing Systems 34 (2021): 24804-24816.
[2] Chen, Yuqi, et al. "RNTrajRec: Road network enhanced trajectory recovery with spatial-temporal transformer." 2023 IEEE 39th International Conference on Data Engineering (ICDE). IEEE, 2023.
[3] Wei, Tonglong, et al. "Diff-RNTraj: A structure-aware diffusion model for road network-constrained trajectory generation." IEEE Transactions on Knowledge and Data Engineering (2024).
Essential References Not Discussed: None
Other Strengths And Weaknesses: Strengths:
1. This paper introduces the first spatial-temporal conditional diffusion model specifically designed for human trajectory recovery, showing significant improvements.
2. It designs and integrates multiple conditional feature extraction modules to tackle the complexity of spatial-temporal dependencies.
3. Extensive experiments on two real-world datasets demonstrate the model's effectiveness and improvements over existing methods.
Weakness:
1. See the Claims And Evidence and Relation To Broader Scientific Literature sections for details.
Other Comments Or Suggestions: 1. It's better to draw a schematic diagram of the TGGNN module in the article for intuitive understanding.
2. It's better to include how to infer the real missing locations.
Questions For Authors: What would be the effect of the model if the k-th position is missing in both the current trajectories and the historical trajectories?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback.
Claims&Evidence:
We clarify how existing experiments explicitly demonstrate DiffMove’s superiority in complex, irregular, and uncertain scenarios:
1. Probabilistic Generation vs. Deterministic: Tables 1 and 3 show that sampling multiple trajectories (DiffMove) improves Recall@1 by 1.7-1.85% over single-sample generation. This directly validates our claim that probabilistic diffusion captures uncertainty in human mobility, whereas deterministic baselines cannot.
2. Robustness to Extreme Sparsity and Irregularity: Table 6 (Appendix A.10) tests DiffMove under different missing ratios. For example, the Distance-metric performance of DiffMove with an 80% missing ratio even outperforms TRILL with a 40% missing rate, and surpasses both PeriodicMove and AttnMove even when they have a lower missing rate of 20%.
3. Due to the page limit, additional case studies (which amount to cherry-picking special cases), while of some value, would extend the scope and length of the manuscript beyond the intended focus. DiffMove demonstrates significant improvements across different missing ratios (Table 6). These experiments simulate sparse, irregular, and uncertain scenarios (e.g., through random masking), and our method consistently outperforms baselines in recall and distance metrics, which already provides empirical evidence of the effectiveness of our approach. We will clarify these points in the revised manuscript.
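To make the multi-sample generation point concrete, here is a hedged sketch (our own illustration, not the authors' exact procedure) of one simple aggregation scheme for probabilistic imputation: combine several sampled recoveries by a per-slot majority vote over predicted location IDs:

```python
from collections import Counter

def aggregate_samples(samples):
    """Combine multiple sampled recoveries (each a list of location IDs,
    one per missing slot) into a single imputation via per-slot majority
    vote. One of several possible averaging techniques over samples."""
    return [Counter(slot).most_common(1)[0][0] for slot in zip(*samples)]

# Three sampled trajectories over two missing slots: the first slot
# votes 3 vs. 7, the second votes 8 vs. 5.
agg = aggregate_samples([[3, 8], [3, 5], [7, 8]])
```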
Rela To Literature:
DiffMove’s technical design and problem focus are distinct from the cited works, as detailed below:
Unlike CSDI, which handles continuous numerical sensor time series (e.g., temperature), DiffMove tackles the fundamentally different challenge of recovering discrete location IDs, and is designed as an entirely new framework of embedding-based diffusion with decoding back to the categorical space (Sec. 4.3), trained end to end in a single pass, rather than operating in a raw continuous numerical space (unlike CSDI's single regression-style output).
Another difference is the trajectory-specific architecture: our model introduces spatial-temporal conditioning (new TGGNN + cross-attention) and decoding features tailored to human mobility patterns (Sec. 4.2), which CSDI lacks.
Moreover, unlike RNTrajRec and Diff-RNTraj, which rely on vehicle data and road-network constraints (the next road must be adjacent to the current road), our approach is designed for scenarios where such external road priors are unavailable, a fundamentally different problem setting and technical design. RNTrajRec uses a transformer to infer missing points deterministically, leveraging road-network constraints (e.g., road segments) to guide the predictions. Diff-RNTraj combines diffusion modeling with road-network constraints, processes continuous GPS data and road graphs, and focuses on synthetic generation, not recovery of real-world sparse human trajectories. In contrast, our Spatial Conditional Block and Target Conditional Block (Sec 4.2) are engineered to capture the complex interplay between spatial and temporal dependencies in human trajectories.
Other Suggestions:
Thank you for your valuable feedback.
1. We have added a schematic diagram of the TGGNN module at the link below to enhance intuitive understanding, and will include it in the revised manuscript. **https://anonymous.4open.science/r/A01D/README.md**
2. Our method is trained using a masked recovery framework, where we mask certain locations in the trajectory and train the model to recover them based on the observed data, with ground truth available for validation. Once the model is fully trained, we simply apply the same recovery mechanism to infer the real missing locations. DiffMove’s inference process for missing locations is explicitly detailed in Section 4.3 and Appendix A.4. During inference:
Step 1: The model uses observed locations in the current trajectory and historical data to condition the diffusion process. Step 2: Noisy embeddings for missing slots are iteratively denoised via the reverse diffusion process (Eq. 4–5), guided by Spatial/Target Conditional Blocks. Step 3: Decode denoised embeddings.
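The three inference steps above can be sketched as a standard DDPM-style reverse loop. This is not the authors' implementation: `denoise_net(e, t, cond)` (the conditioned noise predictor) and `decode` (embedding-to-ID similarity matching) are hypothetical stand-ins for the paper's modules, and only the missing slots are denoised while observed slots stay fixed:

```python
import numpy as np

def recover_missing(e_obs, mask, cond, denoise_net, betas, decode,
                    rng=np.random.default_rng(0)):
    """Sketch of Steps 1-3: start missing slots from Gaussian noise,
    iteratively denoise conditioned on observed/historical context
    (Eq. 4-5 in the paper are DDPM-style), then decode embeddings
    back to discrete location IDs.

    mask[i] is True for observed slots; e_obs holds their embeddings.
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    # Step 1: observed slots condition the process; missing slots start as noise.
    e = np.where(mask[..., None], e_obs, rng.standard_normal(e_obs.shape))
    # Step 2: reverse diffusion, keeping observed slots clamped each step.
    for t in reversed(range(len(betas))):
        eps = denoise_net(e, t, cond)
        mean = (e - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(e.shape) if t > 0 else 0.0
        e = mean + np.sqrt(betas[t]) * noise
        e = np.where(mask[..., None], e_obs, e)
    # Step 3: decode denoised embeddings to location IDs.
    return decode(e)
```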
Questions:
DiffMove leverages contextual information from adjacent time slots and the overall spatial-temporal patterns learned during training to infer the most plausible value for that slot. 1. Transition Patterns: The module in Sec. 4.2.1 models transitions between locations, inferring the k-th slot from neighboring observations. 2. Periodicity: Cross-attention identifies recurring patterns (e.g., daily routines) across historical days, even if the k-th slots are missing. 3. Global Mobility Knowledge: The embedding table E encodes universal location semantics (e.g., "office" vs. "home"), enabling recovery via similarity matching. 4. Empirically, this is validated by our robustness study (Table 6), which shows that DiffMove with 80% missing data outperforms all baselines, demonstrating robustness to extreme sparsity, including scenarios where critical slots are missing.
Claims And Evidence: The claims are clear and convincing
Methods And Evaluation Criteria: yes
Theoretical Claims: .
Experimental Designs Or Analyses: yes, evaluations on multiple datasets are good
Supplementary Material: no
Relation To Broader Scientific Literature: Urban computing, spatio-temporal data science, pandemic control
Essential References Not Discussed: references are good
Other Strengths And Weaknesses: The proposed model is novel and could be applied to other scenarios. The evaluations are solid, with lots of comparisons on multiple datasets, detailed ablation studies, and solid comparisons. The writing is clear and easy to follow, with lots of figures that make it much easier to follow.
There are also some concerns.
(i) The authors could discuss potential impact of the recovered data. Otherwise, it is hard for broader readers to understand. For example, who can use the recovered results for what applications. More discussion will be appreciated.
(ii) In line 66, trajectories are ID-based representations, this is not easy to understand. Because GPS trajectories could also been continuous, e.g., 36.88284.
(iii) is Recall a regular metric for this kind of recovery problem?
(iv) The writing could be revised. For example, in Line 343, "%Improv." could be revised to "%Improvement" since there is enough space in the table.
Other Comments Or Suggestions: no
Questions For Authors: no
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thanks for your valuable comments and stating our evaluations on multiple datasets are good, claims are clear and convincing. Our responses to other parts are as below:
W(i):
Recovering sparse human trajectories, particularly those involving Points of Interest (POIs), significantly enhances various mobility-related applications. By accurately reconstructing incomplete trajectory data, we can improve POI recommendations, leading to more personalized and relevant suggestions for users. Additionally, comprehensive trajectory data supports better urban planning and traffic management by providing insights into movement patterns and congestion areas. Location-based services, such as targeted advertising and ride-sharing, also benefit from complete trajectory information, resulting in improved user engagement and operational efficiency. Overall, the ability to recover and utilize complete trajectory data is crucial for advancing various applications that rely on understanding human mobility. We will clarify this in the revised manuscript to make readers easier to understand.
W(ii):
The term "ID-based representations" refers to how locations are discretized into identifiable points (e.g., POI locations or geographical grid cells) rather than being represented as continuous GPS coordinates. As mentioned in Section A.8 Data Preprocessing, we used publicly available online map services to define the geographical partitioning of the study areas (Tokyo and Beijing) into 500m x 500m blocks. This partitioning provides a grid-based ID representation of locations. This is commonly used in mobility data analysis where raw GPS trajectories are mapped to meaningful locations, making it easier to handle data sparsity and improve interpretability. The baselines—AttnMove proposed this, and PeriodicMove, TRILL focus on the same data preprocessing, which aligns with our problem setting. We will clarify this in the revised manuscript.
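As a rough illustration of the grid-based ID representation described above (our own sketch using a simple equirectangular approximation; the paper's actual pipeline relies on online map services to partition the study areas into 500m x 500m blocks):

```python
import math

def gps_to_grid_id(lat, lon, lat0, lon0, cell_m=500.0):
    """Map a GPS point to a discrete (row, col) grid-cell ID by
    partitioning the study area into roughly cell_m x cell_m blocks
    from an origin (lat0, lon0). Illustrative only."""
    m_per_deg_lat = 111_320.0  # approx. metres per degree of latitude
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(lat0))
    row = int((lat - lat0) * m_per_deg_lat // cell_m)
    col = int((lon - lon0) * m_per_deg_lon // cell_m)
    return row, col
```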
W(iii):
Yes, recall is a relevant metric for trajectory recovery, especially when evaluating how well the model retrieves missing locations. Given that real-world applications prioritize capturing as many true missing locations as possible, recall helps assess the effectiveness of the model in recovering essential trajectory points. The baselines AttnMove, PeriodicMove, and TRILL address the same trajectory recovery task, which aligns with our problem setting, and they adopted the same evaluation metrics, so we follow them to ensure a fair comparison. Other works in trajectory imputation and POI recommendation also use recall as a key performance indicator, aligning with our evaluation approach.
W(iv):
Noted, thanks for pointing out this minor issue. We will modify "%Improv." to "%Improvement" in the table of the revised manuscript.
Supplementary Material:
We believe the answer "No" here may be a mistake or misunderstanding: we do have supplementary materials, which include implementation details and code. We also include additional details on data processing, model design, and further experiments, such as efficiency and scalability studies, in the supplementary material and appendices.
Overall, thanks for your positive and insightful feedback.
---
Rebuttal Comment 1.1:
Comment: The reviewer appreciates the authors for providing detailed responses. Most of my concerns have been resolved and I will raise my rating.
One tiny question is regarding the "recall". Is it possible that the recall is good but the precision turns out to be terrible? Additional experiments are not necessary given the limited time window.
Hope to see the revised version as the authors have mentioned.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback and for raising the score. We indeed use multiple metrics in our evaluation, including Recall and Mean Average Precision (MAP) in Section 5.3. The inclusion of MAP in Table 1 ensures that precision is also captured, as MAP reflects the quality of the overall ranking of imputed locations. In our experiments, a high Recall is accompanied by competitive MAP values; we treat Recall as the most representative metric because it is commonly used across all our baselines. This indicates that our method does not simply over-predict missing locations but recovers them accurately and with good precision. We will clarify this point further in the revised manuscript.
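As an illustration of how these two metrics interact, here is a minimal sketch (our own simplification for exposition, not the paper's evaluation code) of Recall@k and MAP in the common setting where each missing slot has a single ground-truth location, so average precision reduces to the reciprocal rank:

```python
def recall_at_k(ranked_preds, truths, k):
    """Fraction of missing slots whose true location appears in the
    top-k ranked candidate locations."""
    hits = sum(truth in preds[:k] for preds, truth in zip(ranked_preds, truths))
    return hits / len(truths)

def mean_average_precision(ranked_preds, truths):
    """With a single ground-truth location per slot, average precision
    reduces to the reciprocal rank of the true location (0 if absent)."""
    aps = []
    for preds, truth in zip(ranked_preds, truths):
        aps.append(1.0 / (preds.index(truth) + 1) if truth in preds else 0.0)
    return sum(aps) / len(aps)
```

A high Recall@k with a low MAP would indicate the true locations are retrieved but ranked poorly; competitive values of both indicate accurate, well-ranked recovery.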
Thank you again for your thoughtful question with your encouraging support and for highlighting the potential impact of our work on the research community. | null | null | null | null | null | null |
Mechanisms of Projective Composition of Diffusion Models | Accept (poster) | Summary: The paper proposes a theory for understanding composition in diffusion models and how it can produce samples that are out of distribution for each of the constituent models. Their key insight is that composition of distributions is ill-specified unless tied to a projection that specifies which attribute we would like to compose. This leads to the idea of projective composition, which can be realized with the composition operator under sufficient conditions related to the factorizability of the distributions we want to compose. An explicit construction of this composition is provided, which is similar to Bayesian composition except that the unconditional score is replaced with a background score. It is shown that it is also sufficient for this factorizability to hold in some feature space, greatly generalizing the theory, although this result does not provide an explicit construction for sampling from the composed distribution.
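A sketch of this construction in score space (notation ours): the Bayes-style linear score combination, with the unconditional score replaced by a background score $s_{\mathrm{bg}}$:

```latex
% Composition of N conditional scores s_i against a background score s_bg
% (our notation for the construction described above):
s_{\mathrm{comp}}(x, t) \;=\; s_{\mathrm{bg}}(x, t) \;+\; \sum_{i=1}^{N} \bigl( s_i(x, t) - s_{\mathrm{bg}}(x, t) \bigr)
```

Taking $s_{\mathrm{bg}}$ to be the unconditional score recovers the usual Bayes composition.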
## update after rebuttal
The empirical evidence provided in the rebuttal addresses my concerns surrounding the empirical evidence. I have increased my score from 3 to 4.
Claims And Evidence: The theoretical results are well supported with proofs. Some empirical results are given, but they appear anecdotal and lack statistical analysis.
Methods And Evaluation Criteria: The main contribution of this work is theoretical, so the limited empirical results do make sense modulo the issues described above.
Theoretical Claims: The theorems that appear in the main body of the work all make intuitive sense. I have not thoroughly checked the proofs.
Experimental Designs Or Analyses: As mentioned above, I think the empirical results are more anecdotal than scientific as is. While not the main contribution, some basic analysis could be performed (e.g. something like “out of 100 identical trials, composition with an empty background was successful 100 times and bayesian composition was successful 12 times, which is significantly improved with a p-value of …”).
Supplementary Material: No.
Relation To Broader Scientific Literature: This work provides theoretical foundations for understanding the the composition of diffusion models. While composition has been shown to be possible in prior work, the aim of this work is to provide additional theoretical understanding for why and under what conditions it is expected to work.
Essential References Not Discussed: Not to my knowledge.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: See my suggestions under "Experimental Designs Or Analyses".
Questions For Authors: Please see my comments under “Experimental Design or Analyses” - I am curious about how consistent the results provided in Figures 3 and 5 are.
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We thank the reviewer for their support for our paper and helpful suggestions.
* The reviewer suggests a quantitative analysis of our CLEVR experiments. We agree that this is an excellent idea and have performed the analysis. The results are shown in the table below and will be included in the camera-ready version should the paper be accepted.
* To produce the table below, we generated 100 samples using each composition method, and manually counted (to avoid any potential error in using a classifier) the objects in correct locations (i.e. locations corresponding to the conditioners of the distributions being composed) in each generated image. In the table below we record the histogram of object counts in correct locations.
* Regarding the reproducibility of Figures 3 and 5, we provide additional samples in Figures 8 and 10, respectively, in the appendix. Further length-generalization is also explored in Figure 9. In addition, the new table below quantitatively confirms the reproducibility of the results of Figure 3 (when attempting to compose 3 single-object distributions as in Figure 3, the empty-background (projective) composition correctly produced images containing 3 objects in 99/100 trials, while the Bayes composition never produced an image containing 3 objects in 100 trials).
Composition of location-conditioned CLEVR distributions
* N denotes number of distributions being composed (hence the expected number of objects) -- we test N=1 through N=6
* "Single-object empty" composes single-object object distributions with an empty background
* "Single-object Bayes" composes single-object object distributions with an unconditional background
* "Bayes-cluttered" composes 1-5 object distributions (with location label assigned to a single randomly-chosen object) with an unconditional background
* The table shows the histogram of manual counts, that is, each column lists the number of images that contained the given number of objects in correct locations.
| Style | N | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|---|---|
| Single-object empty | 1 | | 100 | | | | | |
| | 2 | | | 100 | | | | |
| | 3 | | | 1 | 99 | | | |
| | 4 | | | | 2 | 98 | | |
| | 5 | | | | | 2 | 98 | |
| | 6 | | | | | | 3 | 97 |
| Single-object Bayes | 1 | | 100 | | | | | |
| | 2 | 10 | 67 | 32 | | | | |
| | 3 | 36 | 62 | 2 | | | | |
| | 4 | 77 | 23 | | | | | |
| | 5 | 66 | 32 | 2 | | | | |
| | 6 | | 34 | 6 | 3 | | | |
| Bayes-cluttered | 1 | | 100 | | | | | |
| | 2 | | | 100 | | | | |
| | 3 | | | | 100 | | | |
| | 4 | | | | | 100 | | |
| | 5 | | | | | 2 | 98 | |
| | 6 | | | | | | 2 | 98 |
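As a quick quantitative check on the N=3 row above (99/100 images with 3 correct objects for the empty-background composition vs. 0/100 for Bayes), one can run a one-sided Fisher exact test; the sketch below (the choice of test is ours, in the spirit of the reviewer's suggestion) computes it directly from the hypergeometric distribution:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    probability, under the hypergeometric null with fixed margins, of a
    top-left cell count at least as large as a."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    p = 0.0
    for x in range(a, min(row1, col1) + 1):
        p += comb(row1, x) * comb(n - row1, col1 - x) / denom
    return p

# 99/100 successes (empty background) vs. 0/100 successes (Bayes) at N=3.
p_value = fisher_one_sided(99, 1, 0, 100)
```

For these counts the p-value is vanishingly small (far below 1e-50), so the gap is significant at any conventional level.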
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarification and the additional data. Since my primary concern surrounded the lack of proper empirical analysis, I am satisfied by the table provided by the authors and will update my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your support, and the good suggestion to improve our empirical analysis! | Summary: The authors present a formalization of compositionality in diffusion models. Using diffusion models separate for particular objects and background can be joined together in various ways. The authors explore these different ways and point out the correct way of composing these. The authors suggest a particular way of composition if a collection of distributions satisfies a set of conditional independencies, and then continue to generalize that by using diffeomorphisms and showing if such a diffeomorphism exists, their composition stil holds. The authors show various illustrations in the form of the CLEVR dataset and an example from a text-conditional diffusion model.
Claims And Evidence: .
Methods And Evaluation Criteria: There are no quantitative results in the paper.
Theoretical Claims: I cannot find any particular issues with the theorems and proofs in the work, but I find the requirement of the existence of a diffeomorphism in 6.2 strong enough that I question the usefulness of the result.
Experimental Designs Or Analyses: See also strengths and weaknesses:
The authors mostly focus on the CLEVR dataset for examples and all experimental examples are qualitative.
Supplementary Material: I reviewed B,C,D,H
Relation To Broader Scientific Literature: .
Essential References Not Discussed: NA
Other Strengths And Weaknesses: - The idea of formalizing compositionality of diffusion models is relevant and an interesting topic
- The authors mostly focus on the CLEVR dataset for examples and all experimental examples are qualitative. While the field is perhaps not developed to a point where there is a set benchmark, the work of Du et al (2023) serves as major inspiration for the authors, and it would certainly make sense to run the same experiments reported in that paper for a quantitative comparison.
- While theorem 5.3 is interesting, it is of limited use other than for datasets such as CLEVR, where objects are easily separated spatially. The authors do present theorem 6.2, and while I cannot find any issues with the proof per se, its use is extremely limited. The assumption of the existence of a diffeomorphism that perfectly separates all variables is very strong. Effectively, the difficulty has now been moved into the assumption, and the authors make no effort to investigate when this assumption is valid. There is moreover the issue that there is no guarantee the reverse process is correct. Altogether, this calls into question how useful the result in 6.2 is.
- In this situation, it would be good to have a strong, quantitative experimental evaluation to demonstrate the use of such a theorem, but as mentioned earlier, evaluation on natural image datasets in particular is lacking.
Other Comments Or Suggestions: .
Questions For Authors: See strengths and weaknesses
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for their time, insightful questions, and constructive critiques.
Overall, we want to emphasize that the goal of this paper is to understand and predict when composition will work — and just as importantly, when it will fail. That is, we want to theoretically explain prior empirical observations about when composition worked or failed; we do not aim to introduce any new methods.
We paraphrase and respond to your specific questions below.
Q1: In real-world settings, when we can expect the existence of a diffeomorphism that separates the conditions we want to compose?
* First on a technical note, we actually only require “$C^1$ diffeomorphisms”, i.e. the feature-map and its inverse should be differentiable. We will clarify this in the revision.
* Our “diffeomorphic” assumption is very closely related to existing assumptions in the literature on “disentangled feature representations.” For example, the long line of work on learning disentangled representations implicitly assumes that such a disentanglement is (at least approximately) possible (e.g. [1] on VAEs and [2] on GANs). That is, if we are in a setting where we have a neural network that maps to and from a “disentangled” feature space (e.g. a VAE or a BiGAN), then this neural network defines our requisite $C^1$ diffeomorphism (technically this assumes the encoding and decoding networks are differentiable everywhere, which we can guarantee e.g. if the network uses smooth activation functions).
* Finally, we do not believe that disentangled representations always exist for any distributions we might wish to compose--- and we are equally interested in understanding these failure cases. For example, “style” and “content” features are typically believed to be disentangled in the existing literature, and thus we expect style+content compositions to work. On the other hand, some concepts may be impossible to disentangle in any reasonable features space. In such cases, a diffeomorphism may not exist, and we do not expect these concepts to compose well (an example is the horse+dog composition in Figure 6).
Q2: How robust is the theory? Is it really necessary to perfectly satisfy Factorized Conditionals?
* Although our theory technically requires perfect independence, which is indeed a strong condition, our CLEVR experiments empirically study a case where the conditions hold only approximately, and explore both the robustness of the theory as well as its limits in this imperfect case (please see response to reviewer nnqo for further detail).
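To make the exactly-factorized case concrete, here is a minimal self-contained toy sketch (the Gaussian means, unit covariances, and Langevin step size are our own illustrative choices, not from the paper): two unit-variance Gaussians each place an "object" on one of two disjoint coordinates, so Factorized Conditionals holds exactly, and the composed score equals the score of the distribution with both objects present at once:

```python
import random

# Toy means (our choices): p1 puts an "object" on coordinate 0, p2 on
# coordinate 1, and the background is the standard Gaussian at the origin.
# All covariances are the identity, so each score is simply s(x) = mu - x.
MU1, MU2, MU_BG = [3.0, 0.0], [0.0, 3.0], [0.0, 0.0]

def score(mu, x):
    return [m - xi for m, xi in zip(mu, x)]

def composed_score(x):
    # Composition operator: s_bg + (s1 - s_bg) + (s2 - s_bg).
    # For these Gaussians this equals the score of N(MU1 + MU2, I),
    # i.e. both objects present at once -- the projective composition,
    # since the two projections mask disjoint coordinates.
    s_bg, s1, s2 = score(MU_BG, x), score(MU1, x), score(MU2, x)
    return [b + (u - b) + (v - b) for u, b, v in zip(s1, s_bg, s2)]

def langevin(steps=500, eps=0.01):
    # Unadjusted Langevin dynamics driven by the composed score.
    x = [0.0, 0.0]
    for _ in range(steps):
        s = composed_score(x)
        x = [xi + eps * si + (2 * eps) ** 0.5 * random.gauss(0.0, 1.0)
             for xi, si in zip(x, s)]
    return x
```

Averaging Langevin samples recovers a mean near (3, 3), i.e. both objects appear together even though neither component distribution ever contained two objects.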
Q3: What are the practical implications/usefulness of the result in Lemma 6.3 which says that, even if projective composition is possible at $t=0$, reverse diffusion may not correctly sample from it?
* The fact that reverse-diffusion sampling may not work even when composition is possible at $t=0$ explains the “negative-result” in Figure 5, and may help explain other failures of composition in the literature.
* Most notably, this result helps explain empirical findings in Du et al. (2023), who showed that HMC sampling (which in particular allows sampling directly at $t=0$) was necessary to enable successful composition in many cases. Our theory helps explain why HMC sampling worked when standard diffusion sampling did not. We discuss this further in Appendix J.1.
Q4: Text-to-image evaluations?
* Our goal is primarily to theoretically explain existing empirical evaluations in the literature. In text-to-image settings, the Bayes composition (used in Du et al. (2023) and other works) is often approximately projective, as discussed in Section 5.4. Therefore, existing empirical results in text-to-image settings are typically already constructed in a way that is compatible with our theory. We therefore accept the existing experimental results of Du et al (2023) and others and seek to understand/explain them (both successes and failures) through our theory. Of course, there is much more to study and explore empirically in text-to-image settings that we hope to explore in future work.
Q5: Quantitative evaluation?
* We performed some additional quantitative evaluations of our CLEVR experiments: please see the table of results and description of the experiments in our response to Reviewer DBEj.
If the reviewer’s concerns have been adequately addressed, we kindly ask they consider raising their score to support acceptance.
References:
[1] Isolating Sources of Disentanglement in VAEs
RTQ Chen, X Li, RB Grosse, DK Duvenaud. NeurIPS 2018.
[2] A style-based generator architecture for generative adversarial networks
T Karras, S Laine, T Aila. CVPR 2019.
---
Rebuttal Comment 1.1:
Comment: I do not have fundamental concerns regarding correctness in this paper, but I remain skeptical of its usefulness and its experimental validation given such a strong assumption. The response in Q1 was useful, and I understand the reasoning connecting this assumption to the pictures of horses and dogs. However, I think theory rooted in such a strong assumption invites, and to some extent requires, a strong experimental evaluation, which the authors seem to defer to other work (i.e. response Q4).
Moreover, the experimental conclusions identified from other work are anecdotal, since there is no way to verify whether the assumptions are satisfied in those cases beyond intuition.
It also limits the use to the community, since again it is difficult for practitioners to asses in which cases this theory can be used, beyond intuition.
I would welcome any experimental results or literature that attempts to make that statement in rigorous in some way.
---
Reply to Comment 1.1.1:
Comment: We appreciate your engagement. You raise some important questions regarding evidence for and connections to disentangled representations, which we will try to address here.
1. What is the precise connection between the notion of disentanglement and Factorized Conditionals?
2. How can we measure “disentanglement” quantitatively?
3. What experimental evidence is available for disentanglement, and for which concepts?
* Disentanglement is somewhat difficult to precisely define as we discuss next. However, to quote Karras et al. [2]: “There are various definitions for disentanglement, but a common goal is a latent space that consists of linear subspaces, each of which controls one factor of variation.” This definition is a necessary condition for Factorized Conditionals.
* Regarding quantitative metrics for disentanglement: Note that there is a fundamental barrier to rigorously testing “disentanglement”-type assumptions in high dimensions, since it is impossible to test independence of two arbitrary high-dimensional random variables in poly(dimension) time. (This follows from e.g. cryptographic PRFs.) Nevertheless, there is a large body of work towards designing disentanglement metrics appropriate for “real-world” distributions (e.g. the disentanglement metrics introduced by BetaVAE and FactorVAE [3], MIG in [1], etc.). Several of these metrics are effectively one-sided tests of our Factorized Conditionals assumption: for example, if the FactorVAE metric reports high entanglement, then our Factorized Conditionals assumption must be false. It is therefore reassuring that the FactorVAE metric reports low entanglement for several realistic datasets [3]. Furthermore, [4] shows that many of the most common disentanglement metrics are fairly correlated with each other.
* Many datasets have been investigated in the disentanglement literature. For example, [1] studies CelebA, 3D Faces, dSprites, [2] studies CelebA, FFHQ, [3] studies 3D Shapes, 3D Faces, CelebA, [4] studies Cars3D, Shapes3D, MPI3D, and [5] studies dSprites, smallNORB, Cars3D, Shapes3D. All provide qualitative and quantitative (via the various metrics described above) evidence of disentanglement of various concepts present in the datasets. For example, [2] investigates disentanglement between the 40 attributes labeled in the CelebA dataset (such as “BlackHair”, “Eyeglasses”, etc.); suggesting for example that composition of BlackHair+Eyeglasses is likely to work. Also, [6] qualitatively explores disentanglement between style and content for style transfer.
Finally, we would like to contextualize our work by mentioning that the theoretical understanding of compositional generation is at a very early stage: prior to our work, there was not even a formal definition of composition which could capture our applications. Moreover, it was not known whether any reasonable assumptions exist which would imply correct composition. Thus, part of our contribution is identifying a “natural” assumption under which composition works. The value of this assumption, we believe, is that it tells us “one possible reason” that composition can work in practice. We agree that it is an important question to bring these assumptions closer to reality, and we hope our work inspires future work in this direction. We hope you will agree that our work is a good first step.
References:
[1] Isolating Sources of Disentanglement in VAEs. RTQ Chen, X Li, RB Grosse, DK Duvenaud. NeurIPS 2018.
[2] A style-based generator architecture for generative adversarial networks. T Karras, S Laine, T Aila. CVPR 2019.
[3] Disentangling by Factorising. H Kim, A Mnih. ICML 2018.
[4] DisDiff: Unsupervised Disentanglement of Diffusion Probabilistic Models, T Yang et al. NeurIPS 2023.
[5] Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations. F Locatello et al.
[6] Content and Style Disentanglement for Artistic Style Transfer. Kotovenko, D. ICCV 2019. | Summary: This paper gives a rigorous theoretical framework for understanding composition in diffusion models, with a focus on out‐of‐distribution extrapolation and length‐generalization. The authors introduce the notion of “projective composition,” which formalizes the idea that a composed distribution should, when viewed through specified projection functions, match the marginals of the component distributions. They derive conditions—most notably via the Factorized Conditionals assumption—under which linear score combination (and its feature-space analogue) yields a correct composition. This paper supports its theoretical results with experiments on synthetic CLEVR data, demonstrating instances of length-generalization and discussing practical sampling challenges.
Claims And Evidence: The main claims are that: first, prior definitions (simple product and Bayes composition) fail to capture the desired out-of-distribution behavior; second, projective composition, as defined via appropriate projection functions, can correctly compose diffusion models, and, third, under Factorized Conditional assumptions, the proposed composition operator yields a distribution with the intended marginals. These claims are supported by theoretical results (e.g., Theorem 5.3 and Theorem 6.1) and illustrated through synthetic experiments. Nevertheless, while the derivations are insightful, some proofs are only sketched and the reliance on strong assumptions (e.g., perfect factorization) may limit the generality of the evidence.
Methods And Evaluation Criteria: The paper combines theoretical analysis with experiments on a controlled synthetic dataset (CLEVR). The methods contain defining novel composition operators and establishing conditions for their correctness via rigorous proofs. The evaluation criteria are appropriate for a theory-focused work, though the empirical validation remains limited to synthetic settings. A broader set of experiments on more complex, real-world data would assist to corroborate the practical relevance of the theoretical findings.
Theoretical Claims: The paper establishes several non-trivial theoretical claims regarding the behavior of composition operators in diffusion models. The formal definition of projective composition (Definition 4.1) and subsequent results (e.g., Theorem 5.3 on the correctness of composition under Factorized Conditionals, and Theorem 6.1 in feature space) are substantial contributions. Nevertheless, the proofs are sometimes only outlined, and some underlying assumptions (such as exact independence across masked coordinates) might not hold in practice.
Experimental Designs Or Analyses: Experiments on the CLEVR dataset illustrate key phenomena such as length-generalization and the sensitivity of composition to background choice. Although these experiments effectively demonstrate the theory in a controlled environment, the experimental section is relatively narrow in scope. Extending the experiments to more realistic datasets could help strengthen the overall impact.
Supplementary Material: The supplementary material offers additional proofs and experimental details that support the main text. While it is comprehensive, some parts are highly technical and could benefit from clearer explanations to aid reproducibility and understanding.
Relation To Broader Scientific Literature: The work is well-situated within the literature on diffusion models, compositional generation, and generative modeling in general. It builds upon and extends prior methods such as those by Du et al. (2023) and Liu et al. (2022), offering a novel perspective by formally addressing the limitations of existing composition definitions. The paper, in addition, relates to literature on disentangled representations, which underpins its Factorized Conditional assumption.
Essential References Not Discussed: Despite the fact that the paper cites a wide range of related works, a deeper discussion of literature on disentangled feature learning and alternative composition strategies (especially in the context of real-world image synthesis) could further contextualize the contributions.
Other Strengths And Weaknesses: Strengths:
1. Introduces a novel and formal definition of composition (projective composition) that addresses clear limitations in prior work.
2. Provides a set of theoretical results that illuminate when and why linear score combination can yield correct composition in diffusion models.
3. Connects theoretical insights with empirical observations on synthetic data, offering useful perspectives on sampling challenges.
Weaknesses:
1. The Factorized Conditional assumption, critical for the theoretical guarantees, may be too strong and not fully reflective of practical scenarios.
2. Experimental validation is limited to synthetic datasets, leaving open questions about applicability in more complex, real-world settings.
3. Some proofs and technical derivations are only sketched, which could hinder reproducibility and complete understanding.
Other Comments Or Suggestions: The paper could benefit from clearer exposition in some of the more technical sections, as well as from an expanded experimental section that explores the framework’s applicability beyond synthetic examples. Detailed discussion on potential methods to address the identified sampling challenges could also strengthen the work.
Questions For Authors: 1. Can you provide additional empirical evidence on real-world datasets to assess whether the Factorized Conditional assumption holds approximately in practice?
2. How robust are your theoretical results if the independence assumptions are only approximately satisfied? Could the framework be extended to account for partial dependencies?
3. Could you elaborate on potential strategies to mitigate the sampling challenges (as noted in Theorem 6.1 and Lemma 6.3) in practical implementations of your composition operator?
4. Are there any plans to integrate or test your framework with more complex, high-dimensional real-world image datasets to further validate its practical impact?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their support for our work and insightful questions, to which we respond individually below.
Weaknesses
Q1: The Factorized Conditional assumption, critical for the theoretical guarantees, may be too strong and not fully reflective of practical scenarios.
* Theorem 6.1 shows that it is enough to satisfy Factorized Conditionals in *some* feature-space, even if the assumption is not satisfied in pixel-space: that is, as long as there exists some feature-map which “disentangles” features in the appropriate sense, then distributions will compose correctly. We further discuss the empirical evidence for such disentangled features spaces, as well as the robustness of the theory to approximate satisfaction of the conditions, in our answers to Q1 and Q2 below.
Q2: Experimental validation is limited to synthetic datasets, leaving open questions about applicability in more complex, real-world settings.
* Our goal is primarily to theoretically explain existing empirical evaluations in the literature. In text-to-image settings, the Bayes composition is often approximately projective, as discussed in Section 5.4. We therefore accept the existing experimental results of Du et al (2023) and others using the Bayes composition, and seek to understand/explain them (both successes and failures) through our theory. Of course, there is much more to study and explore empirically in text-to-image settings that we hope to explore in future work.
Q3: Some proofs and technical derivations are only sketched, which could hinder reproducibility and complete understanding.
* We provide complete proofs of all claims in the Appendix. In particular, Theorem 5.3 is sketched in the main text but proved formally in Appendix G. Theorem 6.1 and Lemma 6.2, 7.1, and 7.2 are proved in Appendices H, I, and J.
Questions
Q1: Can you provide additional empirical evidence on real-world datasets to assess whether the Factorized Conditional assumption holds approximately in practice?
* This question is closely related to the existing literature on “disentangled feature representations.” This long line of work (e.g. [1] on VAEs and [2] on GANs) implicitly assumes that such a factorized representation is (at least approximately) possible, and there is substantial empirical evidence supporting this for at least some concepts. For example, “style” and “content” features are typically believed to be disentangled in the existing literature, and thus we expect style & content to form Factorized Conditionals. We mention this connection in Section 7, but will elaborate on it in the revision.
Q2: How robust are your theoretical results if the independence assumptions are only approximately satisfied? Could the framework be extended to account for partial dependencies?
* Currently, the theory requires that the independence assumptions be satisfied exactly, but developing robust versions is an important direction for future work.
* Empirically, we use the CLEVR experiments to probe the robustness of the theory. In the CLEVR setting, Factorized Conditionals holds only approximately, due to the possible occlusions and shadowing effects between different objects. Our experiments show that projective composition is approximately, but not exactly, achieved. To push the limits of this robustness, in Figure 9 we attempt to length-generalize up to 9 objects (which works up to about ~6 objects and then degrades).
Q3: Could you elaborate on potential strategies to mitigate the sampling challenges (as noted in Theorem 6.1 and Lemma 6.3) in practical implementations of your composition operator?
* Yes! As you note, Lemma 6.3 tells us that even if projective composition is possible at t=0, reverse diffusion may not correctly sample from it. Practically, this suggests that non-diffusion sampling methods that enable sampling directly at t=0, such as variants of Langevin dynamics, may be necessary to achieve projective composition in practice (when it is possible at t=0). This is consistent with empirical findings in Du et al. (2023), who showed that HMC sampling was necessary to perform composition in many cases. We discuss this further in Appendix J.1.
Q4: Are there any plans to integrate or test your framework with more complex, high-dimensional real-world image datasets to further validate its practical impact?
* We agree this is an important area for future work; we consider the present work as the first step in this direction.
References:
[1] Isolating Sources of Disentanglement in VAEs
RTQ Chen, X Li, RB Grosse, DK Duvenaud. NeurIPS 2018.
[2] A style-based generator architecture for generative adversarial networks
T Karras, S Laine, T Aila. CVPR 2019. | Summary: This paper proposes a new theoretical framework for analyzing a special type of composition in diffusion models, and it specifically focuses on two previously discovered phenomena in diffusion model composition: out-of-distribution (OOD) extrapolation and length-generalization. The theoretical framework aims at the product-style compositions implemented with diffusion models via a linear combination of scores. Prior studies propose to describe the composed distribution as a simple product of two distributions, or the Bayes composition of them. Yet, the paper uses the CLEVR experiment as an intuitive illustration to show that these two definitions can not really cover OOD composition results and thus will fail to do length-generalization in the CLEVR experiments. Based on this, the paper defines a new form of distribution composition: **Projective Composition**. Intuitively, it requires the composed distribution to be the "same" as each single distribution when viewed from a projection defined for each single distribution. This **Projective Composition** can describe real OOD and length-generalization. The paper further defines a **Composition Operator** to compose a set of distributions, and a **Factorized-Conditionals** that defines specific features of a set of distributions and projections, such as the projections are disjoint masking of the coordinates. The paper further shows that when **Factorized-Conditionals** is satisfied, the reverse-diffusion SDE using compositional scores following **Composition Operator** will satisfy the desired **Projective Composition**. The paper then argues how the successful OOD settings in the CLEVR experiment approximately satisfy **Factorized-Conditionals**. Moreover, the paper discusses how similar analysis can be extended to feature space. 
In the feature space, they show under the constraint of **Factorized-Conditionals**, the **Composition Operator** also defines a **Projective Composition**; however, how to generate such Projective Composition via diffusion sampling is unknown within the theoretical framework. At last, the paper discusses how the proposed theoretical framework can help understand other empirical understandings during diffusion model composition.
Claims And Evidence: The claims are clear and convincing.
Methods And Evaluation Criteria: The evaluation criteria make sense.
Theoretical Claims: I have checked the proofs of most of the theorems except for Lemma 6.3. I haven't fully grasped the intuitive meaning of Lemma 6.3 and would appreciate remarks from the authors.
Experimental Designs Or Analyses: I have checked appendix B for the experiment details.
1. There are questions related to the conditional variable of the trained EDM2 on CLEVR: what is the conditional variable - is the generation trained to be conditioned on the number of objects, shape of objects, color of objects, position of objects, or a mixture of them? This question arises since the paper seems to require distributions that can control position, or color, or number of objects.
2. After training the diffusion model, in B.2, how to use the trained diffusion model to get/define a generation model following certain $p_i$?
Supplementary Material: I have read all parts of the supplementary material.
Relation To Broader Scientific Literature: Maybe the paper can inspire future works to design a new sampling method for feature decomposition, or propose a theoretical framework to understand feature space decomposition.
Essential References Not Discussed: NA
Other Strengths And Weaknesses: Strengths:
1. The paper is well written and easy to follow, with motivation, theorems, remarks, proof scratches, and empirical demonstrations.
2. The paper studies a novel theoretical problem of understanding OOD and length-generalization phenomenon in diffusion model composition. The interesting projective composition is analyzed in both pixel space and feature space.
Weakness:
1. The main theoretical results only cover sampling in the pixel space, and a theoretically successful result is lacking in the feature space. Yet, feature space composition is an important application in diffusion decomposition.
2. Although the paper discusses in B.2 how the CLEVR settings approximately satisfy Factorized-Conditionals, it is unknown how practical the definition of Factorized-Conditionals will be in other real world diffusion composition, and how far it can be generalized to other successful compositions, such as composing different text prompts with/without different region masks to generate an image.
Other Comments Or Suggestions: NA
Questions For Authors: Q1: Can authors explain the intuitive meaning of Lemma 6.3 and discuss some remarks on it?
Q2: In appendix B, training EDM2 on CLEVR, what is the conditional variable? Is the generation trained to be conditioned on the number of objects, shape of objects, color of objects, position of objects, or a mixture of them?
Q3: In appendix B.2, after training the diffusion model, how to use the trained diffusion model to get/define a generation model following certain $p_i$?
Q4: How practical will the definition of Factorized-Conditionals be in other real-world diffusion composition, and how far it can be generalized to other successful compositions, such as composing different text prompts with/without different region masks to generate an image?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their support for our work and insightful questions, to which we respond individually below.
Weaknesses:
Q1: The main theoretical results only cover sampling in the pixel space, and a theoretically successful result is lacking in the feature space. Yet, feature space composition is an important application in diffusion decomposition.
* This is certainly true. In fact, we show theoretically that in feature space, diffusion sampling may not work even when projective composition is possible at t=0 (Lemma 6.3). This is consistent with empirical findings in Du et al. (2023), who showed that HMC sampling (which in particular allows sampling directly at t=0) was necessary to enable successful composition in many cases. Our theory helps to explain why HMC sampling worked for Du et al. when standard diffusion sampling did not. We discuss this further in Appendix J.1. This result contributes to our overall goal of understanding when composition will work — and just as importantly, when it may fail.
Q2: [It is unclear how practical the definition of Factorized-Conditionals will be in real world composition.]
* Please see response to Q4.
Questions:
Q1: Can authors explain the intuitive meaning of Lemma 6.3 and discuss some remarks on it?
* Lemma 6.3 intuitively says that, even if projective composition is possible at $t=0$, reverse diffusion (or indeed any annealing method) may not be able to correctly sample from it. Specifically, the lemma proves (using a counterexample) that it is possible for a set of distributions to all vary smoothly in time, while their composition changes extremely abruptly, making any annealing-based sampling method very challenging.
Q2: In appendix B, training EDM2 on CLEVR, what is the conditional variable? Is the generation trained to be conditioned on the number of objects, shape of objects, color of objects, position of objects, or a mixture of them?
* Appendix B includes two different conditioning setups. In Figures 7, 8, and 9 we condition on the 2d location of the object (or the location of one randomly-chosen object, for multi-object distributions). In Figure 10, we condition on the color of the object. In all experiments we condition on only a single attribute (either location or color) at a time, with all other attributes sampled randomly and not conditioned on. Thanks for the question — we will clarify these points in the final draft!
Q3: In appendix B.2, after training the diffusion model, how to use the trained diffusion model to get/define a generation model following certain $p_i$?
* For the location-conditional models, the $p_i$‘s correspond to different location conditioners. Specifically, in these experiments, we choose a fixed set of locations $i$ that we wish to compose, and obtain the score of $p_i$ by forwarding our conditional diffusion model conditioned on location $i$.
* Similarly, for the color-conditional models, the $p_i$‘s correspond to different color conditioners. There are only 8 colors so we assign a $p_i$ to every possible color, and we obtain the score of $p_i$ by forwarding our conditional diffusion model conditioned on color $i$.
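For intuition, a minimal numpy sketch of product-style score composition follows (this is an illustrative toy, not the paper's implementation; `gaussian_score` is a hypothetical stand-in for forwarding the conditional diffusion model on a given conditioner $i$). For a simple product composition, the composed score is just the sum of the component scores:

```python
import numpy as np

def compose_scores(scores):
    """Product-style composition: the score of prod_i p_i(x) is sum_i score_i(x)."""
    return np.sum(scores, axis=0)

def gaussian_score(x, mu=0.0, var=1.0):
    """Score (gradient of log-density) of N(mu, var) at x."""
    return -(x - mu) / var

# Toy check in 1d: the product of two standard Gaussians is proportional to
# N(0, 1/2), so the composed score at x should equal the score of N(0, 1/2).
x = 1.5
composed = compose_scores(np.array([gaussian_score(x), gaussian_score(x)]))
print(composed, gaussian_score(x, var=0.5))  # both equal -3.0
```

In the CLEVR experiments each component score would instead come from the trained EDM2 model conditioned on a different location or color.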
Q4: How practical will the definition of Factorized-Conditionals be in other real-world diffusion composition, and how far it can be generalized to other successful compositions, such as composing different text prompts with/without different region masks to generate an image?
* This question is closely related to the existing literature on “disentangled feature representations.” For example, “style” and “content” features are typically believed to be disentangled in the existing literature, and thus we expect style & content to form Factorized Conditionals. Regarding composing different text-prompts, we expect similar intuitions about disentanglement to carry over --- see our Figure 6 for example, which composes using different text-prompts.
* Regarding region masks, we believe that either explicit masks or simply text-conditioning that includes location information can indeed be very helpful for achieving Factorized Conditionals (please see Section 7 for further detail).
We thank the reviewer again, and hope these responses are helpful. | null | null | null | null | null | null |
Likelihood-based Finetuning of Protein Language Models for Few-shot Fitness Prediction and Design | Reject | Summary: The authors want to use pre-trained protein language models for supervised prediction. Rather than classical fine-tuning to maximize regression accuracy with a linear probe, these authors suggest fine-tuning the ordering of the likelihoods, which are good zero-shot predictors. Predictably, this method works well in the low-N regime. They validate on ProteinGym.
Claims And Evidence: sure
Methods And Evaluation Criteria: sure
PLMs are neat, but you should also consider other methods such as Kermut, which, as I understand, is currently state of the art -- https://arxiv.org/abs/2407.00002. Having the zero-shot numbers in the tables would also be useful.
Theoretical Claims: Sure
Experimental Designs Or Analyses: Yes.
Supplementary Material: No
Relation To Broader Scientific Literature: Fine.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Very straightforward approach (strength). I tried this approach 4 years ago on VAEs with a different loss and, of course, it worked. [This paper](https://arxiv.org/pdf/2412.07763) also has a similar paradigm of fitting the likelihoods in low-N iterative design (with a very very different methodology), and it works. It's surprising to me that the authors applied this to large PLMs with the BT loss and saw improvements with n as high as 512.
Few things that would improve the paper:
1. Error bars in spearmans,
2. Discussion of how to do the fine-tuning: learning rates, early stopping etc... Ideally, a sensitivity plot.
3. Can you write the objective as the likelihood of a generative process? For example, the Pearson correlation is the marginal likelihood if you assume a linear relation between likelihoods and labels with an improper uniform prior. Since the paper is so simple, maybe it wouldn't be too much to ask the authors to try different losses, the Pearson correlation for example.
Other Comments Or Suggestions: .
Questions For Authors: .
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: Thank you very much for your feedback - we appreciate the review. Thank you for acknowledging the simplicity and applicability of our fine-tuning approach as a strength, whilst other reviewers discounted those same traits. Responses to your comments are given below. If we have sufficiently addressed your concerns, we would kindly ask that you champion the paper during the reviewer discussion phase, or accordingly, please let us know if further clarification is needed.
Relationship to Kermut:
We thank the reviewers for highlighting Kermut! Kermut does indeed outperform ProteinNPT on the ProteinGym suite of fitness prediction tasks. There are a few reasons why we did not consider it as a fair baseline in our work:
1) Kermut introduces a novel “biologically meaningful” composite kernel function with which to compare two given proteins. However, this composite kernel utilizes vastly more information than our method (and the collection of baselines we ultimately chose) - in particular, they make use of an inverse folding model that provides structural information to three of their four kernel components. Whilst we agree with the reviewer that this is a neat approach, and one that would likely improve results, including structural information is orthogonal to our work and we instead solely focus on the valuable task of fine-tuning sequence-based PLMs.
2) Kermut is not SOTA without structural information, i.e., their ablation demonstrates that the sequence-only kernel, with representations from ESM-2, is outperformed (in the Spearman random data setting) by ProteinNPT (0.744 - 0.033 = 0.711 versus 0.73). We directly compare to, and outperform, ProteinNPT.
3) Ultimately, we see Kermut’s primary contribution as “how to incorporate the information contained in PLMs into GPs (e.g. to provide meaningful uncertainty estimates)”, an important research topic in its own right. However, we argue that this is orthogonal to our work, and potentially could be combined in future work.
On this point, we have added further discussion to our paper making explicit these points.
Observed Trends:
As the reviewer correctly points out, it is somewhat surprising that the proposed ranking approach is still effective in some landscapes at N=512. We see a clear trend that as N increases, the performance gap of the proposed likelihood ranking approach shrinks relative to the standard parametric regression approach.
To support this claim, and the main claims made in the paper, please find additional fitness prediction results for N=24 and N=48 in the Reviewer vxYr (Experimental Analysis) section.
We will include per-landscape results in the Appendix, and also Error bars in the Spearman results.
Zero-shot values for base PLMs are provided in Table 11 in Appendix B.
Objective as the likelihood of a generative process:
Please could you expand upon how Pearson correlation could be used as a training loss?
---
Rebuttal Comment 1.1:
Comment: My concerns are addressed. I think this is a simple method that takes the idea of using evolutionary likelihoods as zero-shot predictors and extrapolates it to the multi-shot setting. I have implemented such a method before and seen its success, but I don't think this has been published anywhere.
Skimming through the other reviews, it seems that the authors are being asked to demonstrate that their method works in every case against the enormous amount of baselines now popular for protein design. I think this doesn't recognize how simple the author's method is: many groups are already using likelihoods for zero-shot prediction -- these authors just ask them to train these likelihoods on the few measurements they have. I think this work functions as a super simple drop-in baseline that miraculously hasn't been reported, to my own frustration.
It also seems that the other reviewers are confusing this work with superficially similar works, for example Contrastive losses as generalized models of global epistasis by Brookes et al., 2024 (nothing in common except proteins and a Bradley Terry loss are involved; calling the Brookes paper more "theoretical" is also a little strange -- I in fact recommended this paper for rejection because of its glib theory when I reviewed it a few years ago).
Regarding fitting to Pearson correlation, this is what the authors of [this paper do](https://arxiv.org/pdf/2412.07763), although it's in-context rather than fine-tuned. I'm just saying you can swap minimizing the BT loss for maximizing the Spearman correlation. | Summary: This paper extends a ranking-based fine-tuning strategy to various protein language models, including masked PLMs and autoregressive family-based PLMs. Specifically, it introduces different scoring functions for these models and uses conditional ranking loss to fine-tune them. The experiments on fitness prediction and sequence design show that ranking-based fine-tuning outperforms supervised fine-tuning.
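The ranking-based fine-tuning idea can be made concrete with a small numpy sketch: a Bradley-Terry pairwise loss applied directly to model log-likelihood scores, pushing sequences with higher measured fitness toward higher likelihood. The arrays below are hypothetical; this is a schematic of the general recipe, not the paper's exact implementation.

```python
import numpy as np

def bt_ranking_loss(scores, fitness):
    """Bradley-Terry pairwise loss: for every pair where fitness[i] > fitness[j],
    penalize -log sigmoid(scores[i] - scores[j]) (written via log1p for stability)."""
    losses = []
    for i in range(len(scores)):
        for j in range(len(scores)):
            if fitness[i] > fitness[j]:
                losses.append(np.log1p(np.exp(-(scores[i] - scores[j]))))
    return float(np.mean(losses))

fitness = np.array([0.1, 0.5, 0.9])
aligned = bt_ranking_loss(np.array([-3.0, -2.0, -1.0]), fitness)   # likelihoods agree with fitness order
reversed_ = bt_ranking_loss(np.array([-1.0, -2.0, -3.0]), fitness) # likelihoods anti-correlated
print(aligned < reversed_)  # True: the loss prefers correctly ranked likelihoods
```

Because the loss depends only on score differences, it adjusts the ordering of likelihoods rather than their absolute values, which is what makes it usable with very few labels.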
Claims And Evidence: The paper claims that ranking-based finetuning outperforms supervised finetuning in fitness modeling, as shown in the experiment. However, it's unclear if this claim also applies to auto-regressive models like esm and ProtGPT.
Methods And Evaluation Criteria: The method in this paper makes sense, as we can use an alternative way to fine-tune protein language models beyond regression-based finetuning.
Theoretical Claims: The math formulations about score functions for different PLMs seem correct and make sense.
Experimental Designs Or Analyses: The whole experiment part (fitness prediction and sequence design) makes sense to support this method, however, essential datasets and baselines are missing, which may hide the potential use of such a method.
1. In the fitness prediction, this paper only shows a subset of the ProteinGym dataset results. Can you provide more results on ProteinGym, as it's a gold standard for fitness prediction?
2. In the sequence design task, there are a lot of baselines using directed evolution (searching-based method, e.g. AdaLead, PEX...). I wonder if the fine-tuned PLMs can be used as landscapes for these methods to improve their performance.
3. Also, there are many different datasets in sequence design task (see AdaLead and PEX paper), could you provide results on these datasets?
4. Auto-regressive PLMs should also be considered as baselines in these tasks.
5. Please provide more details of experiments, especially for sequence design task.
Supplementary Material: Unfortunately, there are no supplementary materials for us to review.
Relation To Broader Scientific Literature: The idea of using a ranking-based loss (I believe it's DPO) has been studied in protein prediction/design communities [1,2,3]; this method only extends that to masked PLMs and family-based autoregressive PLMs by adjusting score functions (Eq 5,6).
References:
[1] Aligning protein generative models with experimental fitness via Direct Preference Optimization
[2] Fine-tuning protein Language Models by ranking protein fitness
[3] Antigen-specific antibody design via direct energy-based preference optimization
Essential References Not Discussed: 1. I encourage the author to discuss ranking-based loss (DPO) used in protein design more (see prior part).
2. I think search-based model-guided protein sequence design methods should be included and studied, e.g., AdaLead, PEX, LatProtRL [1,2,3].
References:
[1] AdaLead: A simple and robust adaptive greedy search algorithm for sequence design
[2] Proximal Exploration for Model-guided Protein Sequence Design
[3] Robust Optimization in Protein Fitness Landscapes Using Reinforcement Learning in Latent Space
Other Strengths And Weaknesses: Strengths:
1. The empirical results in section 5 (fitness prediction) show some valuable suggestions when fine-tuning PLMs.
Weaknesses:
1. The writing is somewhat unclear and confusing; for example, the second-to-last and the last paragraphs of the introduction seem redundant. I encourage the authors to refine the writing flow, and perhaps explanatory figures could be used to aid understanding.
2. The method seems to be a natural extension from ranking loss by adjusting different score functions, which might not be quite interesting.
3. The experiment lacks some baselines and results.
Other Comments Or Suggestions: No.
Questions For Authors: Please see other parts.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you very much for your review. Responses to your comments are given below. However, we highlight a number of misunderstandings and factual errors in the provided review. We kindly request that you re-evaluate your review in light of these clarifications. If we have sufficiently addressed your concerns, we kindly ask that you raise your score accordingly, or let us know if further clarification is needed.
One fundamental misunderstanding repeated multiple times is that “The paper claims that ranking-based finetuning outperforms supervised finetuning in fitness modeling, as shown in the experiment.”
To clarify, our proposed approach directly adapts PLM likelihoods via a ranking-based loss function and is in fact still a supervised fine-tuning approach. Both settings, regression and ranking, utilize a supervised learning paradigm via ground truth fitness scores in a ProteinGym DMS. Our results demonstrate that by directly applying ranking losses to the PLM likelihoods we can better utilize limited labelled data.
Furthermore, the two comments “... However, it's unclear if this claim also applies to auto-regressive models like esm and ProtGPT.” and “Auto-regressive PLMs should also be considered as baselines in these tasks.”
contain a misunderstanding of the models used in our work. Our paper introduces fine-tuning methods specifically for conditional autoregressive models, like the family-based PoET, in Section 4.1 "Conditional scoring functions", and for **masked** language models ESM-1v and ESM-2 in Section 4.2 "Scoring functions for masked PLMs". That is, ESM is not an auto-regressive PLM as the reviewer suggests.
Another misunderstanding is that our work targets open-ended optimization via search. In the final paragraph of our Introduction, we state “...we study the modelling problem in silico which mimics ground truth values being available from wet lab experiments.”
We agree with the reviewer that search-based methods are an important field in proteinomics, however, it is not the focus of our work. Therefore, we respectfully refute the reviewer’s claim that methods such as directed evolution, PEX, AdaLead and LatProtRL [1,2,3] are “essential references not discussed”, and “should be included and studied”.
Leveraging the large collection of datasets available in ProteinGym allows for a more concise evaluation of the proposed methodologies. This is a common approach taken in ProteinNPT (Notin, P. et al. NeurIPS 2023), PoET (Truong Jr & Bepler, NeurIPS 2023), Kermut (Groth, P.M. et al. NeurIPS 2024) and Metalic (Beck, J. et al. ICLR 2025). Conversely, it is well known that creating a biologically meaningful oracle model to provide feedback, as the reviewer suggests, is arguably a more challenging task than the optimization itself [4, 5].
[4] Buttenschoen, M., et. al. Posebusters: Ai-based docking methods fail to generate physically valid poses or generalise to novel sequences. Chemical Science, 2024
[5] Surana, S. et. at. Overconfident Oracles: Limitations of In Silico Sequence Design Benchmarking, AI for Science Workshop at ICML 2024
“...discuss DPO used in protein design more…”:
We kindly thank the reviewer for their references and will expand upon our current Section 4.5 "Relationship to preference learning for LLMs", where we discuss the benefits and limitations of DPO for our setting, e.g. that the KL penalty term hinders PLM adaptation in low-data settings.
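The relationship between DPO and plain likelihood ranking can be sketched numerically: DPO is a Bradley-Terry loss on log-likelihood differences measured relative to a frozen reference model (the implicit KL anchor), and dropping the reference terms recovers a plain pairwise ranking loss on the likelihoods themselves. All values below are hypothetical log-probabilities; this is an illustrative toy, not either paper's implementation.

```python
import numpy as np

def dpo_loss(lp_w, lp_l, ref_w, ref_l, beta=1.0):
    """DPO: -log sigmoid(beta * ((lp_w - ref_w) - (lp_l - ref_l))).
    The reference terms act as an implicit KL anchor to the pre-trained model."""
    margin = beta * ((lp_w - ref_w) - (lp_l - ref_l))
    return float(np.log1p(np.exp(-margin)))

def bt_loss(lp_w, lp_l):
    """Plain Bradley-Terry ranking loss on the likelihoods themselves."""
    return float(np.log1p(np.exp(-(lp_w - lp_l))))

# With equal reference log-probs (a "uniform" reference) and beta=1,
# DPO reduces exactly to the plain ranking loss.
print(dpo_loss(-2.0, -3.0, ref_w=-5.0, ref_l=-5.0), bt_loss(-2.0, -3.0))
```

In the low-data setting discussed here, the difference is precisely those reference terms: they pull the fine-tuned model back toward the pre-trained likelihoods, which can hinder adaptation when only a handful of labels are available.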
In fact, we do cite the suggested work [2] in this section, and in Section 2 Related Work (Page 2). Whilst it is concurrent work, they focus on a non-conditional autoregressive model, limiting their application to the family-conditioned model PoET, and therefore will not achieve our SOTA results (Table 1 and Figure 1).
Furthermore, [1] introduces a structure conditioned generative LM, pre-trained using DPO. Whilst this is an important task, it is orthogonal to our task of fine-tuning **pre-trained PLMs**. Our focus (as in ProteinNPT, PoET and Metalic) is to adapt the performance of pre-trained models using limited (small-N) datasets. Our work makes no claims on the pre-training regimes.
Similarly, [3] introduces a preference-based fine-tuning scheme for a pre-trained structure-conditioned diffusion model. Whilst conceptually related at a high level, their goal is de novo generation of high binding antibodies, and they incorporate feedback from a physics-based energy oracle, more similar to the open-search setting discussed above.
For additional low-N fitness prediction results, please see Review vxYr (Experimental Analysis).
“No supplemental material”. This is not true. Other reviewers acknowledged the Supplemental materials.
Natural extension from ranking losses:
In fact, we feel this is a strength of our work: it is a natural and necessary extension of the current literature, and we have demonstrated it has wide ranging applicability to different families of PLMs, as per the comments from Reviewer AKmL and QopU. | Summary: To train protein sequence to fitness regression models, it is attractive to fine tune protein language models (PLMs), as these have prior knowledge about the constraints underlying protein function, etc. The authors provide a specific fine tuning strategy, where the likelihood of a generative model is optimized using a ranking loss. They show that it works for a number of different generative models and compare to recent alternative modeling approaches.
Claims And Evidence: The results on low-N protein function prediction are strong, showing an improvement over important recent baselines. However, the benchmarking is done on a small subset of the tasks available in ProteinGym and in an eval setup (low-N) that is important, but non-standard. I did not find the results for multi-round fitness optimization convincing. See below for details.
Methods And Evaluation Criteria: The proposed method is well motivated and previously established in some non-archival workshop papers. The authors did a good job of applying it to a wide variety of pretrained models (autoregressive, masked, etc) and providing some model-specific ablations (such as different masking strategies for approximating likelihoods from a masked model).
The datasets (ProteinGym) are a popular benchmark, but the authors use them in a non-standard way, where models are trained on a small subset.
Theoretical Claims: Not applicable
Experimental Designs Or Analyses: There is no clear quantitative comparison of methods in the multi-round optimization section. Curves are compared to each other, with no well-motivated way to compare the curves. Further, the error bars, particularly in the left figure, suggest that methods perform very similarly.
For fitness prediction, models are benchmarked in the low-N regime, where the training dataset is sampled to be very small. The baseline modeling approaches (particularly, ProteinNPT) were not developed for this use case and I worry that they were not retrained and subjected to hyper-parameter tuning for the type of evaluation task in this paper.
Supplementary Material: Only briefly
Relation To Broader Scientific Literature: The relation to prior work is a bit awkward. As far as I can understand, the modeling technique has appeared in multiple previous workshop papers. The primary contribution of this paper is that it provides more comprehensive benchmarking.
Essential References Not Discussed: You should discuss Widatalla et al "Aligning protein generative models with experimental fitness via Direct Preference Optimization"
There is significant overlap with Beck 2024: "Metalic: Meta-Learning In-Context with Protein Language Models" in terms of motivation and the eval setup. Can you please explain the similarities and differences between your works?
Other Strengths And Weaknesses: I appreciate the eval setup where you generalize to positions with no mutations in training. This led me to believe that the models were learning something non-trivial.
Other Comments Or Suggestions: I found Result 1 confusing in sec 6.2, since it doesn't concern optimization. Shouldn't that appear in the previous section?
Questions For Authors: The evaluation focuses on the low-N regime. I assuming this because you achieved sub-optimal results on the full ProteinGym setup. What were the results? How far behind SOTA was it?
I'm curious about an 'evo tuning' baseline, which has appeared in a variety of prior papers. If you had just fine-tuned your language model on natural homologs to the wildtype for each task, how would that have performed compared to fine tuning that depends on experimental data?
multi-round eval: was it the same initial training set for each method? I'm assuming that the contents of this initial set can have a huge impact on optimization performance. What are the error bars in fig 1? Are any differences statistically significant?
"Since all sequences are scored under the residue distributions obtained by inputting the wild-type sequence to the model, a set of mutated sequences of arbitrary size can be scored using a single forward pass, making it extremely efficient." I found this confusing. Is it valid to use the model without inputting any mask tokens? doesn't it just copy the input sequence?
"As an ablation, we compare to a mean square error (MSE) loss applied to the same scoring function."
A-priori, the log likelihood scores from the model could be wildly different from the units of the fitness scores, which could make MSE training unstable. Did you try anything basic, like scaling the likelihood scores to be in the range [0, 1], for example?
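One minimal version of the scaling suggested here, as a numpy sketch (the log-likelihood and fitness arrays are hypothetical; this is only to illustrate the suggestion, not anything the paper reports):

```python
import numpy as np

def minmax_scale(x):
    """Scale values to [0, 1]; guards against a constant array."""
    lo, hi = x.min(), x.max()
    return np.zeros_like(x) if hi == lo else (x - lo) / (hi - lo)

log_liks = np.array([-12.3, -9.1, -15.7])  # hypothetical PLM log-likelihoods
fitness = np.array([0.4, 0.9, 0.1])        # measured fitness labels
mse = np.mean((minmax_scale(log_liks) - minmax_scale(fitness)) ** 2)
print(mse)
```

Putting both quantities on a common [0, 1] scale removes the arbitrary offset and scale of the log-likelihoods, which is one simple way to make MSE training on them better conditioned.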
===After Authors Response===
See comment below. Thank you for the thorough discussion. I have raised my score to weak accept.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your detailed feedback - we appreciate the review. If we have sufficiently addressed your concerns, we kindly ask that you raise your score accordingly, or let us know if further clarification is needed.
Experimental Analysis
Clarification: We do provide a clear quantitative comparison of methods in the multi-round optimisation setting. We report the Area Under Curve (AUC) in all design experiment plots (in the legend), where methods are sorted by this metric. Also, full numeric values for all models, scoring functions and loss functions are provided in Table 13 in Appendix B.
Figure 1 presents our proposed ranking-based likelihood fine-tuning methods against a subset of baselines. This result is important and demonstrates that one can **improve** the multi-round performance of a "weaker" PLM to perform in line with, or exceed, the recent SOTA baseline method ProteinNPT (Notin, P. et al. NeurIPS 2023).
To directly compare the fine-tuning strategies for a given PLM, regression (MSE) based PLM baselines are presented in Figure 2 (ESM-2) and Figure 3 (PoET) in Appendix B. These results demonstrate that for a given PLM, our proposed likelihood fine-tuning method is almost always preferable.
Please find additional low-N experiments in Reviewer vxYr (Experimental Analysis).
Baseline modeling
We respectfully disagree with the reviewer on this point. ProteinNPT is a SOTA general purpose PLM developed specifically for low-data regimes. An extract from their work: “we introduce ProteinNPT, a non-parametric transformer variant tailored to protein sequences and **particularly suited to label-scarce and multi-task learning settings**”.
We demonstrate comparable (and in some cases better) performance than ProteinNPT with “weaker” PLMs via pair-wise ranking loss functions directly applied to the likelihoods.
Furthermore, we tuned the learning rate and the number of training steps for all models evaluated.
Relation to Metalic:
This is concurrent work and takes a different approach to improve **zero-shot** predictions of a PLM. Metalic is an in-context meta-learning approach that “learns to learn” to adapt to available in-context sequences available at test time. It does so via a meta-training phase over a distribution of similar fitness prediction tasks. They leverage the same subset of ProteinGym landscapes (as per ProteinNPT and our work), and specifically target zero-shot adaption.
Section 6.2 is included in Section 6 to establish its contribution towards multi-round design. That is, we introduce PLM ensemble models that 1) explicitly model predictive uncertainty and 2) apply principled acquisition functions. Both advancements are aimed at performing BO in Result 2, rather than fitness prediction in Section 5.
Questions for Authors:
The assumption that our proposed methods achieved sub-optimal results in larger-N data regimes is incorrect. Please allow us to clarify: our motivation is driven by the multi-round design setting, where we follow the SOTA ProteinNPT experimental setup (Notin, P. et al. NeurIPS 2023b). This consists of fine-tuning the PLM at each round and acquiring 100 new sequences per round, for a total of 10 rounds. This is a realistic biological design setting since many wet lab experimental platforms compute ~100 scores in a single "plate" in parallel. Therefore, our thesis is that improved fitness prediction on low-N datasets (as per Table 1) drives meaningful data acquisition and multi-round design performance (Table 13 and Figures 1, 2 and 3).
‘Evo tuning’
This is an orthogonal research direction. Indeed, test-time training methods have been demonstrated to improve model performance using an unsupervised objective on test sequences, or indeed on homologs [1]. Also, whilst PoET conditions on MSA sequences, and demonstrates a clear benefit doing so, our work focuses on demonstrating that directly adapting model likelihoods using small experimental datasets improves fine-tuning. In practice, both or a hybrid approach could be applied.
[1] Bushuiev, A. et. al. Training on test proteins improves fitness, structure, and function prediction 2025
Multi-round eval:
Yes, your intuition is correct. A random seed of initial data is sampled from the landscape (where the seed is fixed for all methods), so the initial training set is identical. Furthermore, we evaluate all methods across three seeds.
The wt-marginal strategy is a standard approach for computing the likelihood of mutations with a masked language model, introduced in Meier, J. et al. (NeurIPS 2021). It was popularised by its efficiency and the widespread adoption of ESM. Concretely, the method computes the log-odds ratio of the probability of the mutated amino acid against the wild-type amino acid. It does this in a single forward pass of the wt sequence.
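To make this concrete, here is a minimal numpy sketch of the wt-marginal score (illustrative only: the alphabet indexing and array shapes are assumptions, and `log_probs` stands in for the per-position output of a real PLM forward pass on the wild-type sequence):

```python
import numpy as np

# Hypothetical 20-letter amino-acid alphabet index (illustrative only).
AA = {aa: i for i, aa in enumerate("ACDEFGHIKLMNPQRSTVWY")}

def wt_marginal_score(log_probs, pos, wt_aa, mut_aa):
    """Log-odds ratio of the mutant vs. wild-type amino acid at one position.

    log_probs: (seq_len, 20) per-position log-probabilities obtained from a
    single forward pass of the PLM on the wild-type sequence.
    """
    return log_probs[pos, AA[mut_aa]] - log_probs[pos, AA[wt_aa]]

# Toy example: uniform logits everywhere, except position 3 favours 'K'.
logits = np.zeros((8, 20))
logits[3, AA["K"]] = 2.0
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

score = wt_marginal_score(log_probs, pos=3, wt_aa="A", mut_aa="K")
neutral = wt_marginal_score(log_probs, pos=0, wt_aa="A", mut_aa="C")
```

For a multi-mutant, the per-position log-odds are typically summed under an independence assumption.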
Applying the MSE loss directly to the likelihood scoring function results in unstable performance (Table 6 and Table 13 Appendix). We did not explore scaling the likelihoods.
---
Rebuttal Comment 1.1:
Comment: Thanks for the feedback and clarifications.
I have a few follow-up questions:
1) Regarding Metalic, it was unclear how 'zero-shot' is defined here. Do you mean that the weights of the model are not updated or that no task-specific labeled examples are required to form predictions? In other words, could your modeling technique be used in their evaluation setup and vice-versa?
2) The AUC score analysis in Figure 1 is fairly informal. Are these differences statistically significant?
---
Reply to Comment 1.1.1:
Comment: > could your modeling technique be used in their evaluation setup and vice-versa?
In fact, **Metalic does use our likelihood ranking-based fine-tuning strategy** in both their meta-training and fine-tuning phases. This clearly shows the value and applicability of this work.
> zero-shot: Do you mean that the weights of the model are not updated or that no task-specific labeled examples are required to form predictions
To clarify further, both are true. In Metalic, ‘zero-shot’ is defined as having no task-specific labeled sequences available for fine-tuning the PLM via weight updates. This would correspond to N=0 in our work. The authors demonstrate that by introducing a meta-training phase they can fine-tune (weight updates) their PLM over many related prediction tasks so that it takes in-context sequences into consideration, and therefore adapt (no weight updates) to sequences provided at test time. They demonstrate improved zero-shot prediction accuracy, i.e. Spearman correlation of predictions with respect to ProteinGym fitness scores.
For completeness, Metalic goes on to introduce a ‘few-shot’ setting where N=16 or N=128 sequences (with ground truth values) are available to fine-tune (weight update) the PLM. The performance of Metalic relative to baselines reduces in the higher N data setting.
> The AUC score analysis in Figure 1 is fairly informal. Are these differences statistically significant?
The key takeaway from Figure 1 is not whether the AUC values are significantly different from ProteinNPT (which I suspect they are not), but rather that we can take a “weaker” PLM, e.g. ESM-2 (650M), and via a relatively straightforward fine-tuning approach it performs competitively with, and often outperforms, the SOTA design method ProteinNPT.
If we have sufficiently addressed your concerns, we would kindly request that you raise your score accordingly. | Summary: This paper examines likelihood-based / rank-based finetuning for pLM, particularly for the low data fitness prediction setting. The authors formalize pairwise ranking losses for masked models (e.g. ESM-series), family/MSA-based autoregressive models (e.g. PoET), and conditional models. The results show that these methods can outperform MSE-based finetuning on frozen embeddings. They also examine ensemble strategies to leverage the context-dependent nature of PLM mutation distributions. Experiments are done on ProteinGym benchmarks.
Claims And Evidence: The primary claim is that likelihood-based finetuning is better than MSE-finetuning or directly using frozen embeddings, particularly in low-data settings. This is supported by the empirical tables and results, which does consistently show that their pairwise loss adaptations yield better results, sometimes to a high degree. They also show that this can be extended to multi-round BO.
I'm not super sure how Figure 1 supports the claim that using ranking-based losses rather than MSE-based losses improves multi-round prediction. It doesn't look like the ensemble methods are performing better? Why are non-ensemble methods used as the baseline, since the paper has thus far mostly talked about the scoring function?
I'm also curious about how N was defined for the "low-N" claim, which will be expanded up below. Generally I found this claim a bit weak, and I think it can be easily strengthened by running more experiments at more numbers of N.
Methods And Evaluation Criteria: The method involves creating pairwise ranking loss adaptations for masked language models, conditional models, and family-based autoregressive models. To do so, ProteinGym mutation landscapes are used and Spearman correlation is used for comparison.
As I'll expand upon in the Questions for Authors area, I have a few questions on the max fitness metrics, number of training data points, and observed trends.
Theoretical Claims: n/a
Experimental Designs Or Analyses: * I don't understand the few-shot experiment setup: section 5.1 states that 32, 128, or 512 sequences were randomly sampled for training and evaluated on 2000 or 5000 samples (Table 1 reports n values of 32, 128, and 512). My first thought when reading section 5.1 was that 512 doesn't seem like a low enough data regime; for ex. Biswas et al. uses 24 as the definition of low-N, which seems closer to real-world scenarios.
[1] Low-N protein engineering with data-efficient deep learning. Biswas et al., 2021. https://www.nature.com/articles/s41592-021-01100-y
Supplementary Material: Yes, I glanced through it, the Appendix has more Tables/Figures on more ProteinGym tasks. As with the BO figures in the main text, I need some help better understanding what conclusion we should draw from them regarding the ranking-based loss proposed. Reassuringly, though, in the single round experiments, the ranking-based loss does seem to out-perform MSE-based ones.
Relation To Broader Scientific Literature: Protein fitness landscape prediction has become a canonical problem in protein ML. ProteinGym also maintains a leaderboard, which makes it easier to compare results.
Nitpick: the introduction cites Gordon et al. [1] to back up the claim that pLM likelihoods implicitly capture function/structural constraints, but I think the message of that work was actually somewhat of the opposite, namely that positional likelihoods have more to do with the likelihood over the WT sequence, which in turn "stems from the preferences encoded in the training data itself" [1]. This also indicates a possible weakness of this paper (i.e. likelihoods are not always perfect indicators of fitness), but since the likelihood <-> fitness connection has been assumed by so many other papers, it shouldn't be held as a critique unique to this work.
[1] Protein Language Model Fitness Is a Matter of Preference. Gordon et al., 2024. https://www.biorxiv.org/content/10.1101/2024.10.03.616542v1.full.pdf
Essential References Not Discussed: This work [1] examining Bradley-Terry losses for fitness prediction might be relevant; the authors also find that it outperforms MSE. I think Brookes et al. has better theoretical explanations, but this current submission has more breadth in the types of models it covers.
[1] Contrastive losses as generalized models of global epistasis. Brookes et al., 2024. https://proceedings.neurips.cc/paper_files/paper/2024/file/a9b938e79504889f905d549f8d53e405-Paper-Conference.pdf
Other Strengths And Weaknesses: **Strengths**: the results are strong, and similar to [1], suggests that we should move towards using ranking based losses for fitness prediction.
**Weaknesses**: The choice of using a ranking-based loss is not as well-motivated theoretically as in [1]. Also, since we're using a ranking-based loss, I'm not sure if Spearman correlation really makes sense as the metric to use (i.e. the loss never reinforced knowing the exact _number_, only the relative _ranks_), though I could be persuaded otherwise on this point. Though the tables presented seem to support the overall claim, it feels like it should not be too hard to run more comprehensive results (e.g. across more values of N, lower values of N, maybe for more tasks, or report results separately for each task).
**Overall**: Since the main contribution of this work is empirical rather than theoretical, I think the work would be stronger if the empirical results were more robust and the motivations behind the experiment design better explained. E.g. in what sorts of real-world situations might we want to use one finetuning loss over the other? What should I take away from this paper for my own research?
[1] Contrastive losses as generalized models of global epistasis. Brookes et al., 2024. https://proceedings.neurips.cc/paper_files/paper/2024/file/a9b938e79504889f905d549f8d53e405-Paper-Conference.pdf
Other Comments Or Suggestions: 1. I think the main claim of this working well for low-N settings would be more impactful if more values of $n$ were tried, with performance perhaps plotted visually (e.g. $n$ on the x-axis and performance on the y-axis). The few-shot claim could be really interesting if the eval was more rigorous.
2. Error bars would help with better understanding the difference.
3. Nitpick: I'm not a fan of "Likelihood-based" in the title; I feel like "Ranking-based" would be a clearer description. Likelihood-based makes it sound like we're taking likelihoods from a larger model to explicitly finetune a smaller model or something of that sort, though this is of course a personal interpretation.
4. Nitpick: "Masked modulo" is, as far as I can tell, introduced before it was defined.
Questions For Authors: 1. Do the authors have any hypotheses about why the gap between preference based training and MSE-based finetuning starts to close with higher N?
2. Could authors also report results for lower values of N, preferably with error bars?
3. What happens if we use maximum fitness rather than Spearman correlation, since often times we only care about the fitness prediction accuracy at the top end rather than the bottom end? Presumably this should be in favor of ranking based methods?
4. Clarification questions on the multi-round experiments: in Figure 1, for the non-ensemble methods, how were uncertainty estimates obtained? And what happens if we use MSE-based scoring functions as a baseline?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you very much for your detailed feedback. If we have sufficiently addressed your concerns, we kindly ask that you raise your score accordingly, or let us know if further clarification is needed.
Claims and Evidence
Your comment is correct, Figure 1 does not show that. In fact, Table 13 (Appendix) demonstrates that the ranking fine-tuning counterparts outperform regression methods for multi-round design tasks. We plot MSE-based PLM baselines in Figure 2 (ESM) and Figure 3 (PoET) in Appendix B.
To clarify, without modification, non-ensemble PLM methods do not estimate uncertainty. Thus in multi-round design experiments greedy acquisition strategies are used (based on the predictions only). We include ensemble baselines as they specifically provide a measure of uncertainty, allowing a principled way to trade off exploration and exploitation via (non-greedy) acquisition functions.
Experimental Analysis
The N in Table 1 refers to the number of sequences used for fine-tuning the PLM. We include fitness prediction results for N=32, 128, and 512. The rationale for N=512 is to demonstrate (as you correctly point out) that for higher N, the performance gap between ranking and regression shrinks. Our hypothesis is that in higher data regimes, the parametric heads have sufficient data to fully adapt and approximate the fitness, something which is not true in the critical low-N settings, such as N=32 or 128. We demonstrate that directly adapting the likelihoods (via ranking-based losses) in these settings achieves the best performance across model classes.
We agree with your suggestion of additional low-N experiments. Please find N=24 and N=48 fitness prediction results below, which additionally support our hypotheses in low-N regimes and the main claims made in the paper. We will include the Figure as suggested. Additionally, we will add individual landscape scores to the Appendix and include error bars in Table 1.
Additional N=24, N=48 single-mutant fitness prediction results
ESM-2 (650M) linear-head mse loss = (0.263, 0.308) vs wt-marginals ranking = (**0.449, 0.466**)
PoET linear head mse = (0.429, 0.472) vs likelihood ranking = (**0.513, 0.541**)
ProteinNPT (MSAT) = (0.394, 0.461) and ProteinNPT (ESM-1v) = (0.398, 0.422)
We note it is computationally expensive to fine-tune the broad suite of PLM methods in our paper and thus we are limited in the number of additional experiments. The set of eight single-mutant landscapes comprises a representative set of DMS used as validation and ablation sets in ProteinNPT (Notin, P. et al. NeurIPS 2023b) and Metalic (Beck, J. et al. ICLR 2025). Furthermore, we evaluate using five challenging multi-mutant landscapes that go beyond those used in ProteinNPT.
Is there a particular landscape the reviewer thinks would add additional support to our analysis? If so, we will endeavour to include it in our results.
The fitness prediction and multi-round design are standard experimental setups and follow previous works, ProteinNPT, and Kermut (Groth, P.M. et al. NeurIPS 2024).
References
Regarding your observation that Brookes et al., 2024. is a missing reference, please kindly note that we cite and briefly discuss their work on Page 2. They do indeed introduce a theoretical perspective relative to elucidating epistasis, but their work is limited in the models they use. Whereas, the focus of our work is specifically on providing general recommendations for **adapting the likelihoods of pre-trained PLMs**, a setting which they do not address.
We thank the reviewer for highlighting the subtlety with regards to Gordon et al., 2024, and we will update the manuscript.
Spearman
We believe there is a misunderstanding here: whilst ranking-based fine-tuning parameterises a Bradley-Terry model, the underlying PLM still predicts a sequence (or mutation) likelihood. It is these likelihoods for which we compute the Spearman correlation with respect to the ground truth fitness scores in ProteinGym. This is standard practice in the literature, e.g. Krause, B. et al. 2021.
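As a sketch of the distinction (illustrative numpy only, not the paper's implementation): a Bradley-Terry pairwise loss only ever sees differences of likelihood scores, while evaluation ranks those same raw scores against fitness.

```python
import numpy as np

def pairwise_ranking_loss(scores, fitness):
    """Bradley-Terry loss: for each pair (i, j) with fitness[i] > fitness[j],
    penalise -log sigmoid(scores[i] - scores[j])."""
    scores = np.asarray(scores, dtype=float)
    fitness = np.asarray(fitness, dtype=float)
    losses = []
    n = len(scores)
    for i in range(n):
        for j in range(n):
            if fitness[i] > fitness[j]:
                diff = scores[i] - scores[j]
                losses.append(np.log1p(np.exp(-diff)))  # -log sigmoid(diff)
    return float(np.mean(losses))

# Likelihood scores ordered consistently with fitness incur a lower loss
# than the same scores in reversed order.
fit = np.array([0.1, 0.5, 0.9])
good = pairwise_ranking_loss([1.0, 2.0, 3.0], fit)
bad = pairwise_ranking_loss([3.0, 2.0, 1.0], fit)
```

Minimising this loss only shifts score differences, which is exactly why a rank-based metric like Spearman correlation on the raw likelihoods remains a valid evaluation.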
Real-world situations
**Our results demonstrate improved fitness prediction (Table 1) and improved multi-round design (Table 13) when leveraging ranking-based fine-tuning in low data settings across a broad range of popular PLMs**. Recently, we have seen the proliferation of pre-trained PLMs across many protein engineering tasks, and our work provides actionable insights for practitioners to get better predictive performance from existing pre-trained models, with less data.
Likelihood-based title
In Table 6 (Appendix) we ablate methods that directly fine-tune the model likelihood, and those that apply a parametric head to the representations. The results demonstrate that ranking-loss applied to the output of the parametric (linear) head also performs poorly. Our key recommendation is that our results support directly fine-tuning model **likelihoods** using a ranking loss function. | null | null | null | null | null | null |
AutoStep: Locally adaptive involutive MCMC | Accept (poster) | Summary: This paper introduces AutoStep MCMC, a novel class of locally adaptive involutive MCMC methods that dynamically select the step size parameter at each iteration based on the local geometry of the target distribution. The proposed method extends previous adaptive MCMC techniques by integrating step size adaptation within involutive MCMC frameworks, ensuring π-invariance, irreducibility, and aperiodicity. Theoretical results establish non-asymptotic guarantees on expected energy jump distance and computational cost per iteration. The experiments demonstrate the robustness and efficiency of their AutoStep method for RWMH and MALA using KSESS on various distributions. They compare with several baseline adaptive samplers, including NUTS, adaptive RWMH, adaptive MALA, drHMC and slice sampling.
### update after rebuttal
Thanks to the authors for the additional work and clarification. After reading the initial response to my questions, as well as the other reviews and replies in general, I will keep my score.
Claims And Evidence: The paper makes the following key claims, which are generally well-supported:
1. AutoStep MCMC is π-invariant, irreducible, and aperiodic under mild conditions. This is supported by rigorous proofs (Proposition 4.2, 4.5, and Corollary 4.6).
2. AutoStep MCMC adaptively selects step sizes to balance exploration and acceptance rate, improving mixing efficiency. The claim is substantiated by theoretical bounds and empirical evaluations on benchmark distributions.
Methods And Evaluation Criteria: The methodology is well-structured and clearly builds on established MCMC principles. The key contribution is the introduction of a dynamically adaptive step size selection mechanism that does not rely on global tuning but instead adjusts step size based on local properties of the target distribution. The evaluation criteria include acceptance rates, KSESS per unit cost and min KSESS per unit cost, which are appropriate for assessing MCMC performance.
Theoretical Claims: The theoretical foundations of the paper are solid, with detailed proofs demonstrating the correctness of the proposed method.
1. Proofs of π-invariance, irreducibility, and aperiodicity (Proposition 4.2, 4.5, and Corollary 4.6).
2. Bounds on expected energy jump distance and computational cost per iteration (Proposition 4.11 and Corollary 4.10).
Experimental Designs Or Analyses: The experimental setup is comprehensive, with evaluations on:
1. Synthetic distributions (Gaussian, Laplace, and Cauchy) to assess the benefits of using the symmetric step size criterion, the efficiency of the round-based tuning procedure and the robustness of $\theta_{0}$.
2. Bayesian inference problems (linear regression, orbit fitting, mRNA transfection models).
3. Comparisons with various adaptive MCMC baselines.
Supplementary Material: The supplementary material was reviewed, including proof details, experimental setting, parameter tuning and additional results.
Relation To Broader Scientific Literature: The paper is well-situated within the existing MCMC literature, particularly within the adaptive MCMC and involutive MCMC frameworks. It builds upon:
1. Adaptive MCMC methods by introducing a locally adaptive approach.
2. Involutive MCMC methods by integrating adaptive step size selection.
3. Recent work on adaptive step size selection in MALA, which the authors improve upon by ensuring irreducibility and aperiodicity.
The discussion could be further enriched by addressing connections to other step size tuning techniques, such as stochastic gradient-based methods in deep learning.
Essential References Not Discussed: The paper adequately cites prior work but could benefit from a broader discussion of step size adaptation in high-dimensional Bayesian inference.
Other Strengths And Weaknesses: Strengths:
1. Novel and well-motivated approach to adaptive step size selection.
2. The paper is theoretical rigour, with solid proofs establishing key properties.
3. The experimental results indicate the proposed method outperforms competing baselines.
4. The method is robust to the step size initialization, enhancing its practical usability.
Weaknesses:
1. There are no detailed analyses and empirical results about the computational trade-offs (e.g., additional overhead of adaptive step size selection).
2. The experiments are too toy. The scalability of the proposed method for high-dimensional problems is unclear.
3. Some step size adaptation choices (e.g., choice of thresholds) could be further justified.
Other Comments Or Suggestions: 1. Provide more discussion on the computational cost of AutoStep MCMC compared to non-adaptive methods.
2. Conduct a scalability study on high-dimensional problems (e.g., large Bayesian posterior models).
3. Discuss the applicability of AutoStep MCMC to more gradient-based samplers beyond MALA (e.g., HMC, NUTS).
Questions For Authors: 1. Can AutoStep MCMC scale to high-dimensional models? such as posterior sampling for large BNN.
2. Can you provide more computational complexity analyses of AutoStep MCMC?
3. Can AutoStep MCMC be extended to stochastic gradient MCMC like SGLD and SGHMC while preserving its key advantages?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: > Can AutoStep MCMC be extended to stochastic gradient MCMC like SGLD and SGHMC while preserving its key advantages?
Thanks, that’s a great point! Our focus in this paper is on exact, invariant MCMC methods, but it is indeed possible to extend AutoStep to SG MCMC. One approach would be to resample a mini-batch at each iteration and feed it to Alg 2 to estimate a step size $\theta_j$.
However, as noted by Johndrow et al. (2020), subsampling offers little speedup or accuracy gain over exact methods. While approximate MCMC (without MH correction) can be useful during the burn-in for estimating a reasonable $\theta_0$, in the stationary phase, the computational gains from subsampling may not outweigh the loss in robustness.
Johndrow, J. E., Pillai, N. S., & Smith, A. (2020). No free lunch for approximate MCMC. arXiv preprint arXiv:2010.12514.
> Can AutoStep MCMC scale to high-dimensional models?
Thank you for bringing that to our attention. We have run three additional high-dim models (each with 30 seeds) to address your concern: the 111d HMM example from posteriordb, a 133d stochastic volatility model (Kim et al., 1998), and a 312d IRT model (Curtis et al., 2010).
The table below presents the minESS/sec metrics, which generally shows good performance. The per second timing includes the full computational time (burn-in, $\theta$ selection, all aspects of tuning, etc). We did not include KSESS results, as generating reliable reference distributions via PT requires days of computation, which was not feasible within the discussion period.
| Sampler | Model | q5 | Median | q95 |
|------------------|------------------------|----------|----------|----------|
| AutoStep RWMH | hmm_example | 19.3966 | 31.3897 | 360.596 |
| AutoStep RWMH | stochastic_volatility | 1.33824 | 2.04454 | 2.3855 |
| AutoStep RWMH | irt | 4.63187 | 4.71388 | 4.7435 |
| AutoStep MALA | hmm_example | 10.4717 | 13.4765 | 33.5944 |
| AutoStep MALA | stochastic_volatility | 4.78546 | 5.9194 | 7.23065 |
| AutoStep MALA | irt | 0.838859 | 0.959374 | 1.023 |
> There are no detailed analyses and empirical results about the computational trade-offs (e.g., additional overhead of adaptive step size selection)...... Provide more discussion on the computational cost of AutoStep MCMC compared to non-adaptive methods.
Please note that we provide ESS/sec (Fig. 5) metrics in our experiments, which account for the full computational time (including burn-in, the step size selection, all aspects of tuning, etc). These metrics are intended to reflect the trade-off between statistical efficiency and computational cost. Beyond this, we also have theoretical results (Prop 4.9, Cor 4.10) that put bounds on the expected number of doubling/halving steps per iteration, which capture the dominant additional computational cost over fixed step size methods.
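To make the doubling/halving cost concrete, the mechanism can be caricatured as follows (a simplified illustrative sketch, not Algorithm 2 from the paper; `accept_ratio` is a hypothetical stand-in for the local acceptance probability, and the iteration cap is a safety assumption):

```python
import math

def autostep_select(accept_ratio, theta0, a=0.2, b=0.8, max_steps=60):
    """Double the step size while local acceptance is too high (> b),
    then halve it while too low (< a). The number of log-prob evaluations
    grows roughly like |log theta0| when theta0 is badly mistuned."""
    theta = theta0
    steps = 0
    while accept_ratio(theta) > b and steps < max_steps:
        theta *= 2.0
        steps += 1
    while accept_ratio(theta) < a and steps < max_steps:
        theta /= 2.0
        steps += 1
    return theta, steps

# Toy local acceptance curve that decays with the step size.
ratio = lambda t: math.exp(-t)
theta, steps = autostep_select(ratio, theta0=1e-6)
```

Starting from a badly mistuned theta0 = 1e-6, the number of doublings grows like |log theta0|, matching the flavour of the |log theta0| cost bound mentioned above.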
> Some step size adaptation choices (e.g., choice of thresholds) could be further justified.
There is certainly room for further research on better choices of the distribution over $(a, b)$. The only strict requirement is that the support is on all of $\Delta$ (i.e. $0<a<b<1$), to guarantee irreducibility. Intuitively, one must also ensure that $a, b$ are “typically” bounded away from 0 and 1 to avoid many overly aggressive or conservative steps. But aside from that, there is room for creativity: for example, one might consider potentially making the distribution favor known optimal acceptance rates of MCMC algorithms. That being said, in our experimentation we found that the uniform distribution on $\Delta$ achieved the goal of maintaining a reasonable step size, and we believe other improvements (like different mass matrix adaptation) are unlikely to have a far larger influence on performance.
> Discuss the applicability of AutoStep MCMC to more gradient-based samplers beyond MALA (e.g., HMC, NUTS).
We kindly refer you to our earlier response to Reviewer JzQk, where we discuss the applicability of AutoStep to HMC and NUTS.
> Can you provide more computational complexity analyses of AutoStep MCMC?
Note that the cost is dominated (in the long run asymptotically) by the average number of log-prob evaluations per iteration. The number of log-prob evaluations is in turn controlled by the number of doubling/halving steps in AutoStep. Prop 4.9 places an upper bound on this quantity, which provides the desired result. To apply this result, problem-specific analysis is generally required; however, in Cor 4.10, we are able to specialize the result for a representative class of target distributions in the large/small $\theta_0$ regime (i.e., when the method is poorly tuned). This result shows that the extra cost incurred by AutoStep grows by a factor of $|\log \theta_0|$, indicating that the additional cost of AutoStep should generally be small. | Summary: The authors propose a method to tune the step size of MCMC algorithms so that, at each iteration, the acceptance rate is not too high (exploitation) or too low (exploration).
## update after rebuttal
I thank the authors for their clarification on a minor point that I raised. My overall assessment has not changed and I will maintain my score.
Claims And Evidence: The authors claim a full theoretical analysis that does indeed seem quite complete.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Not in detail.
Experimental Designs Or Analyses: The experiments mainly substantiate the claim that tuning the step size makes the sampling algorithm robust to the initial step size, which should be the case by design.
Figure 5 in particular shows that by a certain measure of efficiency (KSESS), the authors' method performs better. However, there is no visual evidence that tuning the step size helps the MCMC algorithm locate the target modes faster, and therefore converge faster.
Supplementary Material: No.
Relation To Broader Scientific Literature: The authors clearly discuss a related paper by Bou Rabee et al.
Essential References Not Discussed: Related works are clearly discussed by the authors. Could the authors discuss the link between their paper and Cyclical MCMC, which varies the step size cyclically between bigger and smaller values to alternate between mode exploration and exploitation.
Other Strengths And Weaknesses: Strength: the paper is very clearly written.
Other Comments Or Suggestions: -
Questions For Authors: -
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your time and insightful questions. We hope that our answer has addressed all your concerns.
> There is no visual evidence that tuning the step size helps the MCMC algorithm locate the target modes faster, and therefore converge faster.
Please note that locating modes is not a primary goal of MCMC algorithms; the goal is to take draws from a target distribution that yield accurate estimates of target expectations. In our work, the focus in particular is on ensuring efficient sampling from models with varying scale. The KSESS / sec metric results presented in our manuscript in Figure 5 captures both sampling correctness and computational efficiency, and is the standard method used in the field for comparing sampling algorithms.
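For reference, ESS-per-second style metrics divide an effective sample size estimate by total wall-clock time; a minimal ESS estimator can be sketched as follows (numpy only, using the common initial-positive-sequence truncation heuristic; this is not the exact KSESS estimator used in the paper):

```python
import numpy as np

def effective_sample_size(x):
    """ESS = n / (1 + 2 * sum of autocorrelations), with the sum
    truncated at the first negative autocorrelation."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    acov = np.correlate(xc, xc, mode="full")[n - 1:] / n
    rho = acov / acov[0]
    s = 0.0
    for k in range(1, n):
        if rho[k] < 0.0:
            break
        s += rho[k]
    return n / (1.0 + 2.0 * s)

rng = np.random.default_rng(0)
iid = rng.standard_normal(2000)   # a well-mixed "chain"
ar = np.zeros(2000)               # a sticky AR(1) chain, phi = 0.95
for t in range(1, 2000):
    ar[t] = 0.95 * ar[t - 1] + rng.standard_normal()
```

A well-mixed chain yields an ESS close to its raw length, while a strongly autocorrelated chain yields a far smaller one; dividing by total compute time (including tuning and burn-in, as in Figure 5) then captures the efficiency trade-off.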
> Could the authors discuss the link between their paper and Cyclical MCMC, which varies the step size cyclically between bigger and smaller values to alternate between mode exploration and exploitation.
Cyclical MCMC (Zhang et al, 2020) and AutoStep both modulate the step size, but in fundamentally different ways. Cyclical MCMC follows a prescribed, periodic schedule and is not designed to tune the step size or adapt to the target distribution. Furthermore, Cyclical MCMC is not guaranteed to provide asymptotically correct estimates of expectations unless the number and spacing of temperatures in one cycle is controlled carefully to account for the mixing behaviour of the kernels.
In contrast, AutoStep adapts the step size based on the local geometry of the target and is always an exact method, i.e., it comes with a guarantee of $\pi$-invariance. | Summary: The paper proposes a MCMC method (called AutoStep MCMC) with locally adaptive step size selection. This method generalizes the previous involutive methods (e.g. RWMH, MALA, HMC etc) allowing adopting step size which is randomly drawn from some conditional distribution. The class of involutive MCMC methods is considered.
Theoretical properties of MCMC kernel are studied (e.g. invariant distribution, aperiodicity etc). Robustness and scalability properties are also studied.
Claims And Evidence: yes
Methods And Evaluation Criteria: yes
Theoretical Claims: I haven't checked the proofs of the main results
Experimental Designs Or Analyses: yes
Supplementary Material: I checked the part with numerical experiments
Relation To Broader Scientific Literature: The paper contributes to the long list of locally adaptive MCMC methods
Essential References Not Discussed: I would add comparison with modern methods using adaptive kernels (e.g. using normalizing flows); See Local-Global MCMC kernels: the best of both worlds, NeurIPS 2022 or Adaptive Monte Carlo augmented with normalizing flows, PNAS 2022. Or modern sampling methods e.g. GFlowNets for continuous distributions or optimal control methods for sampling (Theoretical guarantees for sampling and inference in generative models with latent diffusions, COLT 2019)
Other Strengths And Weaknesses: Strengths
New locally adaptive MCMC method;
Theoretical properties are established (e.g. invariant distribution, aperiodicity); robustness and scalability properties are also studied.
Weaknesses
I would add more numerical experiments with mixtures of distributions (e.g. high dimensional gaussian mixtures) or latent distributions of generative models (e.g. GANs); see e.g. paper Your GAN is Secretly an Energy-based Model and You Should use Discriminator Driven Latent Sampling
Other Comments Or Suggestions: See the adaptive-kernel and modern sampling references suggested above under Essential References Not Discussed.
Questions For Authors: Is it possible to provide numerical experiments with mixtures of distributions?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We are grateful for your thoughtful feedback. We are glad to address your questions one by one.
> Is it possible to provide numerical experiments with mixtures of distributions?
Thank you for the suggestion. Please first note that AutoStep is designed to help handle multiscale behaviour in targets (e.g., funnel structure); our methods are not expected to perform substantially differently than a well-tuned fixed-step-size method on multimodal / mixture targets. To help address multimodality, one could instead incorporate AutoStep in a parallel tempering scheme. One potential advantage of AutoStep in that setting is that it will choose an appropriate step size for each chain in the ensemble (which will be different at each temperature), unlike fixed step size methods that each must be tuned individually.
Please also note that the orbital model (see Fig. 13) already has multiple modes/ridges/varying geometries/etc.
> I would add comparison with modern methods using adaptive kernels (e.g. using normalizing flows, Local-Global MCMC, adaptive Monte Carlo augmented with normalizing flows, GFlowNets, etc.)
We appreciate the suggestion, though we note that many of the listed methods (e.g., normalizing flow-based MCMC, GFlowNets) typically require significant training time and are not exact methods. An interesting recent result from He, Du et al. (2025) shows almost all neural samplers require several orders of magnitude more target evaluations compared to parallel tempering, itself an expensive meta-MCMC algorithm.
Beyond this, neural methods and AutoStep use quite different approaches, and a direct comparison may therefore not be particularly meaningful. That being said, we agree that integrating AutoStep into such frameworks in future work could be an interesting extension.
He, J., Du, Y., Vargas, F., Zhang, D., Padhy, S., OuYang, R., ... & Hernández-Lobato, J. M. (2025). No Trick, No Treat: Pursuits and Challenges Towards Simulation-free Training of Neural Samplers. arXiv preprint arXiv:2502.06685. | Summary: This work proposes a framework for adaptive MCMC that enables sampling parameters to be optimized for the current location in the state space at each sampling step. In particular, the work focuses on adaptively adjusting the step size parameter which is found in common MCMC algorithms. The key challenge for such an approach is maintaining detailed balance, since naive adjustment of the step size could disrupt the MCMC process so that it no longer follows the intended stationary distribution. Building upon established properties of involutive functions for MCMC sampling, the work proposes to augment the sampling space with a distribution over soft acceptance bounds and a distribution of the step size parameter that depends on the current state, MCMC randomness, and the soft acceptance bounds. By carefully formulating this involutive function, detailed balance can be guaranteed by standard results about involutive MCMC, and irreducibility and aperiodicity proofs follow in a straightforward way from the irreducibility and aperiodicity of the non-augmented MCMC samplers. A theoretical analysis of the time for selecting a step size and for the expected movement with each sampling step is presented. Experiments are conducted showing that the method achieves similar efficiency regardless of the initial step size parameter tuning, favorable comparison with the related adaptive step size selection method AutoMALA, and competitive performance with other adaptive MCMC samplers.
## After rebuttal: My view of the paper remains similar. The proposed method is a straightforward and natural way to incorporate adaptive step sizes in standard MCMC algorithms. Although the method does not currently surpass strong samplers like NUTS consistently, it is possible that future design choices based on this sampler could provide further improvement.
Claims And Evidence: The theoretical claims appear valid to me. Involutive MCMC in the augmented sampling space that includes a distribution over acceptance probabilities and step size parameter is a very clean way to include adaptive step size selection while still satisfying detailed balance. Experimental results provide convincing evidence that the method is extremely robust to the initial step size tuning over several orders of magnitude. The comparison with existing adaptive MCMC techniques shows competitive performance with existing adaptive methods as claimed, although the proposed method does not achieve top performance on any specific scenario.
Methods And Evaluation Criteria: The overall methodology of using involutive MCMC to build a framework for adaptive step size selection seems very appropriate for the problem at hand. The experimental section provides a fairly thorough comparison of the proposed method with representative SOTA methods. One major choice that does not follow standard methodology is the use of a newly proposed metric KSESS to measure MCMC sampling efficiency rather than the typical ESS method. It is claimed that ESS did not accurately characterize sampling performance, but few details are provided about the inadequacy of ESS and the superiority of KSESS. Further explanation of this choice would greatly help to solidify the evaluation methodology.
Theoretical Claims: A variety of theoretical claims are made in this paper. The most important ones are that the proposed method satisfies detailed balance and that the proposed method is irreducible and aperiodic. I carefully checked these claims and they appear valid. These proofs are fairly straightforward thanks to the clever MCMC formulation. Theoretical results about the expected runtime of step size selection and the state space movement of the proposed method are presented. These claims appear reasonable but I did not carefully check the proof details.
Experimental Designs Or Analyses: Overall, the experimental design and analysis is appropriate. As mentioned above, the one major area where the experiments do not follow the typical protocol is the use of KSESS as a replacement for the standard ESS metric to measure sampling efficiency. More discussion of this choice would strengthen the paper.
Supplementary Material: Proof and experiment details are provided in the appendix. A separate supplementary materials file is not provided.
Relation To Broader Scientific Literature: This work seeks to provide a principled framework for including an adaptive step size parameter in common MCMC sampling algorithms. Prior works are limited by being restricted to a pre-selected set of step sizes, restriction to use the same step size across the entire state space, or limitations on how frequently the step size can be adjusted during the MCMC process. The present work provides a clear and straightforward way to adjust step sizes that can span a vast range of orders of magnitude, is adapted at each new state space location, and can be adapted at each step. Such methodology has the potential to be widely adopted by the community. Nonetheless, while experiments show the proposed method has reasonable performance, it does not have top performance in any given scenario.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: * This work demonstrates detailed balance, irreducibility, and aperiodicity for the proposed sampler. However, it must also be shown that the proposed sampler is not null recurrent to establish ergodicity. Can this be shown?
* Can you provide a detailed explanation of the shortcomings of ESS and superiority of KSESS? A convincing answer to this question could cause me to increase my rating score.
* Given the strong performance of NUTS in Figure 5, is it possible to combine the proposed method with NUTS to further improve performance?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your insightful questions. We hope that our answer will address all your concerns.
> The proposed method does not achieve top performance on any specific scenario.
While AutoStep does not outperform all other methods across all examples, we would like to emphasize that it consistently performs well across a wide range of scenarios. This broad robustness is due to the local adaptivity of AutoStep: it is resilient to the choice of initial value $\theta_0$ and the geometry of the targets, albeit with some additional cost associated with tuning $\theta$ at each step (Figure 2).
> It is claimed that ESS did not accurately characterize sampling performance, but few details are provided about the inadequacy of ESS and the superiority of KSESS.
Thank you for bringing this up, this is an important point! Standard ESS estimates do not incorporate knowledge of the target distribution directly, and so can indeed be misleading if the sampler fails to explore the target distribution (Elvira et al, 2022). As an extreme illustrative thought experiment, if the sampler fails severely and does not move at all, the sequence of states is indistinguishable from an iid sequence from a Dirac delta, and typical ESS estimates will report a high value despite the correct value being approximately 1. We saw a version of this issue in our experiments; for example, in an experiment using adaptive RWMH on the kilpisjärvi model, the sampler produced samples with almost no spread (variance on the first dimension was 2.472e-6), and yet the minESS reported was 45.47. In contrast, KSESS reported 0.75, which correctly identified the sampling failure. In other words, the traditional ESS estimate was nearly two orders of magnitude higher than it should have been.
Note that we do not recommend using KSESS estimate in practical data analysis, as it requires knowledge of the target or expensive computation to obtain a gold-standard sample set. We use the KSESS only for the purposes of comparing different sampling methods in a research context. We will clarify this in the camera-ready.
Elvira, Víctor, Luca Martino, and Christian P. Robert. "Rethinking the effective sample size." International Statistical Review 90.3 (2022): 525-550.
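To make the thought experiment above concrete, here is a minimal sketch using a generic initial-positive-lag autocorrelation ESS estimator (illustrative only; not the estimator used in our experiments, and the "stuck sampler" data are invented): a chain with negligible spread still receives an ESS estimate near the full sample size, because the estimator never consults the target.

```python
import random
import statistics

def naive_ess(chain):
    """ESS estimate that truncates the autocorrelation sum at the
    first non-positive lag (a simplified, target-agnostic estimator)."""
    n = len(chain)
    mean = sum(chain) / n
    xc = [x - mean for x in chain]
    var = sum(v * v for v in xc) / n
    if var == 0.0:
        return float(n)  # a frozen chain looks perfectly "iid" to this estimator
    rho_sum = 0.0
    for lag in range(1, n):
        rho = sum(a * b for a, b in zip(xc, xc[lag:])) / (n * var)
        if rho <= 0.0:
            break
        rho_sum += rho
    return n / (1.0 + 2.0 * rho_sum)

random.seed(0)
# A "stuck" sampler: draws with negligible spread around a single point,
# indistinguishable from an iid sequence from a near-Dirac distribution.
stuck = [1.0 + 1e-6 * random.gauss(0.0, 1.0) for _ in range(5000)]
print(statistics.pstdev(stuck))  # ~1e-6: the chain explores essentially nothing
print(naive_ess(stuck))          # close to 5000 nonetheless
```

A target-aware metric like KSESS would instead compare the samples against the target (or a gold-standard sample set) and flag the failure.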
> It must also be shown that the proposed sampler is not null recurrent to establish ergodicity. Can this be shown?
Thanks for the question! Please note that for ergodicity (i.e., an MCMC law of large numbers for $\pi$-a.e. initial state), it is sufficient that the chain is $\pi$-invariant and irreducible. Our theory provides these two statements. See, e.g., Theorem 5 of Geyer’s lecture notes on Markov chains at https://www.stat.umn.edu/geyer/8112/notes/markov.pdf — per the discussion on p12-13, stationarity can be replaced by $\pi$-a.e. initialization and $\pi$-invariance. We do note that if one wants a central limit theorem, more analysis and conditions are required.
> Given the strong performance of NUTS in Figure 5, is it possible to combine the proposed method with NUTS to further improve performance?
This is a very interesting point, thanks for bringing this up! The AutoStep framework is naturally applicable to all involutive methods with step size parameters. We think it is likely that with the right augmentation, one could view NUTS as such an involutive MCMC scheme, but we are not totally certain and leave a detailed development of that to future work. It can certainly be applied to HMC which is known to be an involutive scheme.
However, HMC (and NUTS if it turns out to be involutive) involve a fairly expensive involution. Based on our experiments to date, we suspect that methods with cheaper involutions (like RWMH, MALA) are likely to benefit more from using AutoStep. In particular, we have conducted preliminary experiments with AutoStep applied to HMC with a reasonable path length, and the results suggest that the minESS per cost achieved by AutoStep HMC is approximately 30% of that achieved by traditional HMC with hand-tuned "optimal" parameters on benchmark problems. However, those results are very preliminary, and it is possible more engineering effort could improve AutoStep HMC further.
---
Rebuttal Comment 1.1:
Comment: I read the other reviews and author responses. I still have an overall positive view of this paper.
Regarding performance vs. other methods like NUTS: In my view, this is still the main limitation of the paper. I feel convinced that the method can improve the performance of basic methods like RWMH and MALA, which are widely used. The sampler design is very nicely done and it seems like a useful contribution for the community. This paper provides a potential framework for improving more powerful samplers like HMC and NUTS in future works. If this was demonstrated, the paper would be very strong.
Regarding ESS/KSESS: That explanation makes sense, and I am more convinced that KSESS is appropriate for this work. I suggest including these details in revisions for clarity.
Regarding ergodicity: From my understanding, the Birkhoff Ergodic Theorem referenced by the rebuttal applies only if the Markov chain is initialized from its invariant distribution. My understanding is that positive recurrence is still needed to show that a Markov chain initialized from an arbitrary distribution will be ergodic and therefore converge to the stationary distribution (I could be wrong about this). It might be worth including some discussion of this in the paper, although it is a relatively minor point and the same issue applies to generic MCMC design and is not restricted to the proposed method.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback! We are glad to hear that your concerns regarding the KSESS metric have been resolved. We will include additional details in the camera-ready version. Below, we provide further clarification on the two remaining points you raised.
> Regarding ergodicity
Your understanding of the Birkhoff Ergodic Theorem is correct! However, positive recurrence is not needed to enable initialization from another distribution. More precisely, the Birkhoff theorem states that there exists a measurable set $A$ such that $\pi(A)=1$, and initializing the chain in $A$ yields the desired convergence a.s. Consider now any other initialization distribution $\rho$ such that $\pi$ dominates $\rho$ (a weak condition in practice). Since $\pi$ dominates, $\pi(A) = 1$ implies $\rho(A) = 1$; in other words, initializing from $\rho$ will also almost surely initialize in $A$, the set on which convergence holds. We'll add a brief clarification in the text to make this point more precise.
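For completeness, the domination step can be written out in one line (a sketch of the argument above; here $\rho \ll \pi$ denotes absolute continuity of $\rho$ with respect to $\pi$, and $A$ is the Birkhoff convergence set):

```latex
% If \rho \ll \pi and \pi(A) = 1, then a chain initialized from \rho
% also starts in the convergence set A almost surely:
\[
\pi(A) = 1
\;\Longrightarrow\; \pi(A^{c}) = 0
\;\overset{\rho\,\ll\,\pi}{\Longrightarrow}\; \rho(A^{c}) = 0
\;\Longrightarrow\; \rho(A) = 1 .
\]
```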
> Regarding performance vs. other methods like NUTS
Thank you for your positive feedback on our work and for recognizing the improvements to widely used methods; we really appreciate it! We completely agree that extending AutoStep to NUTS and HMC would be an exciting direction. We are currently working on that project, but we believe the current contribution stands on its own as a complete paper, and we plan to present the extension in future work. | null | null | null | null | null | null |
Evolving Minds: Logic-Informed Inference from Temporal Action Patterns | Accept (poster) | Summary: The paper introduces a single framework to infer human intentions, predict future actions, and interpretable logical rules. The motivation is that human actions occur irregularly and are driven by unobserved mental states/intentions. To address this, the paper proposes a framework combining the temporal point process (TPP) and amortized variation Expectation-Maximization (EM) to model the bidirectional dynamics between irregular human actions and latent mental states. Logical rules are used as priors to guide TPP to build the relationship between intention and actions, reduce dependency on large datasets, and, ensure interpretability. The framework jointly optimizes the model parameters, logical rules, and inference networks. Experiments are conducted on synthetic and real-world datasets on action prediction task (next event prediction from history) and evaluated using error rate % and mean absolute error metrics and the framework shows a good improvement on both metrics for all datasets.
Claims And Evidence: 1. The claim that mental states drive human behavior and that building a relationship between them can be helpful in predicting next actions has strong evidence, as the framework performs well across all synthetic and real-world datasets.
2. The method optimizes the logical rules (initialized from scratch) and shows results that the rule learning module is able to learn the 4 ground truth rules in Figure 3.
3. Mental events are sampled based on the hazards, and probabilistic sampling is able to sample reasonably within an error range as shown in Figure 3.
4. The paper also mentions the scalability of the approach and shows supporting results in the supplementary.
Methods And Evaluation Criteria: The objective of the paper is to improve next action prediction task by strengthening the relation between mental state and human actions.
The proposed method makes sense to address this objective. The evaluation criteria using error rate % and mean absolute error as metrics is valid and is also used by the prior TPP method [1]. The paper also shows the accuracy of learning logical rules and the sampling efficiency of mental states, which evaluates the method well.
[1]. Zuo, S., Jiang, H., Li, Z., Zhao, T., and Zha, H. Transformer hawkes process. In International conference on machine learning, pp. 11692–11702. PMLR, 2020
Theoretical Claims: Yes
Experimental Designs Or Analyses: The evaluation criteria using error rate % and mean absolute error as metrics is valid and is also used by the prior TPP method [1]. The paper also shows the accuracy of learning logical rules and the sampling efficiency of mental states, which evaluates the method well. The comparison metrics show significant improvement when compared with baselines, and further analysis of logical rules and sampling efficiency shows accuracy as well. In the supplementary, the scalability of the approach is measured, which supports the claim of the paper.
[1]. Zuo, S., Jiang, H., Li, Z., Zhao, T., and Zha, H. Transformer hawkes process. In International conference on machine learning, pp. 11692–11702. PMLR, 2020
Supplementary Material: Yes, algorithms, dataset details, rule weights, logic rules, ablation study.
Relation To Broader Scientific Literature: The key contribution of the paper of jointly inferring the human mental state and the human actions is helpful in the scientific literature of video understanding where it is essential to understand the context of the video and human intentions to anticipate future actions. The key contribution would especially be helpful for tasks such as action anticipation.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Strengths:
1. The motivation and contribution of the paper are strong and establish a relation between understanding latent human mental state and human actions.
2. The paper is written well and has a good explanation for the needed areas such as the subsection on comparison with VAEs.
3. The idea of logical rules is also helpful in reducing reliance on datasets and improving interpretability.
Weakness:
1. In Line 95, the authors claim that the rule discovery reveals novel and overlooked intention-action patterns. Is there any result/evidence in the paper that can support this?
2. As the idea of inferring human mental states seems to be a key contribution, the datasets used for experiments have a very limited number of mental states: EK-100 considers 2 states, Hand-Me-That considers 4, and car-following has a single mental state of a human vehicle following another human vehicle. Can the authors discuss why so few mental states are considered in the experimental datasets?
Other Comments Or Suggestions: In Figure 3, in the fitted hazard graph, there is a typo in the label : Ture -> True.
Questions For Authors: 1. Will the work be open-sourced?
2. What is the training time and how much resources does the framework require?
Code Of Conduct: Affirmed.
Overall Recommendation: 4 | Rebuttal 1:
Rebuttal: We sincerely thank Reviewer BKJo for the insightful analysis and recognition of our work! We hope our responses listed below can address your concerns.
**$\star$ Examples Revealing Newly Uncovered Rules**: We have reported a subset of temporal logic rules that have been identified as having real-world significance based on real-world datasets, and due to the page limitation, we put the results in the **Appendix D.4, Table 17**. These rules are learned by our model which are not provided as prior knowledge and easily overlooked. For the reviewer’s convenience, below we analyze one of the learned rules as a concrete example, explaining both its formal interpretation and real-world significance:
$$PickUp \leftarrow WantToPickUp\ \wedge\ MoveTo\ \wedge\ Open, (WantToPickUp\ before\ MoveTo), (MoveTo\ before\ Open), (Open\ before\ PickUp)$$
This rule is learned from the Hand-Me-That dataset, capturing a sequential intention-action pattern: the user first develops an intention to pick up an object ($WantToPickUp$), then moves toward a destination ($MoveTo$), and opens the container ($Open$), ultimately leading to the target action ($PickUp$).
The practical value lies in using this learned rule as guidance for mental state inference. When our model detects the user's WantToPickUp intention, the AI agent can proactively assist -- for instance, by opening the container in anticipation of the user's need. This inference and prediction capability can enhance human-agent interaction efficiency.
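To illustrate how the rule's `before` relations constrain event orderings, consider the following minimal sketch (a hypothetical encoding, not our implementation; predicate names are taken from the rule above and the timestamps are invented):

```python
def rule_satisfied(events):
    """Check the learned rule's temporal constraints.
    events: dict mapping predicate name -> occurrence time (absent = never fired)."""
    order = ["WantToPickUp", "MoveTo", "Open", "PickUp"]
    times = [events.get(p) for p in order]
    if any(t is None for t in times):
        return False  # a body predicate (or the head) never occurred
    # The three pairwise "before" relations reduce to a strictly
    # increasing sequence of occurrence times.
    return all(a < b for a, b in zip(times, times[1:]))

# Intention precedes movement, opening the container, and the pickup:
seq = {"WantToPickUp": 0.4, "MoveTo": 1.1, "Open": 2.0, "PickUp": 2.7}
print(rule_satisfied(seq))  # True
```

In the full model, such boolean checks are soft-weighted by learned rule weights rather than applied as hard filters.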
**$\star$ Discussion on Limited/Few Mental States**: In real-world settings, mental states are typically **high-level and inherently sparse within specific contexts**. Our model operates under a **closed-world assumption**, focusing on constrained datasets tailored for specific tasks. As a result, the mental states in our experiments are confined to a **predefined domain-relevant set**—justifying our consideration of only a limited number of mental states for real-world applications. Encouragingly, our model **has the potential to adapt to scenarios with unknown or numerous mental states**. Inspired by [1], which uses vision-language models for predicate invention, we can also leverage LLMs to generate diverse mental states, circumventing the limitations of pre-defined states while producing arbitrarily numerous variations, before proceeding to rule learning and inference.
[1] Liang, Y., Kumar, N., Tang, H., Weller, A., Tenenbaum, J. B., Silver, T., ... & Ellis, K. Visualpredicator: Learning abstract world models with neuro-symbolic predicates for robot planning. ICLR 2025.
**$\star$ Training Time & Computing Infrastructure**: We have reported the training time for both synthetic datasets and real-world datasets as well as the computing infrastructure in our previous submission, and due to the page limitation, we put the results in the Appendix. Below we provide the section indexes for reviewer's convenience. The training time for varying sample sizes and the number of ground truth rules have been shown in **Appendix D.1**, for both synthetic datasets (**Figure 5**) and real-world datasets (**Figure 6**). Additionally, we have compared the impact of different hyper-parameter choices on training time and model performance in **Appendix E.1**. Details of the computational resources and computing infrastructure used in our experiments were provided in **Appendix E.2**.
**$\star$ Open-Source**: Yes, we will release the codes for the final version.
**$\star$ Correction for Typos**: Thanks for pointing it out! We have updated the legend in Figure 3 and made corresponding revisions to the final manuscript.
---
Rebuttal Comment 1.1:
Comment: Thank you, authors, for providing the rebuttal and addressing my concerns! My concerns about the rule discovery revealing novel, overlooked rules, training time, open source, and typos were addressed well. The appendix has the results for new rule discovery and looks convincing. Regarding the discussion on limited and few mental states, having more fine-grained mental states would have added more strength to the work, but I believe the assumption of closed-world and sparse mental states works fine for the scope of the work. | Summary: This paper proposes combined logic-informed temporal point processes with amortized variational EM, allowing their method to infer underlying mental states reliably, even in low-data regimes.
Claims And Evidence: Their experimental results show the effectiveness of this framework on some synthetic as well as real-world datasets.
However, I do have some concerns regarding scalability. First of all, I think it is an impressive result that the method works well in low-data regimes and is efficient through the use of EM and injection of priors as logic rules. However, I am less convinced by the current dataset sizes that there is evidence of scalability, despite the authors' pointing this out.
Methods And Evaluation Criteria: Yes, the baselines, datasets and the metrics used for evaluation all seem reasonable to me.
Theoretical Claims: There aren't theoretical claims. The derivations used when presenting the method looked good to me.
Experimental Designs Or Analyses: Overall the authors conduct experiments both on synthetic and real-world datasets to showcase their method.
Supplementary Material: I did check the ablation of finding previously un-thought of rules (D.4.) and some details on the synthetic dataset sizes for the scalability experiments. (D.1)
Relation To Broader Scientific Literature: Learning to infer human intent and predicting next actions from existing datasets is a relevant problem for human-centric AI, and the authors propose a method that achieves good results in low-data regimes, with the logic rules offering additional interpretability as well.
Essential References Not Discussed: I am not aware of missing literature at this point.
Other Strengths And Weaknesses: I think overall this is a nice work, with the logic-informed priors offering interpretability in human-centric AI and the method showcasing good results in low-data regimes.
I do have some worries though regarding the scalability claims -- I do think it is impressive that the framework works well in low-data regimes. I am bringing up scalability mainly because the authors mention it throughout the paper (that the method can scale), but the largest synthetic dataset used has 5K trajectories and a small number of underlying rules (up to 6 ground truth rules, App. D.1). I do not think this is enough evidence to support the scalability claims; I would be curious to hear the authors' thoughts on this.
Another weakness I find is in the presentation of the method: I think the overall setup of the method can be better motivated with some examples in the beginning of the Preliminaries section. For example, the explicit actions and mental events differentiation first comes up in Section 3.
Other Comments Or Suggestions: - Line 270 column 1: is computd --> is computed
- Line 272 column 2: nearconvergence --> near convergence or near-convergence
- Lines 292-292 column 1: model is easily adopt to large-scale --> model can easily be adopted ...?
- Line 282 column 2: other baseline attempt to predict **the** next event from history
- Line 303 column 2: In paticular --> in particular
- Line 403 column 2: faces scalability issue**s**
Questions For Authors: - As mentioned before, I would be curious to hear the authors' thoughts on how well their method can scale or at which point they realistically would expect it to not scale anymore.
- How does the method perform in unfamiliar/novel situations?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank Reviewer w7nu for the detailed analysis and insightful comments, which benefit us to further improve our paper! To address your concerns, we have prepared a detailed point-by-point response below.
**$\star$ Scalability**: Your comments are very valuable! Yes. We acknowledge that our original statement was imprecise. A more accurate statement would be: "Our model demonstrates potential for scaling to large datasets."
Compared with related methods: for our rule learning module, existing related approaches like TELLER [1] and CLNN [2] require only 600-2,400 synthetic training sequences, while event prediction models with latent states like AVAE [3] use up to 2,000 sequences. In comparison, our reported experiments utilize relatively large sample sizes.
Moreover, our fine-grained model's computational complexity stems from three key factors:
(1) The inherently combinatorial nature of rule learning.
(2) Temporal discretization, which requires inference at each point of a fine time grid for accurate continuous-time mental event inference.
(3) Event density per sequence. In our scalability experiments, the most complex synthetic dataset comprises 5,000 sequences, with an average of 47.60 action events per sequence—totaling 238,000 events. This represents a relatively large-scale dataset.
Therefore, scalability assessment must consider **not just sample size**, **but also rule learning complexity, temporal resolution, and event density** -- making our 5,000-sample experiments relatively large-scale under these demanding conditions.
Considering the above key factors influencing scalability, we have added new experiments using Syn Data-2 and larger sample size to further assess the potential of our model. Please find **Table 1** in
https://anonymous.4open.science/r/paper9060-F13F
These new experimental results show that the model performance exhibits asymptotic stabilization with increasing dataset size, and our model still has the potential to perform well on datasets at the 10K+ sequence scale within a satisfactory training time cost.
[1] Li, S., Feng, M., Wang, L., Essofi, A., Cao, Y., Yan, J., & Song, L. Explaining point processes by learning interpretable temporal logic rules. ICLR 2021.
[2] Yan, R., Wen, Y., Bhattacharjya, D., Luss, R., Ma, T., Fokoue, A., and Julius, A. A. Weighted clock logic point process. ICLR 2023.
[3] Mehrasa, N., Jyothi, A. A., Durand, T., He, J., Sigal, L., and Mori, G. A variational auto-encoder model for stochastic point processes. CVPR 2019.
**$\star$ Illustrative Examples**: Thanks for your advice! We will incorporate additional illustrative examples in subsequent revisions to enhance comprehension. Following your suggestion **regarding the distinction between action and mental events, we have added an example**:
“Consider a person who intends to start exercising and later maintains regular strength training at the gym. This behavioral sequence involves: (1) a mental event (the unobservable intention to exercise) and (2) action events (the observable gym workouts). Crucially, mental events are internal and unobservable, while actions are external and directly observable—constituting what we actually perceive.”
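The distinction in the example can be made concrete with a hypothetical data layout (illustrative only; the names are ours, not the paper's): action events carry observed timestamps, while mental events are latent and must be inferred from the observed history.

```python
from dataclasses import dataclass

@dataclass
class Event:
    predicate: str
    time: float
    observed: bool  # True for action events, False for latent mental events

trajectory = [
    Event("IntendToExercise", time=0.0, observed=False),  # latent mental event
    Event("GymWorkout", time=1.5, observed=True),          # observed action
    Event("GymWorkout", time=3.2, observed=True),          # observed action
]

# Only the observed actions are available to the model at inference time;
# the mental event must be recovered by the inference network.
observed_history = [e for e in trajectory if e.observed]
print([e.predicate for e in observed_history])  # ['GymWorkout', 'GymWorkout']
```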
**$\star$ Unfamiliar Situation**: Our rule-generation algorithm is inherently capable of learning logic rules from scratch, even in scenarios with limited prior knowledge, making it highly adaptable to unfamiliar situations. In practice, when encountering such scenarios, our system operates in two modes: if pre-trained on similar datasets, it directly transfers the well-trained model and uses the learned rules as priors for the new data; otherwise, it autonomously learns new rules without requiring predefined logic rules, with the built-in backtracking mechanism further improving accuracy.
We have assessed our method under two distinct unfamiliar conditions: **transfer adaptation (for domain-related cases)** and **fully unfamiliar application (for entirely unseen datasets)**, with comparative results presented in **Table 2** in
https://anonymous.4open.science/r/paper9060-F13F
These new experimental results confirm our model **maintains robust performance in fully unseen scenarios**. When **pre-trained** on similar datasets, the learned rules **boost both training efficiency and final model performance**.
**$\star$ Correction for Typos**: Thanks for your careful review! All typos have been corrected and will be updated into the final version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the new experiments and for the clarifications. I will keep my current score. | Summary: The paper presents an amortized variational EM framework for understanding human mental states by modeling the relationship between actions and hidden mental events over time. Some innovations include using logic rules as priors to improve interpretability and approximating the posterior distribution of latent mental states by discrete-time renewal process.
Claims And Evidence: The paper makes several claims about the effectiveness of their proposed method, and overall, these are supported by experimental evidence.
Methods And Evaluation Criteria: The proposed methods and evaluation criteria generally align well with the problem of inferring human mental states and predicting future actions from temporal data. The use of two strategies tailored to different scenarios is useful. One question regarding autonomous rule learning for data-rich domains: in Table 17, the elements of temporal logic rules, such as PickUp and WantToPickUp, are still predefined, right?
Theoretical Claims: No formal theoretical claims are provided in this work.
Experimental Designs Or Analyses: The experiments on multiple datasets demonstrate improved event prediction performance, and the baselines from three categories—Neural TPP, Rule-Based, and Generative models—provide fair comparisons. Additionally, the ablation study on backtracking supports its effectiveness.
Supplementary Material: The appendix provides algorithm pseudocode, experiment details, and further discussion on potential limitations.
Relation To Broader Scientific Literature: This work builds on existing work by combining logic reasoning with neural networks, dynamically discovering rules while balancing model flexibility and explainability.
Essential References Not Discussed: NAN
Other Strengths And Weaknesses: See above.
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for your positive feedback!
Regarding your question:
_"One question regarding autonomous rule learning for data-rich domains: in Table 17, the elements of temporal logic rules, such as PickUp and WantToPickUp, are still predefined, right?"_
Yes, in our current framework, predicates (i.e., _"elements of temporal logic rules"_ as shown in Appendix D.4, Table 17) are drawn from a predefined candidate pool. This reflects a **closed-world assumption**, where all possible actions and mental states are specified in advance.
However, our approach can be extended to an **open-world setting** by incorporating **predicate invention mechanisms**. Inspired by prior work [1] on using vision-language models for predicate generation, we can leverage LLMs to handle novel situations—first prompting them to generate diverse candidate actions and mental states, and then integrating these into the rule-learning and inference process.
[1] Liang, Y., Kumar, N., Tang, H., Weller, A., Tenenbaum, J. B., Silver, T., ... & Ellis, K. Visualpredicator: Learning abstract world models with neuro-symbolic predicates for robot planning. ICLR 2025.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarification. I have no further questions and will keep my score. | null | null | null | null | null | null | null | null |
Solving Linear-Gaussian Bayesian Inverse Problems with Decoupled Diffusion Sequential Monte Carlo | Accept (poster) | Summary: ## Summary
* This paper builds on previous work on solving diffusion inverse problems with sequential Monte Carlo [Practical and Asymptotically Exact Conditional Sampling in Diffusion Models]. More specifically, it takes the inner-loop part of decoupled posterior sampling [Improving Diffusion Inverse Problem Solving with Decoupled Noise Annealing] and blends it with SMC-based posterior sampling via an annealing parameter. By adjusting the annealing parameter $\eta$, this new prior can generalize previous SMC-based methods.
* The authors verify the effectiveness of their approach on GMM and image restoration tasks. The proposed approach appears to be state of the art on the GMM task and achieves competitive performance on the image restoration tasks.
Claims And Evidence: ## Claims And Evidence
* I have some concerns with using the solution of the PF-ODE as an approximate sample from q(x0|xt+1). This approximation is not well justified, as the PF-ODE is marginal-preserving, not distribution-preserving. The PF-ODE, starting from a random xt+1, yields the same marginal distribution as q(x0); however, it is unclear whether the result serves as a good approximate sample from q(x0|xt+1).
* I am a little confused by the motivation of this paper. It seems that TDS [Practical and Asymptotically Exact Conditional Sampling in Diffusion Models] is already asymptotically exact. DDSMC uses approximate samples, such as the Tweedie estimate and the PF-ODE solution, for q(x0|xt+1). Does the approximation error harm the asymptotic exactness? Is there any theoretical advantage of DDSMC over TDS?
Methods And Evaluation Criteria: ## Methods And Evaluation Criteria
* The evaluation of this paper is somewhat limited. The authors only verify their approach on 100 FFHQ images. This limited data makes it hard to compute divergence-based metrics such as FID. A relatively larger, more diverse dataset, such as 1000 ImageNet images, might strengthen the empirical results.
* Further, only LPIPS is chosen as a benchmark metric, while a more comprehensive comparison using PSNR and FID would help readers understand the results better.
* The complexity of the proposed approach is quite high, yet I find no metrics on this, such as wall-clock time or FLOPs. It seems to me that DDRM can be super fast, and DAPS can also be made fast by adjusting parameters. The Tweedie version of DDSMC can be as efficient as DAPS. However, I am really not sure whether it is fair to compare the PF-ODE version of DDSMC, as it appears to be a lot slower than DDRM and DAPS.
Theoretical Claims: ## Theoretical Claims
* The theoretical claims look correct to me.
Experimental Designs Or Analyses: ## Experimental Designs Or Analyses
* See Methods And Evaluation Criteria
Supplementary Material: ## Supplementary Material
* I read the proofs and additional empirical results.
Relation To Broader Scientific Literature: ## Relation To Broader Scientific Literature
* This paper contributes a new algorithm to the diffusion inverse solvers. The main contribution is a more effective SMC based algorithm with the idea taken from DAPS.
Essential References Not Discussed: ## Essential References Not Discussed
I find no essential reference missing.
Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: Thank you for the comments, which have been of great value to improve the paper.
## Design choices in SMC make big difference in practice, motivating us to construct a new and better algorithm
The reviewer correctly points out that TDS and MCGDiff already enjoy asymptotic exactness. However, the choices of intermediate targets and of the proposal can make a big difference in practice with a finite number of particles. Motivated by previous works on SMC for diffusion priors, we set out to design a new and improved algorithm. The different SMC algorithms indeed show very different performance in the experiments, with DDSMC outperforming both MCGDiff and TDS, providing strong evidence that the design choices made in DDSMC give better practical efficiency compared to the other SMC methods. As a reply to _"By adjusting annealing parameter $\eta$, this new prior can generalize previous SMC based methods"_, we would like to emphasize that DDSMC is a novel SMC method for the problem under study _regardless of the annealing parameter $\eta$_. In fact, our main contribution is the development of this novel SMC method, and we view the generalization of the DAPS prior (i.e., introducing $\eta$) as a secondary contribution. See also the reply to psuz, and the responses to tiGx and XuMA regarding asymptotic exactness.
## We will rephrase the part about PF-ODE for sampling from $q(x_0|x_{t+1})$
In the background section (line 104 col 1) we wrote that it is possible to use the PF-ODE to sample from $q(x_0|x_{t+1})$, in order to generate a sample trajectory from the prior. We thank the reviewer for pointing out this error, and we agree that this is incorrect.
Note that **this paragraph was only included as an "intuitive explanation" of how the proposed method works, and the validity of the method does not in any way rely on the PF-ODE sampling from $q(x_0|x_{t+1})$**.
What we actually meant to say with this section was that, conceptually, a (convoluted) way to simulate from the _prior_ backward process would be:
Initialize $x_T\sim q(x_T)$.
For $t=T-1,\dots,0$,
1. Solve $\hat x_{0,t+1}=\text{PF-ODE}(x_{t+1})$ from time $t+1$ to 0
2. Sample $x_{t}\sim q(x_{t}|x_0=\hat x_{0,t+1})$
This would result in samples such that, marginally for any $t$, $x_t\sim q(x_t)$, but we do not get samples from the _joint_ $q(x_{0:T})$, _nor from the conditionals_ $q(x_0|x_{t+1})$ as the reviewer correctly points out. This sampling process motivates the DAPS prior that we use (and generalize), which is why we mentioned it in the background, but we will of course make sure to update the text so that it is mathematically correct when revising the manuscript.
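The two-step scheme above can be sketched numerically. Below is a minimal, self-contained toy (our own illustration, not DDSMC itself), assuming a variance-exploding forward process $x_t = x_0 + \sigma_t\varepsilon$ with a standard-normal data distribution, for which the PF-ODE solution from time $t$ to $0$ is available in closed form as a rescaling. It illustrates exactly the point made here: every per-step marginal matches $q(x_t)$, even though the trajectories are not samples from the joint $q(x_{0:T})$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy VE diffusion with data distribution q(x0) = N(0, 1):
# x_t = x_0 + sigma_t * eps, so q(x_t) = N(0, 1 + sigma_t^2).
sigmas = np.linspace(0.0, 10.0, 21)  # sigma_0 = 0, ..., sigma_T = 10
T = len(sigmas) - 1

def pf_ode_reconstruct(x_t, t):
    # For a zero-mean Gaussian data distribution, the PF-ODE solution from
    # time t down to 0 is exactly a rescaling by the marginal std ratio.
    return x_t / np.sqrt(1.0 + sigmas[t] ** 2)

n = 100_000
x = rng.normal(0.0, np.sqrt(1.0 + sigmas[T] ** 2), size=n)  # x_T ~ q(x_T)
for t in range(T - 1, -1, -1):
    x0_hat = pf_ode_reconstruct(x, t + 1)        # step 1: solve PF-ODE to time 0
    x = x0_hat + sigmas[t] * rng.normal(size=n)  # step 2: x_t ~ q(x_t | x0 = x0_hat)

# Marginally x ~ q(x_0) = N(0, 1), but the joint over (x_0, ..., x_T) differs
# from q(x_{0:T}), and x0_hat is not a sample from q(x_0 | x_{t+1}).
```

By induction, each intermediate `x` has the marginal $\mathcal{N}(0, 1+\sigma_t^2)$, matching the prior marginals, which is what motivates the DAPS-style prior.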
## We have evaluated DDSMC on protein structure completion
The reviewer writes _"The evaluation of this paper is a kind of limited."_ In the response to XuMA we have added results for another experiment concerning protein structure completion, showing that DDSMC can outperform the tailored ADP-3D method out of the box in (realistic) high-noise settings.
## We have now evaluated using 1k images
We reran the experiments on 1k images, with essentially identical results (see response to psuz). We also computed PSNR, where now DDRM is the overall strongest model. However, just as for LPIPS, the standard deviation is rather large. Given that our method aims to recover posterior distributions, PSNR as a per-pixel metric (even stricter than per-sample metric like LPIPS) does not represent an ideal metric here, and for generative models there is often a trade-off between perceptual (e.g., LPIPS) and distortion (e.g., PSNR) metrics [1]. We will supply the PSNR table in the appendix with a comment in the main paper. We don't compute FID as we are focused on sampling from 1k different conditional distributions and not 1 unconditional distribution.
[1] Blau and Michaeli, The Perception-Distortion Tradeoff, CVPR 2018
## Clarification regarding complexity
We agree that a discussion is missing and will add this in the paper. In summary: DDSMC-Tweedie requires N times more (N=number of particles) NFEs per diffusion step compared with DDRM, and DDSMC-ODE has N times the NFE of DAPS. DDSMC-Tweedie has the same complexity as MCGDiff and requires slightly fewer NFEs than TDS (which requires differentiating through the score-function). See response to XuMA for an additional study using fewer particles to obtain the same number of NFEs for DDSMC-ODE as MCGDiff in the GMM case. For images, we are already using fewer NFEs as we are using fewer particles.
The additional NFEs required for SMC should be viewed as way of trading off improved sample quality with compute. As seen in the additional GMM experiments with fewer particles, this aspect holds empirically as the performance improves when using more particles (especially over using a single particle). For methods like DDRM or DCPS, we have attempted to use as much computation as reasonably possible and they still fail while DDSMC effectively enjoys the compute-quality trade-off.
---
Rebuttal Comment 1.1:
Comment: My concerns about the experimental results remain.
The authors claim that they have re-run the evaluation using 1k images in the response to __psuz__. I searched the response to __psuz__ but found no additional results, which is a little confusing.
__AC and other reviewers: have I missed any additional results?__
The authors also have not justified the adoption of the PF-ODE for approximating posterior samples, which is one of the two major ways of reconstructing $x_0$ in their method. I think this issue is important.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comment.
First of all, we had introduced a typo, and the 1k results are discussed in the response to **tiGx**. We are sorry for the confusion. This can be found under the headline "We have now evaluated on 1k images, computed standard deviations", and reads "We took the reviewer's advice and evaluated on a 1k image validation set. The numbers, however, differ only by a maximum of 0.01. The standard deviations of the 1k values range between 0.01 and 0.075." In other words, we do not see much difference when using more images (the LPIPS values are the same, differing by at most 0.01).
## Regarding evaluation
We want to highlight that we have also made an extensive study in a Gaussian mixture model (GMM) setting, where we can really evaluate the posterior sampling capabilities, which is the task we are targeting in our paper. See the heading "We target posterior sampling, which is verified in the GMM task'' in our response to psuz. As we say there, this is a necessary study to show that the model can actually sample from the true posterior, and in the GMM setting, we can check this. Additionally, as far as we are aware, there is no way of doing this exactly for images, and therefore the main purpose of the image experiments is to first qualitatively evaluate that the model also works in high-dimensional, real-world, settings, and also quantitatively, using the LPIPS metric. As mentioned, "we are in line with SOTA methods, and outperform MCGDiff, which is the _closest comparable_ method to DDSMC"
## Regarding PF-ODE
We are unsure about the new comment about using PF-ODE solution as the reconstruction. We interpreted the initial review as referring to the paragraph in the background section which stated that PF-ODE is a sample from $q(x_0|x_{t+1})$. We agree that this statement was incorrect, and in the rebuttal we described how this paragraph was intended to give an "intuitive explanation" of how the proposed method works, and how we will change that part to be mathematically correct. As we stress in the rebuttal, **the validity of our method does not rely on this paragraph**.
If the new question is not about this paragraph, but in general asks for a motivation of why we can use the PF-ODE as the reconstruction, we would like to first highlight that this is motivated in the main paper in the paragraph describing the DAPS prior, line 134 col 1, and is also reflected in our responses to XuMA regarding Proposition A.1 and asymptotic exactness. Essentially, in the DAPS prior, we can use the PF-ODE to obtain a sample from $p_\theta(x_0)$ which can then be pushed forward in time, and this will, under some assumptions, lead to samples from the same marginal $p_\theta(x_t)$.
If we still haven't answered the question, we kindly ask the reviewer for further clarifications. | Summary: The paper proposes a new SMC method for sampling from the posterior of a Bayesian inverse problem that uses the as prior the time zero marginal of a learned score-based (or diffusion) generative model. The proposed SMC is influenced by the [1] but restricts itself to a Gaussian linear likelihood and instead of Langevin proposes an SMC approach. The paper also has links with [2] and [3], which are other SMCs methods in the literature. The paper also proposes an extension to discrete diffusion which is an unique feature with respect to the other available SMC samplers.
The proposed algorithm is evaluated both in a toy dataset where a tractable Bayesian posterior distribution is available and also on image datasets. While in the toy example it excels w.r.t the other available methods, in the image datasets it comes as a second best w.r.t [4], even though the authors rightly point out that the available metrics for the image task do not exactly measure the "correctness" of the posterior sampler, but rather the visual qualities of the images.
[1] Zhang, B., Chu, W., Berner, J., Meng, C., Anandkumar,
A., and Song, Y. Improving Diffusion Inverse Problem
Solving with Decoupled Noise Annealing, July 2024
[2] Wu, L., Trippe, B., Naesseth, C., Blei, D., and Cunningham,
J. P. Practical and Asymptotically Exact Conditional
Sampling in Diffusion Models. Advances in Neural Infor-
mation Processing Systems, 36:31372–31403, December
2023.
[3] Cardoso, G., el Idrissi, Y. J., Corff, S. L., and Moulines,
E. Monte Carlo guided Denoising Diffusion models for
Bayesian linear inverse problems. In The Twelfth Interna-
tional Conference on Learning Representations, 2024.
[4] Janati, Y., Moufad, B., Durmus, A. O., Moulines, E., and
Olsson, J. Divide-and-Conquer Posterior Sampling for
Denoising Diffusion priors. In The Thirty-eighth Annual
Conference on Neural Information Processing Systems,
November 2024.
Claims And Evidence: The claims of the paper concerning their performance are supported by clear and convincing evidence.
However, I feel that there is a slight problem with one of the theoretical claims, namely the validity of the SMC sampler under general conditions. Notably, in the text, the authors claim :
"SMCDiff (Trippe et al., 2023) and FPS (Dou & Song, 2023)
are two other SMC algorithms that target posterior sampling
with diffusion priors, but these rely on the assumption that
the learned backward process is an exact reversal of the
forward process, and are therefore not consistent in general."
which leads the readers to believe that this is not the case of the current approach. But I have doubts over such claim (see theoretical claims).
Methods And Evaluation Criteria: Yes, the benchmarks are well chosen and make sense for the application, even though I feel that an addition of a different source of real data would greatly enhance the evaluation of the current method (such as audio, or video or as in DCPS the ECG).
Theoretical Claims: Yes, I have checked the theoretical claims, and I have an issue with Proposition A.1. While the proof is correct, its usage in equation (4) is not. Indeed, if we assume that $f_{\theta}(x_{t+1})$ is a sample from $p_{\theta}(x_{0})$, then to obtain a sample from $p_{\theta}(x_t)$, Proposition A.1 suggests that one has to use $p_{t|0}(x_t | x_{0})$, which is not equal to $q_{t|0}(x_t | x_0)$ unless the forward and backward processes match, if I'm not mistaken. Thus, for the proposed SMC to be valid, it needs two assumptions: that the ODE samples from $p_{0}(x_0)$, and that the "forward of the backward" $p_{t|0}$ is equal to the forward $q_{t|0}$.
While this is not a problem per se and can be considered an approximation to render the SMC tractable, the SMC is not asymptotically exact under general conditions, as is the case for MCGDiff and TDS. Indeed, it would fall under the category of "SMCDiff (Trippe et al., 2023) and FPS (Dou & Song, 2023)
are two other SMC algorithms that target posterior sampling
with diffusion priors, but these rely on the assumption that
the learned backward process is an exact reversal of the
forward process, and are therefore not consistent in general."
Experimental Designs Or Analyses: I checked the soundness of the experimental designs and analyses, but I have one issue: the current analysis is not made under a same-budget criterion. For example, in the mixture-of-Gaussians experiment, the authors state that they used 256 particles for all SMC samplers. The problem is that this generates a much, much higher NFE (neural function evaluation) count for their algorithm, as they need to solve the whole ODE for each time step and each particle, which is not the case for MCGDiff or TDS. Therefore, instead of doing 256x20 NFEs as MCGDiff and TDS do (TDS actually does a bit more), they do approximately 20 times more. I would suggest increasing the number of particles in MCGDiff and TDS to have a fair comparison in this example.
This concern, however, does not apply to the image section, where an almost equivalent budget is used for MCGDiff and the proposed method.
Supplementary Material: Yes, I reviewed section A, B and F thoroughly.
Relation To Broader Scientific Literature: Yes, the related material and the context of the proposed method are clearly explained. The paper also clearly explain the differences between the different SMC samplers.
Essential References Not Discussed: no
Other Strengths And Weaknesses: The paper is clearly written and does a clear review of the existing methods in SMC.
Besides the two points raised above (theoretical claims and methodology), I feel that one item lacking for practitioners is an analysis of parameter sensitivity. While the authors show how the parameter $\eta$ influences performance, it is not clear how one should choose the number of particles or the number of steps in the ODE. It would be interesting to see the trade-offs between them in a fixed-budget regime.
Other Comments Or Suggestions: Line 171 second column there is an extra parenthesis in the Gaussian.
Questions For Authors: My main question is asked in the theoretical claims section, namely:
1. Does the SMC holds under the hypothesis that the two joints distributions (backward and forward) do not match? My understanding is that this is not the case.
2. How do all the algorithms compare with equal NFE in the mixture of gaussian case?
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: Thank you for taking the time to read and comment on our paper, which certainly has been useful to make the paper better. We answer concerns and questions below.
## Assumptions in proposition A.1 concerns DAPS vs standard prior, not asymptotical exactness of DDSMC
Thanks for pointing out the unclarity regarding the assumptions in Proposition A.1., and how they affect the consistency of DDSMC. We emphasize that **Proposition A.1 only concerns properties of the DAPS prior**, not the DDSMC algorithm per se. Specifically, it refers to whether the DAPS prior ($\eta =0$) and the standard diffusion prior ($\eta=1$) result in the same (marginal) prior $p_\theta(x_0)$. If the assumptions do not hold (as they will not in practice), these priors are different. However, regardless of the choice of prior (we view this as a design choice, more about this below), **DDSMC will have asymptotic exactness guarantees**, i.e., the empirical approximation will converge to the corresponding posterior *induced by the chosen prior*. It is in this last point where SMCDiff and FPS are different: they also start from a diffusion-model prior which (combined with the given likelihood) induce a posterior. However, these algorithms then target _an approximation of the induced posterior_. This approximation, which is the target of their respective SMC samplers, will correspond to the actual induced posterior only when the forward and backward kernels match. This is what we mean by "SMCDiff and FPS [...] are therefore not consistent in general".
As mentioned above, it's important to note that the prior in DDSMC is a design choice. We have developed the method based on a generalization of the DAPS prior, because the decoupling offered by this prior has proven to be useful in prior work. However, the DDSMC method is equally applicable to the "standard diffusion prior" (simply set $\eta=1$) and the algorithm will then provide consistent approximations of the posterior induced by this prior. In this case we do not rely on Proposition A.1 at all.
We thus believe that the assumptions in Proposition A.1 are used in a fundamentally different way in DDSMC than in SMCDiff and FPS. We thank the reviewer for highlighting this important detail, and we will make a clarification around line 144 col 1 in a revised version of the paper. See also the response to reviewer tiGx about asymptotic exactness. If there are any more questions or concerns about any of this, we are happy to answer in a follow-up comment.
## We have tried GMM experiments with same compute budget as for MCGDiff
Using Tweedie's formula as the reconstruction requires just a single evaluation of the score function, meaning in **this case we are already using the same compute budget as MCGDiff in the GMM experiments**. We will clarify that this is what we mean in line 358 col 1 when saying that "DDSMC outperforms all other methods, even using Tweedie's reconstruction".
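For completeness, Tweedie's formula underlying this reconstruction (written here for a generic variance-exploding forward process $x_t = x_0 + \sigma_t \varepsilon$, which may differ from the paper's exact parametrization) is

$$
\hat x_0(x_t) \;=\; \mathbb{E}[x_0 \mid x_t] \;=\; x_t + \sigma_t^2 \,\nabla_{x_t} \log q_t(x_t),
$$

so a single evaluation of the (learned) score $\nabla_{x_t}\log q_t(x_t)$ suffices, in contrast to a full PF-ODE solve.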
We do agree, though, that using DDSMC-ODE under same compute budget is also interesting, and have therefore performed additional experiments using DDSMC-ODE with 12 particles (20x less than the other methods) and 25 particles (10x less, as the number of steps in the ODE decreases from 20 to 1, i.e. $\sim 10$x more score evaluations/particle on average), and the results show that in low dimensions, 12 or 25 particles is still better than the Tweedie (and hence, MCGDiff/TDS), but this changes for higher dimensions, where using Tweedie with more particles seems to be the better choice. This shows how the SMC aspect (multiple particles and resampling) indeed is an important aspect in the performance. We will add these results in the appendix, along with a similar experiment where we change the number of ODE-steps instead, and a comment in the main paper.
## Additional empirical results
We agree with the reviewer that "an addition of a different source of real data would greatly enhance the evaluation", and therefore looked at the protein structure completion in ADP-3D [1]. Their method is built for that type of task, and our method performs well out of the box, outperforming ADP-3D on higher, but realistic, noise levels (where we had to tweak their learning rate to give reasonable results).
RMSD on 7qum protein. Columns indicate that every $n$ residues are observed.
| Model ($\sigma=0$) | 2 |4 | 16 | 32 | 64 |
|---| ---| ---| ---| ---| ---|
| ADP-3D | 0.229|0.378 | 1.690 | 3.590| 7.788|
| DDSMC | 0.231|0.938 | 2.385| 3.858| 8.552| 13.643 |
| Model ($\sigma=0.1$) | 2 |4 | 16 | 32 | 64 |
|---| ---| ---| ---| ---| ---|
| ADP-3D | 1.371|1.429|3.404|4.540|8.542| 13.318|
| DDSMC | 1.264|1.568 |2.849| 4.201| 8.927| 13.456 |
| Model ($\sigma=0.5$) | 2 |4 | 16 | 32 | 64 |
|---| ---| ---| ---| ---| ---|
| ADP-3D | 6.704| 7.087| 7.970| 9.283| 14.441|14.387|
| DDSMC | 6.047 | 6.245| 6.742|7.479| 10.282| 14.792 |
[1] Levy et al. Solving Inverse Problems in Protein Space Using Diffusion-Based Priors, arXiv, 2024 | Summary: The authors consider solving linear inverse problems with diffusion models, but only those where the forward model has a tractable SVD. They build on the recent decoupled annealed posterior sampling (DAPS) method by Zhang (2024) by replacing its Langevin sampling inner-loop with a sequential Monte-Carlo (SMC) sampler. They claim that this improves posterior sampling performance and present numerical experiments with low-dimensional synthetic GMM data and high-dimensional image data. Their method differs from several recent works on diffusion-SMC in the details of the intermediate targets and proposal function.
## update after rebuttal
I appreciate the clarifications given in the authors' response, but I still have concerns about the readability of the paper, since a lot of effort will be required to make it clear and accessible. If you look closely at my review, there are a number of unanswered questions, and responses to other reviewers suggest that there are many typos. Regarding the existence of linear inverse problems without implementable SVDs, there are many, including motion deblurring, multi-coil MRI, computed tomography, any i.i.d. random forward operator like those popular in compressive sensing, etc. With these issues in mind, I am leaving my score as-is.
Claims And Evidence: The main claim is that the proposed method is asymptotically exact (see the abstract). I did not find the explanation convincing because many approximations are made and it's not clear whether they all guarantee asymptotic exactness.
* In line 204 col 1, it's acknowledged that (13) is an approximation of the posterior. Is this relevant to the proposed method or does it pertain only to DAPS?
* In (16) a proposal distribution is constructed based on several approximations. Do they interfere with asymptotic exactness?
* Around (68) we learn that several key hyper parameters like $\rho_t$ are heuristically adjusted. How does the choice of $\rho_t$ affect asymptotic exactness?
Another big issue with the paper is that the proposed DDSMC method is never clearly described. Algorithm 1 doesn't capture what is described in the text.
* It's not clear how $f_\theta$ on line 171 col 1 is computed. Based on line 321, this seems to require an inner loop with a DDIM ODE solver, but this is not clearly described. For example, (66) seems to describe an inner loop, but the time variable $t$ is the same as used in the outer loop of Algorithm 1, making it impossible to understand.
* Throughout the paper, the word "steps" is used in an ambiguous and confusing way. Based on Algorithm 1, there seems to be outer "steps" but also inner steps used when evaluating $f_\theta$, but often the authors don't distinguish between them. And there is a confusing comment in line 323 about "remaining steps $t,t-1,...,0$". What does this mean?
* In section F.2.1, the $\sigma_t$ and $t_i$ quantities are described over $M$ steps, but these quantities do not appear in Algorithm 1, nor even the DDIM ODE update (66). How does $M$ relate to $T$ in Algorithm 1? How are the $t_i$ in F.2.1 related to $t$ in Algorithm 1?
Methods And Evaluation Criteria: It's not clear that the authors evaluated the competing methods under appropriate hyper parameter choices. For example, the DDRM paper shows in their Table 6 that both PSNR and FID get worse as the NFEs are increased from 20 to 100, suggesting that "more is not better". But in the paper under review, DDRM was evaluated with 1000 or 300 NFEs, which are massively larger than the standard value of 20, without justification. This may be a very poor choice.
Theoretical Claims: It would help to have a detailed theorem and proof statement for the "asymptotically exact" claim. Currently the claim is too vague.
Experimental Designs Or Analyses: One issue with the experimental analysis is that the number of NFEs is never clearly reported for the proposed method. It seems to grow as the product of the number of particles $N$, the number of outer loop steps $T$, and possibly the number of inner loop steps $M$, but as reported earlier, the proposed method is never clearly described. In any case, it is imperative to explicitly list the number of NFEs, which seems to be a serious drawback of the method.
Another issue with the experiments is that the authors use only a 100-image validation set, whereas 1000 is typical in most respected diffusion posterior sampling papers. Just because DCPS and DAPS used 100 doesn't mean that it is sufficient. With only 100 validation images, there is likely to be large standard errors on the averaged performance metrics, and no standard errors are even presented in Table 3.
Supplementary Material: I went through the entire supplementary material and described issues elsewhere in this review.
Relation To Broader Scientific Literature: Personally, I think that this paper is a relatively minor variation on other recent SMC diffusion posterior sampling works like MCDiff and TDS.
Essential References Not Discussed: There are various other MCMC approaches to posterior sampling with diffusion that should be discussed. For example
* Florentin Coeurdoux, Nicolas Dobigeon, and Pierre Chainais. Plug-and-play split Gibbs sampler: Embedding deep generative priors in Bayesian inference. IEEE Trans. Image Process., 33:3496–3507, 2024.
* Zihui Wu, Yu Sun, Yifan Chen, Bingliang Zhang, Yisong Yue, and Katherine Bouman. Principled proba- bilistic imaging using diffusion models as plug-and-play priors. In Proc. Neural Info. Process. Syst. Conf., 2024.
* Xingyu Xu and Yuejie Chi. Provably robust score-based diffusion posterior sampling for plug-and-play image reconstruction. In Proc. Neural Info. Process. Syst. Conf., 2024.
Other Strengths And Weaknesses: I don't think the authors have been forthcoming on the computational burden of their method, as the NFEs used were never clearly stated. I suspect that the computational requirements are massive, making the method uninteresting for practical application.
Also, I don't think the authors have been forthcoming on the restriction of their method to linear inverse problems with implementable SVDs. This is a strong restriction.
Furthermore, while the authors dismiss other non-consistent SMC works (see line 249, col 2), they have not clearly established the consistency of their approach.
Other Comments Or Suggestions: I suggest the authors pay more attention to clearly describing the proposed method and clearly stating and defending the main claims.
Questions For Authors: * In line 94 col 1, the expression for $p_\theta(x_t|x_{t+1})$ is described as an "approximation", but it seems instead like a definition, given the goal stated in the first paragraph of the Background section. In other words, given $q$ and $f_\theta$, we define $p_\theta(x_t|x_{t+1})$ from them and then aim to sample from $p_\theta(x|y)$. Is that correct? Same question applies to (4).
* In line 076 col 2, I wonder if there is a typo in the definition of the resampling step, because it is a circular definition.
Code Of Conduct: Affirmed.
Overall Recommendation: 3 | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments, and we address their concerns below (see answer to zGWq regarding complexity and NFEs).
## We will clarify the asymptotic exactness guarantees
SMC provides consistent approximations of its sequence of targets $\lbrace \pi_t(x_{t:T})\rbrace_{t=0}^T$ under weak conditions. However, since we only care about the final target, it is enough that $\pi_t(x_{0:T})$ admits the true posterior $p_\theta(x_0|y)$ as a marginal for the method to be consistent. This is the case for DDSMC by construction; see Eq (7) with $t=0$. The _intermediate targets_ $(\pi_t(x_{t:T}), t>0)$ as well as the _proposals_ can be seen as design choices that affect the efficiency of the algorithm but not its consistency. The particular ways in which we design these quantities (while ensuring correctness of the _final target_) constitute the DDSMC framework.
We mention this general fact in line 92 column 2 regarding the intermediate targets but will add a comment regarding the proposal.
We note that the specific questions that you ask regarding approximations in Eq (13), (16), (68) _only affect the intermediate targets and/or proposal_ and thus not the consistency of the final target approximation.
We realize that this should be further clarified, and we will hence add a "theorem-like" paragraph, making it clear how our algorithm is asymptotically exact. See also response to XuMA.
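As a toy illustration of why only the final-target weighting matters for consistency, the following is a minimal one-step sketch (our illustration, not the paper's algorithm) of self-normalized importance sampling with multinomial resampling on a conjugate Gaussian model, where the exact posterior is $N(y/2, 1/2)$; the model, particle count, and seed are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000   # number of particles
y = 1.0       # observation: y = x + eps, eps ~ N(0, 1)

# Prior N(0, 1) used as proposal; by conjugacy the posterior is N(y/2, 1/2)
particles = rng.standard_normal(N)

# Unnormalized log-weights = log-likelihood N(y | x, 1)
log_w = -0.5 * (y - particles) ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Multinomial resampling: draw ancestor indices a^i ~ Categorical(w),
# then replace each particle x^i by x^{a^i}
ancestors = rng.choice(N, size=N, p=w)
resampled = particles[ancestors]

# The resampled particle cloud approximates the true posterior N(0.5, 0.5)
```

The proposal (here, the prior) only affects the variance of the estimate, not what the resampled cloud converges to, which is the point made above about intermediate targets and proposals being design choices.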
## We view reconstruction function $f_\theta$ as a design choice
In our experiments, we tried two types of reconstruction functions: either using Tweedie's formula (see line 87 col 1) or the PF-ODE. As we view this as a design choice, we have not explicitly written how $f_\theta$ is computed in Algo 1. It is true, however, that if using the PF-ODE, this requires an inner loop. In eq (66) there is a typo, where $t$ should be replaced by $t'$ (the inner loop time variable), and in line 323 we mean that we start at $x_{t'} = x_t$, then use Eq 66 for $t'=t-1, t-2, \dots, 0$. I.e., we start from a sample at the diffusion time (outer loop index) $t$, then convert that into a sample $x_0$ using as many steps in the inner loop as there are "left" in the outer loop. We will fix these typos and clarify line 323.
This flexibility also affects the computational cost/NFEs: the Tweedie reconstruction avoids the inner loop and is therefore more efficient. Compared to MCGDiff and TDS, DDSMC-Tweedie has the same number of NFEs (but avoids the expensive differentiation of the reconstruction network in TDS), yet can still provide better results. The DDSMC framework is general, and the choice between Tweedie and PF-ODE (or some other reconstruction method) becomes a computational trade-off for the user.
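The two reconstruction choices can be contrasted on a 1-D toy model where the score is available in closed form. This is a minimal sketch under our own assumptions (a variance-exploding process with a standard-normal data distribution and a DDIM-style discretization of the probability-flow ODE), not the paper's implementation:

```python
import numpy as np

def score(x, sigma):
    # Exact score of p_sigma = N(0, 1) data convolved with N(0, sigma^2) noise
    return -x / (1.0 + sigma**2)

def tweedie_reconstruction(x_t, sigma_t):
    # One-step denoising via Tweedie's formula: E[x_0 | x_t] (a single NFE)
    return x_t + sigma_t**2 * score(x_t, sigma_t)

def ode_reconstruction(x_t, sigmas):
    # Multi-step deterministic reconstruction: DDIM-style discretization of
    # the probability-flow ODE from sigmas[0] down to ~0 (one NFE per step)
    x = x_t
    for s_cur, s_next in zip(sigmas[:-1], sigmas[1:]):
        x0_hat = x + s_cur**2 * score(x, s_cur)   # Tweedie estimate at s_cur
        x = x0_hat + (s_next / s_cur) * (x - x0_hat)
    return x

rng = np.random.default_rng(0)
sigma_t = 2.0
x0 = rng.standard_normal(5000)
x_t = x0 + sigma_t * rng.standard_normal(5000)

x0_tweedie = tweedie_reconstruction(x_t, sigma_t)                 # contracts to the mean
x0_ode = ode_reconstruction(x_t, np.linspace(sigma_t, 1e-3, 30))  # keeps diversity
```

In this linear-score case the exact PF-ODE maps $x_t \mapsto x_t/\sqrt{1+\sigma_t^2}$, preserving the spread of the clean distribution, while Tweedie maps $x_t \mapsto x_t/(1+\sigma_t^2)$, collapsing toward the conditional mean; this makes the NFE-versus-diversity trade-off mentioned above concrete.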
## While falling under the same framework as TDS and MCGDiff, we provide better empirical performance
Both TDS and MCGDiff are SMC methods, and we differ in how we design both the proposal and targets, which empirically shows improved performance. Additionally, we make a secondary contribution in the choice of prior (using $\eta\neq 1$), which can further improve performance.
## Evaluating DDRM with fewer steps gives worse performance
We used 300 steps when running DDRM to match that of DCPS, but on the reviewer's advice we also tried 20, which gives worse results. We will add this to the appendix. For the GMM case we used 1k steps, as more steps means less discretization error; using 20 instead gives worse performance at 8 and 80 dimensions, while being similar at 800 dimensions.
## We have now evaluated on 1k images, computed standard deviations
We took the reviewer's advice and evaluated on a 1k-image validation set. The numbers, however, differ by at most 0.01. The standard deviations of the 1k values range from 0.01 to 0.075.
## We are not aware of linear inverse problems without implementable SVDs
We make it very clear that this method tackles linear inverse problems (it is in the title) which is still an open question (e.g., MCGDiff and DCPS tackle exactly this). However, we are not aware of linear inverse problems which do not have implementable SVDs and would gladly appreciate pointers to examples of this if implementable SVDs is a large concern.
## Other points
* We will add a paragraph in Related work to cite the suggested papers and discuss related MCMC methods.
* We agree that the expression for $p_\theta(x_t|x_{t+1})$ is a definition in the current problem formulation (although based on approximating $q(x_t|x_{t+1})$ when training the generative model) and will clarify
* In the resampling step, a set of "ancestor indices" $\{a_t^i\}_{i=1}^N$ are sampled from the multinomial distribution, and each particle $x_t^i$ is then replaced by $x_t^{a_t^i}$. The circular dependency comes from overloading the notation. We will clarify to avoid confusion
* The notation in F.2.1. came from following the model formulation used by DAPS, which gave rise to a notational inconsistency. We will make sure to clarify | Summary: The paper introduces Decoupled Diffusion Sequential Monte Carlo (DDSMC), a method for Bayesian inverse problems using diffusion priors. Main contributions include: Leveraging a modified diffusion process ("DAPS prior") to enable larger updates during sampling, improving exploration. Combining SMC with diffusion models to provide asymptotically exact posterior sampling, addressing limitations of prior methods that rely on approximations. Extending the approach to discrete data (D3SMC) via discrete diffusion models (D3PM).
## update after rebuttal
Based on the authors' rebuttals, and also the reviews of other reviewers, I would like to maintain my original rating.
Claims And Evidence: N/A
Methods And Evaluation Criteria: N/A
Theoretical Claims: N/A. No theoretical results in this submission (I did not check the supplementary material).
Experimental Designs Or Analyses: - The contribution appears incremental when compared to existing methods such as DAPS and the experimental results do not fully demonstrate a clear advantage over current methods. Overall, the novelty and significance of the work may not be sufficient for acceptance at ICML.
- The main limitation lies in the experiments. A large portion of the results relies on synthetic Gaussian mixture models. Since Gaussian mixture models provide ground truth score information, the results based on them do not capture the challenges encountered by existing diffusion model-based methods in real-world applications. For the real-world FFHQ dataset, experiments in inpainting, outpainting, and super-resolution yield performance of DDSMC that is not clearly superior to other methods. For example, DCPS ranks first in all tasks in Table 3, casting doubt on the practical benefits of the proposed DDSMC method.
- The proposed DDSMC method reduces to existing methods in extreme cases (e.g., inverse temperature eta = 0 and using PF-ODE for reconstruction). The paper should demonstrate how varying eta affects the results and that an intermediate value between 0 and 1 offers better performance. However, Table 1 shows that with 800 dimensions for x, eta = 0 yields the best performance, and Table 3 reveals similar results for eta = 0 and eta = 0.5. The results presented do not effectively support the claimed benefits of the inverse temperature. Additionally, the experiments omit analysis on the number of particles for SMC. Since other methods run only once, DDSMC appears to gain an unfair advantage, particularly against methods that also involve random sampling. Experiments using a single particle for DDSMC and multiple runs for existing methods are necessary.
Algorithms that merge SMC with decoupled diffusion in an innovative manner could have strengthened the paper. Using SMC to increase the number of samples for approximation and employing the inverse temperature eta might enhance performance. However, based on the experimental discussions above, the overall advantage of the proposed method appears limited. Because of this, I do not recommend acceptance.
Supplementary Material: No.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: No.
Other Strengths And Weaknesses: No.
Other Comments Or Suggestions: No.
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 2 | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to comment on our paper. We have tried to address all your comments below, but there were a few points that we did not quite understand, so we kindly ask you to clarify if we in fact misunderstood some of your points.
## Merging SMC with decoupled diffusion is our core contribution
As the reviewer comments that _"algorithms that merge SMC with decoupled diffusion in an innovative manner could have strengthened the paper"_, we want to highlight that this is exactly the core contribution that we make in our paper! As far as we are aware, this is the first time that SMC has been merged with decoupled diffusion, and we are therefore not sure what this comment refers to. We are happy to answer any follow-up clarifications on this.
The reviewer comments that _"The proposed DDSMC method reduces to existing methods in extreme cases (e.g., inverse temperature eta = 0 and using PF-ODE for reconstruction)."_ We do not agree with this claim since it misses the point that our core contribution is an **SMC algorithm** building on the (generalized) DAPS prior. Hence, this claim would only be correct if we also restrict DDSMC to using a single particle, but the "parallel particles" are a key ingredient in SMC. Running an SMC algorithm with a single particle corresponds to sampling from the proposal, which if using $\eta=0.0$ and PF-ODE would be more or less equivalent to DAPS (if instead using $\lambda_t^2=\sigma_t^2$ in eq (16)). To verify that, we ran DDSMC-ODE on the GMM case with a single particle, and the numbers obtained with $\eta=0.0$ are essentially identical to DAPS (differing by at most 0.03 from each other). Hence, we can conclude that the **introduction of the SMC framework is the key ingredient that leads to the improved performance over DAPS**. We will incorporate these results in the appendix, and make a comment in the experiment section in the main paper. Next, we do agree that a further analysis of the number of particles would be beneficial, and as part of the response to XuMA, we performed additional experiments with fewer particles using the DDSMC-ODE.
## We target posterior sampling, which is verified in the GMM task
We agree that our experiments largely depend on synthetic experiments. However, we believe that this is a necessary sanity check for a method such as DDSMC which is designed to target the __correct posterior distribution__, intended to empirically prove that our proposed technique samples from the true posterior.
In order to achieve this, having an exact score/ground truth posterior is necessary to control errors.
The purpose of the image experiments is to first show *qualitatively* that our method works also in high-dimensional, real-world problems, and we included the LPIPS metric to also show this quantitatively: we are in line with SOTA methods, and outperform MCGDiff, which is the *closest comparable* method to DDSMC.
We emphasize that our focus is on recovering the posterior distribution, and we do not aim to improve the per-sample quality like DDRM, but instead, the population quality. However, a problem is that most image-related tasks lack the ground truth posterior, and even a good approximation thereof. To our knowledge, there are no principled metrics to gauge such performance. As such, the synthetic experiment is the only setting we can rely on to compare the methods' posterior sampling ability in a principled way. We revise our Experiment section to reflect on this.
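To make the "ground truth posterior" point concrete, here is a minimal sketch (our illustration with a 1-D two-component prior and unit component variances, not the paper's exact setup) of why a GMM prior under a linear-Gaussian observation admits a closed-form posterior, which is again a GMM with reweighted components:

```python
import numpy as np

def gmm_posterior(y, weights, means, sigma_y):
    """Posterior of x given y = x + eps, eps ~ N(0, sigma_y^2),
    under the GMM prior sum_k weights[k] * N(means[k], 1)."""
    s2 = sigma_y ** 2
    # Per-component evidence: N(y; mu_k, 1 + sigma_y^2)
    ev = weights * np.exp(-0.5 * (y - means) ** 2 / (1 + s2)) / np.sqrt(
        2 * np.pi * (1 + s2)
    )
    post_weights = ev / ev.sum()              # reweighted mixture weights
    post_means = (s2 * means + y) / (s2 + 1)  # conjugate Gaussian update per component
    post_var = s2 / (s2 + 1)                  # shared posterior component variance
    return post_weights, post_means, post_var

w, m, v = gmm_posterior(y=2.0, weights=np.array([0.5, 0.5]),
                        means=np.array([-2.0, 2.0]), sigma_y=1.0)
```

Because every posterior quantity is available in closed form, errors of an approximate sampler can be measured exactly, which is what makes the GMM setting a principled sanity check.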
## We have evaluated DDSMC on protein structure determination
The reviewer comments that _"The main limitation lies in the experiments."_ In the response to Reviewer XuMA we have added results for another experiment concerning protein structure determination. We show that DDSMC can outperform the tailored APD-3D method out-of-the-box in (realistic) high-noise scenarios.
Please see response to XuMA for further details.
## We can see clear effects of changing the inverse temperature $\eta$
As mentioned above, we view the DDSMC method itself as our primary contribution, and the _generalization of the DAPS prior_ (such that we can interpolate between DAPS and the standard diffusion prior using the inverse temperature $\eta$), as a secondary contribution. The reviewer comments that we should demonstrate "how varying $\eta$ affects the result and that an intermediate value between 0 and 1 offers better performance". We agree with the observation that $\eta=0.0$ is the best in high dimensions for DDSMC-ODE, but we emphasize that $\eta=0.5$ is better for DDSMC-Tweedie in this case, and is also the best choice in lower dimensions for DDSMC-ODE, while $\eta=1.0$ seems to be better for DDSMC-Tweedie in lower dimensions. It hence certainly seems to be an interplay between the dimension of the data, the choice of reconstruction function, and the choice of $\eta$. We already discuss this in line 304 col 2 and onward, but will extend the discussion in a revised version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses. However, my primary concern persists. I still believe that, in comparison to DAPS, the proposed approach represents an incremental improvement, and the experiments are mainly synthetic and not sufficiently convincing.
---
Reply to Comment 1.1.1:
Comment: We thank you for your reply. We take the opportunity to once again stress that we have developed a method which targets posterior sampling, not a dedicated image reconstruction method (while DAPS claims to target posterior sampling, we show in our experiments that the introduced approximations make the model fail to do so). As mentioned in the rebuttal, we have tried an experiment on proteins, and we see that DDSMC with multiple particles consistently outperforms a single particle (which, as mentioned in the rebuttal, is essentially equivalent to DAPS); see the tables below. We think this again (in addition to the GMM experiments) shows that the introduction of the SMC aspect is an **important and non-negligible contribution** of our work.
RMSD on the 7qum protein, lower is better. $N$ is the number of particles. Column headers give the spacing $n$ at which residues are observed (every $n$-th residue).
| N ($\sigma=0$) | 2 | 4 | 8 | 16 | 32 | 64 |
|---|---|---|---|---|---|---|
| 1 | 0.339 | 1.111 | 2.683 | 7.590 | 12.010 | |
| 100 | 0.231 | 0.938 | 2.385 | 3.858 | 8.552 | 13.643 |

| N ($\sigma=0.1$) | 2 | 4 | 8 | 16 | 32 | 64 |
|---|---|---|---|---|---|---|
| 1 | 1.297 | 1.775 | 3.072 | 7.897 | 13.230 | 14.871 |
| 100 | 1.264 | 1.568 | 2.849 | 4.201 | 8.927 | 13.456 |

| N ($\sigma=0.5$) | 2 | 4 | 8 | 16 | 32 | 64 |
|---|---|---|---|---|---|---|
| 1 | 6.216 | 6.695 | 7.254 | 11.036 | 12.825 | 15.610 |
| 100 | 6.047 | 6.245 | 6.742 | 7.479 | 10.282 | 14.792 | | null | null | null | null | null | null |